The formula 'offset = time / rate' is correct only if
the rate never changes. When the rate changes, the offset
must be recalculated based on the previous offset.
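A minimal sketch of the idea (variable and function names are
hypothetical, not the actual patch):

    static gdouble base_offset;     /* offset accumulated so far */
    static guint64 rate_start_time; /* when the current rate took effect */
    static gdouble rate = 1.0;

    static void
    set_rate (guint64 now, gdouble new_rate)
    {
      /* fold the time spent at the old rate into the accumulated offset */
      base_offset += (now - rate_start_time) / rate;
      rate_start_time = now;
      rate = new_rate;
    }

    static gdouble
    get_offset (guint64 now)
    {
      /* offset relative to the last rate change, not to absolute time */
      return base_offset + (now - rate_start_time) / rate;
    }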
https://bugzilla.gnome.org/show_bug.cgi?id=791269
Because audioconvert can now convert between interleaved and non-interleaved
layouts, this pipeline fails: the upstream capsfilter is unable to fixate its
output caps. This is unavoidable.
e4bf9ed8f0 was not quite right and changed
the wrong thing. Instead we needed to change the multiplication order,
and should have kept the previous to/from matrices, as is done in this
patch.
The matrices were converting the wrong values with non-diagonal-only matrices.
e.g. a typical yflip matrix in [-1,1]^3 such as
1 0 0 0
0 -1 0 0
0 0 1 0
0 0 0 1
Would have actually required a matrix like this in [0,1]^3
1 0 0 0
0 -1 0 0
0 0 1 0
0 -2 0 1
Which
1. is not consistent with our multiplication convention and would require
transposing matrices or changing our multiplication order (from what is
generally used in opengl matrix guides/tutorials), and
2. produces incorrect values when fed actual vertices that account for
the difference in multiplication order, e.g. some vertices multiplied by
the yflip matrix using vertex * yflip (== transpose(yflip) * vertex):
vertex:         ->  result:           expected:
vec4(1,0,1,1)   ->  vec4(1,-2,1,1)    vec4(1,1,1,1)
vec4(1,1,1,1)   ->  vec4(1,-3,1,1)    vec4(1,0,1,1)
With the updated values, we now get the expected values.
Includes a test for this behaviour and the example above.
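For reference, a minimal sketch of the multiplication convention in
question (row vector times column-major matrix, i.e. vertex * matrix;
illustrative only, not the patch itself):

    /* out = v * m, with m stored column-major as usual in OpenGL:
     * m[col * 4 + row] */
    static void
    vec4_mult_mat4 (const float v[4], const float m[16], float out[4])
    {
      for (int col = 0; col < 4; col++)
        out[col] = v[0] * m[col * 4 + 0] + v[1] * m[col * 4 + 1]
            + v[2] * m[col * 4 + 2] + v[3] * m[col * 4 + 3];
    }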
Also yield common options to the outer project (gst-build in our case)
so that they don't have to be set manually, and use array types for some
options.
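For illustration, a meson_options.txt entry using both features could
look like this (the option names are examples, not necessarily the ones
touched here):

    option('package-name', type : 'string', yield : true,
           description : 'package name to use in plugins')
    option('gl_winsys', type : 'array', choices : ['x11', 'wayland', 'auto'],
           value : ['auto'])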
When frames are dropped or reordered, the serialized events are
collected and pushed with the next frame. This test verifies that the
order is preserved.
https://bugzilla.gnome.org/show_bug.cgi?id=794192
The default implementation for packet loss handling previously
always sent a gap event.
While this is correct as long as we know the packet that was
lost was actually a media packet, with ULPFEC this becomes
a bit more complicated, as we do not know whether the packet
that was lost was a FEC packet, in which case it is better
to not actually send any gap events in the default implementation.
Some payloaders can be more clever about this: VP8, for example, can
use the picture-id and the M and S bits to determine whether
the missing packet was inside an encoded frame or outside,
and thus whether it was a media packet or a FEC packet.
This is why ulpfecdec still lets these lost events go through,
though stripping them of their seqnum and appending a new
"might-have-been-fec" field to them.
This is all a bit terrible, but necessary to have ULPFEC
integrate properly with the rest of our RTP stack.
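For downstream consumers, checking the new field could look roughly
like this ("GstRTPPacketLost" is the custom lost-event structure used
by the RTP stack; the handler itself is a sketch):

    static gboolean
    handle_lost_event (GstEvent * event)
    {
      const GstStructure *s = gst_event_get_structure (event);
      gboolean maybe_fec = FALSE;

      if (gst_structure_has_name (s, "GstRTPPacketLost"))
        gst_structure_get_boolean (s, "might-have-been-fec", &maybe_fec);

      /* only emit a gap event if the lost packet can't have been FEC */
      return !maybe_fec;
    }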
https://bugzilla.gnome.org/show_bug.cgi?id=794909
The newly-added build definitions for tests/icles relied
on dependencies that were only defined when the examples
were enabled, thus breaking the meson build with -Ddisable_examples=true.
As we load that symbol dynamically, valgrind gets confused when it
leaks: the leak is reported against an unrelated library and an
unknown (??) symbol.
To address that, put the loading and calling of that symbol
in a separate function, and ignore any malloc leak happening
in that function.
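The suppression then only needs to match that one function, along
these lines (the function name is hypothetical):

    {
       ignore_dynamic_symbol_malloc
       Memcheck:Leak
       match-leak-kinds: definite
       fun:malloc
       ...
       fun:load_and_call_optional_symbol
    }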
test_negotiation would occasionally time out, for unknown reasons.
Simplify the test setup and get rid of the main loop, busses, and
notify signals. With this I can no longer easily reproduce the
timeout. Fingers crossed.
These are very much artificial, of course, but we have to
measure something. The appsink one contains lots of buffer
creation/free overhead, while the appsrc one does not.
We can receive an element that is either floating or not, and need to
accommodate that in the signal return values. Do so by removing the
floating flag.
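A minimal sketch of what that amounts to (illustrative):

    /* if the returned element is floating, sink it so every caller
     * ends up holding a normal reference */
    if (g_object_is_floating (element))
      gst_object_ref_sink (element);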
https://bugzilla.gnome.org/show_bug.cgi?id=792597
It does not time out anymore, even though it's a very slow test. For
context, this test runs routines for a fixed amount of time and prints
the throughput, which means the test takes more time every time a pixel
format is added. If that becomes a problem again, we should disable the
benchmarks by default.
Need to add the gio-unix-2.0 dep to the pipelines/tcp test, otherwise it
won't find the gio/gunixfdmessage.h header, which is not in the
same dir as the other gio headers. This issue was masked before
because we didn't include config.h, so HAVE_GIO_UNIX_2_0
wasn't defined.
In many cases the unistd.h includes weren't actually needed.
Don't build tests that need it on Windows with MSVC
(multifdsink, multisocketsink, pipelines/tcp).
Preparation for making the tests work on Windows with MSVC.
Some GL platforms (EGL, WGL) require deactivating the OpenGL context in
one thread before it can be used in another thread; this test currently
violates that and would e.g. result in EGL_BAD_ACCESS errors from
gst_gl_context_activate().
Fix this by moving the object creation into the GL thread instead, which
avoids requiring additional gst_gl_context_activate() calls.
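A sketch of the pattern (the callback body is hypothetical):

    static void
    create_gl_objects (GstGLContext * context, gpointer data)
    {
      /* runs on @context's GL thread, where the context is already
       * active; create framebuffers, shaders, etc. here */
    }

    /* instead of activating the context in the test thread: */
    gst_gl_context_thread_add (context, create_gl_objects, NULL);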
https://bugzilla.gnome.org/show_bug.cgi?id=792158
If the timestamp goes forwards more than allowed, we consider that it
belongs to the previous counting period, so the extended timestamp
is unwrapped.
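A simplified sketch of that unwrapping (not the exact code):

    guint32 ts = rtp_timestamp;  /* new 32-bit RTP timestamp */
    guint64 ext;                 /* candidate extended timestamp */

    /* assume the new timestamp is in the same period as prev_ext */
    ext = (prev_ext & ~(guint64) G_MAXUINT32) | ts;

    if (ext > prev_ext + G_MAXINT32) {
      /* went forwards more than allowed: the timestamp belongs to the
       * previous counting period, so unwrap */
      ext -= (guint64) G_MAXUINT32 + 1;
    } else if (ext + G_MAXINT32 < prev_ext) {
      /* wrapped around: move on to the next period */
      ext += (guint64) G_MAXUINT32 + 1;
    }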
https://bugzilla.gnome.org/show_bug.cgi?id=783443
Tests and documentation will follow separately.
The mixer elements in the opengl plugin need to stay
in -bad for now since they use GstVideoAggregator.
https://bugzilla.gnome.org/show_bug.cgi?id=754094
The client-draw callback runs on the GL thread, which would be
required to map the buffer. Map early, and pass the mapped
frame instead. On top of that, make sure to signal any pending
draw before trying to push EOS, as some pad locks might be taken.
This is the cost of using the same thread to control GStreamer and
to render GL.
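Mapping early looks roughly like this (the buffer/info names and the
draw_callback helper are illustrative):

    GstVideoFrame frame;

    /* map the buffer (including its GL memory) before invoking the
     * client-draw callback, then hand over the mapped frame */
    if (gst_video_frame_map (&frame, &vinfo, buffer,
            GST_MAP_READ | GST_MAP_GL)) {
      draw_callback (&frame);
      gst_video_frame_unmap (&frame);
    }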
Except for gst/gl/gstglfuncs.h, it is up to the client app to include
these headers. This is coherent with the fact that gstreamer-gl.pc
does not require any egl.pc/gles.pc, i.e. it is the responsibility
of the app to locate these headers within its build setup.
For example, gstreamer-vaapi explicitly includes EGL/egl.h
and searches for it in its configure.ac.
For example, with this patch, if an app includes the headers
gst/gl/egl/gstglcontext_egl.h
gst/gl/egl/gstgldisplay_egl.h
gst/gl/egl/gstglmemoryegl.h
it will *no longer* automatically include EGL/egl.h and GLES2/gl2.h.
This is good, because the app might want to use only the gstgl API
without having to bother about GL headers.
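An app using the EGL-specific API therefore now does something like:

    /* the app is responsible for the platform headers */
    #include <EGL/egl.h>
    #include <GLES2/gl2.h>

    #include <gst/gl/egl/gstgldisplay_egl.h>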
Also added a test: cd tests/check && make libs/gstglheaders.check
https://bugzilla.gnome.org/show_bug.cgi?id=784779
All code interacting with Objective-C objects should now use Automatic
Reference Counting rather than manual memory management or Garbage
Collection. Because ARC prohibits C structs from containing
references to Objective-C objects, all such fields are now typed
'gpointer'. Setting and getting Objective-C fields on such a
struct now uses explicit __bridge_* casts to tell ARC about
object lifetimes.
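A sketch of the pattern (struct and field names hypothetical):

    struct _GstFooPrivate {
      gpointer view;  /* really an NSView *, stored as gpointer for ARC */
    };

    /* transfer ownership into the struct: ARC retains the object */
    priv->view = (__bridge_retained gpointer) nsview;

    /* borrow the object without transferring ownership */
    NSView *view = (__bridge NSView *) priv->view;

    /* transfer ownership back to ARC, which will release it */
    nsview = (__bridge_transfer NSView *) priv->view;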
https://bugzilla.gnome.org/show_bug.cgi?id=777847
Create our own instead, as the default framebuffer may require special
fiddling (like having a visible window) to correctly display/be renderable.
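One way to do that with the GstGLFramebuffer API (a sketch, run on the
GL thread; the do_draw callback is hypothetical):

    /* create our own FBO and render into a texture instead of relying
     * on the default framebuffer */
    GstGLFramebuffer *fbo =
        gst_gl_framebuffer_new_with_default_depth (context, width, height);
    gst_gl_framebuffer_draw_to_texture (fbo, output_tex, do_draw, NULL);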
Fixes the remaining GL library tests on OS X
gstgl no longer undoes/overwrites the GL state that the examples
change. As such, the examples need to reset the GL state themselves
to be able to play nice with libgstgl.
The spec allows the core/compatibility profiles to be used
with #version 150.
Also tighten up the tests to check that the default profiles are chosen
correctly.
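e.g. a check along these lines (assertion style as used in the existing
tests; illustrative only):

    GstGLSLVersion version;
    GstGLSLProfile profile;

    /* '#version 150' is valid with the core and compatibility profiles */
    fail_unless (gst_glsl_string_get_version_profile ("#version 150\n",
            &version, &profile));
    fail_unless (version == GST_GLSL_VERSION_150);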
The tests have been broken since 91fea30, which changed glupload to return
GST_GL_UPLOAD_RECONFIGURE if the texture target of the input buffers doesn't
match the texture-target configured in the output caps.
This commit fixes that and adds more checks for the new behaviour.
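Callers now have to handle the extra return value, roughly:

    GstBuffer *outbuf = NULL;

    switch (gst_gl_upload_perform_with_buffer (upload, inbuf, &outbuf)) {
      case GST_GL_UPLOAD_RECONFIGURE:
        /* renegotiate caps with a matching texture-target, then retry */
        break;
      case GST_GL_UPLOAD_DONE:
        /* use outbuf */
        break;
      default:
        /* error */
        break;
    }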