It does not time out anymore, even though it is a very slow test. For
context, this test runs routines for a fixed amount of time and prints
the throughput, which means the test takes longer every time a pixel
format is added. If that becomes a problem again, we should disable the
benchmarks by default.
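As a rough illustration of the benchmark pattern described above (a
minimal sketch only; run_conversion() and the 2 second duration are
made up, not the actual test code):

#include <glib.h>

/* Sketch: run a routine repeatedly for a fixed amount of time and
 * report how many iterations completed. run_conversion() stands in
 * for whatever routine is being measured. */
#define BENCHMARK_DURATION 2.0

static void
run_conversion (void)
{
  /* ... the routine under test ... */
}

static void
benchmark (void)
{
  GTimer *timer = g_timer_new ();
  guint64 iterations = 0;

  while (g_timer_elapsed (timer, NULL) < BENCHMARK_DURATION) {
    run_conversion ();
    iterations++;
  }

  g_print ("throughput: %.1f ops/s\n",
      iterations / g_timer_elapsed (timer, NULL));
  g_timer_destroy (timer);
}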
The source offset (soff) was not incremented for each component, and
each group of 3 components was then inverted. This caused a staircase
effect combined with some noise.
https://bugzilla.gnome.org/show_bug.cgi?id=789876
This adds a 10-bit variant of NV16 packed into 32-bit little-endian
words. The 2 MSB bits are padding. This format is used on Xilinx SoCs
and is identified with the FOURCC XV20.
https://bugzilla.gnome.org/show_bug.cgi?id=789876
This adds a 10-bit variant of grayscale packed into 32-bit little-endian
words. The 2 MSB bits are padding and should be ignored. This format is
used on Xilinx SoCs and is identified with the FOURCC XV10.
https://bugzilla.gnome.org/show_bug.cgi?id=789876
This adds a 10-bit variant of NV12 which packs 3 10-bit components
into 32-bit little-endian words. The 2 MSB bits are padding and should
be ignored. This format is used on Xilinx SoCs and is identified there
with the FOURCC XV15.
https://bugzilla.gnome.org/show_bug.cgi?id=789876
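The XV10/XV15/XV20 formats above share the same word layout: three
10-bit components in a 32-bit little-endian word with the 2 MSB bits
left as padding. A minimal sketch of unpacking one such word (the
assignment of components to bit positions here is an assumption for
illustration, not taken from the actual unpack code):

#include <gst/gst.h>

/* Sketch: extract three 10-bit components (c0 = bits 0-9,
 * c1 = bits 10-19, c2 = bits 20-29) from a 32-bit little-endian
 * word; bits 30-31 are padding and ignored. */
static void
unpack_10bit_word (const guint8 * src, guint16 * c0, guint16 * c1,
    guint16 * c2)
{
  guint32 word = GST_READ_UINT32_LE (src);

  *c0 = word & 0x3ff;
  *c1 = (word >> 10) & 0x3ff;
  *c2 = (word >> 20) & 0x3ff;
}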
With playbin the last subtitle chunk would not get displayed
if the last chunk was missing a newline at the end. This is
because streamsynchronizer will hold back the EOS event until
the audio and video streams are finished too, so subparse
would never forcefully push out the last chunk until the very
end when it is too late.
We get a STREAM_GROUP_DONE event from streamsynchronizer however,
so handle that like EOS and force out any remaining text then.
https://bugzilla.gnome.org/show_bug.cgi?id=771853
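A rough sketch of the event-handling pattern (hedged; this is not the
actual subparse code, and push_remaining_text() is a made-up helper):

#include <gst/gst.h>

/* Hypothetical helper that pushes whatever text is still buffered;
 * the real element keeps its own state for this. */
static void
push_remaining_text (GstObject * parent)
{
  /* ... */
}

static gboolean
sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
{
  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_EOS:
    case GST_EVENT_STREAM_GROUP_DONE:
      /* streamsynchronizer holds back EOS until all streams finish,
       * but STREAM_GROUP_DONE arrives earlier, so force out the last
       * chunk here as well */
      push_remaining_text (parent);
      break;
    default:
      break;
  }
  return gst_pad_event_default (pad, parent, event);
}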
Alsasink introduced delay_lock in commit 519f85a43e73efb8f3fb2c7be45226e
because alsa-lib is not thread safe for the same handle.
Alsasrc uses the same threading pattern, so it should be locked too.
https://bugzilla.gnome.org/show_bug.cgi?id=746015
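A minimal sketch of the locking pattern, assuming a GMutex shared with
the other alsa-lib calls made on the same handle (not the actual
element code):

#include <alsa/asoundlib.h>
#include <glib.h>

/* Sketch: serialize snd_pcm_delay() on a handle with the same mutex
 * used for the other alsa-lib calls, since alsa-lib is not thread
 * safe for a given handle. */
static snd_pcm_sframes_t
read_delay_locked (snd_pcm_t * handle, GMutex * delay_lock)
{
  snd_pcm_sframes_t delay = 0;

  g_mutex_lock (delay_lock);
  if (snd_pcm_delay (handle, &delay) < 0)
    delay = 0;
  g_mutex_unlock (delay_lock);

  return delay;
}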
Need to add the gio-unix-2.0 dep to the pipelines/tcp test, otherwise
it won't find the gio/gunixfdmessage.h header, which is not in the
same dir as the other gio headers. This issue was masked before
because we didn't include config.h, so HAVE_GIO_UNIX_2_0
wasn't defined.
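Roughly the include pattern involved (hedged sketch; the actual test
source may be laid out differently):

/* Sketch: without including config.h first, HAVE_GIO_UNIX_2_0 is
 * never defined and the unix-specific code silently compiles out;
 * once it is defined, gio/gunixfdmessage.h must be reachable, which
 * is why the gio-unix-2.0 dependency is needed. */
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <gio/gio.h>

#ifdef HAVE_GIO_UNIX_2_0
#include <gio/gunixfdmessage.h>
#endif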
In many cases the unistd.h includes weren't actually needed.
Don't build the tests that need it on Windows with MSVC
(multifdsink, multisocketsink, pipelines/tcp).
This is preparation for making the tests work on Windows with MSVC.
(yes, this has never worked since it was introduced, don't worry)
If we want to actually detect layer/channels/samplerate changes,
it would be better to (see the sketch after the CID references below):
* not reset the various prev_* variables at every iteration,
* and actually store the values when they change.
CID #206079
CID #206080
CID #206081
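A minimal sketch of the suggested approach (hypothetical names and
values, not the actual mpegaudioparse test code):

#include <glib.h>

#define NUM_BUFFERS 10

/* Hypothetical stand-in for whatever the test parses out of each
 * buffer. */
static void
get_current_properties (guint i, gint * layer, gint * channels,
    gint * rate)
{
  *layer = 3;
  *channels = 2;
  *rate = (i < 5) ? 44100 : 48000;
}

/* Sketch: keep the prev_* values across iterations and only store
 * new values when they actually change, instead of resetting them
 * every pass. */
static void
check_stream_properties (void)
{
  gint prev_layer = -1, prev_channels = -1, prev_rate = -1;
  gint layer, channels, rate;
  guint i;

  for (i = 0; i < NUM_BUFFERS; i++) {
    get_current_properties (i, &layer, &channels, &rate);

    if (layer != prev_layer || channels != prev_channels
        || rate != prev_rate) {
      g_print ("change detected at buffer %u\n", i);
      prev_layer = layer;
      prev_channels = channels;
      prev_rate = rate;
    }
  }
}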
Some GL platforms (EGL, WGL) require deactivating the OpenGL context in
one thread before it can be used in another thread. This test currently
violates that, which would e.g. result in EGL_BAD_ACCESS errors from
gst_gl_context_activate().
Fix this by moving the object creation into the GL thread instead, so
that no additional gst_gl_context_activate() calls are required.
https://bugzilla.gnome.org/show_bug.cgi?id=792158
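A minimal sketch of creating objects inside the GL thread with
gst_gl_context_thread_add() (the object created in the callback is
illustrative, not necessarily what the test creates):

#include <gst/gl/gl.h>

/* Sketch: run the creation code in the context's GL thread rather
 * than activating the context in the test's own thread. */
static void
create_objects (GstGLContext * context, gpointer data)
{
  GstGLMemoryAllocator **allocator = data;

  /* runs with 'context' already active in its GL thread */
  *allocator = gst_gl_memory_allocator_get_default (context);
}

static GstGLMemoryAllocator *
create_in_gl_thread (GstGLContext * context)
{
  GstGLMemoryAllocator *allocator = NULL;

  gst_gl_context_thread_add (context, create_objects, &allocator);
  return allocator;
}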
We can pass string constants here to g_strdup_printf(),
so do so and re-enable the -Wformat-nonliteral warning
we had to disable when merging the opengl libs.
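Roughly the shape of the change (hedged example; the actual format
strings differ):

#include <glib.h>

/* Sketch: passing the format as a string constant lets the compiler
 * check it, so -Wformat-nonliteral can stay enabled. Passing a
 * variable (e.g. a format selected at run time) as the format would
 * trigger the warning. The strings here are illustrative. */
static gchar *
describe_version (gint major, gint minor)
{
  /* string constant as format: checked at compile time, no warning */
  return g_strdup_printf ("OpenGL %d.%d", major, minor);
}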
The AM_CONDITIONALs always need to be evaluated, regardless of
whether we are building with or without the gl plugins (the actual
checks are only called in AG_GST_GL_PLUGIN_CHECKS).