Position queries with GST_FORMAT_TIME are supposed to return stream
time.
gst_base_sink_get_position() estimates the current stream time on its
own instead of using gst_segment_to_stream_time(), but the algorithm
it used did not take segment.offset into account, resulting in invalid
values when that field was set to a non-zero value.
https://bugzilla.gnome.org/show_bug.cgi?id=792434
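For illustration only (this is not the actual basesink code), one consistent
way to answer a GST_FORMAT_TIME position query is to go through the segment
helpers instead of an ad-hoc estimate, so that fields like segment.offset are
handled in one place; the helper below is a hedged sketch:

    #include <gst/gst.h>

    /* Sketch: convert the sink's current running time back into a stream
     * position using the segment helpers rather than estimating by hand. */
    static guint64
    query_stream_time (const GstSegment * segment, GstClockTime running_time)
    {
      guint64 position;

      /* running time -> position within the segment */
      position = gst_segment_position_from_running_time (segment,
          GST_FORMAT_TIME, running_time);

      /* position -> stream time, which is what position queries report */
      return gst_segment_to_stream_time (segment, GST_FORMAT_TIME, position);
    }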
Occasionally this test would fail, especially if the system was under
load, because the position query would pick up the position from the
last buffer's timestamp, which is lower than the position we're looking
for, even though the sleep is long enough. It's unclear to me why
exactly this happens, but there seems to be some kind of scheduling
issue going on, as the streaming thread floods the sink with buffers.
Let's throttle the fakesrc to 100 buffers per second and make the sink
sync to the clock to restore some sanity. It should be totally sufficient
to test what we want to test, and seems to make things reliable here.
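Roughly what that throttling looks like (the property values here are
illustrative assumptions, not necessarily the exact ones used in the test;
fakesrc and fakesink stand for the test's element variables):

    /* Sketch: fixed-size buffers at ~100 buffers per second, with the
     * source throttled against the clock and the sink syncing to it. */
    g_object_set (fakesrc,
        "sizetype", 2,              /* fixed-size buffers */
        "sizemax", 4096,
        "datarate", 4096 * 100,     /* bytes/sec -> ~100 buffers/sec */
        "sync", TRUE,               /* respect the datarate against the clock */
        NULL);
    g_object_set (fakesink, "sync", TRUE, NULL);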
WARNING: Trying to compare values of different types (str, int).
The result of this is undefined and will become a hard error
in a future Meson release.
This can be used to identify buffers for which a higher percentage
of redundancy should be allocated when performing forward error
correction, or to prevent still video frames from being dropped by
elements due to QoS.
https://bugzilla.gnome.org/show_bug.cgi?id=793008
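Assuming the flag in question is GST_BUFFER_FLAG_NON_DROPPABLE (an assumption
based on the description above, not something the message states), an element
would mark such buffers along these lines (buf and the two booleans are
hypothetical):

    /* Sketch: flag an important buffer so downstream FEC can allocate more
     * redundancy for it and QoS handling avoids dropping it. */
    if (is_codec_setup_data || is_still_video_frame)
      GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_NON_DROPPABLE);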
As we do that for serialized events as well, and the subclass will
most likely need to access pad->segment to make its decisions,
doing that from the sinkpad's streaming threads was racy.
The following case can happen when two threads try to activate and
deactivate a pad at the same time:
T1: starts to deactivate, calls pre_activate(), sets in_activation
to TRUE and carries on
T2: starts to activate, calls pre_activate(), in_activation is TRUE
so it waits on the GCond
T1: calls post_activate(), tries to acquire the streaming lock ..
but can't because T2 is currently holding it
With this patch the deadlock will no longer happen, but it does not
solve the problem that:
T2: will resume activation of the pad, set the pad mode to the target
one (PUSH or PULL) and eventually the streaming lock gets released.
T1: is able to finish calling post_activate() ... but ... the pad
wasn't deactivated (T2 was the last one to "activate" the pad).
https://bugzilla.gnome.org/show_bug.cgi?id=792341
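The in_activation guard described above works roughly like the sketch below
(illustrative only, with made-up names rather than the actual GstPad
internals):

    #include <glib.h>

    typedef struct
    {
      GMutex lock;                  /* stands in for the object lock */
      GCond activation_cond;
      gboolean in_activation;
    } FakePad;

    static void
    pre_activate (FakePad * pad)
    {
      g_mutex_lock (&pad->lock);
      /* wait for a concurrent (de)activation to finish first */
      while (pad->in_activation)
        g_cond_wait (&pad->activation_cond, &pad->lock);
      pad->in_activation = TRUE;
      g_mutex_unlock (&pad->lock);
    }

    static void
    post_activate (FakePad * pad)
    {
      g_mutex_lock (&pad->lock);
      pad->in_activation = FALSE;
      g_cond_broadcast (&pad->activation_cond);
      g_mutex_unlock (&pad->lock);
    }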
Set up all ten pipelines and preroll them first, and only set
them to PLAYING to run wild after they're all set up. If we set
them to PLAYING directly and let those threads run wild, then
it might take ages (many seconds) for the other pipelines to
even get up and running, especially on machines with only one
or two cores, and operating systems that suck at scheduling.
Now the fakesink test takes 19 secs instead of 71 secs on a
single-CPU Windows machine.
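The pattern, sketched with hypothetical names (pipelines is assumed to be the
array of already-constructed test pipelines):

    guint i;

    /* bring every pipeline up to PAUSED (preroll) first */
    for (i = 0; i < G_N_ELEMENTS (pipelines); i++)
      gst_element_set_state (pipelines[i], GST_STATE_PAUSED);

    /* wait until each one has actually finished prerolling */
    for (i = 0; i < G_N_ELEMENTS (pipelines); i++)
      gst_element_get_state (pipelines[i], NULL, NULL, GST_CLOCK_TIME_NONE);

    /* only now let them all run wild */
    for (i = 0; i < G_N_ELEMENTS (pipelines); i++)
      gst_element_set_state (pipelines[i], GST_STATE_PLAYING);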
This is a better fit given that the function docs say this
should (only) be used for interval measurements, but it also
seems to give much better granularity on Windows
systems, where before this change there would often be
10-20 lines of debug log with the same timestamp up front.
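Presumably the function in question is g_get_monotonic_time() (an assumption
on my part); its documentation describes it as intended for measuring
intervals, i.e. usage like this:

    /* Sketch: measuring an elapsed interval with the monotonic clock. */
    gint64 start, elapsed_us;

    start = g_get_monotonic_time ();
    do_work ();                                     /* hypothetical workload */
    elapsed_us = g_get_monotonic_time () - start;   /* microseconds */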
Scale the number of threads used in the stress tests according to
the number of cores/cpus. We want some contention, but we also
don't want too much contention, as some operating systems are
better at handling 100 threads running wild on a single core
than others.
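One way to do that kind of scaling with GLib (the multiplier and bounds below
are made up for the example, not the values used in the tests):

    /* Sketch: scale the thread count with the number of CPUs, clamped so
     * there is some contention but not an absurd amount of it. */
    guint n_threads = CLAMP (g_get_num_processors () * 4, 2, 32);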
Fix refcounting issue when plugin was loaded already.
gst_plugin_load() is supposed to return a ref, so it
must always return a ref.
This also fixes the gstplugin unit test on Windows, where
fork is not available and where test_load_coreelements()
would unref a plugin ref it didn't get and then mess up
the internal registry plugin list state for the next test,
in the case where the test registry does not exist yet.
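The contract being restored, sketched (error handling kept minimal;
"coreelements" is just the plugin the test uses):

    /* gst_plugin_load() always returns a reference, whether or not the
     * plugin was already loaded, so the caller must always drop it. */
    GstPlugin *plugin, *loaded;

    plugin = gst_registry_find_plugin (gst_registry_get (), "coreelements");
    loaded = gst_plugin_load (plugin);   /* new ref, or NULL on failure */
    gst_object_unref (plugin);
    if (loaded != NULL)
      gst_object_unref (loaded);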