Otherwise we'll hit an assertion if the object behind the weak pointer
was already destroyed in the meantime, as we would pass NULL as the
first argument to g_object_remove_weak_pointer().
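A minimal sketch of the guard (the "clock" field name is assumed, not
taken from the actual code):

    /* Only detach the weak pointer while the object is still alive;
     * if it was already destroyed, the location was NULLed out and
     * g_object_remove_weak_pointer() would assert on a NULL object. */
    if (self->clock != NULL)
      g_object_remove_weak_pointer (G_OBJECT (self->clock),
          (gpointer *) & self->clock);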
When printing a GstStructure property (e.g. the "stats" property in
rtpsession), the first field is printed on the same line as the type
description. This is both inconsistent with how Enum values are printed
and confusing, as the reader might miss the first field.
To fix this, add a newline before printing GstStructure fields in
properties.
NOTE: this does not change the existing inconsistent behavior of an
extra newline *after* a GstStructure property, but the latter is not as
annoying and it would take more effort to fix because GstStructure
fields are printed in CAPS descriptions too.
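A sketch of the change in the property printer (the field-printing
helper name is assumed):

    /* End the type-description line first, so the structure's first
     * field starts on its own line like Enum values do. */
    g_print ("\n");
    gst_structure_foreach (st, print_structure_field, NULL);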
There is a deadlock if any thread from the pool tries to push
a new task while another thread is waiting for the pool of threads
to finish. With this patch the thread will get an error when it
tries to add a new task while the taskpool is being cleaned up.
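A sketch of what callers see now (using the stock GstTaskPool API):

    GError *error = NULL;

    /* Pushing while the pool is being cleaned up no longer blocks
     * forever; it fails and reports an error instead. */
    gst_task_pool_push (pool, my_task_func, user_data, &error);
    if (error != NULL) {
      GST_WARNING ("task not queued: %s", error->message);
      g_clear_error (&error);
    }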
Between getting the GSource (with the mutex held) and destroying it,
something else might have destroyed it already, leaving us with a
dangling pointer.
Keep an additional reference just in case.
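A sketch of the pattern (the priv->source field and lock names are
assumed):

    GSource *source = NULL;

    GST_OBJECT_LOCK (self);
    /* Take our own reference while still holding the mutex ... */
    if (self->priv->source != NULL)
      source = g_source_ref (self->priv->source);
    GST_OBJECT_UNLOCK (self);

    /* ... so the GSource cannot be freed between unlocking and
     * destroying it here. */
    if (source != NULL) {
      g_source_destroy (source);
      g_source_unref (source);
    }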
Signal watches are reference counted, and gst_bus_remove_watch() would
remove the watch immediately, breaking the reference counting. Only
gst_bus_remove_signal_watch() should be used for removing signal
watches.
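For illustration, the matching add/remove pair (the callback name is
made up):

    /* Signal watches must be removed with the API that balances the
     * internal refcount, not with gst_bus_remove_watch(). */
    gst_bus_add_signal_watch (bus);
    g_signal_connect (bus, "message::error",
        G_CALLBACK (on_error_message), NULL);
    /* ... later, during teardown: */
    gst_bus_remove_signal_watch (bus);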
GstDeviceProvider has a private started_count counter,
and the gst_device_provider_start() documentation emphasizes the
importance of balancing the start and stop calls.
However, when starting a provider that is already started, the
current code never increments the counter beyond one.
So you start it twice, but it ends up with started_count 1, which is
the maximum value it will ever see.
Then when you stop it twice, on the 2nd stop, after decrementing the
counter in gst_device_provider_stop():

    else if (provider->priv->started_count < 1) {
      g_critical
          ("Trying to stop a GstDeviceProvider %s which is already stopped",
          GST_OBJECT_NAME (provider));

and the program is killed.
Fix this by incrementing the counter when starting a device provider that
was already started.
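A sketch of the fix in gst_device_provider_start() (simplified; the
real start path and error handling are omitted):

    GST_OBJECT_LOCK (provider);
    if (provider->priv->started_count > 0) {
      /* Already running: still count this start so that a later stop
       * call balances it instead of tripping the g_critical above. */
      provider->priv->started_count++;
      GST_OBJECT_UNLOCK (provider);
      return TRUE;
    }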
Fixes flaky appsrc unit test where depending on scheduling
the submitted list might not be writable if submitted via
an action signal from the application thread.
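A sketch of the fix (gst_buffer_list_make_writable() is the relevant
API; the surrounding handler code is assumed):

    /* The application may still hold a reference to the list it
     * submitted via the action signal, so ensure we own a writable
     * copy before modifying it. */
    list = gst_buffer_list_make_writable (list);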
Fixes gst-plugins-base#522
Fix corruption of the meta list head when removing metas during
iteration. The linked list handling in gst_buffer_foreach_meta
failed to track the previous entry and update the correct next pointer
when removing items beyond the head of the list, resulting in
arbitrary list pointer corruption.
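The corrected traversal follows the standard singly-linked-list
removal pattern (generic sketch, not the exact gst_buffer_foreach_meta
code):

    prev = NULL;
    for (walk = head; walk != NULL; walk = next) {
      next = walk->next;
      if (remove_this (walk)) {
        if (prev != NULL)
          prev->next = next;    /* unlink a non-head entry */
        else
          head = next;          /* removing the head itself */
        free_entry (walk);
      } else {
        prev = walk;            /* only advance prev when keeping */
      }
    }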
Closes #332
The post tracer hooks have a GstQuery argument which was truncated from
the trace. As the post hook is the one that contains the useful data,
this bug was hiding the important information from that trace.
A pointer to a hook in this list can easily not be unique, given both
the slice-allocator reusing memory, and the OS re-using freed blocks
in malloc.
By repeatedly adding and removing probes, this becomes very easy to
reproduce.
Instead use hook_id, which *is* unique for an added GHook.
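A sketch of the id-based removal (GLib's GHook API; the probes field
name is assumed):

    /* Remember the id when adding the probe's hook ... */
    gulong id = hook->hook_id;

    /* ... and remove it by id later: unlike the pointer, the id is
     * never reused even if the allocator recycles the memory. */
    g_hook_destroy (&pad->probes, id);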
If a segment has stop == -1, then gst_segment_to_running_time()
would refuse to calculate a running time for negative rates,
but gst_segment_do_seek() allows this scenario and uses a
valid duration for calculations.
Make the two functions consistent by also using any configured
duration to calculate a running time in that case.
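A sketch of the fallback now applied in gst_segment_to_running_time():

    /* Mirror gst_segment_do_seek(): with a negative rate and no
     * configured stop, fall back to the duration if there is one. */
    stop = segment->stop;
    if (segment->rate < 0.0 && stop == -1 && segment->duration != -1)
      stop = segment->duration;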
https://bugzilla.gnome.org/show_bug.cgi?id=796559
Since we use full signed running times, we no longer need to clamp
the buffer time.
This avoids the position of single queues not advancing for buffers
that are out of segment, which would never wake up non-linked
streams (resulting in an apparent "deadlock").
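For reference, the signed conversion looks roughly like this (sketch):

    guint64 rt;
    gint res;

    /* The _full variant reports a sign instead of failing for
     * out-of-segment positions, so the position can keep advancing. */
    res = gst_segment_to_running_time_full (segment, GST_FORMAT_TIME,
        timestamp, &rt);
    if (res < 0)
      position = -((gint64) rt);        /* before the segment */
    else if (res > 0)
      position = (gint64) rt;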
The follow-up and delay-resp messages carry precise
timestamps for the arrival at the clock master, but
the local return time is unimportant, so we should be very
lenient in accepting them late. Some PTP masters don't
prioritise sending those packets, and we reject all the
responses and never sync - or take forever to do so.
Increase the tolerance to 20x the mean path delay.
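A sketch of the relaxed check (variable names assumed):

    /* Accept late FOLLOW_UP/DELAY_RESP messages: the remote
     * timestamps they carry are what matters, not how quickly the
     * master sent them back. */
    if (arrival_time > expected_time + 20 * mean_path_delay) {
      GST_DEBUG ("discarding response that arrived too late");
      return;
    }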
Also fix a typo in one debug output that would print
the absolute time of the delay-resp message, not the offset
from the delay-req that it's actually being compared against.
Previously, with opportunistic sync we'd track a master
clock as soon as we see a SYNC message, and hence sync up
faster, but then we'd announce we're synched before seeing
the ANNOUNCE, leaving the clock details like grandmaster-clock
empty.
A better way is to start tracking the clock opportunistically,
but not announce we're synched until we've also seen the ANNOUNCE.
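Conceptually (sketch; the flag names are assumed):

    /* Start tracking on the first SYNC for a fast lock, but only
     * report "synced" once an ANNOUNCE has provided the clock
     * details such as the grandmaster clock. */
    if (domain->have_sync && domain->have_announce && !domain->synced) {
      domain->synced = TRUE;
      /* notify waiters / emit the synced signal here */
    }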
If we ever get a GST_FLOW_EOS from downstream, we might retry
pushing new data. But if pushing that data doesn't return a
GstFlowReturn (such as pushing events), we would end up returning
the previous GstFlowReturn (i.e. EOS).
Not properly resetting it would cause cases where queue2 would
stop pushing on the first GstEvent stored (even if there is more
data contained within).
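A sketch of the reset (the srcresult field name is assumed):

    /* Forget a previous EOS before retrying, so that pushing
     * something without a flow return (e.g. an event) doesn't leave
     * the stale GST_FLOW_EOS in place. */
    if (queue->srcresult == GST_FLOW_EOS)
      queue->srcresult = GST_FLOW_OK;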
Using a rate of 1.1 in the test causes it to fail on 32-bit
because ceil(1.1 * 10) can round to 12.
Instead use a rate of 2.0, which can be expressed exactly as a
floating point number and doesn't trigger the problem.
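The problem can be illustrated in isolation (output is platform
dependent):

    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
      double rate = 1.1;        /* not exactly representable in binary */
      /* With x87 extended precision on 32-bit, rate * 10 can come out
       * slightly above 11.0, so ceil() returns 12. 2.0 is exact in
       * binary and immune to this. */
      printf ("%.17g -> %g\n", rate * 10, ceil (rate * 10));
      return 0;
    }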
https://bugzilla.gnome.org/show_bug.cgi?id=797154
Fixes for gst_segment_position_from_running_time_full() when
converting running_times that precede the segment start (or the
stop in a negative rate segment): the return value was incorrectly
negated in those cases.
Add some more unit test checks for those cases, and especially
for segments with offsets.
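For reference, the sign convention being tested (sketch):

    guint64 position;
    gint res;

    res = gst_segment_position_from_running_time_full (&segment,
        GST_FORMAT_TIME, running_time, &position);
    /* res == 1: position is valid as-is; res == -1: the real
     * position is the negation of the returned value. The sign must
     * not be flipped merely because running_time precedes the
     * segment start. */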
gst_element_post_message() takes ownership of the message so we need to increase
its refcount until we no longer require access to its data (context_type).
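The resulting pattern (sketch):

    const gchar *context_type;

    /* gst_element_post_message() takes ownership, so keep our own
     * reference for as long as we still read from the message. */
    gst_message_ref (message);
    gst_element_post_message (GST_ELEMENT_CAST (element), message);
    gst_message_parse_context_type (message, &context_type);
    /* ... use context_type ... */
    gst_message_unref (message);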
https://bugzilla.gnome.org/show_bug.cgi?id=797099
The avg_bitrate is an unsigned int, so the gst_util_uint64_scale_int()
function can't be used for it, as it expects signed integers for the
fraction arguments.
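An illustrative alternative using the all-unsigned variant (the
surrounding computation is assumed, not taken from the patch):

    /* avg_bitrate is a guint and may exceed G_MAXINT, so scale with
     * gst_util_uint64_scale(), whose numerator and denominator are
     * guint64, instead of the gint-based _int variant. */
    bytes = gst_util_uint64_scale (duration, avg_bitrate,
        8 * GST_SECOND);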
https://bugzilla.gnome.org/show_bug.cgi?id=797054
And only ever use the non-live values if all pads are non-live,
otherwise only use the results of all live pads.
It's unclear what one would use the values for in the non-live case,
but this way we at least pass them through correctly.
This is a follow-up for 794944f779, which
causes wrong latency calculations if the first pad is non-live but a
later pad is actually live. In that case the live values would be
accumulated together with the values of the non-live first pad,
generally causing wrong min/max latencies to be calculated.
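A sketch of the aggregation rule (simplified):

    if (have_live_pads) {
      /* At least one live pad: its values define the latency and
       * non-live pads are ignored entirely. */
      min_latency = live_min;
      max_latency = live_max;
      live = TRUE;
    } else {
      /* Only if every pad is non-live do the non-live values get
       * passed through. */
      min_latency = non_live_min;
      max_latency = non_live_max;
      live = FALSE;
    }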
IDLE probes that are directly called when being added will increase /
decrease the "number of IDLE probes running" counter around the call,
but when running from the streaming thread this won't happen.
This has the effect that when running from a streaming thread it is
possible to push serialized events or data out of the pad without
problems, but otherwise it would deadlock because serialized data would
wait for the IDLE probe to finish first (it is blocking after all!).
With this change it will now always consistently deadlock instead of
just every once in a while, which should make it obvious why this
happens and prevent racy deadlocks in application code.
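A sketch of the now-consistent accounting (the counter name is
assumed):

    /* Count the probe as running in the streaming-thread case too,
     * so serialized data always waits for it, exactly as it does
     * when the probe is invoked directly on addition. */
    g_atomic_int_inc (&pad->priv->idle_probe_running);
    ret = callback (pad, info, user_data);
    g_atomic_int_dec (&pad->priv->idle_probe_running);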
https://bugzilla.gnome.org/show_bug.cgi?id=796895
And make use of it in the typefind element. It's useful to distinguish
between the different reasons why typefinding can fail, and especially
to not consider GST_FLOW_FLUSHING an actual error.
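A sketch of how a caller can now tell the cases apart (the helper name
is hypothetical; the patch adds the actual API):

    GstFlowReturn flow_ret = GST_FLOW_OK;
    GstCaps *caps;

    caps = my_typefind_helper (obj, &flow_ret);
    if (caps == NULL) {
      if (flow_ret == GST_FLOW_FLUSHING)
        return flow_ret;        /* shutting down, not an error */
      GST_ELEMENT_ERROR (typefind, STREAM, TYPE_NOT_FOUND,
          (NULL), (NULL));
      return GST_FLOW_ERROR;
    }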
https://bugzilla.gnome.org/show_bug.cgi?id=796894