When deactivating pads, we need to ensure that the streaming threads
going through the pads we wish to deactivate can cleanly return.
Failure to do that would result in the streaming locks of those
pads never being released. The end result would be a deadlock
when stopping decodebin2.
In order to avoid that situation, release the "dyn" lock around
the deactivation code, and refactor that code to cope with the
list of blocked pads potentially having changed by the time the
lock is re-acquired.
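Roughly, the deactivation loop then looks like the sketch below
(DYN_LOCK/DYN_UNLOCK and blocked_pads are the decodebin2 names, but
take the details as an illustration rather than the exact code):

  DYN_LOCK (dbin);
  while (dbin->blocked_pads) {
    GstPad *pad = GST_PAD (dbin->blocked_pads->data);

    /* Take the pad out of the list (we inherit the list's reference) */
    dbin->blocked_pads = g_list_remove (dbin->blocked_pads, pad);

    /* Deactivation waits for the streaming thread to leave the pad,
     * so it must not happen with the "dyn" lock held */
    DYN_UNLOCK (dbin);
    gst_pad_set_active (pad, FALSE);
    gst_object_unref (pad);

    /* Re-acquire and re-read the list, it may have changed meanwhile */
    DYN_LOCK (dbin);
  }
  DYN_UNLOCK (dbin);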
We have a dedicated one-shot thread to handle cleanup of old groups.
While this is a good idea, it's an even better idea to make sure
that thread has *completed* before the parsebin element to which
it is related is freed/gone.
* There can only be one cleanup thread happening at any point in time.
If there is already one, we wait for the previous one to finish.
* When shutting down (NULL=>READY), make sure the thread is finished.
https://bugzilla.gnome.org/show_bug.cgi?id=790007
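In pseudo-C the handling looks roughly like this (the field and
function names here are placeholders, not the actual parsebin ones):

  GST_OBJECT_LOCK (parsebin);
  if (parsebin->cleanup_thread) {
    GThread *prev = parsebin->cleanup_thread;

    parsebin->cleanup_thread = NULL;
    /* Wait for the previous cleanup to finish, without holding the lock */
    GST_OBJECT_UNLOCK (parsebin);
    g_thread_join (prev);
    GST_OBJECT_LOCK (parsebin);
  }
  parsebin->cleanup_thread = g_thread_new ("cleanup-old-groups",
      (GThreadFunc) cleanup_old_groups, parsebin);
  GST_OBJECT_UNLOCK (parsebin);

  /* The state-change handler joins cleanup_thread in the same way when
   * tearing the element down, so the thread can never outlive parsebin */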
We have a dedicated one-shot thread to handle cleanup of old groups.
While this is a good idea, it's an even better idea to make sure
that thread has *completed* before the decodebin2 element to which
it is related is freed/gone.
* There can only be one cleanup thread happening at any point in time.
If there is already one, we wait for the previous one to finish.
* When shutting down (NULL=>READY), make sure the thread is finished.
https://bugzilla.gnome.org/show_bug.cgi?id=790007
Instead of emitting 'drained' whenever every single chain is drained
(which would result in a lot of signal emissions, and would also
occur when switching groups), only emit it when the top-level chain
is drained.
Furthermore, mark unknown (and therefore unexposed) pads as drained
since we'll never get EOS on them.
https://bugzilla.gnome.org/show_bug.cgi?id=787367
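In sketch form (the names below are assumptions):

  if (chain_drained && chain == dbin->decode_chain) {
    /* Only the top-level chain draining means the whole element is
     * drained; sub-chains also drain on every group switch */
    g_signal_emit (dbin, gst_decode_bin_signals[SIGNAL_DRAINED], 0);
  }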
There were still some races going on where seek events wouldn't
be properly intercepted/executed by this thread.
* Instead of always waiting for the GCond to be signalled, first
check whether an event is already available
* Take ownership of the event *while* the lock is taken and not
after releasing/re-acquiring it
* Finally, acquire the lock at the very top and release it at the
end to make the loop a bit more streamlined
This removes the remaining issues with seeks not being executed.
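The reworked loop body is roughly the following (field names are
assumptions here):

  g_mutex_lock (&ogg->seek_event_mutex);
  while (!ogg->seek_event_thread_stop) {
    GstEvent *event;

    /* Only wait if nothing is pending; the event may already be there */
    if (ogg->seek_event == NULL)
      g_cond_wait (&ogg->seek_event_cond, &ogg->seek_event_mutex);

    /* Take ownership of the event while the lock is still held */
    event = ogg->seek_event;
    ogg->seek_event = NULL;

    if (event) {
      /* Perform the seek without holding the lock */
      g_mutex_unlock (&ogg->seek_event_mutex);
      /* ... push the seek upstream ... */
      g_mutex_lock (&ogg->seek_event_mutex);
    }
  }
  g_mutex_unlock (&ogg->seek_event_mutex);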
The previous branch releases the lock inside the call to
gst_ogg_demux_seek_back_after_push_duration_check_unlock(), so
only unlock it ourselves if we didn't call that function.
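In outline (the surrounding condition and the push-lock macro are
assumptions; the function name is the one mentioned above):

  if (ogg->push_state == PUSH_DURATION) {
    /* This call releases the push lock itself before returning */
    gst_ogg_demux_seek_back_after_push_duration_check_unlock (ogg);
  } else {
    /* We didn't take that branch, so we still own the lock */
    GST_PUSH_UNLOCK (ogg);
  }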
When calculating duration in push-mode we seek to a certain position
and discard any data until we get data from that requested position.
The problem is that basing ourselves solely on offset to determine
whether we reached the target offset is wrong since the source might
be fast enough to send us that target position *before* it processed
the requested seek.
This would end up in a situation where:
* We think we're done with duration estimate
* We fire a seek back to "0" in the loop thread
* We resume normal processing
* ... except that we're still getting data from too far ahead,
which we decide to process.
* And we start doing totally wrong granule/time/duration
calculations and pushing wrong data.
Instead of this confusion, wait until we receive data from the requested
seek. We do that by using the fact that the seqnum in
seek_event_drop_til will be non-zero until the SEGMENT corresponding
to the requested SEEK has been received.
Bonus: makes startup slightly faster
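The chain function then only has to do something along these lines
(lock macro and field names assumed):

  GST_PUSH_LOCK (ogg);
  if (ogg->seek_event_drop_til != 0) {
    /* The SEGMENT answering our duration-check seek hasn't arrived yet,
     * so this data predates the seek: drop it instead of processing it */
    GST_PUSH_UNLOCK (ogg);
    gst_buffer_unref (buffer);
    return GST_FLOW_OK;
  }
  GST_PUSH_UNLOCK (ogg);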
Code using the push_loop_thread (used for sending seeks) assumes
that the thread was properly started, except that this isn't always
true and the thread might not have completely started.
Instead wait for the thread to properly start before doing anything
else.
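Sketch of the idea, with made-up flag and condition names:

  /* In the thread function, once it is actually up and running: */
  g_mutex_lock (&ogg->seek_event_mutex);
  ogg->push_thread_started = TRUE;
  g_cond_broadcast (&ogg->seek_event_cond);
  g_mutex_unlock (&ogg->seek_event_mutex);

  /* In the code that created the thread, before sending any seek: */
  g_mutex_lock (&ogg->seek_event_mutex);
  while (!ogg->push_thread_started)
    g_cond_wait (&ogg->seek_event_cond, &ogg->seek_event_mutex);
  g_mutex_unlock (&ogg->seek_event_mutex);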
If we are going to return a (potentially) 64-bit integer, don't use
a 32-bit one for the calculation; otherwise we could end up exceeding
the maximum value of a 32-bit int.
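For illustration only (the variable names are made up):

  gint64 length;

  /* Do the arithmetic in 64 bits from the start; with 32-bit operands
   * the product can overflow before it is widened for the return value */
  length = (gint64) num_pages * page_size;
  return length;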
This would result in a lot of warnings regarding elements not being
in NULL state when removed, or even leaked elements.
Instead, make sure we take the lock and check whether we are still
processing before allocating or adding anything to the pipeline.
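Something along these lines (the lock, flag and element used here are
placeholders):

  GstElement *queue;

  g_mutex_lock (&dbin->lock);
  if (!dbin->processing) {
    /* Shutting down: anything we allocate or add now would end up
     * leaked, or removed while not in NULL state */
    g_mutex_unlock (&dbin->lock);
    return FALSE;
  }
  queue = gst_element_factory_make ("queue", NULL);
  gst_bin_add (GST_BIN (dbin), queue);
  g_mutex_unlock (&dbin->lock);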
For stream mappers that don't set a specific granuleshift, it will
have the default value of -1.
Protect the code against that case and return the granule value as-is.
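Sketch of the guard (names assumed):

  if (pad->map.granuleshift == -1) {
    /* The mapper never set a granuleshift, so there is no keyframe/delta
     * split to undo: return the granule value as-is */
    return granule;
  }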
The attribute can be defined without a value, at both the session
level and the media level.
Although `gst_sdp_message_insert_attribute` can be used to set a NULL
value, it would be easier if `gst_sdp_message_add_attribute` accepted
NULL as well.
https://bugzilla.gnome.org/show_bug.cgi?id=789841
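With that change, a value-less attribute can then be added directly
(the media-level call is shown on the assumption it behaves the same
way):

  /* Property-style (value-less) attribute at session level: "a=recvonly" */
  gst_sdp_message_add_attribute (msg, "recvonly", NULL);

  /* And equivalently at media level */
  gst_sdp_media_add_attribute (media, "recvonly", NULL);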
If we can expose the main chain, recheck whether we are shutting
down or not.
decodebin2 might have been set to READY/NULL during the attempt
to expose, which would cause it to fail ... but it is not a fatal
issue.
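Roughly (the function and flag names below are assumptions):

  if (!gst_decode_bin_expose (dbin)) {
    /* Exposing can fail simply because we were switched to READY/NULL
     * while it was running; only report an error if we are not
     * actually shutting down */
    if (!dbin->shutdown)
      GST_ELEMENT_ERROR (dbin, CORE, NEGOTIATION, (NULL),
          ("Failed to expose the main chain"));
  }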
Since the default value of a GstOggPad's map.map was 0 ... we would
end up using the wrong functions from the mappers[] table if the
stream wasn't initialized yet.
Instead of that, use a default blank/empty first entry.
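Conceptually (struct layout abbreviated, field values illustrative):

  static const GstOggMap mappers[] = {
    {
      /* Blank first entry: a GstOggPad whose map.map is still 0 (stream
       * not initialized yet) resolves to this no-op mapper, whose
       * callbacks are all NULL, instead of to a real one */
      NULL, 0, 0,
    },
    /* ... the real mappers (theora, vorbis, ...) follow here ... */
  };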
In some corner cases we end up with the building chain not being
properly tracked (and therefore not properly freed).
Add a FIXME so it can be fixed properly later, but for now just fix
the leak.
RTCP XR provides supplemental information to the report blocks from
SR and RR. This patch downgrades warnings when an XR packet is
detected, until all the block types of RFC 3611 are implemented.
https://bugzilla.gnome.org/show_bug.cgi?id=789743