The `query` argument of gst_pad_query() is "transfer none".
Query objects are "borrowed" by the pad query handlers and those
should never unref them.
This was leading to double-freed queries in a very racy way with nested
GESTimelines.
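A minimal sketch of that contract, assuming a custom element's query
handler (the function name is illustrative):

```c
/* A pad query handler: the query is borrowed ("transfer none"), so we
 * may inspect or modify it, but must never gst_query_unref() it here. */
static gboolean
my_sink_query (GstPad * pad, GstObject * parent, GstQuery * query)
{
  gboolean res = gst_pad_query_default (pad, parent, query);
  /* no gst_query_unref (query): the caller still owns it */
  return res;
}
```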
Since we use full signed running times, we no longer need to clamp
the buffer time.
This avoids the position of single queues not advancing for buffers
that are out of segment, and non-linked streams consequently never
waking up (resulting in an apparent "deadlock").
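A sketch of the signed computation, assuming a time-format segment and
the buffer's DTS-or-PTS as input; gst_segment_to_running_time_full()
is available since 1.6:

```c
/* Compute a full signed running time; out-of-segment buffers yield a
 * negative value instead of being clamped. */
guint64 rt;
gint sign = gst_segment_to_running_time_full (&segment, GST_FORMAT_TIME,
    GST_BUFFER_DTS_OR_PTS (buffer), &rt);
gint64 buffer_time = (sign < 0) ? -((gint64) rt) : (gint64) rt;
```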
Start the task on new source pads added at runtime after they
have been added to the element, not during activation.
This ensures the pads can post their CREATE stream-status
messages and the application can set thread priorities.
https://bugzilla.gnome.org/show_bug.cgi?id=756867
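A sketch of the resulting ordering (element and function names are
illustrative):

```c
/* Activate and expose the pad first, then start its task, so the
 * CREATE stream-status message can reach the application. */
gst_pad_set_active (srcpad, TRUE);
gst_element_add_pad (GST_ELEMENT (mq), srcpad);
gst_pad_start_task (srcpad, (GstTaskFunction) loop_func, srcpad, NULL);
```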
When queue-like elements are in an "EOS" situation (GST_FLOW_EOS was
received from downstream or EOS was pushed), they drain buffers/events
that wouldn't be processed anyway and let through events that might
modify the EOS situation.
Previously only GST_EVENT_EOS and GST_EVENT_SEGMENT events were let
through, but we also need to allow GST_EVENT_STREAM_START to go
through, since it resets the EOS state of pads as of 1.6.
https://bugzilla.gnome.org/show_bug.cgi?id=786034
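A sketch of the resulting filter (the helper name is illustrative):

```c
/* In the EOS situation, only let through events that might modify
 * that situation. */
static gboolean
event_may_modify_eos (GstEvent * event)
{
  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_EOS:
    case GST_EVENT_SEGMENT:
    case GST_EVENT_STREAM_START:   /* resets pad EOS state since 1.6 */
      return TRUE;
    default:
      return FALSE;
  }
}
```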
... and use the biggest interleave value among streaming threads.
This is to optimize multiqueue size adaptation in adaptive streaming
use cases with the "use-interleave" property.
https://bugzilla.gnome.org/show_bug.cgi?id=784448
When running in sync-by-running-time mode, pad groups
that contain exactly one pad which is not linked might never
wake up after computing a high time, as the per-pad-group
high time was only recomputed when a pad in the group
advanced.
Wake those up using the global multiqueue high-time across
all other groups instead.
https://bugzilla.gnome.org/show_bug.cgi?id=774322
low/high-watermark are of type double, and are given in the range 0.0-1.0. This
makes it possible to set low/high watermarks with greater resolution,
which is useful with large multiqueue max sizes and watermarks like 0.5%.
Also adding a test to check the fill and watermark level behavior.
https://bugzilla.gnome.org/show_bug.cgi?id=770628
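For example, with the double-typed properties a 0.5% watermark can be
expressed directly (the `multiqueue` variable is assumed to hold the
element instance):

```c
/* 0.5% low watermark and 99% high watermark on a large queue */
g_object_set (multiqueue,
    "use-buffering", TRUE,
    "low-watermark", 0.005,
    "high-watermark", 0.99,
    NULL);
```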
To make the code clearer, and to facilitate future improvements, introduce
a distinction between the buffering level and the buffering percentage.
Buffering level: the queue's current fill level. The low/high watermarks
are in this range.
Buffering percentage: percentage relative to the low/high watermarks
(0% = low watermark, 100% = high watermark).
To that end, get_percentage() is renamed to get_buffering_level(). Also,
low/high_percent are renamed to low/high_watermark to avoid confusion.
mq->buffering_percent values are now normalized in the 0..100 range for
buffering messages inside update_buffering(), and not just before sending
the buffering message. Finally the buffering level range is parameterized
by adding a new constant called MAX_BUFFERING_LEVEL.
https://bugzilla.gnome.org/show_bug.cgi?id=770628
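A sketch of the level-to-percentage mapping described above (the helper
name is illustrative; assumes high_watermark > low_watermark):

```c
#define MAX_BUFFERING_LEVEL 100

/* 0% at (or below) the low watermark, 100% at the high watermark */
static gint
buffering_level_to_percent (gint level, gint low_watermark,
    gint high_watermark)
{
  gint percent = ((level - low_watermark) * 100) /
      (high_watermark - low_watermark);
  return CLAMP (percent, 0, 100);
}
```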
When calculating the high_time, cache the group value in each singlequeue.
This fixes the issue whereby wake_up_next_non_linked() would use the global
high-time to decide whether to wake up a waiting thread, instead of the group
one, resulting in those threads constantly spinning.
Tidy up the waiting logic a bit while we're at it.
With this patch, we go from 212% CPU use when playing an 8 audio / 8 video
file down to less than 10% (most of it being the video decoding).
https://bugzilla.gnome.org/show_bug.cgi?id=770225
This is an update on commit c9b6848885 ("multiqueue: Fix not-linked
pad handling at EOS").
While that commit did fix the behaviour when upstream sent a GST_EVENT_EOS,
it broke the same case when *downstream* returns GST_FLOW_EOS
(which can happen for example when downstream decoders receive data
from after the segment stop).
GST_PAD_IS_EOS() is only TRUE once a GST_EVENT_EOS has flown through the
pad, not when downstream has returned GST_FLOW_EOS.
In order to handle both cases, also take into account the last flow
return.
https://bugzilla.gnome.org/show_bug.cgi?id=763770
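A sketch of the combined check (function and parameter names are
illustrative):

```c
/* A singlequeue is "at EOS" if we already pushed a GST_EVENT_EOS
 * downstream *or* downstream returned GST_FLOW_EOS. */
static gboolean
sq_is_eos (GstPad * srcpad, GstFlowReturn last_ret)
{
  return GST_PAD_IS_EOS (srcpad) || last_ret == GST_FLOW_EOS;
}
```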
When syncing by running time, multiqueue will throttle unlinked streams
based on a global "high-time" and the pending "next_time" of a stream.
The idea is that we don't want unlinked streams to be "behind" the global
running time of linked streams, so that if/when they get linked (like when
switching tracks) decoding/playback can resume from the same position as
the other streams.
The problem is that it assumes elements downstream will have a more or less
equal buffering/latency ... which isn't the case for streams of different
types. Video decoders tend to have higher latency (and therefore consume more
from upstream to output a given decoded frame) compared to audio ones, resulting
in the computed "high_time" being at the position of the video stream,
much further along than the audio streams.
This means the unlinked audio streams end up quite a bit ahead of the linked
audio streams, resulting in gaps when switching streams.
In order to mitigate this issue, this patch adds a new "group-id" pad property
which allows users to "group" streams together. Calculating the high-time will
now be done not only globally, but also per group. This ensures that within
a given group unlinked streams will be throttled by that group's high-time
instead.
This fixes gaps when switching downstream elements (like switching audio tracks).
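For example (pad variable names are illustrative; the "group-id" pad
property is the one this patch adds), grouping the audio streams so
they share a high-time:

```c
/* Both audio sink pads go into group 1, video into group 2 */
g_object_set (audio_sink_pad_1, "group-id", 1, NULL);
g_object_set (audio_sink_pad_2, "group-id", 1, NULL);
g_object_set (video_sink_pad, "group-id", 2, NULL);
```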
Basically, sq->max_size.visible is never increased for sparse streams in
overruncb when an empty queue has been found;
if the queue is sparse, it just skips the entire logic determining whether
max_size.visible should be increased, deadlocking the demuxer.
What should be done instead is to ignore the time limit for sparse streams
when determining whether the limits have been reached, as the buffer may be
far in the future.
https://bugzilla.gnome.org/show_bug.cgi?id=765736
This ensures the following special case is handled properly:
1. Queue is empty
2. Data is pushed, fill level is below the current high-threshold
3. high-threshold is set to a level that is below the current fill level
Since mq->percent wasn't being properly recalculated in step #3, this
caused the multiqueue to switch off its buffering state when new data was
pushed in, and to never post a 100% buffering message. The application will
have received a <100% buffering message from step #2, but will never see
100%.
Fix this by recalculating the current fill level percentage during
high-threshold property changes in the same manner as it is done when
use-buffering is modified.
https://bugzilla.gnome.org/show_bug.cgi?id=763757
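A sketch of the fix (function and field names are illustrative):

```c
/* When the high threshold changes, re-evaluate the fill level right
 * away, just as when use-buffering is modified. */
static void
set_high_threshold (GstMultiQueue * mq, gint new_high_percent)
{
  mq->high_percent = new_high_percent;
  update_buffering (mq, NULL);  /* may post the missing 100% message */
}
```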
Ensure that not-linked pads will drain out at EOS by
correctly detecting the EOS condition based on the EOS
pad flag (which indicates we actually pushed an EOS),
and make sure that not-linked pads are woken when doing
EOS processing on linked pads.
https://bugzilla.gnome.org/show_bug.cgi?id=763770
segment.position is meant for internal usage only, but the various
GST_EVENT_SEGMENT creation/parsing functions won't clear that field.
Use the appropriate segment boundary as an initial value instead.
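A sketch of the initialization, assuming a SEGMENT event has just been
received:

```c
GstSegment segment;
const GstSegment *parsed;

gst_event_parse_segment (event, &parsed);
gst_segment_copy_into (parsed, &segment);
/* internal-only field: seed it from the matching segment boundary */
segment.position = (segment.rate >= 0.0) ? segment.start : segment.stop;
```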
When synchronizing the output by time, there are some use-cases (like
allowing gapless playback downstream) where we want the unlinked streams
to stay slightly behind the linked streams.
The "unlinked-cache-time" property allows the user to specify by how
much time the unlinked streams should wait before pushing again.
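For example, to keep unlinked streams 250ms behind the linked ones (the
property takes nanoseconds):

```c
g_object_set (multiqueue, "unlinked-cache-time",
    (guint64) (250 * GST_MSECOND), NULL);
```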
Multiqueue should only be used to cope with:
* decoupling upstream and downstream threading (i.e. having separate threads
for elementary streams).
* Ensuring individual queues have enough space to cope with upstream interleave
(distance in stream time between co-located samples). This is to guarantee
that we have enough room in each individual queue to provide new data in
each, without being blocked.
* Limit the queue sizes to that interleave distance (and an extra minimal
buffering size). This is to ensure we don't consume too much memory.
Based on that, multiqueue now continuously calculates the input interleave
(per incoming streaming thread). From that, it calculates a target
interleave (currently 1.5 x real_interleave + 250ms padding).
If the target interleave is greater than the current max_size.time, it will
update it accordingly (to allow enough margin to not block).
If the target interleave goes down by more than 50%, we re-adjust it once
we know we have gone past a safe distance (2 x current max_size.time).
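A sketch of the target-interleave computation described above (names are
illustrative):

```c
#define INTERLEAVE_PADDING (250 * GST_MSECOND)

/* 1.5 x the measured interleave, plus a fixed safety padding */
static GstClockTime
compute_target_interleave (GstClockTime real_interleave)
{
  return real_interleave + real_interleave / 2 + INTERLEAVE_PADDING;
}
```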
This mode can only be used for incoming streams that are guaranteed to be
properly timestamped.
Furthermore, we ignore sparse streams when calculating interleave and maximum
size of queues.
For the simplest of use-cases (single stream), multiqueue acts as a single
queue with a time limit of 250ms.
If there are multiple inputs, but each comes from a different streaming thread,
the maximum time limit will also end up being 250ms.
On regular files (more than one input stream from the same upstream streaming
thread), it can reduce the total memory used by as much as 10x, ending up with
a max_size.time around 500ms.
Due to its adaptive nature, it can also cope with changing interleave (which
commonly happens on some files at startup/pre-roll time).
This will mean a much lower delay before a subtitle track change takes
effect. It also avoids excessive memory usage in many cases.
This will also consider sparse streams as (individually) never full, so
as to avoid blocking all playback due to one sparse stream.
https://bugzilla.gnome.org/show_bug.cgi?id=600648
* Avoid the computation completely if we know we don't need it (not in
sync time mode)
* Make sure we don't override highest time with GST_CLOCK_TIME_NONE on
unlinked pads
* Ensure the high_time gets properly updated if all pads are not linked
* Fix the comparison in the loop checking whether the target high time is the
same as the current time
* Split wake_up_next_non_linked method to avoid useless calculation
https://bugzilla.gnome.org/show_bug.cgi?id=757353
In order to accurately determine the amount (in time) of data
travelling in queues, we should use an increasing value.
If buffers are encoded and potentially reordered, we should be
using their DTS (increasing) and not their PTS (reordered).
https://bugzilla.gnome.org/show_bug.cgi?id=756507
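A sketch of the timestamp selection (equivalent to what the later
GST_BUFFER_DTS_OR_PTS() macro does):

```c
/* Prefer the (increasing) DTS; fall back to the PTS otherwise */
GstClockTime ts = GST_BUFFER_DTS_IS_VALID (buffer) ?
    GST_BUFFER_DTS (buffer) : GST_BUFFER_PTS (buffer);
```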
Previously this code was just blindly setting the cached flow return
of downstream to GST_FLOW_OK when we get a SEGMENT.
The problem is that this cannot be done blindly. If downstream was
not linked, the corresponding singlequeue source pad thread might be
waiting for the next ID on which to be woken up.
By blindly setting the cached return value to GST_FLOW_OK, if that
stream was the only one that was NOT_LINKED, then the next time we
check (from any other thread) to see if we need to wake up a source pad
thread ... we won't even try, because none of the cached flow returns
are equal to GST_FLOW_NOT_LINKED.
This would result in that thread never being woken up.
https://bugzilla.gnome.org/show_bug.cgi?id=756645
There is no reason I can see to set mq->buffering = TRUE when
use_buffering is set; the code here also calls update_buffering(), which
will set mq->buffering = TRUE if this is warranted because of low buffer
levels.
https://bugzilla.gnome.org/show_bug.cgi?id=745937
The percent value stored in multiqueue's queues is the percentage from 0
to 100 (of max-size-*) and should be compared with the requested limit
(high_percentage) set by the user, not with 100%, to check whether
buffering should stop. Otherwise we only stop buffering when the
queue gets completely full.
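A sketch of the corrected comparison (names are illustrative):

```c
/* Stop buffering at the user's threshold, not only at 100% */
static gboolean
should_stop_buffering (gint percent, gint high_percentage)
{
  return percent >= high_percentage;
}
```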
If we are pushing a serialized query into a queue and the queue is
filled, we will end up in a deadlock. We need to release the lock before
pushing and acquire it again afterwards.
https://bugzilla.gnome.org/show_bug.cgi?id=737794
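A sketch of the locking pattern (internal types, lock, and field names
are illustrative):

```c
/* Release the multiqueue lock around the blocking push: a filled
 * queue would otherwise deadlock against the streaming thread. */
static gboolean
push_item_unlocked (GstMultiQueue * mq, GstSingleQueue * sq,
    GstDataQueueItem * item)
{
  gboolean res;

  g_mutex_unlock (&mq->qlock);
  res = gst_data_queue_push (sq->queue, item);
  g_mutex_lock (&mq->qlock);

  return res;
}
```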