When enabled, this property will make the allocation query fail. This is
the same behaviour one could get by using a tee before tee started
implementing the allocation query.
https://bugzilla.gnome.org/show_bug.cgi?id=730758
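A rough sketch of how such a property can be honoured in a pad query
function; the element type, field and handler names here are
illustrative, not the actual implementation:

    #include <gst/gst.h>

    /* fail ALLOCATION queries when the (hypothetical) drop_allocation
     * field backing the new property is set */
    static gboolean
    my_element_sink_query (GstPad * pad, GstObject * parent, GstQuery * query)
    {
      MyElement *self = MY_ELEMENT (parent);

      switch (GST_QUERY_TYPE (query)) {
        case GST_QUERY_ALLOCATION:
          if (self->drop_allocation)
            return FALSE;           /* make the allocation query fail */
          break;
        default:
          break;
      }
      return gst_pad_query_default (pad, parent, query);
    }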
Otherwise we might try unscheduling a clock id that does not exist
yet; the streaming thread then waits on that id and the state change
never continues because the streaming thread is blocked.
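A minimal sketch of the pattern being described, with illustrative
type and field names (MyElement, self->flushing, self->clock_id): only
unschedule a clock id the streaming thread actually created, and have
the streaming thread re-check the flushing flag under the same lock.

    #include <gst/gst.h>

    /* state-change / unlock side */
    static void
    my_element_unlock (MyElement * self)
    {
      GST_OBJECT_LOCK (self);
      self->flushing = TRUE;
      /* only unschedule an id that actually exists */
      if (self->clock_id != NULL)
        gst_clock_id_unschedule (self->clock_id);
      GST_OBJECT_UNLOCK (self);
    }

    /* streaming-thread side */
    static GstFlowReturn
    my_element_wait (MyElement * self, GstClock * clock, GstClockTime deadline)
    {
      GST_OBJECT_LOCK (self);
      if (self->flushing) {
        GST_OBJECT_UNLOCK (self);
        return GST_FLOW_FLUSHING;
      }
      self->clock_id = gst_clock_new_single_shot_id (clock, deadline);
      GST_OBJECT_UNLOCK (self);

      gst_clock_id_wait (self->clock_id, NULL);
      return GST_FLOW_OK;
    }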
Also, shutting down, flushing and similar situations should return
FLUSHING, not EOS. The stream is not over, we're just not accepting
any buffers at the moment.
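As a simplified illustration of that distinction (not the actual
element code; MyElement and its fields are stand-ins):

    #include <gst/gst.h>

    static GstFlowReturn
    my_element_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
    {
      MyElement *self = MY_ELEMENT (parent);

      if (self->flushing) {
        gst_buffer_unref (buf);
        /* the stream is not over, we just don't accept buffers now */
        return GST_FLOW_FLUSHING;
      }
      if (self->is_eos) {
        gst_buffer_unref (buf);
        return GST_FLOW_EOS;
      }
      return gst_pad_push (self->srcpad, buf);
    }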
After EOS, it is possible for a pad to be reset by sending
either a STREAM_START or SEGMENT event
Mimic the same behaviour when receiving STREAM_START/SEGMENT events
in queue if we are EOS'd
https://bugzilla.gnome.org/show_bug.cgi?id=786056
After EOS, it is possible for a pad to be reset by sending
either a STREAM_START or SEGMENT event
Mimic the same behaviour when receiving STREAM_START/SEGMENT events
in queue2 if we are EOS'd
https://bugzilla.gnome.org/show_bug.cgi?id=786056
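For both queue and queue2 the idea looks roughly like this; the field
srcresult mirroring the last downstream flow return is how queue-like
elements typically track the EOS situation, but the names here are
illustrative:

    #include <gst/gst.h>

    static gboolean
    my_queue_sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
    {
      MyQueue *self = MY_QUEUE (parent);

      switch (GST_EVENT_TYPE (event)) {
        case GST_EVENT_STREAM_START:
        case GST_EVENT_SEGMENT:
          GST_OBJECT_LOCK (self);
          if (self->srcresult == GST_FLOW_EOS)
            self->srcresult = GST_FLOW_OK;  /* pad was reset, leave EOS */
          GST_OBJECT_UNLOCK (self);
          break;
        default:
          break;
      }
      return gst_pad_event_default (pad, parent, event);
    }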
When queue-like elements are in an "EOS" situation (GST_FLOW_EOS was
received from downstream or EOS was pushed), they drain buffers/events
that wouldn't be processed anyway and let through events that might
modify the EOS situation.
Previously only GST_EVENT_EOS and GST_EVENT_SEGMENT events were let
through, but we also need to let GST_EVENT_STREAM_START through, since
it resets the EOS state of pads since 1.6.
https://bugzilla.gnome.org/show_bug.cgi?id=786034
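Sketch of the event filter this describes: while in the EOS situation,
serialized events are dropped except those that can still change that
situation, now including GST_EVENT_STREAM_START.

    #include <gst/gst.h>

    /* returns TRUE if the event may still modify the EOS situation and
     * therefore must be let through instead of being dropped */
    static gboolean
    event_may_change_eos (GstEvent * event)
    {
      switch (GST_EVENT_TYPE (event)) {
        case GST_EVENT_EOS:
        case GST_EVENT_SEGMENT:
        case GST_EVENT_STREAM_START:  /* resets pad EOS state since 1.6 */
          return TRUE;
        default:
          return FALSE;
      }
    }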
When downstream returns NOT_LINKED, we return the buffering level
as being 100%.
Since the queue is no longer being consumed/used downstream, we
want applications to essentially "ignore" this queue for buffering
purposes.
If other streams are still being used, their buffering levels will be
used instead. If none are in use, upstream will post an error message
on the bus indicating that no streams are used.
https://bugzilla.gnome.org/show_bug.cgi?id=785799
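A simplified sketch of the per-queue level computation; the real
multiqueue code is more involved, but the key point is that a
NOT_LINKED stream reports a full buffering level so it is effectively
ignored when aggregating:

    #include <gst/gst.h>

    #define MAX_BUFFERING_LEVEL 100

    static gint
    get_buffering_level (GstFlowReturn srcresult, guint64 cur_bytes,
        guint64 max_bytes)
    {
      guint64 level;

      if (srcresult == GST_FLOW_NOT_LINKED)
        return MAX_BUFFERING_LEVEL; /* ignore this queue for buffering */

      if (max_bytes == 0)
        return 0;                   /* guard against a zero limit */

      level = gst_util_uint64_scale (cur_bytes, MAX_BUFFERING_LEVEL,
          max_bytes);
      return (gint) MIN (level, MAX_BUFFERING_LEVEL);
    }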
... and use the biggest interleave value among streaming threads.
This optimizes multiqueue size adaptation in the adaptive streaming
use case with the "use-interleave" property.
https://bugzilla.gnome.org/show_bug.cgi?id=784448
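The idea in sketch form, with a hypothetical per-thread array of
interleave values: pick the largest one when adapting the queue size.

    #include <gst/gst.h>

    static GstClockTimeDiff
    get_max_interleave (const GstClockTimeDiff * interleaves, guint n_threads)
    {
      GstClockTimeDiff max = 0;
      guint i;

      /* use the biggest interleave among all streaming threads */
      for (i = 0; i < n_threads; i++)
        max = MAX (max, interleaves[i]);

      return max;
    }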
Return the correct flow return instead of always returning flushing.
The old behaviour would cause queue to convert not-linked into
flushing, making upstream elements stop.
Based on the previous patch for queue2.
https://bugzilla.gnome.org/show_bug.cgi?id=776999
Return the correct flow return instead of always returning flushing.
The old behaviour would cause queue2 to convert not-linked into
flushing, making upstream elements stop.
https://bugzilla.gnome.org/show_bug.cgi?id=776999
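For both queue and queue2, the shape of the fix is to propagate the
stored downstream flow return rather than hard-coding FLUSHING; a
hedged sketch with illustrative names (MyQueue, srcresult,
my_queue_enqueue):

    #include <gst/gst.h>

    static GstFlowReturn
    my_queue_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
    {
      MyQueue *self = MY_QUEUE (parent);

      GST_OBJECT_LOCK (self);
      if (self->srcresult != GST_FLOW_OK) {
        GstFlowReturn ret = self->srcresult;

        GST_OBJECT_UNLOCK (self);
        gst_buffer_unref (buf);
        return ret;                 /* e.g. NOT_LINKED, not always FLUSHING */
      }
      GST_OBJECT_UNLOCK (self);

      return my_queue_enqueue (self, buf);  /* hypothetical helper */
    }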
The active-pad switch sends a reconfigure event with the lock taken,
and the upstream element might query the current position or duration
before returning from the reconfigure event.
Meanwhile, gst_input_selector_get_linked_pad() is used to get the
srcpad inside the default query handler, and it also takes the lock.
Since the inputselector is still locked by the active-pad switch, the
query cannot be handled any further.
https://bugzilla.gnome.org/show_bug.cgi?id=775445
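Schematically, the fix amounts to not holding the selector lock while
pushing the reconfigure event; this is an illustrative sketch (the
MySelector type, its lock field and active_sinkpad field are
assumptions), not the actual inputselector code:

    #include <gst/gst.h>

    static void
    switch_active_pad (MySelector * sel, GstPad * new_sinkpad)
    {
      GstEvent *event;

      g_mutex_lock (&sel->lock);
      sel->active_sinkpad = new_sinkpad;
      g_mutex_unlock (&sel->lock);

      /* only push upstream once the lock is no longer held, since
       * upstream may react with a position/duration query whose default
       * handling needs the same lock */
      event = gst_event_new_reconfigure ();
      gst_pad_push_event (new_sinkpad, event);
    }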
Before emitting have-type, switch to NORMAL mode, since part of the
have-type processing sends the caps event downstream, which might
trigger actions like downstream autoplugging or flushing seeks - and
the latter are only passed upstream if we've set typefind to NORMAL
mode.
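A hedged sketch of that ordering (MyTypeFind, its mode field and
MODE_NORMAL are illustrative stand-ins for typefind's internal
operating mode, not the real symbols):

    #include <gst/gst.h>

    static void
    emit_have_type (MyTypeFind * tf, guint probability, GstCaps * caps)
    {
      /* switch to NORMAL mode first, so that a flushing seek triggered
       * by downstream reacting to the caps event is passed upstream */
      tf->mode = MODE_NORMAL;
      g_signal_emit_by_name (tf, "have-type", probability, caps);
    }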
It might happen that the srcpad task function is never called at all, in
which case unlocking everything from there will never happen.
Make sure to unlock everything once more after the task function has
definitely stopped.
https://bugzilla.gnome.org/show_bug.cgi?id=776039
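A rough sketch of the shutdown path, with illustrative fields
(srcpad, waiting, cond): do the unlocking again after the task has
definitely stopped, in case the task function never ran.

    #include <gst/gst.h>

    static void
    my_element_stop (MyElement * self)
    {
      /* the task function may never have been called at all */
      gst_pad_stop_task (self->srcpad);

      /* release everything the task function would normally release
       * on shutdown, now that it is definitely stopped */
      GST_OBJECT_LOCK (self);
      self->waiting = FALSE;
      g_cond_broadcast (&self->cond);
      GST_OBJECT_UNLOCK (self);
    }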
The correct behaviour of anything stuck in the ->render() function
between ->unlock() and ->unlock_stop() is to call
gst_base_sink_wait_preroll() and only return an error if that returns
an error; otherwise it must continue where it left off!
https://bugzilla.gnome.org/show_bug.cgi?id=773912
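A sketch of a render() implementation following that rule;
gst_base_sink_wait_preroll() is the real base-class call, while the
blocking helper and the MySink type are hypothetical:

    #include <gst/gst.h>
    #include <gst/base/gstbasesink.h>

    static GstFlowReturn
    my_sink_render (GstBaseSink * sink, GstBuffer * buffer)
    {
      MySink *self = MY_SINK (sink);

    again:
      /* blocking write that ->unlock() can interrupt (hypothetical) */
      if (!my_sink_blocking_write (self, buffer)) {
        GstFlowReturn ret = gst_base_sink_wait_preroll (sink);

        if (ret != GST_FLOW_OK)
          return ret;       /* only bail out if wait_preroll() failed */
        goto again;         /* otherwise continue where we left off */
      }
      return GST_FLOW_OK;
    }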
When running in sync-by-running-time mode, pad groups that have
exactly one pad which is not-linked might never wake up after
computing a high time, as the per-pad-group high time was only
recomputed when a pad in the group advanced.
Wake those up using the global multiqueue high-time across
all other groups instead.
https://bugzilla.gnome.org/show_bug.cgi?id=774322
When subtracting queued data sizes from upstream queries in queue,
queue2, downloadbuffer and typefind, clamp the result so it does not
go negative, in case upstream returned a nonsense value that is too
small (as could happen if upstream is estimating, or just broken).
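The clamping itself is simple; a minimal sketch:

    #include <gst/gst.h>

    /* never report a negative value if upstream's answer is smaller
     * than what we have queued ourselves */
    static gint64
    adjust_upstream_value (gint64 upstream_value, gint64 queued)
    {
      if (upstream_value >= queued)
        return upstream_value - queued;

      /* upstream returned a nonsense (too small) value, clamp to 0 */
      return 0;
    }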
Implement handling in basesink to not unconditionally discard
out-of-segment buffers, and expose it as a new property on fakesink
(rather than unconditionally in all basesink-based sinks).
The property defaults to FALSE.
https://bugzilla.gnome.org/show_bug.cgi?id=765734
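A usage sketch; the property name used below ("drop-out-of-segment")
is an assumption, check the fakesink documentation for the actual
name and default:

    #include <gst/gst.h>

    GstElement *sink;

    sink = gst_element_factory_make ("fakesink", NULL);
    /* property name is an assumption; it controls whether buffers
     * outside the configured segment are dropped by the sink */
    g_object_set (sink, "drop-out-of-segment", FALSE, NULL);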
Otherwise downstream will have an inconsistent set of sticky events at this
point, e.g. when a TAG event is pushed and downstream wants to relate it to
the stream by looking at the current STREAM_START event.
https://bugzilla.gnome.org/show_bug.cgi?id=768526
On the first buffer, it's possible that sink_segment is set but
src_segment has not been set yet. If this is the case, we should not
calculate cur_level.time, since sink_segment.position may be large
while src_segment.position defaults to 0, with the resulting diff
being larger than max-size-time and causing the queue to start leaking
(if leaky=downstream).
One potential consequence of this is that the segment event may be
stored on the srcpad before the caps event is pushed downstream, causing
a g_warning ("Sticky event misordering, got 'segment' before 'caps'").
https://bugzilla.gnome.org/show_bug.cgi?id=773096
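A hedged sketch of the guard (field names such as src_segment_set and
cur_level_time are illustrative, not the actual queue fields):

    #include <gst/gst.h>

    static void
    update_time_level (MyQueue * queue)
    {
      guint64 sink_time, src_time;

      if (!queue->src_segment_set) {
        /* nothing has left the queue yet; don't compare against a
         * default src_segment.position of 0, or the computed level
         * may wrongly exceed max-size-time */
        queue->cur_level_time = 0;
        return;
      }

      sink_time = gst_segment_to_running_time (&queue->sink_segment,
          GST_FORMAT_TIME, queue->sink_segment.position);
      src_time = gst_segment_to_running_time (&queue->src_segment,
          GST_FORMAT_TIME, queue->src_segment.position);

      if (sink_time > src_time)
        queue->cur_level_time = sink_time - src_time;
      else
        queue->cur_level_time = 0;
    }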
low/high-watermark are of type double and given in the range 0.0-1.0.
This makes it possible to set low/high watermarks with greater
resolution, which is useful with large multiqueue max sizes and
watermarks like 0.5%.
Also add a test to check the fill and watermark level behaviour.
https://bugzilla.gnome.org/show_bug.cgi?id=770628
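Usage sketch, assuming the properties are exposed as "low-watermark"
and "high-watermark" as described above (shown on multiqueue):

    #include <gst/gst.h>

    GstElement *mq;

    mq = gst_element_factory_make ("multiqueue", NULL);
    /* 0.5% low watermark, 99% high watermark, as doubles in 0.0-1.0 */
    g_object_set (mq,
        "low-watermark", 0.005,
        "high-watermark", 0.99,
        NULL);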
To make the code clearer, and to facilitate future improvements, introduce
a distinction between the buffering level and the buffering percentage.
Buffering level: the queue's current fill level. The low/high watermarks
are in this range.
Buffering percentage: percentage relative to the low/high watermarks
(0% = low watermark, 100% = high watermark).
To that end, get_percentage() is renamed to get_buffering_level(). Also,
low/high_percent are renamed to low/high_watermark to avoid confusion.
mq->buffering_percent values are now normalized in the 0..100 range for
buffering messages inside update_buffering(), and not just before sending
the buffering message. Finally the buffering level range is parameterized
by adding a new constant called MAX_BUFFERING_LEVEL.
https://bugzilla.gnome.org/show_bug.cgi?id=770628
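An illustrative conversion following the mapping described above
(0% = low watermark, 100% = high watermark), clamped to 0..100; this
is a sketch of the distinction, not the actual multiqueue code:

    #include <gst/gst.h>

    #define MAX_BUFFERING_LEVEL 100

    static gint
    buffering_level_to_percent (gint level, gint low_watermark,
        gint high_watermark)
    {
      if (level <= low_watermark)
        return 0;
      if (level >= high_watermark)
        return 100;

      return (gint) gst_util_uint64_scale (level - low_watermark, 100,
          high_watermark - low_watermark);
    }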