When we want to perform a downstream bitrate query, just
set the reconfigure flag on the srcpad and get the streaming
thread to do it. That avoids emitting a downstream query
when receiving the upstream RECONFIGURE event - which can
lead to deadlocks if downstream is sending the event from
within a lock - e.g. input-selector.
If querying the downstream bitrate changes the cached
value, then make sure to update our buffering state
and potentially post a BUFFERING message to the application.
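Roughly, the approach looks like this (a sketch with illustrative function names, not queue2's actual code):

```c
#include <gst/gst.h>

/* Sketch only; src_event()/src_loop() are illustrative names. */
static gboolean
src_event (GstPad * pad, GstObject * parent, GstEvent * event)
{
  if (GST_EVENT_TYPE (event) == GST_EVENT_RECONFIGURE) {
    /* Don't query downstream from here: the sender may hold a lock
     * (e.g. input-selector). Just flag the pad and let the streaming
     * thread pick it up. */
    gst_pad_mark_reconfigure (pad);
  }
  return gst_pad_event_default (pad, parent, event);
}

static void
src_loop (GstPad * pad)
{
  if (gst_pad_check_reconfigure (pad)) {
    GstQuery *query = gst_query_new_bitrate ();

    if (gst_pad_peer_query (pad, query)) {
      guint bitrate = 0;

      gst_query_parse_bitrate (query, &bitrate);
      /* If the cached bitrate changed: recompute the buffering state
       * and post a BUFFERING message if needed. */
    }
    gst_query_unref (query);
  }
  /* ... regular dequeue-and-push work ... */
}
```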
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/566
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/501>
When posting a buffering message successfully:
* Remember the *actual* percentage value that was posted
* Make sure we only reset the percent_changed variable if the value we just
posted is indeed different from the current value
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/511>
This is a follow-up to review comments in !297
+ The posting of the buffering message in READY_TO_PAUSED isn't
needed; removing it made the test fail, but the correct fix
was simply to link the elements together
+ Move code to relock the queue and set last_posted_buffering_percent
and percent_changed inside the buffering_post_lock in create_write().
This makes locking consistent with post_buffering()
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/297>
This fixes a bug that occurs when an attempt is made to post a buffering
message before queue2 has been assigned a bus. One common situation where
this happens is when the use-buffering property is set to TRUE before
queue2 is added to a bin.
If the result of gst_element_post_message() is not checked, and the
aforementioned situation occurs, then last_posted_buffering_percent and
percent_changed will still be updated, as if posting the message succeeded.
Later attempts to post again will not do anything because the code then
assumes that a message with the same percentage was previously posted
successfully and posting again is redundant.
Updating these variables only if posting succeeds, and explicitly
posting a buffering message in the READY->PAUSED state change, ensures
that a buffering message is posted as early as possible.
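A minimal sketch of that check, roughly as it would look inside post_buffering() (field names as in the description above):

```c
/* Posting fails while the element has no bus yet (e.g. not yet added
 * to a bin). */
msg = gst_message_new_buffering (GST_OBJECT_CAST (queue), percent);

if (gst_element_post_message (GST_ELEMENT_CAST (queue), msg)) {
  queue->last_posted_buffering_percent = percent;
  queue->percent_changed = FALSE;
}
/* else: leave percent_changed set so a later attempt posts it again */
```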
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/297>
It is not explicitly specified anywhere in the docs that 0% buffering is
at low-watermark and 100% buffering is at high-watermark. It was
specified only in the sources.
`g_object_notify()` actually takes a global lock to look up the
`GParamSpec` that corresponds to the given property name. It's not a
huge performance hit, but it's easily avoidable by using the
`_by_pspec()` variant.
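For illustration, the usual `_by_pspec()` pattern looks roughly like this (`MyElement` and its property are made up for the example, not queue2 code):

```c
#include <glib-object.h>

typedef struct { GObject parent; guint cur_level_bytes; } MyElement;
typedef struct { GObjectClass parent_class; } MyElementClass;

enum { PROP_0, PROP_CURRENT_LEVEL_BYTES, N_PROPERTIES };
static GParamSpec *obj_properties[N_PROPERTIES] = { NULL, };

G_DEFINE_TYPE (MyElement, my_element, G_TYPE_OBJECT);

static void
my_element_get_property (GObject * object, guint prop_id, GValue * value,
    GParamSpec * pspec)
{
  MyElement *self = (MyElement *) object;

  if (prop_id == PROP_CURRENT_LEVEL_BYTES)
    g_value_set_uint (value, self->cur_level_bytes);
  else
    G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
}

static void
my_element_class_init (MyElementClass * klass)
{
  GObjectClass *gobject_class = G_OBJECT_CLASS (klass);

  gobject_class->get_property = my_element_get_property;
  obj_properties[PROP_CURRENT_LEVEL_BYTES] =
      g_param_spec_uint ("current-level-bytes", "Current level (bytes)",
      "Number of bytes currently queued", 0, G_MAXUINT, 0,
      G_PARAM_READABLE | G_PARAM_STATIC_STRINGS);
  g_object_class_install_properties (gobject_class, N_PROPERTIES,
      obj_properties);
}

static void
my_element_init (MyElement * self)
{
  self->cur_level_bytes = 0;
}

static void
my_element_set_level (MyElement * self, guint bytes)
{
  self->cur_level_bytes = bytes;
  /* No global name -> GParamSpec lookup, unlike
   * g_object_notify (G_OBJECT (self), "current-level-bytes"). */
  g_object_notify_by_pspec (G_OBJECT (self),
      obj_properties[PROP_CURRENT_LEVEL_BYTES]);
}
```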
Allows determining from downstream what the expected bitrate of a stream
may be, which is useful in queue2 for setting time-based limits when
upstream does not provide timing information.
Implement bitrate query handling in queue2
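For example, assuming the cached bitrate (in bits per second) comes from a successful downstream BITRATE query, a byte size can be turned into a duration roughly like this (sketch, not queue2's exact code):

```c
#include <gst/gst.h>

/* Convert a byte count into a duration using the queried bitrate,
 * for time-based limits when buffers carry no timestamps. */
static GstClockTime
size_to_duration (guint64 size_bytes, guint bitrate)
{
  if (bitrate == 0)
    return GST_CLOCK_TIME_NONE;

  return gst_util_uint64_scale (size_bytes, 8 * GST_SECOND, bitrate);
}
```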
https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/60
If upstream is pushing buffers larger than our limits, only 1 buffer
is ever in the queue at a time. Once that single buffer has left the
queue, a 0% buffering message would be posted followed immediately by a
100% buffering message when the next buffer was inserted into the queue
a very short time later. As per the recommendations, this would result
in the application pausing for a short while, causing the appearance of
a short stutter.
The first step of a solution involves not posting a buffering message if
there is still data waiting on the sink pad for insertion into the queue.
This successfully stops the 0% messages from being posted; however, a
message is still posted on each transition to 100% when the new buffer
arrives, resulting in a string of 100% buffering messages. We silence
these by storing the last posted buffering percentage and only posting a
new message when it is different from our last posted message.
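A sketch of the resulting posting rule (struct and field names are illustrative):

```c
#include <gst/gst.h>

/* Illustrative stand-in for queue2's instance struct. */
typedef struct
{
  GstElement element;
  gint last_posted_buffering_percent;
} Queue2Sketch;

static void
update_buffering (Queue2Sketch * queue, gint percent,
    gboolean data_pending_on_sinkpad)
{
  /* Don't claim we ran dry while the next buffer is already waiting
   * on the sink pad for insertion into the queue. */
  if (percent == 0 && data_pending_on_sinkpad)
    return;

  /* Don't post a string of identical (e.g. 100%) messages. */
  if (percent == queue->last_posted_buffering_percent)
    return;

  /* ... post the BUFFERING message and, if posting succeeded, remember
   * the percentage (see the posting sketch further up) ... */
}
```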
If we ever get a GST_FLOW_EOS from downstream, we might retry
pushing new data. But if pushing that data doesn't return a
GstFlowReturn (such as pushing events), we would end up returning
the previous GstFlowReturn (i.e. EOS).
Not properly resetting it would cause cases where queue2 would
stop pushing on the first GstEvent stored (even if there is more
data contained within).
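Conceptually, the fix looks like this (a fragment with illustrative variable names, not the exact queue2 code):

```c
/* 'srcresult' holds the last GstFlowReturn from pushing downstream. */
if (GST_IS_EVENT (item)) {
  gst_pad_push_event (srcpad, GST_EVENT_CAST (item));
  /* Pushing an event yields no GstFlowReturn: don't keep returning a
   * stale GST_FLOW_EOS from an earlier buffer push. */
  if (srcresult == GST_FLOW_EOS)
    srcresult = GST_FLOW_OK;
} else if (GST_IS_BUFFER (item)) {
  srcresult = gst_pad_push (srcpad, GST_BUFFER_CAST (item));
}
```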
When using queue2 as a queue it was using GQueue with
individually allocated queue items, so two allocs for
each item. With GstQueueArray we can avoid those.
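For illustration, items stored inline in a GstQueueArray (the QueueItem struct is made up for the example):

```c
#include <gst/gst.h>
#include <gst/base/gstqueuearray.h>

/* Illustrative inline item; one allocation-free slot per queued item. */
typedef struct
{
  GstMiniObject *item;          /* buffer, event or query */
  guint64 size;
} QueueItem;

static void
example (void)
{
  GstQueueArray *q = gst_queue_array_new_for_struct (sizeof (QueueItem), 32);
  QueueItem it = { NULL, 0 };

  /* The struct is copied into the array: no per-item GQueue node. */
  gst_queue_array_push_tail_struct (q, &it);

  while (gst_queue_array_get_length (q) > 0) {
    QueueItem *head = gst_queue_array_pop_head_struct (q);

    if (head->item)
      gst_mini_object_unref (head->item);
  }
  gst_queue_array_free (q);
}
```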
https://bugzilla.gnome.org/show_bug.cgi?id=796483
After EOS, it is possible for a pad to be reset by sending
either a STREAM_START or SEGMENT event.
Mimic the same behaviour when receiving STREAM_START/SEGMENT events
in queue2 if we are EOS'd
https://bugzilla.gnome.org/show_bug.cgi?id=786056
When queue-like elements are in "EOS" situation (received GST_FLOW_EOS
from downstream or EOS was pushed), they drain buffers/events that
wouldn't be processed anyway and let through events that might
modify the EOS situation.
Previously only GST_EVENT_EOS and GST_EVENT_SEGMENT events were let
through, but we also need to allow GST_EVENT_STREAM_START to go
through, because since 1.6 it resets the EOS state of pads.
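A sketch of the serialized-event handling while in that EOS situation:

```c
/* Inside the sink pad event handler, when the queue is EOS'd. */
switch (GST_EVENT_TYPE (event)) {
  case GST_EVENT_EOS:
  case GST_EVENT_SEGMENT:
  case GST_EVENT_STREAM_START:
    /* Might modify the EOS situation: let it through / queue it. */
    break;
  default:
    /* Would never be processed downstream anyway: drop it. */
    gst_event_unref (event);
    return TRUE;
}
```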
https://bugzilla.gnome.org/show_bug.cgi?id=786034
When downstream returns NOT_LINKED, we return the buffering level
as being 100%.
Since the queue is no longer being consumed/used downstream, we
want applications to essentially "ignore" this queue for buffering
purposes.
If other streams are still being used, those stream buffering levels
will be used. If none are used, upstream will post an error message
on the bus indicating no streams are used.
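Conceptually, in the buffering-level calculation (sketch; 'queue' and 'percent' are illustrative):

```c
if (queue->srcresult == GST_FLOW_NOT_LINKED) {
  /* Downstream isn't consuming this stream: report it as full so it
   * doesn't hold back the overall buffering level. */
  *percent = 100;
  return;
}
```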
https://bugzilla.gnome.org/show_bug.cgi?id=785799
Return the correct flow return instead of always returning flushing.
Returning flushing would cause queue2 to convert not-linked to flushing
and make upstream elements stop.
https://bugzilla.gnome.org/show_bug.cgi?id=776999
It might happen that the srcpad task function is never called at all, in
which case unlocking everything from there will never happen.
Make sure to unlock everything another time after the task function is
definitely stopped.
https://bugzilla.gnome.org/show_bug.cgi?id=776039
When subtracting queued data sizes from upstream queries
in queue, queue2, downloadbuffer and typefind, clamp the
result to not go negative, in case upstream returned
a nonsense value that's too small (as could happen if
upstream is estimating, or just broken)
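i.e. roughly (variable names are illustrative):

```c
/* Clamp so a too-small upstream value can't wrap around to a huge
 * unsigned number. */
if (peer_size >= queued_bytes)
  peer_size -= queued_bytes;
else
  peer_size = 0;
```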
low/high-watermark are of type double, and given in range 0.0-1.0. This
makes it possible to set low/high watermarks with greater resolution,
which is useful with large queue2 max sizes and watermarks like 0.5%.
Also adding a test to check the fill and watermark level behavior.
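For example (a sketch; the element instance and values are illustrative):

```c
GstElement *queue2 = gst_element_factory_make ("queue2", NULL);

/* 0.5% low and 99% high watermark; the old integer low-percent /
 * high-percent properties could not express the 0.5%. */
g_object_set (queue2,
    "use-buffering", TRUE,
    "max-size-bytes", (guint) (256 * 1024 * 1024),
    "low-watermark", 0.005,
    "high-watermark", 0.99,
    NULL);
```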
https://bugzilla.gnome.org/show_bug.cgi?id=769449
To make the code clearer, and to facilitate future improvements, introduce
a distinction between the buffering level and the buffering percentage.
Buffering level: the queue's current fill level. The low/high watermarks
are in this range.
Buffering percentage: percentage relative to the low/high watermarks
(0% = low watermark, 100% = high watermark).
To that end, get_buffering_percent() is renamed to get_buffering_level(),
and the code at the end that transforms to the buffering percentage is
factored out into a new convert_to_buffering_percent() function. Also,
the buffering level range is parameterized by adding a new constant called
MAX_BUFFERING_LEVEL.
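A sketch of that mapping (not queue2's exact implementation):

```c
/* 0% at the low watermark, 100% at the high watermark. */
static gint
convert_to_buffering_percent (gint64 level, gint64 low_level,
    gint64 high_level)
{
  if (level <= low_level)
    return 0;
  if (level >= high_level)
    return 100;

  return (gint) ((100 * (level - low_level)) / (high_level - low_level));
}
```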
https://bugzilla.gnome.org/show_bug.cgi?id=769449
In ringbuffer mode we need to make sure we post buffering messages *before*
blocking to wait for data to be drained.
Without this, we would end up in situations like this:
* pipeline is pre-rolling
* Downstream demuxer/decoder has pushed data to all sinks, and demuxer thread
is blocking downstream (i.e. not pulling from upstream/queue2).
* Therefore pipeline has pre-rolled ...
* ... but queue2 hasn't filled up yet, therefore the application waits for
the buffering 100% messages before setting the pipeline to PLAYING
* But queue2 can't post that message, since the 100% message will be posted
*after* there is room available for that last buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=769802
When dealing with small-ish input data coming into queue2, such as
adaptivedemux fragments, we would never take into account the last
<200ms of data coming in.
The problem is that usually on a TCP connection the download rate
gradually increases (i.e. the rate is lower at the beginning of a
download than it is later on). Combined with small download time (less
than a second) we would end up with a computed average input rate
which was sometimes up to 30-50% off from the *actual* average input
rate for that fragment.
In order to fix this, force the average input rate calculation when
we receive an EOS so that we take into account that final window
of data.
https://bugzilla.gnome.org/show_bug.cgi?id=768649
Ensure we do not attempt to destroy the current range. Doing so
causes the current one to be left dangling, and it may be dereferenced
later, leading to a crash.
This can happen with a very small queue2 ring buffer (10000 bytes)
and 4 kB buffers.
repro case:
gst-launch-1.0 fakesrc sizetype=2 sizemax=4096 ! \
queue2 ring-buffer-max-size=1000 ! fakesink sync=true
https://bugzilla.gnome.org/show_bug.cgi?id=767688
... when flushing and deactivating pads. Otherwise downstream might have a
query that was already unreffed by upstream, causing crashes or other
interesting effects.
https://bugzilla.gnome.org/show_bug.cgi?id=763496
They can fail for various reasons.
For non-fatal cases (such as the dump feature of identity and fakesink),
we just silently skip it.
For other cases, post an error message.
https://bugzilla.gnome.org/show_bug.cgi?id=728326
The use-tags-bitrate property makes queue2 look at
tag events in the stream and extract a bitrate for the
stream to use when calculating a duration for buffers
that don't have one explicitly set.
This lets queue2 sensibly buffer to a time threshold
for any bytestream for which the general bitrate is known.
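For example (a sketch; 'queue2' is an already-created queue2 instance and the values are illustrative):

```c
g_object_set (queue2,
    "use-buffering", TRUE,
    "use-tags-bitrate", TRUE,
    "max-size-time", 5 * GST_SECOND,
    NULL);
```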
When preparing a buffering message, don't report 0% if there
are any bytes left in the queue at all. We still have something
to push, so don't tell the app to start buffering - maybe
we'll get more data before actually running dry.
The input of queue/queue2 might have DTS set, in which case we want
to take that into account (instead of the PTS) to calculate position
and queue levels.
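Conceptually (a sketch; update_levels() is an illustrative placeholder):

```c
/* Prefer DTS over PTS for position/level accounting. */
GstClockTime ts = GST_BUFFER_DTS_IS_VALID (buffer) ?
    GST_BUFFER_DTS (buffer) : GST_BUFFER_PTS (buffer);

if (GST_CLOCK_TIME_IS_VALID (ts))
  update_levels (queue, ts);
```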
https://bugzilla.gnome.org/show_bug.cgi?id=756507
This caused problems with oggdemux when queue2 was
operating in queue mode and the souphttpsrc upstream
was not seekable because the server didn't support
range requests. It would then still claim seekability
and things would go wrong from there.
This reverts commit 7b0b93dafe.
https://bugzilla.gnome.org/show_bug.cgi?id=753887
upstream_size can be negative, but queue->upstream_size is of an unsigned
type. To get a chance to update queue->upstream_size in
gst_queue2_get_range(), it should keep the default value.
https://bugzilla.gnome.org/show_bug.cgi?id=753011
Add QUERY_SEEKING handling to queue2, so RTMP live streams become
seekable when a queue2 in download or ringbuffer mode is inserted:
rtmpsrc ! queue2 ! flvdemux
https://bugzilla.gnome.org/show_bug.cgi?id=733351
It might still be waiting for a query to be handled, or for the queue to
become empty again for the next item. Also, if downstream returns FLUSHING,
flush the queue like we do in queue and multiqueue.
When comparing percentage values, compare on the 0-100 scale, as the
value has already been made relative to 0-high_percent; otherwise we mark
the queue as not buffering and report 50% to the user. This leads to
a buffering stall, as the application assumes the queue is still buffering
but the queue thinks it isn't.
https://bugzilla.gnome.org/show_bug.cgi?id=736969