When posting a buffering message successfully:
* Remember the *actual* percentage value that was posted
* Only reset the percent_changed variable if the value we just posted
still matches the current value, so a value that changed while the
message was being posted is still posted later (see the sketch below)
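A rough sketch of the resulting post_buffering() logic (the
`buffering_percent` field name is illustrative; `last_posted_buffering_percent`
and `percent_changed` are the variables named above):

    msg = gst_message_new_buffering (GST_OBJECT_CAST (queue2), percent);
    if (gst_element_post_message (GST_ELEMENT_CAST (queue2), msg)) {
      /* remember the value that was actually posted; the current
       * buffering level may have moved on in the meantime */
      queue2->last_posted_buffering_percent = percent;
      /* keep percent_changed set if the level changed again while
       * we were posting, so the newer value is posted later */
      if (percent == queue2->buffering_percent)
        queue2->percent_changed = FALSE;
    }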
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/632>
This is a follow up to review comments in !297
+ The posting of the buffering message in READY_TO_PAUSED isn't
needed; removing it made the test fail, but the correct fix
was simply to link the elements together
+ Move code to relock the queue and set last_posted_buffering_percent
and percent_changed inside the buffering_post_lock in create_write().
This makes locking consistent with post_buffering()
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/297>
This fixes a bug that occurs when an attempt is made to post a buffering
message before the queue2 has been assigned a bus. One common situation
where this happens is when the use-buffering property is set to TRUE
before the queue2 is added to a bin.
If the result of gst_element_post_message() is not checked, and the
aforementioned situation occurs, then last_posted_buffering_percent and
percent_changed will still be updated, as if posting the message succeeded.
Later attempts to post again will not do anything because the code then
assumes that a message with the same percentage was previously posted
successfully and posting again is redundant.
Updating these variables only if posting succeeds and explicitly
posting a buffering message in the READY->PAUSED state change ensures that
a buffering message is posted as early as possible.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/297>
This reverts a96002bb28, which is not
necessary anymore. If we release the pad after removing it then none of
the deactivation code will actually be called because the pad has no
parent anymore, and we require a parent on the pad for deactivation to
happen.
This can then, among other things, leave a streaming thread stuck in a
pad probe because the pad was never flushed, waiting there forever since
now the pad will actually never be flushed anymore.
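The required ordering, as a simplified sketch of the principle (not the
actual element code):

    /* correct: deactivate while the pad still has a parent so the
     * deactivation code, including flushing, actually runs */
    gst_pad_set_active (pad, FALSE);
    gst_element_remove_pad (element, pad);

    /* broken: once removed, the pad has no parent, deactivation
     * does nothing and the pad is never flushed */
    gst_element_remove_pad (element, pad);
    gst_pad_set_active (pad, FALSE);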
If a pad is currently being released we don't want to forward the
FLUSHING flow return but instead consider it as NOT_LINKED. FLUSHING
would also cause upstream to be FLUSHING.
This part was missed in a3c4a3201a and
resulted in a different (and wrong) workaround in
a96002bb28.
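Conceptually the forwarding becomes something like this (sketch;
`pad_being_released` stands in for however the element tracks this):

    ret = gst_pad_push (srcpad, buffer);
    if (ret == GST_FLOW_FLUSHING && pad_being_released) {
      /* this pad is going away; report NOT_LINKED instead of
       * FLUSHING so upstream doesn't start flushing too */
      ret = GST_FLOW_NOT_LINKED;
    }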
Otherwise we're not guaranteed to read the very latest value that
another thread might've written in there when the pad was released, and
could instead work with an old value.
If the element before the sink needs $n buffers to produce one output
buffer, we were reffing $n events and unreffing only one.
Prevent this by using g_object_set_qdata_full() to handle the event
unreffing so we're sure no ref will be lost.
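With g_object_set_qdata_full() the ref travels with the qdata and is
dropped automatically (sketch; the quark name is illustrative):

    /* store a ref to the event on the pad; GObject calls the destroy
     * notify (gst_event_unref) when the qdata is replaced or cleared */
    g_object_set_qdata_full (G_OBJECT (pad), latency_probe_quark,
        gst_event_ref (event), (GDestroyNotify) gst_event_unref);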
The records are static and so appear as false positives when using those
tracers with the leaks tracer as well.
The leaks tracer was already setting this flag on its record so let's
set it on the other ones as well.
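Assuming the flag in question is GST_OBJECT_FLAG_MAY_BE_LEAKED (the one
the leaks tracer honours), the fix is a one-liner per record:

    /* static record: tell the leaks tracer not to report it */
    GST_OBJECT_FLAG_SET (record, GST_OBJECT_FLAG_MAY_BE_LEAKED);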
In gst_download_buffer_wait_for_data(), when a seek is made with
perform_seek_to_offset() the `qlock` is released temporarily. Therefore,
the flushing condition can be set during this period and should be
checked.
This was not being checked before, causing occasional deadlocks when
GST_DOWNLOAD_BUFFER_WAIT_ADD_CHECK() was called.
GST_DOWNLOAD_BUFFER_WAIT_ADD_CHECK() assumes that the caller has already
checked that we're not flushing before, since this is done when
acquiring the lock; so if we release it temporarily somewhere, we need
to check for flush again.
Without that check, the function would keep waiting for the condition
variable to be notified before checking for flushing condition again,
and that may very well never happen. This was reproduced during pad
deactivation when running WebKit in gdb.
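The pattern, roughly (the `flushing` field name is illustrative):

    /* qlock is held here */
    perform_seek_to_offset (dlbuf, offset);  /* drops and re-takes qlock */
    /* flushing may have been set while qlock was released, so check
     * again instead of going straight back to waiting on the cond */
    if (dlbuf->flushing)
      goto flushing;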
The sync=TRUE implementation changes the latency query of a non-live
upstream into a live one, but it wrongly set the upstream max latency to 0.
As non-live sources won't lose data if we wait longer, this should have
been reported as having no max latency limit (-1).
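The corrected query answer, sketched:

    /* upstream is non-live but sync=TRUE makes us report live; a
     * non-live source can be waited on indefinitely without losing
     * data, so there is no max latency bound */
    gst_query_set_latency (query, TRUE, min_latency, GST_CLOCK_TIME_NONE);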
The `query` argument of gst_pad_query is "transfer none".
Query objects are "borrowed" by the pad query handlers and those
should never unref them.
This was leading to double-freed queries in a very racy way with nested
GESTimelines.
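For example, a pad query handler must leave the ref alone (sketch):

    static gboolean
    my_sink_query (GstPad * pad, GstObject * parent, GstQuery * query)
    {
      /* `query` is transfer-none: gst_pad_query() still owns it and
       * reads the results out of it afterwards, so never call
       * gst_query_unref() on it here */
      return gst_pad_query_default (pad, parent, query);
    }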
Otherwise when seeking backwards we would keep the last_stop at the last
position we saw until playback passed the seek position again, and if
switching to the next pad happens in the meantime we would set the wrong
offset in the outgoing segment.
The pad name stored in the latency event no longer carries the name of the
element, so we have to get the element id, element name and pad name values
from the data structure and compare all 3 values.
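Sketch of the comparison (the structure field names are illustrative):

    const gchar *id = gst_structure_get_string (data, "element-id");
    const gchar *element = gst_structure_get_string (data, "element");
    const gchar *pad = gst_structure_get_string (data, "pad");

    /* the pad name alone is ambiguous now; all three must match */
    found = g_strcmp0 (id, element_id) == 0
        && g_strcmp0 (element, element_name) == 0
        && g_strcmp0 (pad, pad_name) == 0;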
First, the event would be leaked, but also, when an element takes
several buffers before producing one, we want the reported latency to be
the aggregation, i.e. the distance from the oldest buffer.
This sets the default back to tracing only pipeline latency, and adds
flags to enable element tracing. It is now possible to trace only element
latency, only pipeline latency, both, or neither.
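Usage then looks like e.g. `GST_TRACERS='latency(flags=pipeline+element)'`
to trace both (flag names as introduced by this change).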
This removes the passing of the pad inside of a GstEvent. While this is not
a bug, it may affect the lifetime of the pad, hence changing the pipeline
behaviour.
I copied `error-after` to make the `eos-after` property, but it turned
out there were some problems with that one, so this patch: adds
separate counters (so setting to NULL and reusing the element will
still work); clarifies the properties' min values; and reports an
error when both are set.
Using `num-buffers` can be unpredictable as buffer sizes are often
arbitrary (filesrc, multifilesrc, etc.). The `error-after` property on
`identity` is better but obviously reports an error afterwards. This
adds `eos-after` which does exactly the same thing but reports EOS
instead.
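For example (sketch; `identity` is assumed to be an instance of the
identity element):

    /* send EOS after 100 buffers instead of erroring out */
    g_object_set (identity, "eos-after", 100, NULL);

The equivalent on a gst-launch-1.0 command line is `identity eos-after=100`.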
By doing so GL source elements can successfully reuse the GL context and display
of downstream elements. This change fixes an issue in playbin when using
gltestsrc where the context query made by the source element would fail and the
source element would create a second (useless) GLDisplay.
Allows determining from downstream what the expected bitrate of a stream
may be, which is useful in queue2 for setting time-based limits when
upstream does not provide timing information.
Implement bitrate query handling in queue2
https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/60
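On the queue2 side this looks roughly like (sketch):

    /* ask upstream for a nominal bitrate, in bits per second */
    GstQuery *query = gst_query_new_bitrate ();

    if (gst_pad_peer_query (queue2->sinkpad, query)) {
      guint bitrate = 0;

      gst_query_parse_bitrate (query, &bitrate);
      /* use it to translate the time-based limits into an
       * equivalent byte limit */
    }
    gst_query_unref (query);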
If upstream is pushing buffers larger than our limits, only 1 buffer
is ever in the queue at a time. Once that single buffer has left the
queue, a 0% buffering message would be posted followed immediately by a
100% buffering message when the next buffer was inserted into the queue
a very short time later. As per the recommendations, this would result
in the application pausing for a short while, causing the appearance of
a short stutter.
The first step of a solution involves not posting a buffering message if
there is still data waiting on the sink pad for insertion into the queue.
This successfully stops the 0% messages from being posted, however a
message is still posted on each transition to 100% when the new buffer
arrives, resulting in a string of 100% buffering messages. We silence
these by storing the last posted buffering percentage and only posting a
new message when it differs from our last posted message.
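Both checks sketched together (`data_pending_on_sinkpad` stands in for
however queue2 detects a buffer waiting for insertion):

    /* don't post a low-water message while more data is already
     * waiting on the sink pad for insertion into the queue */
    if (percent < 100 && data_pending_on_sinkpad)
      return;

    /* silence repeated posts of the same percentage */
    if (percent == queue2->last_posted_buffering_percent)
      return;

    msg = gst_message_new_buffering (GST_OBJECT_CAST (queue2), percent);
    gst_element_post_message (GST_ELEMENT_CAST (queue2), msg);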
The post tracer hooks have a GstQuery argument which was left out of
the trace. As the post hook is the one that contains the useful data,
this bug was hiding the important information from that trace.
Since we use full signed running times, we no longer need to clamp
the buffer time.
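Presumably via gst_segment_to_running_time_full(), which reports the sign
instead of clamping positions outside the segment (sketch):

    guint64 rt;
    gint sign;

    sign = gst_segment_to_running_time_full (segment, GST_FORMAT_TIME,
        buffer_time, &rt);
    /* sign < 0 means the position maps to a negative running time;
     * no prior clamping of buffer_time to the segment is needed */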
This avoids having the position of single queues not advancing for
buffers that are out of segment and never waking up non-linked
streams (resulting in an apparent "deadlock").