Posting messages while holding the queue2 lock might cause deadlocks.
To avoid this, a new boolean flag is set whenever a new buffering
percent is found, and the message is posted after the lock is
released.
To make sure the buffering messages are posted in the right order, they
are posted while holding another, separate lock. This prevents two
threads from trying to post messages at the same time.
https://bugzilla.gnome.org/show_bug.cgi?id=736969
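For illustration, a rough sketch of the pattern; the member names used here
(qlock, buffering_post_lock, buffering_percent, percent_changed) are made up
and need not match the actual queue2 fields:

    static void
    post_buffering_if_changed (GstQueue2 * queue)
    {
      gint percent = -1;

      /* the post lock keeps two threads from posting their messages in the
       * wrong order */
      g_mutex_lock (&queue->buffering_post_lock);

      g_mutex_lock (&queue->qlock);
      if (queue->percent_changed) {
        percent = queue->buffering_percent;
        queue->percent_changed = FALSE;
      }
      g_mutex_unlock (&queue->qlock);

      /* post only after the queue lock is released to avoid deadlocks */
      if (percent >= 0)
        gst_element_post_message (GST_ELEMENT_CAST (queue),
            gst_message_new_buffering (GST_OBJECT_CAST (queue), percent));

      g_mutex_unlock (&queue->buffering_post_lock);
    }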
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
The qlock is released between popping a buffer from the queue
and pushing it. When this buffer causes the sink to wait in
preroll, a query can see that the queue is empty, push the query
into the queue, and then wait for it to be serviced. However,
this will not happen until after preroll, and the query will thus block.
If upstream was waiting on buffering to reach 100% before
switching to PLAYING, a deadlock would ensue.
This had been fixed recently by failing queries when the
queue2 was buffering, but that happens to break another
case (playbin playing matroska from a local http server),
while this patch works for both.
See https://bugzilla.gnome.org/show_bug.cgi?id=728345
The qlock is released between popping a buffer from the queue
and pushing it. When this buffer causes the sink to wait in
preroll, a query can see that the queue is empty, push the query
into the queue, and then wait for it to be serviced. However,
this will not happen until after preroll, and the query will thus block.
If upstream was waiting on buffering to reach 100% before
switching to PLAYING, a deadlock would ensue.
We fix it by refusing the query when buffering, as per Wim's
recommendation on IRC.
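The shape of that fix, roughly; is_buffering and qlock are illustrative
names, not necessarily the actual members:

    if (GST_QUERY_IS_SERIALIZED (query)) {
      g_mutex_lock (&queue->qlock);
      if (queue->is_buffering) {
        /* refuse rather than queue the query behind data that will only be
         * consumed once buffering reaches 100% */
        g_mutex_unlock (&queue->qlock);
        return FALSE;
      }
      g_mutex_unlock (&queue->qlock);
    }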
Make a method to get the seeking threshold. If data is further away than
this threshold we want to perform a seek upstream.
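A hypothetical sketch of such a helper; the names and the threshold values
are made up for illustration only:

    static guint64
    get_seek_threshold (GstQueue2 * queue)
    {
      /* in ring-buffer mode it makes no sense to wait for more data than the
       * ring can hold; in download mode use some fixed byte count */
      if (queue->use_ring_buffer)
        return queue->ring_buffer_max_size;
      return 1024 * 1024;
    }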
When the current downloaded range can merge with the next range,
actually include the data of the next range into the current range
instead of discarding it. Also decide if we seek to the write position
of the merged range or continue reading.
In some cases the wait for more data happened without updating
the buffering state, meaning the API user would not be able to notice
that it should pause the pipeline and update the UI to indicate
buffering; the video would likely stutter instead.
https://bugzilla.gnome.org/show_bug.cgi?id=707648
When a buffering query is handled it uses the get_buffering_percent()
function to get some statistics. Unfortunately this function also
calculates whether the queue should be buffering and adapts the
global queue2 state in case of state transitions from/to buffering
(including whether a buffering message was posted on the bus!).
This means that there is a race which can cause buffering messages
to never be posted if the global state changes happen as a result of a
query instead of resulting from bytes flowing in/out.
Spotted by Sjoerd Simons.
Change to only query state in get_buffering_percent() and update
state only in update_buffering().
https://bugzilla.gnome.org/show_bug.cgi?id=705332
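Roughly, the split looks like this (the bodies are illustrative only):

    static void
    get_buffering_percent (GstQueue2 * queue, gboolean * is_buffering,
        gint * percent)
    {
      /* read-only: report the current state, never modify it */
      if (is_buffering)
        *is_buffering = queue->is_buffering;
      if (percent)
        *percent = queue->buffering_percent;
    }

    static void
    update_buffering (GstQueue2 * queue)
    {
      /* the only place that flips the buffering state and decides whether a
       * buffering message has to be posted on the bus */
      gint percent = compute_buffering_percent (queue);  /* hypothetical */

      if (percent != queue->buffering_percent) {
        queue->buffering_percent = percent;
        queue->percent_changed = TRUE;
      }
    }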
When in download buffering mode queue2 didn't check if a range offset is
in an undownloaded range before the currently in-progress range, causing
seeks to an earlier offset to, well, take a while.
When asked about the scheduling flags first check with upstream and
simply add the _SEEKABLE flag when using a temporary file as storage.
This enables the forwarding of _SEQUENTIAL and _BANDWIDTH_LIMITED from
sources if needed.
https://bugzilla.gnome.org/show_bug.cgi?id=704927
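A sketch of that query handling; the use_temp_file flag and the sinkpad
member are illustrative names:

    GstSchedulingFlags flags;
    gint minsize, maxsize, align;

    /* ask upstream first so _SEQUENTIAL and _BANDWIDTH_LIMITED are forwarded */
    if (!gst_pad_peer_query (queue->sinkpad, query))
      return FALSE;

    gst_query_parse_scheduling (query, &flags, &minsize, &maxsize, &align);

    if (queue->use_temp_file) {
      /* a temp file is always seekable, whatever upstream says */
      gst_query_set_scheduling (query, flags | GST_SCHEDULING_FLAG_SEEKABLE,
          minsize, maxsize, align);
      gst_query_add_scheduling_mode (query, GST_PAD_MODE_PULL);
    }
    return TRUE;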
We must be certain that we don't cause a deadlock when blocking the serialized
queries. One such deadlock can happen when we are buffering and downstream is
blocked in preroll and a serialized query arrives. Downstream will not unblock
(and allow our query to execute) until we complete buffering and buffering will
not complete until we can answer the query.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=702840
Fixes some deadlocks during flushing.
Also store queue items differently so we don't accidentally read
already unreffed queries when flushing. Queries are owned by
upstream and not us.
Fix race that could cause data corruption when seeking in ring buffer
mode.
In perform_seek_to_offset(), called from the demuxer's pull_range
request, we drop the lock, tell upstream (usually a http source)
to seek to a different offset, then re-acquire the lock before we
do things to the ranges. However, between us sending the seek event
and re-acquiring the lock, the source thread might already have pushed
some data and moved along the range's writing_pos beyond the seek
offset. In that case we don't want to set the writing position back
to the requested seek position, as it would cause data to be written
to the wrong offset in the file or ring buffer.
Reproducible doing seek-emulated fast-forward/backward on 006653.
Conflicts:
plugins/elements/gstqueue2.c
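The guard, in essence (the range/offset names are illustrative):

    /* after re-taking the qlock following the seek event */
    if (range->writing_pos <= offset)
      range->writing_pos = offset;
    /* otherwise the source already pushed data past the requested offset
     * while the lock was released; keep its position instead of rewinding
     * it, which would write data to the wrong offset in the file or ring
     * buffer */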
The buffering-left field in the buffering message should contain a time estimate,
in milliseconds, of how long the buffering is going to take. We can calculate
this value when we do rate_estimates.
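For example, something along these lines; the variable names are
illustrative and avg_in is the measured input rate in bytes per second:

    guint64 remaining = stop_pos - write_pos;       /* bytes still to download */
    gint64 buffering_left = -1;
    GstMessage *msg;

    if (avg_in > 0)
      buffering_left = (remaining * 1000) / avg_in; /* milliseconds */

    msg = gst_message_new_buffering (GST_OBJECT_CAST (queue), percent);
    gst_message_set_buffering_stats (msg, GST_BUFFERING_DOWNLOAD,
        avg_in, avg_out, buffering_left);
    gst_element_post_message (GST_ELEMENT_CAST (queue), msg);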
When we don't have the requested data in the ringbuffer and we move our read
pointer to the requested position, signal the delete cond to inform the writer
that we changed the current fill level. If we don't, the writer might stay
blocked and we might wait forever.
When we don't have enough bytes in the ringbuffer to satisfy the current
request, first update the current read position before waiting. If we don't do
that, the ringbuffer might appear full and the writer will never write more
bytes to wake us up.
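A combined sketch of the reader side for the two changes above; the cond and
field names are illustrative:

    g_mutex_lock (&queue->qlock);
    while (!data_available (queue, offset, length)) {   /* hypothetical check */
      /* first move our read position and wake the writer: from its point of
       * view the ring buffer may no longer be full after this */
      queue->current->reading_pos = offset;
      g_cond_signal (&queue->item_del);

      /* then wait for the writer to add more data */
      g_cond_wait (&queue->item_add, &queue->qlock);
    }
    g_mutex_unlock (&queue->qlock);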
Only add the range when we receive a segment event on the sinkpad. The add_range
method will modify the write position, which only makes sense to do on the
sinkpad.
Set the seeking flag right before we send a seek event upstream and discard all
data until we see a flush-stop again. We need to do this because we activate
the range that we seek to immediately after sending the seek event and it is
possible that we receive data in our chain function from before the seek
which would then be added to the wrong range resulting in data corruption.
When using the ringbuffer, handle the newsegment event like we handle it when
using the temp-file mode: create a new range for the new byte segment. The new
segment should normally already be created when we do a seek.
A flush from the upstream element should not make buffering go to 0, the next
pull request might be inside a range that we have and then we don't need to
buffer at all. If the next pull is outside anything we have, buffering will
happen as usual anyway.
We want to forward the flush events received on the sinkpad whenever the srcpad
is activated in push mode, which can also happen when using the RINGBUFFER or
DOWNLOAD mode and downstream failed to activate us in pull mode.
When we have EOS, read the remaining bytes in the buffer and make sure we don't
wait for more data. Also clip the output buffer to the amount of remaining
bytes.
When using the ringbuffer mode, the buffer is filled when we reach the
max_level.bytes mark or the total size of the ringbuffer, whichever is smaller.
Use a threshold variable to hold the maximum distance from the current position
for which we will wait instead of doing a seek.
When using the ringbuffer and the requested offset is not available, avoid
waiting until the complete ringbuffer is filled but instead do a seek when the
requested data is further than the threshold.
Avoid doing the seek twice in the ringbuffer case.
Use the same threshold for ringbuffer and download buffering.
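Put together, the decision is roughly this (using the hypothetical
get_seek_threshold() sketched earlier; the range fields are illustrative):

    guint64 threshold = get_seek_threshold (queue);

    if (offset < range->offset || offset > range->writing_pos + threshold) {
      /* too far away from what we have or will soon have: seek upstream */
      perform_seek_to_offset (queue, offset);
    } else {
      /* close enough: just wait for the writer to catch up */
    }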
Improve the docs of the get/pull_range functions, define the lifetime of the
buffer in case of errors and short reads.
Make sure the code does what the docs say.
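From a caller's point of view the documented contract then looks roughly
like this:

    GstBuffer *buf = NULL;
    GstFlowReturn ret;

    ret = gst_pad_pull_range (pad, offset, size, &buf);
    if (ret != GST_FLOW_OK) {
      /* no buffer is returned on errors, so there is nothing to unref here */
      return ret;
    }

    /* short reads are possible near EOS: the buffer may be smaller than the
     * requested size, so check the actual size before using the data */
    if (gst_buffer_get_size (buf) < size) {
      /* handle partial data */
    }

    gst_buffer_unref (buf);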
Group the extra allocation parameters in a GstAllocationParams structure to make
it easier to deal with them and so that we can extend them later if needed.
Make gst_buffer_new_allocate() take the GstAllocationParams for added
functionality.
Add boxed type for GstAllocationParams.
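For example:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstAllocationParams params;
      GstBuffer *buf;

      gst_init (&argc, &argv);

      gst_allocation_params_init (&params);
      params.align = 15;    /* request 16-byte alignment */
      params.prefix = 16;   /* reserve 16 bytes before the data */

      /* NULL selects the default allocator */
      buf = gst_buffer_new_allocate (NULL, 4096, &params);

      gst_buffer_unref (buf);
      gst_deinit ();
      return 0;
    }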
Add private replacements for deprecated functions such as
g_mutex_new(), g_mutex_free(), g_cond_new() etc., mostly
to avoid the deprecation warnings. We can't change most of
these in 0.10 because they're part of our API and ABI.
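The deprecated calls and the style they map to, roughly:

    /* deprecated allocation API still exposed by the 0.10 API/ABI */
    GMutex *lock = g_mutex_new ();
    g_mutex_free (lock);

    /* what the replacements use internally instead */
    GMutex mutex;
    g_mutex_init (&mutex);
    g_mutex_clear (&mutex);

    GCond cond;
    g_cond_init (&cond);
    g_cond_clear (&cond);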
Add the pad mode to the activate function so that we can reuse the same function
for all activation modes. This makes the core logic smaller and allows for some
elements to make their activation code easier. It would allow us to add more
scheduling modes later without having to add more activate functions.
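With the 1.0 API an element's activate function ends up with roughly this
shape (names are placeholders):

    static gboolean
    my_sink_activate_mode (GstPad * pad, GstObject * parent,
        GstPadMode mode, gboolean active)
    {
      switch (mode) {
        case GST_PAD_MODE_PUSH:
        case GST_PAD_MODE_PULL:
          /* set up or tear down whatever the mode needs; one function now
           * covers every scheduling mode */
          return TRUE;
        default:
          return FALSE;
      }
    }

    gst_pad_set_activatemode_function (sinkpad, my_sink_activate_mode);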
Turn some boolean arguments in the scheduling query into flags, which are easier
to extend and make the code easier to read.
Make extra methods for configuring and querying the supported scheduling modes.
This should make it easier to add new modes later.
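On the resulting 1.0 API this looks like (sketch):

    /* answering side, e.g. a seekable source handling GST_QUERY_SCHEDULING */
    gst_query_set_scheduling (query, GST_SCHEDULING_FLAG_SEEKABLE, 1, -1, 0);
    gst_query_add_scheduling_mode (query, GST_PAD_MODE_PULL);
    gst_query_add_scheduling_mode (query, GST_PAD_MODE_PUSH);

    /* asking side */
    GstQuery *q = gst_query_new_scheduling ();
    if (gst_pad_peer_query (sinkpad, q) &&
        gst_query_has_scheduling_mode (q, GST_PAD_MODE_PULL)) {
      /* upstream supports pull mode */
    }
    gst_query_unref (q);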
Remove the getcaps function on the pad and use the CAPS query for
the same effect.
Add PROXY_CAPS to the pad flags. This instructs the default caps event and query
handlers to pass on the CAPS related queries and events. This simplifies a lot
of elements that pass through caps negotiation.
Make two utility functions to proxy caps queries and aggregate the result. Needs
to use the pad forward function instead later.
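In an element this typically ends up as (sketch):

    /* let the default handlers forward CAPS queries/events to the other side */
    GST_PAD_SET_PROXY_CAPS (sinkpad);
    GST_PAD_SET_PROXY_CAPS (srcpad);

    /* or, inside an element's query function, with the new utility: */
    if (GST_QUERY_TYPE (query) == GST_QUERY_CAPS)
      res = gst_pad_proxy_query_caps (pad, query);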
Make the _query_peer_ utility functions use the gst_pad_peer_query() function to
make sure the probes are emitted properly.