Fix some deadlocks during flushing, and store queue items differently
so that we don't accidentally read already-unreffed queries when
flushing. Queries are owned by upstream, not by us.
Fix a race that could cause data corruption when seeking in ring buffer
mode.
In perform_seek_to_offset(), called from the demuxer's pull_range
request, we drop the lock, tell upstream (usually an HTTP source) to
seek to a different offset, and then re-acquire the lock before we
update the ranges. However, between sending the seek event and
re-acquiring the lock, the source thread might already have pushed some
data and moved the range's writing_pos beyond the seek offset. In that
case we don't want to set the writing position back to the requested
seek position, as that would cause data to be written at the wrong
offset in the file or ring buffer.
Reproducible by doing seek-emulated fast-forward/backward on 006653.
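A minimal sketch of the guarded update, assuming hypothetical names
(queue->lock, range->writing_pos, a send_seek_upstream() helper); the
real gstqueue2.c code is more involved:

    #include <glib.h>

    typedef struct {
      guint64 writing_pos;      /* how far the writer got in this range */
    } Range;

    typedef struct {
      GMutex lock;
      Range *current;
    } Queue;

    /* hypothetical stand-in for sending the seek event upstream */
    static gboolean
    send_seek_upstream (Queue * queue, guint64 offset)
    {
      (void) queue;
      (void) offset;
      return TRUE;
    }

    static gboolean
    perform_seek_to_offset (Queue * queue, guint64 offset)
    {
      gboolean res;

      /* drop the lock while talking to upstream to avoid deadlocks */
      g_mutex_unlock (&queue->lock);
      res = send_seek_upstream (queue, offset);
      g_mutex_lock (&queue->lock);

      /* the source thread may already have written past the requested
       * offset while the lock was released; only rewind writing_pos if
       * it is still behind the seek target, otherwise we would write
       * data at the wrong position */
      if (res && queue->current->writing_pos < offset)
        queue->current->writing_pos = offset;

      return res;
    }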
The buffering-left field in the buffering message should contain a time
estimate, in milliseconds, of how long the buffering is going to take.
We can calculate this value when we update rate_estimates.
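A rough sketch of such an estimate, with assumed names (bytes_missing,
byte_in_rate); the real element derives the rate from its own
measurements:

    #include <glib.h>

    /* estimate how many milliseconds of buffering are left, given how
     * many bytes are still missing and the measured input rate in
     * bytes per second */
    static gint64
    estimate_buffering_left_ms (guint64 bytes_missing, gdouble byte_in_rate)
    {
      if (byte_in_rate <= 0.0)
        return -1;                  /* rate unknown, no estimate */

      return (gint64) ((bytes_missing / byte_in_rate) * 1000.0);
    }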
When we don't have the requested data in the ringbuffer and we move our read
pointer to the requested position, signal the delete cond to inform the writer
that we changed the current fill level. If we don't, the writer might stay
blocked and we might wait forever.
When we don't have enough bytes in the ringbuffer to satisfy the current
request, first update the current read position before waiting. If we don't do
that, the ringbuffer might appear full and the writer will never write more
bytes to wake us up.
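The previous two notes boil down to the reader updating its position
and waking the writer before it blocks. A simplified sketch with
assumed names (reading_pos, writing_pos, item_add/item_del conds), not
the actual gstqueue2.c structures:

    #include <glib.h>

    typedef struct {
      GMutex lock;
      GCond item_add;           /* signalled by the writer on new data */
      GCond item_del;           /* signalled by the reader on freed space */
      guint64 reading_pos;      /* absolute read offset */
      guint64 writing_pos;      /* absolute write offset */
    } RingQueue;

    /* block until `length` bytes starting at `offset` are available */
    static void
    wait_for_data (RingQueue * q, guint64 offset, guint64 length)
    {
      g_mutex_lock (&q->lock);

      /* first move the read position: this changes the fill level and
       * may free up space in the ring buffer */
      q->reading_pos = offset;

      /* then wake the writer, which may be blocked because the buffer
       * looked full with the old read position */
      g_cond_signal (&q->item_del);

      /* only now wait for the writer to produce enough data */
      while (q->writing_pos < offset + length)
        g_cond_wait (&q->item_add, &q->lock);

      g_mutex_unlock (&q->lock);
    }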
Only add the range when we receive a segment event on the sinkpad. The add_range
method will modify the write position, which only makes sense to do on the
sinkpad.
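As an illustration (add_range() and Queue2 are placeholder names here),
the segment event is handled on the sink pad only, because that is the
side that owns the write position:

    #include <gst/gst.h>

    typedef struct {
      guint64 writing_pos;
    } Queue2;

    /* placeholder for the real range bookkeeping */
    static void
    add_range (Queue2 * queue, guint64 offset)
    {
      queue->writing_pos = offset;
    }

    static gboolean
    handle_sink_event (Queue2 * queue, GstEvent * event)
    {
      if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT) {
        const GstSegment *segment;

        gst_event_parse_segment (event, &segment);
        /* a new byte segment starts a new range; doing this on the src
         * pad would move the write position from the wrong side */
        if (segment->format == GST_FORMAT_BYTES)
          add_range (queue, segment->start);
      }
      gst_event_unref (event);
      return TRUE;
    }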
Set the seeking flag right before we send a seek event upstream and
discard all data until we see a flush-stop again. We need to do this
because we activate the range that we seek to immediately after sending
the seek event, and it is possible that we receive data in our chain
function from before the seek, which would then be added to the wrong
range, resulting in data corruption.
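A minimal sketch of that flag handling (Queue2, sinkpad, seeking and
the helpers are assumed names, and the seek parameters are only an
example):

    #include <gst/gst.h>

    typedef struct {
      GstPad *sinkpad;
      gboolean seeking;
    } Queue2;

    static gboolean
    seek_upstream (Queue2 * queue, guint64 offset)
    {
      /* mark that we are seeking *before* the event leaves, so data
       * that was already in flight upstream is recognised as stale
       * when it arrives in the chain function */
      queue->seeking = TRUE;
      return gst_pad_push_event (queue->sinkpad,
          gst_event_new_seek (1.0, GST_FORMAT_BYTES, GST_SEEK_FLAG_FLUSH,
              GST_SEEK_TYPE_SET, offset, GST_SEEK_TYPE_NONE, -1));
    }

    static GstFlowReturn
    chain (Queue2 * queue, GstBuffer * buffer)
    {
      if (queue->seeking) {
        /* data from before the seek: adding it to the newly activated
         * range would corrupt it, so drop it */
        gst_buffer_unref (buffer);
        return GST_FLOW_OK;
      }
      /* normal path: the buffer would be enqueued here (elided, so
       * just release it) */
      gst_buffer_unref (buffer);
      return GST_FLOW_OK;
    }

    static void
    on_flush_stop (Queue2 * queue)
    {
      /* the flush-stop that follows our seek marks the start of new data */
      queue->seeking = FALSE;
    }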
When using the ringbuffer, handle the newsegment event like we handle it when
using the temp-file mode: create a new range for the new byte segment. The new
segment should normally already be created when we do a seek.
A flush from the upstream element should not make buffering go to 0, the next
pull request might be inside a range that we have and then we don't need to
buffer at all. If the next pull is outside anything we have, buffering will
happen as usual anyway.
We want to forward the flush events received on the sinkpad whenever
the srcpad is activated in push mode, which can also happen when using
the RINGBUFFER or DOWNLOAD mode and downstream failed to activate us in
pull mode.
When we have EOS, read the remaining bytes in the buffer and make sure we don't
wait for more data. Also clip the output buffer to the amount of remaining
bytes.
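Roughly, the clipping at EOS looks like this (names assumed;
gst_buffer_resize() is the real GStreamer call that trims the buffer):

    #include <gst/gst.h>

    /* at EOS, clip the output buffer to the bytes that are really left
     * instead of waiting for data that will never arrive */
    static GstBuffer *
    clip_at_eos (GstBuffer * buf, gsize requested, guint64 remaining)
    {
      if (remaining < requested)
        gst_buffer_resize (buf, 0, remaining);

      return buf;
    }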
When using the ringbuffer mode, the buffer is filled when we reach the
max_level.bytes mark or the total size of the ringbuffer, whichever is
smaller.
Use a threshold variable to hold the maximum distance from the current
position for which we will wait instead of doing a seek.
When using the ringbuffer and the requested offset is not available,
avoid waiting until the complete ringbuffer is filled; instead do a
seek when the requested data is further away than the threshold.
Avoid doing the seek twice in the ringbuffer case.
Use the same threshold for ringbuffer and download buffering.
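The decision reduces to something like the following sketch
(simplified, assumed names; the real code also has to deal with data
that lies behind the current range):

    #include <glib.h>

    /* decide whether to wait for the writer to reach `offset` or to
     * seek upstream; anything further ahead than `threshold` bytes
     * triggers a seek, both in ringbuffer and in download mode */
    static gboolean
    offset_needs_seek (guint64 writing_pos, guint64 offset, guint64 threshold)
    {
      if (offset <= writing_pos)
        return FALSE;               /* data is already there */

      return (offset - writing_pos) > threshold;
    }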
Improve the docs of the get/pull_range functions and define the
lifetime of the buffer in case of errors and short reads.
Make sure the code does what the docs say.
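From the caller's side, the documented contract looks roughly like this
(sketch; in GStreamer 1.0 the buffer is only valid on GST_FLOW_OK and a
short read near EOS shows up as a smaller buffer):

    #include <gst/gst.h>

    static GstFlowReturn
    pull_some (GstPad * sinkpad, guint64 offset, guint size)
    {
      GstBuffer *buf = NULL;
      GstFlowReturn ret;

      ret = gst_pad_pull_range (sinkpad, offset, size, &buf);
      if (ret != GST_FLOW_OK)
        return ret;             /* no buffer was returned, nothing to unref */

      if (gst_buffer_get_size (buf) < size) {
        /* short read: only the bytes before EOS were returned */
      }

      gst_buffer_unref (buf);
      return GST_FLOW_OK;
    }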
Group the extra allocation parameters in a GstAllocationParams structure to make
it easier to deal with them and so that we can extend them later if needed.
Make gst_buffer_new_allocate() take a GstAllocationParams argument for
the added functionality.
Add a boxed type for GstAllocationParams.
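For illustration, allocating an aligned buffer with the grouped
parameters looks like this (standard GStreamer 1.0 API; the alignment
values are just an example):

    #include <gst/gst.h>

    static GstBuffer *
    new_aligned_buffer (gsize size)
    {
      GstAllocationParams params;

      gst_allocation_params_init (&params);
      params.align = 15;        /* request 16-byte aligned memory */
      params.prefix = 0;        /* no extra bytes before the data */
      params.padding = 0;       /* no extra bytes after the data */

      /* NULL selects the default allocator */
      return gst_buffer_new_allocate (NULL, size, &params);
    }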