upstream_size can be negative, but queue->upstream_size has an unsigned
type. To give gst_queue2_get_range() a chance to update
queue->upstream_size, it should keep the default value.
https://bugzilla.gnome.org/show_bug.cgi?id=753011
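For illustration, the guarded assignment could look roughly like this (a
sketch, not the exact queue2 code; the peer-duration query is an
assumption):

  gint64 size = -1;

  /* queue->upstream_size is unsigned; caching a negative value would wrap
   * around, so keep the default and let gst_queue2_get_range() retry */
  if (gst_pad_peer_query_duration (queue->sinkpad, GST_FORMAT_BYTES, &size) &&
      size >= 0)
    queue->upstream_size = size;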
Resetting the flushing state of the pads at the end of the
PAUSED_TO_READY transition will make pads handle serialized
queries again which will wait for non-active pads and might
cause deadlocks when stopping the pipeline.
Move the reset to the READY_TO_PAUSED transition instead.
https://bugzilla.gnome.org/show_bug.cgi?id=752623
While writing a test for unscheduling the gst_clock_id_wait() inside the
identity element, an invalid read was found. It was caused by removing the
clock-id when calling _unschedule() instead of letting the code that calls
_wait() remove the clock-id after it has been unscheduled.
https://bugzilla.gnome.org/show_bug.cgi?id=752055
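A rough sketch of the intended ownership (names assumed, not the exact
identity code; clock and timestamp are the element's sync clock and target
time): the thread calling gst_clock_id_wait() keeps a reference and removes
the clock-id itself after the wait returns, while the unscheduling thread
only calls gst_clock_id_unschedule():

  GstClockID id;
  GstClockReturn ret;

  /* waiting thread */
  GST_OBJECT_LOCK (identity);
  identity->clock_id = gst_clock_new_single_shot_id (clock, timestamp);
  id = gst_clock_id_ref (identity->clock_id);
  GST_OBJECT_UNLOCK (identity);

  ret = gst_clock_id_wait (id, NULL);   /* may return GST_CLOCK_UNSCHEDULED */

  GST_OBJECT_LOCK (identity);
  if (identity->clock_id == id) {
    /* the waiter, not the unscheduler, removes the clock-id */
    gst_clock_id_unref (identity->clock_id);
    identity->clock_id = NULL;
  }
  GST_OBJECT_UNLOCK (identity);
  gst_clock_id_unref (id);

  /* unscheduling thread: only unschedule, never free/remove the id here */
  GST_OBJECT_LOCK (identity);
  if (identity->clock_id)
    gst_clock_id_unschedule (identity->clock_id);
  GST_OBJECT_UNLOCK (identity);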
Microoptimisation: Let GstQueueArray store our
item struct. That way we don't have to alloc/free
temporary QueueItem slices for every item we want
to put into the queue.
https://bugzilla.gnome.org/show_bug.cgi?id=750149
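For illustration, assuming the gst_queue_array_*_struct() API, storing the
items by value looks roughly like this:

  typedef struct {
    GstMiniObject *item;
    gboolean is_query;          /* illustrative fields */
  } QueueItem;

  GstQueueArray *array;
  QueueItem item, *head;

  /* items are stored inline in the array, no per-item slice alloc/free */
  array = gst_queue_array_new_for_struct (sizeof (QueueItem), 32);

  item.item = GST_MINI_OBJECT_CAST (buffer);
  item.is_query = FALSE;
  gst_queue_array_push_tail_struct (array, &item);

  head = gst_queue_array_pop_head_struct (array);  /* points into the array */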
Have all sections in alphabetical order, and make the macro order consistent.
This is a preparation for generating the file. Remove the GET_CLASS macro for
the typefind element, since it is not used and the header is not installed.
The event can't be NULL: it has already been dereferenced by GST_EVENT_TYPE(),
and no case frees the pointer. Remove the unnecessary check, which will always
be true.
CID #1308955
This disables the segment.base adjustments, which is useful if downstream
takes care of base adjustments already (example: a combination of concat
and streamsynchronizer)
https://bugzilla.gnome.org/show_bug.cgi?id=751047
If we receive more than UIO_MAXIOV (1024 typically) buffers
in a single writev call, fall back to consolidating them
into one output buffer or multiple write calls.
This could be made more optimal, but let's wait until it ever becomes a
bottleneck for someone.
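A simplified sketch of the chunked fallback (not the actual fdsink code;
partial writes within a chunk are left to the caller):

  #include <sys/uio.h>
  #include <limits.h>

  static gssize
  write_vectors_chunked (gint fd, struct iovec * vecs, gint n_vecs)
  {
    gssize written = 0;
    gint offset = 0;

    while (offset < n_vecs) {
      gint chunk = MIN (n_vecs - offset, IOV_MAX);
      gssize ret = writev (fd, vecs + offset, chunk);

      if (ret < 0)
        return ret;
      written += ret;
      offset += chunk;
    }
    return written;
  }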
If the reset_time value of a FLUSH_STOP event is set to TRUE, the pipeline
will have the base_time of its elements reset. This means that the concat
element's current_start_offset has to be reset to 0, since it was
calculated with the old base-time in mind.
Only FLUSH_STOP events coming from the active pad are looked at.
Signed-off-by: Carlos Rafael Giani <dv@pseudoterminal.org>
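Roughly, in the sink event handler (a sketch; current_start_offset is the
field mentioned above, the active-pad check is illustrative):

  case GST_EVENT_FLUSH_STOP:{
    gboolean reset_time;

    gst_event_parse_flush_stop (event, &reset_time);
    if (reset_time && pad_is_active_pad (self, pad)) {
      /* base_time is about to be reset, so an offset computed against
       * the old base_time is no longer meaningful */
      self->current_start_offset = 0;
    }
    break;
  }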
A subsequent EOS would be pushed on a source pad that has already received
EOS, and that would make the event function return FALSE. It only needs to
push the first one, and simply return TRUE for the subsequent ones.
The gst_object_unref() in the block above may be dropping
the last ref to the pad and freeing the pad. Set the pad pointer
to NULL here, so that we don't accidentally use a
possibly-freed pad pointer in the debug log statements
further below, and also use the tee element as log object
since that's more appropriate anyway.
Fixes valgrind warnings and crashes in tee test_stress
unit test when debug logging is enabled.
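The resulting pattern is roughly:

  gst_object_unref (pad);
  pad = NULL;           /* may have been the last ref; pad could be gone */

  /* log with the tee as object rather than the possibly-freed pad */
  GST_DEBUG_OBJECT (tee, "pad removed");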
gst_type_find_element_src_event() is supposed to consume @event but wasn't
doing so when it was handling the event itself.
https://bugzilla.gnome.org/show_bug.cgi?id=747775
Signed-off-by: Guillaume Desmottes <guillaume.desmottes@collabora.co.uk>
There is no reason I can see to set mq->buffering = TRUE when
use_buffering is set; the code here also calls update_buffering(), which
will set mq->buffering = TRUE if this is warranted because of low buffer
levels.
https://bugzilla.gnome.org/show_bug.cgi?id=745937
gst_selector_pad_chain() was popping cached buffers out of the queue without
freeing them. Make sure we don't steal the GstBuffer, as the cached buffer's
ref has been passed to the pad chain function.
This can be reproduced by running the
validate.file.playback.switch_subtitle_track_while_paused.test5_mkv scenario
with Valgrind.
https://bugzilla.gnome.org/show_bug.cgi?id=747611
Signed-off-by: Guillaume Desmottes <guillaume.desmottes@collabora.co.uk>
This property avoids a not-linked error when all the pads are unlinked
or when there are no source pads. This is useful in dynamic pipelines
where it can happen that, for a short time, there are no pads at all or
all downstream pads are not linked yet.
https://bugzilla.gnome.org/show_bug.cgi?id=746436
When determining whether the running_time of a pad can be
calculated, check if the segment is in TIME format instead
of using the 'active' field.
Since the latter is set through *any* activity, it's not a
reliable indicator of segment presence.
https://bugzilla.gnome.org/show_bug.cgi?id=739620
With the disappearance of the 'block' signal, this
flag cannot be set to TRUE.
gst_input_selector_wait disappears as it never waits
and just returns self->flushing.
https://bugzilla.gnome.org/show_bug.cgi?id=736891
This signal blocks the input-selector with no means of unblocking
other than a state change back to READY. It seems this signal was
part of an old way of synchronously switching the selector,
together with the already-removed 'switch' signal.
Removing the signal is safe, as attempting to use it could only
end in deadlocks. Attempting to emit an unknown signal just causes
g_criticals.
https://bugzilla.gnome.org/show_bug.cgi?id=736891
This apparently got broken by bc1ec4e. Since self->blocked is always
FALSE, gst_input_selector_wait never actually waits.
Using (!self->eos || self->blocked) && ... as the loop condition would
be incorrect as well, because then the other call to the function in
_chain would block until EOS, so the functions cannot be merged trivially.
Since blocking is obsolete, gst_input_selector_wait will get removed anyway.
As such, just inline the loop.
https://bugzilla.gnome.org/show_bug.cgi?id=746518
Add QUERY_SEEKING handling to queue2, so RTMP live streams become
seekable when a queue2 in download or ringbuffer mode is inserted:
rtmpsrc ! queue2 ! flvdemux
https://bugzilla.gnome.org/show_bug.cgi?id=733351
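A sketch of the added handling (illustrative names, not the exact queue2
query-function code): answer BYTES seeking queries ourselves when we buffer
to a ring buffer or download file.

  case GST_QUERY_SEEKING:{
    GstFormat fmt;

    gst_query_parse_seeking (query, &fmt, NULL, NULL, NULL);
    if (fmt == GST_FORMAT_BYTES &&
        (queue->use_ring_buffer || queue->use_download)) {
      /* the ring buffer / temp file makes byte offsets seekable even if
       * upstream (e.g. rtmpsrc) is not */
      gst_query_set_seeking (query, GST_FORMAT_BYTES, TRUE, 0, -1);
      res = TRUE;
    } else {
      res = gst_pad_query_default (pad, parent, query);
    }
    break;
  }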
Otherwise we end up dropping e.g. CAPS queries, and then upstream just
negotiates to whatever format it wants to. Once the valve is not-dropping
anymore this can easily result in negotiation failing completely.
https://bugzilla.gnome.org/show_bug.cgi?id=746448
Demultiplex a stream to multiple source pads based on the stream ids from the
stream-start events. This basically reverses the behaviour of funnel.
https://bugzilla.gnome.org/show_bug.cgi?id=707605
We use the segment format to detect if we run the streaming thread or not.
Without resetting we might believe we do so, although we only did in the past
and are now running in e.g. push mode.
https://bugzilla.gnome.org/show_bug.cgi?id=745073
It might still be waiting for a query to be handled, or the queue to become
empty again for the next item. Also if downstream returns FLUSHING, flush the
queue like we do in queue and multiqueue.
Otherwise we might wait forever for serialized queries to be handled as the
loop function is stopped and as such we will never ever dequeue the query and
handle it.
https://bugzilla.gnome.org/show_bug.cgi?id=745319
... and only unblock when either a) the pad becomes active and the event
should be forwarded or b) the active pad went EOS itself.
Otherwise it can happen that we switch from a longer track that is not EOS yet
to a shorter track that already is EOS, but the shorter track won't have any
possibility to send its EOS event downstream anymore.
https://bugzilla.gnome.org/show_bug.cgi?id=740949
TRUE is 1, but every other non-zero value is also considered true. Comparing
for equality with TRUE would only consider 1 but not the others.
Also normalize booleans in a few places.
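For example:

  gboolean enabled = 2;         /* non-zero, so logically true */

  if (enabled == TRUE)          /* wrong: only matches the value 1 */
    g_print ("not reached\n");
  if (enabled)                  /* correct */
    g_print ("reached\n");

  enabled = !!enabled;          /* normalize to TRUE (1) or FALSE (0) */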
Write out multiple buffers possibly containing multiple
memories with one writev() call, without merging the
buffer memories first, like ::render() does currently.
When comparing percentage values, compare on the 0-100 scale, as the value
has already been made relative to 0-high_percent; otherwise we mark the
queue as not buffering and report 50% to the user. This leads to a
buffering stall, as the user assumes the queue is still buffering while it
thinks it isn't.
https://bugzilla.gnome.org/show_bug.cgi?id=736969
The percent value stored in multiqueue's queues is the percentage from 0
to 100 (of max-size-*) and should be compared with the requested limit
(high_percentage) set by the user, not with 100%, to check if buffering
should stop. Otherwise we only stop buffering when the queue gets
completely full.
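In other words, with illustrative names (sq->percent being the 0-100 fill
level relative to max-size-*):

  if (sq->percent >= mq->high_percent) {
    /* enough data relative to the configured threshold; don't wait for
     * the queue to be 100% full */
    sq->is_buffering = FALSE;
  }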
In this mode we accept previously set filter caps until
upstream renegotiates to something that is compatible
to the current filter caps.
This allows dynamic caps changes in the pipeline even
if there is a queue between any conversion element
and the capsfilter. Without this we would get not-negotiated
errors if timing is bad.
https://bugzilla.gnome.org/show_bug.cgi?id=739002
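Assuming this mode ends up exposed as a property (the name below is an
assumption, not taken from the text above), enabling it could look like:

  /* property name assumed for illustration */
  gst_util_set_object_arg (G_OBJECT (capsfilter), "caps-change-mode",
      "delayed");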
If we are pushing a serialized query into a queue and the queue is
filled, we will end up in a deadlock. We need to release the lock before
pushing and acquire it again afterwards.
https://bugzilla.gnome.org/show_bug.cgi?id=737794
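The general shape of the fix (a generic sketch, not the exact code):

  GST_OBJECT_LOCK (self);
  /* ... update internal state ... */
  GST_OBJECT_UNLOCK (self);

  /* may block until the downstream queue has room for the serialized
   * query, so it must not be called with the lock held */
  res = gst_pad_peer_query (self->srcpad, query);

  GST_OBJECT_LOCK (self);
  /* ... re-check state, it may have changed while unlocked ... */
  GST_OBJECT_UNLOCK (self);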
Revert the previous commit, which removed the pattern property of fakesrc,
because doing so breaks ABI. Bring the property back but mark it as unused
in the property description.
https://bugzilla.gnome.org/show_bug.cgi?id=737683
Even though the "pattern" property of fakesrc can be set, it is never used. The
only pattern supported is the default 0x00 -> 0xff, and if a pattern is set by
the user it is ignored. Remove the unused property and variable.
https://bugzilla.gnome.org/show_bug.cgi?id=737683
This might create deadlocks, and we need to avoid holding an element-specific
lock while posting messages.
For example, a deadlock will happen if, while posting the message,
someone connected to the bus (sync) tries to DOT the pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=737102
It might cause deadlocks to post messages while holding the queue2
lock. To avoid this a new boolean flag is set whenever a new
buffering percent is found. The message is posted after the lock
is released.
To make sure the buffering messages are posted in the right order, messages
are posted holding another lock. This prevents 2 threads trying to post
messages at the same time.
https://bugzilla.gnome.org/show_bug.cgi?id=736969
It might cause deadlocks to post messages while holding the multiqueue
lock. To avoid this a new boolean flag is set whenever a new buffering percent
is found. The message is posted after the lock can be released.
To make sure the buffering messages are posted in the right order, messages
are posted holding another lock. This prevents 2 threads trying to post
messages at the same time.
https://bugzilla.gnome.org/show_bug.cgi?id=736295
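A sketch of the pattern used in the two entries above (illustrative names):

  gboolean post = FALSE;
  gint percent = 0;

  GST_OBJECT_LOCK (mq);
  percent = compute_buffering_percent (mq);     /* hypothetical helper */
  if (percent != mq->last_posted_percent) {
    mq->last_posted_percent = percent;
    post = TRUE;                                /* remember, post later */
  }
  GST_OBJECT_UNLOCK (mq);

  if (post) {
    /* a dedicated lock keeps concurrent posters from reordering messages */
    g_mutex_lock (&mq->buffering_post_lock);
    gst_element_post_message (GST_ELEMENT_CAST (mq),
        gst_message_new_buffering (GST_OBJECT_CAST (mq), percent));
    g_mutex_unlock (&mq->buffering_post_lock);
  }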
Don't re-start the queue push task on the source pad when a
flush-stop event comes in and we're in the process of shutting
down, otherwise that task will never be stopped again.
When the element is set to READY state, the pads get de-activated.
The source pad gets deactivated before the queue's own activate_mode
function on the source pads gets called (which will stop the thread),
so checking whether the pad is active before re-starting the task on
receiving flush-stop should be fine. The problem would happen when the
flush-stop handler was called just after the queue's activate mode
function had stopped the task.
Spotted and debugged by Linus Svensson <linux.svensson@axis.com>
https://bugzilla.gnome.org/show_bug.cgi?id=734688
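Roughly (illustrative names, not the exact queue code):

  case GST_EVENT_FLUSH_STOP:
    /* ... flush the internal queue, reset segments ... */
    if (GST_PAD_IS_ACTIVE (queue->srcpad)) {
      /* only restart the streaming task if the pad has not already been
       * deactivated as part of shutting down */
      gst_pad_start_task (queue->srcpad, (GstTaskFunction) queue_loop,
          queue->srcpad, NULL);
    }
    break;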
Otherwise it would only be proxied for the active pad, which can lead
upstream to use caps that are incompatible with the downstream element.
Even if a reconfigure event is sent upstream when the pad is activated, this
saves the caps reconfiguration if it is already using acceptable caps.
Imagine the following 'pipeline'
                   --------------
               p1/| 'fullqueue'  |--- 'laggy' downstream
    ---------  /  |              |
   | demuxer |-   |  multiqueue  |
    ---------  \  |              |
               p2\| 'emptyqueue' |--- 'fast' downstream
                   --------------
In the case where downstream of one single queue ('fullqueue') has a lot of
latency (for example for reverse playback with video), we can end up with
the other single queue ('emptyqueue') being emptied before the fullqueue
gets unblocked. In the meantime, the demuxer tries to push on fullqueue and
blocks there.
In that case the current code posts a BUFFERING message on the bus when
emptyqueue gets emptied, which leads to the application setting the pipeline
state to PAUSED. We then end up in a situation where the 'laggy' downstream
is prerolled and will never unblock because the pipeline is set to PAUSED,
the fullqueue never gets a chance to be emptied, and the emptyqueue cannot
get filled anymore, so no further BUFFERING message will be posted and the
pipeline is stuck in PAUSED forever.
Making sure that we do not try to buffer if one of the single queues does
not need buffering prevents this situation from happening, while still
allowing buffering in all other cases.
This implements a new logic where all single queues need to require
buffering before the multiqueue reports that buffering is needed, taking
the maximum buffering level of the single queues as the reference point.
https://bugzilla.gnome.org/show_bug.cgi?id=734412
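A sketch of the new aggregation (illustrative names):

  gboolean all_buffering = TRUE;
  gint max_percent = 0;
  GList *l;

  for (l = mq->queues; l != NULL; l = l->next) {
    GstSingleQueue *sq = l->data;

    if (!sq->is_buffering)
      all_buffering = FALSE;        /* at least one queue has enough data */
    max_percent = MAX (max_percent, sq->percent);
  }

  /* only report buffering if every single queue needs it, using the
   * highest fill level as the reported percentage */
  if (all_buffering)
    post_buffering_message (mq, max_percent);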
After EOS there will be no further buffer which could propagate the
error upstream, so nothing is going to post an error message and
the pipeline just idles around.
When EOS events are forwarded simultaneously from two sinkpads on
funnel, it does not forward the EOS to the source pad. The reason is that
sticky events are stored after the event callbacks have returned.
Therefore, while one sinkpad is about to store the sticky event, the other
sinkpad starts checking for EOS events on all other sinkpads and assumes
EOS is not present yet.
https://bugzilla.gnome.org/show_bug.cgi?id=732851
Avoid relocations and refactor so that we don't calculate
the fixed and known at compile time maximum string size
every time. Also skip the mini object flags which we are
not going to print anyway.
The initial buffers (that were used for timestamping) might have PTS
and DTS set. In order to forward those properly, get the initial
PTS/DTS from the adapter and set them on the reconstructed output
buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=733291
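Roughly, using the GstAdapter API (a sketch with assumed field names):

  GstClockTime pts, dts;
  guint64 dist;
  GstBuffer *outbuf;

  /* PTS/DTS of the buffer the data at the start of the adapter came from */
  pts = gst_adapter_prev_pts (typefind->adapter, &dist);
  dts = gst_adapter_prev_dts (typefind->adapter, &dist);

  outbuf = gst_adapter_take_buffer (typefind->adapter,
      gst_adapter_available (typefind->adapter));
  GST_BUFFER_PTS (outbuf) = pts;
  GST_BUFFER_DTS (outbuf) = dts;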
We always work in passthrough mode so there's no point in doing
something more clever in basetransform. Also the basetransform
code leads to problems with incomplete caps and downstream
elements that use GST_PAD_FLAG_ACCEPT_INTERSECT.
https://bugzilla.gnome.org/show_bug.cgi?id=732559
When no data is coming from the sinkpads and an EOS event arrives
at one of them, funnel forwards the EOS event downstream. It forwards
the EOS because the last sinkpad is NULL. Also, the funnel unit test is
not checking the correct behaviour as it should: it should fail if one of
the sinkpads already has EOS present on it and we try to push one more
EOS.
https://bugzilla.gnome.org/show_bug.cgi?id=731716
The start and stop should represent the currently downloading region.
The estimated-total should represent the remaining time to download
the currently downloading region. This makes it a lot more useful
for applications because they can then use those values to update
the fill region and use the estimated time to delay playback.
Update the docs with this clarification.
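From the application side the values can then be read roughly like this
(sketch):

  GstQuery *q = gst_query_new_buffering (GST_FORMAT_PERCENT);

  if (gst_element_query (pipeline, q)) {
    GstFormat fmt;
    gint64 start, stop, estimated_total;

    gst_query_parse_buffering_range (q, &fmt, &start, &stop,
        &estimated_total);
    /* start/stop: the currently downloading region,
     * estimated_total: estimated remaining download time (ms) for it */
  }
  gst_query_unref (q);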
When the first segment has position != 0 and position > max-size-time
it will immediately cause the multiqueue to signal overrun.
This can happen easily with adaptive streams when switching bitrates
and starting a new group. The segment for this new group will have
a position that is much greater than 0 and will lead to this issue.
This is particularly harmful when the adaptive stream uses mpegts
that doesn't emit no-more-pads and it might happen that only one
of the stream pads was added when the multiqueue overruns and gets
the group ready for exposing. So the user will only get audio or
video.
The solution is to fallback to the sink segment while the source pad
has no segment.
https://bugzilla.gnome.org/show_bug.cgi?id=729124
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
The qlock is released between popping a buffer from the queue
and pushing it. When this buffer causes the sink to wait in
preroll, this lets a query see that the queue is empty, and
push the query and then wait for it to be serviced. However, this
will not be done until after preroll, and will thus block.
If upstream was waiting on buffering to reach 100% before
switching to PLAYING, a deadlock would ensue.
This had been fixed recently by failing queries when the
queue2 was buffering, but this happens to break some other
case (playbin on a local http server and matroska), while
this patch works for both.
See https://bugzilla.gnome.org/show_bug.cgi?id=728345
This should never happen in theory, but since a transient
failure would get us to silently read wrong data, it's worth
erroring out. And it silences this:
Coverity 206034
The qlock is released between popping a buffer from the queue
and pushing it. When this buffer causes the sink to wait in
preroll, this lets a query see that the queue is empty, and
push the query and then wait for it to be serviced. However, this
will not be done until after preroll, and will thus block.
If upstream was waiting on buffering to reach 100% before
switching to PLAYING, a deadlock would ensue.
We fix it by refusing the query when buffering, as per Wim's
recommendation on IRC.
Use the pad as the object for logging to get more context. Use
gst_pad_store_sticky_event() instead of sending the event. This avoids a
warning, as here the pad is not yet linked and we actually don't want to
send the event anyway.
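i.e. roughly:

  GST_DEBUG_OBJECT (pad, "storing sticky event %" GST_PTR_FORMAT, event);
  /* keeps the event on the pad for later pushing instead of sending it now
   * on a not-yet-linked pad; does not take ownership of the event */
  gst_pad_store_sticky_event (pad, event);
  gst_event_unref (event);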
Use the last result as a default when pushing an item from a single queue,
otherwise the status gets reset to _OK when pushing events.
This causes problems when mistakenly activating a not-linked stream
that is being ignored upstream as it is not being used (adaptive
scenarios): it will make the multiqueue post a buffering message
on a pad that won't receive buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=725917
Add a method to get the seeking threshold. If data is further away than
this threshold we want to perform a seek upstream.
When the current downloaded range can merge with the next range,
actually include the data of the next range into the current range
instead of discarding it. Also decide if we seek to the write position
of the merged range or continue reading.
If the segment event is allowed to be pushed to all pads it
will lead to an assertion of 'sticky event misordering:
segment received before caps' in case the pad-negotiation-mode
is set to 'active' or 'none'.
This patch fixes this by making all sticky events follow the
property like the caps event to prevent misordering warnings.
When a new pad is activated the current sticky events on the
sinkpad are forwarded to it in the proper order.
https://bugzilla.gnome.org/show_bug.cgi?id=723266
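Forwarding the sticky events in order could look roughly like this (a
sketch):

  static gboolean
  forward_sticky_event (GstPad * pad, GstEvent ** event, gpointer user_data)
  {
    GstPad *srcpad = user_data;

    /* events are iterated in the proper sticky order
     * (stream-start, caps, segment, ...) */
    gst_pad_push_event (srcpad, gst_event_ref (*event));
    return TRUE;
  }

  /* when a pad becomes the active one */
  gst_pad_sticky_events_foreach (sinkpad, forward_sticky_event, srcpad);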
When the single queue size was just bumped by 1 to allow more buffers to
be added, the buffers limit could be reduced to the current level when
setting the max-size-buffers property. This would result in a stall
since the queue would not grow anymore at this point.
Prevent this by not reducing a single queue size below the current
number of buffers + 1.
https://bugzilla.gnome.org/show_bug.cgi?id=712597
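i.e. something along these lines (illustrative field names):

  /* never shrink the limit below what is already queued, or the queue
   * could never grow again and would stall */
  sq->max_size.visible = MAX (new_max_buffers, sq->cur_level.visible + 1);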
In the case where one singlequeue is full and all others are not linked, the
growing of the full queue does not work correctly. The result depends on
whether the full queue is last in the queue list or not.
https://bugzilla.gnome.org/show_bug.cgi?id=722891
It is already stored inside the GstSegment struct and
was only duplicating information. Also remove some
weird position if/else that would possibly change the
segment that was going to be pushed downstream.
When prerolling/buffering, multiqueue has its buffers limit set
to 0, which means it can take an infinite number of buffers.
When prerolling/buffering finishes, its limit is set back to 5, but
only if the current level is lower than 5. It (almost) never is, and
this causes prerolling/buffering to have to wait until the much higher
hard bytes and time limits are reached.
This can lead to a very long startup time. This patch fixes this
by setting the single queues to max(current, new_value) instead
of simply ignoring the new value and leaving it as infinite (0).
https://bugzilla.gnome.org/show_bug.cgi?id=712597
The offset can be -1 when we are configured in TIME format. Instead of
failing the seek and erroring out, do what an offset of -1 is supposed to
do and simply read from the current offset.
It was used for pad-alloc in 0.10 but is currently completely unused
and not necessary. All pad access is protected by the tee object lock
and by keeping another reference to the current pad.