Otherwise baseparse will consider an empty stream to be an error, while
an empty stream is a valid scenario. With this patch, an error is
only emitted if the parser received data but wasn't able to
produce any output from it.
This change only applies to push-mode operation, as in pull mode an
empty file can be considered an error by the one driving the
pipeline.
Includes a unit test for it.
https://bugzilla.gnome.org/show_bug.cgi?id=733171
This property avoids not linked error when all the pads are unlinked
or when there are no source pads. This is useful in dynamic pipelines
where it can happen that for a short time there are no pads at all or
all downstream pads are not linked yet.
https://bugzilla.gnome.org/show_bug.cgi?id=746436
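A minimal usage sketch: the commit message doesn't name the element or
property, so the "allow-not-linked" property on tee is an assumption here.

    #include <gst/gst.h>

    /* Assumed: the property described above is tee's "allow-not-linked";
     * with it enabled, the element keeps returning GST_FLOW_OK while no
     * source pad is linked instead of failing with a not-linked error. */
    static GstElement *
    make_tolerant_tee (void)
    {
      GstElement *tee = gst_element_factory_make ("tee", NULL);

      if (tee != NULL)
        g_object_set (tee, "allow-not-linked", TRUE, NULL);
      return tee;
    }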
3 new tests:
1) Tests that a stream that is empty (just an EOS event)
on an inactive pad doesn't get through and tamper
with the active pad that still has data
2) Tests that a stream that is shorter than the active one
(pushes EOS earlier) doesn't have its EOS pushed
3) Tests that switching to an inactive stream that has received
EOS will make input-selector push EOS
https://bugzilla.gnome.org/show_bug.cgi?id=746518
Do not do any checks for the start/stop in the new
gst_segment_to_running_time_full() method, we can let this be done by
the more capable gst_segment_clip() method. This allows us to remove the
enum of results and only return the sign of the calculated running-time.
We still need to keep the old clipping checks in
gst_segment_to_running_time() because they work slightly
differently from the _clip methods.
See https://bugzilla.gnome.org/show_bug.cgi?id=740575
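A usage sketch of the helper as it ends up after this series; the exact
signature isn't spelled out above, so assume it takes (segment, format,
position, &running_time) and returns the sign of the result, 0 on failure.

    #include <gst/gst.h>

    /* Sketch: convert a position to a running-time even when it falls
     * outside the segment; the return value carries the sign, the out
     * parameter the absolute value. */
    static void
    print_running_time (const GstSegment * segment, GstClockTime position)
    {
      guint64 rt;
      gint sign;

      sign = gst_segment_to_running_time_full (segment, GST_FORMAT_TIME,
          position, &rt);

      if (sign > 0)
        g_print ("running-time: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (rt));
      else if (sign < 0)
        g_print ("running-time: -%" GST_TIME_FORMAT "\n", GST_TIME_ARGS (rt));
      else
        g_print ("position could not be converted\n");
    }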
Add a clip argument to gst_segment_to_running_time_full() to disable
the checks against the segment boundaries. This makes it possible to
generate an extrapolated running-time for timestamps outside of the
segment.
See https://bugzilla.gnome.org/show_bug.cgi?id=740575
Add a helper method to get a running-time with a few more features,
such as detecting whether the value was before or after the segment,
and support for negative running-times.
API: gst_segment_to_running_time_full()
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=740575
The position in the segment is relative to the start but the offset
isn't, so subtract the start from the position when setting the offset.
Add unit test for this as well.
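Illustrative only: the relationship described above reduces to subtracting
the segment start; the helper name below is made up for the sketch.

    #include <gst/gst.h>

    /* Hypothetical helper: the offset is not relative to segment->start,
     * so the start has to be subtracted from the position before storing
     * it as the offset. */
    static void
    set_offset_from_position (GstSegment * segment, guint64 position)
    {
      g_return_if_fail (position >= segment->start);
      segment->offset = position - segment->start;
    }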
Don't stop the pool in set_config(). Instead, let the controlling
element manage it. Most of the time, when an active pool is being
configured, it is because the caps didn't change.
https://bugzilla.gnome.org/show_bug.cgi?id=745377
A variant of gst_buffer_copy() that forces the underlying memory
to be copied.
This is added to avoid taking an extra reference on a GstMemory
that might belong to a bufferpool that is being drained.
The use case is when the buffer copying is done to release the
old buffer and all its resources.
https://bugzilla.gnome.org/show_bug.cgi?id=745287
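A sketch of the intended use case; the commit message doesn't name the new
function, so gst_buffer_copy_deep() is assumed here.

    #include <gst/gst.h>

    /* Sketch: detach a buffer from a draining pool by deep-copying it.
     * The copy gets its own memory instead of an extra ref on the pooled
     * GstMemory, so unreffing the original returns it to the pool. */
    static GstBuffer *
    detach_from_pool (GstBuffer * pooled)
    {
      GstBuffer *copy = gst_buffer_copy_deep (pooled);

      gst_buffer_unref (pooled);
      return copy;
    }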
Demultiplex a stream to multiple source pads based on the stream ids from the
stream-start events. This basically reverses the behaviour of funnel.
https://bugzilla.gnome.org/show_bug.cgi?id=707605
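A rough sketch of hooking the new element up; the element name
"streamiddemux" and the per-stream linking are assumptions based on the
description above.

    #include <gst/gst.h>

    /* One source pad appears per stream-id seen in the incoming
     * stream-start events, so link them from "pad-added". */
    static void
    on_pad_added (GstElement * demux, GstPad * srcpad, gpointer user_data)
    {
      gchar *name = gst_pad_get_name (srcpad);

      g_print ("new stream pad: %s\n", name);
      /* ... link srcpad to a per-stream branch here ... */
      g_free (name);
    }

    static GstElement *
    make_stream_demux (void)
    {
      GstElement *demux = gst_element_factory_make ("streamiddemux", NULL);

      if (demux != NULL)
        g_signal_connect (demux, "pad-added", G_CALLBACK (on_pad_added), NULL);
      return demux;
    }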
This reverts commit 1911554cff.
This breaks the functionality of GST_PAD_FLAG_NEED_PARENT, the reason for this
flag is that if a pad is removed from a running element, you don't want
functions (such as chain or event) to be called on the pad without a parent set.
This can happen if you remove a request pad or sometimes pad from a running element.
I don't see the code that caused this in tsdemux, but if it needs to unset
the flag on remove, it should do it itself and then make sure that the parent
exists in any pad function.
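A sketch of the kind of check suggested above for an element that clears
GST_PAD_FLAG_NEED_PARENT on its pads itself; the function name is
illustrative.

    #include <gst/gst.h>

    /* With the flag unset, core no longer guarantees a parent, so the
     * pad function has to cope with it being gone. */
    static GstFlowReturn
    my_sink_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
    {
      if (G_UNLIKELY (parent == NULL)) {
        /* pad was removed from the running element; drop the data */
        gst_buffer_unref (buf);
        return GST_FLOW_FLUSHING;
      }

      /* ... normal processing using GST_ELEMENT (parent) ... */
      gst_buffer_unref (buf);
      return GST_FLOW_OK;
    }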
Add domain checks for the input values, and a variable precision
calculation that loops if necessary to ensure we never overflow
accumulators and then silently produce garbage results.
Make the (non-public) linear regression function available for
unit testing by putting it in a separate source file that the test
can include. Add a unit test checking that the new regression function
produces sensible results for several inputs taken from real-world
captures.
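A sketch of the "separate source file the test can include" technique; the
path, file name and test name below are placeholders, not the ones from the
actual commit.

    #include <gst/check/gstcheck.h>

    /* Hypothetical private implementation pulled straight into the test
     * so the non-public function can be called directly. */
    #include "../../gst/linear-regression-impl.c"

    GST_START_TEST (test_linear_regression_sanity)
    {
      /* feed (x, y) samples from a real-world capture and check that the
       * computed slope/intercept stay within expected bounds */
      fail_unless (TRUE);
    }

    GST_END_TEST;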
Pools are allowed to change the size in order to adapt padding, so
don't check the size. Normally a pool will change the size without
failing set_config(), but if it ends up changing the size beforehand,
the validate method may fail on a false positive.
https://bugzilla.gnome.org/show_bug.cgi?id=741420
If a task thread is calling pause on itself and the
controlling/"main" thread stops the task, it could end up in a race
where gst_task_func loops and then checks for paused after the
controlling thread has just changed the task state to stopped.
Hence the task would actually call func again even though it was
both paused and stopped.
https://bugzilla.gnome.org/show_bug.cgi?id=740001
In this mode we accept previously set filter caps until
upstream renegotiates to something that is compatible
with the current filter caps.
This allows dynamic caps changes in the pipeline even
if there is a queue between any conversion element
and the capsfilter. Without this we would get not-negotiated
errors if the timing is bad.
https://bugzilla.gnome.org/show_bug.cgi?id=739002
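A minimal sketch of enabling the mode; the property name
("caps-change-mode") and value ("delayed") on capsfilter are assumptions,
as the commit message above doesn't spell them out.

    #include <gst/gst.h>

    /* Assumed: switch capsfilter to the delayed caps-change mode described
     * above, so previously set filter caps keep being accepted until
     * upstream renegotiates to something compatible. */
    static void
    allow_delayed_caps_changes (GstElement * capsfilter)
    {
      gst_util_set_object_arg (G_OBJECT (capsfilter), "caps-change-mode",
          "delayed");
    }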
Use an array of constant strings so that if arguments get
removed from it they are not considered leaked, and
valgrind is happy. There is still some stuff leaking in GLib,
though.
Make the pipe socket non-blocking, so we don't
end up blocked in a write on the pipe
while the src is EOS and not reading data
any more, and thus never unblock and never
notice that we're done. This would happen
quite reliably on the RPi.
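A generic sketch of the non-blocking setup using plain fcntl(); the test's
actual helper isn't shown in this log.

    #include <fcntl.h>

    /* Mark both ends of a pipe non-blocking so a writer can't hang
     * forever once the reader has stopped consuming. */
    static void
    make_pipe_nonblocking (int fds[2])
    {
      int i;

      for (i = 0; i < 2; i++) {
        int flags = fcntl (fds[i], F_GETFL, 0);

        if (flags != -1)
          fcntl (fds[i], F_SETFL, flags | O_NONBLOCK);
      }
    }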
Adds API to get or peek a sub-reader of a certain size from
a given byte reader. This is useful when parsing nested chunks:
one can easily get a byte reader for a sub-chunk and make
sure one never reads beyond the sub-chunk boundary.
API: gst_byte_reader_peek_sub_reader()
API: gst_byte_reader_get_sub_reader()
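A usage sketch for the new calls, parsing a length-prefixed sub-chunk (the
chunk layout itself is made up for the example).

    #include <gst/base/gstbytereader.h>

    /* gst_byte_reader_get_sub_reader() advances the parent reader past
     * the sub-chunk, while reads on the sub-reader can never go beyond
     * the requested size. */
    static gboolean
    parse_chunk (GstByteReader * reader)
    {
      GstByteReader sub;
      guint32 size;

      if (!gst_byte_reader_get_uint32_be (reader, &size))
        return FALSE;
      if (!gst_byte_reader_get_sub_reader (reader, &sub, size))
        return FALSE;

      /* ... parse the chunk payload from 'sub' ... */
      return TRUE;
    }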