Large-scale skip is an optimization, and thus it is safer to
stop skipping than to continue. Clear skip on segments and
discontinuities, as these are points where the original idea
of "bytes to skip" may change.
Allows buffers to be reclaimed when caps are about to be renegotiated,
so that bufferpools can be stopped. As the allocation query is
serialized, all buffers have already been drained from the pipeline,
except for this last_sample one.
https://bugzilla.gnome.org/show_bug.cgi?id=682770
Use gst_buffer_copy_deep() to force a copy of the underlying
memory instead of possibly doing a shallow copy of the buffer
that just references the memory.
https://bugzilla.gnome.org/show_bug.cgi?id=745287
Based on patch from Song Bing <b06498@freescale.com>
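A minimal sketch of the difference (helper name hypothetical):

    #include <gst/gst.h>

    static GstBuffer *
    snapshot_buffer (GstBuffer * buf)
    {
      /* gst_buffer_copy (buf) may return a shallow copy that only refs
       * the underlying GstMemory, keeping the original memory (and any
       * pool it belongs to) alive */
      return gst_buffer_copy_deep (buf);   /* forces copying the memory */
    }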
Don't just set the need_preroll flag to TRUE in all cases. When we
are already prerolled it needs to be set to FALSE, and when we go to
READY we should not touch it. We should only set it to TRUE in the
remaining cases, as the code above does.
See https://bugzilla.gnome.org/show_bug.cgi?id=736655
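A simplified sketch of the intended logic, as a pure helper with
hypothetical names (not the actual basesink code):

    #include <glib.h>

    /* returns the new value for the need_preroll flag */
    static gboolean
    update_need_preroll (gboolean need_preroll, gboolean prerolled,
        gboolean going_to_ready)
    {
      if (going_to_ready)
        return need_preroll;        /* going to READY: don't touch it */
      if (prerolled)
        return FALSE;               /* already prerolled */
      return TRUE;                  /* all other cases */
    }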
Both for the peer filter caps and the converted caps based on the peer caps.
If the peer filter caps are EMPTY, the peer caps query will also return
EMPTY. There's no need to bother downstream/upstream with this query.
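A sketch of the short-circuit (helper name hypothetical):

    #include <gst/gst.h>

    static GstCaps *
    query_peer_caps (GstPad * pad, GstCaps * filter)
    {
      /* EMPTY filter caps can only ever intersect to EMPTY, so don't
       * bother querying the peer at all */
      if (filter != NULL && gst_caps_is_empty (filter))
        return gst_caps_ref (filter);

      return gst_pad_peer_query_caps (pad, filter);
    }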
When using a negative rate (rate being segment.rate * segment.applied_rate),
we will end up reporting decreasing positions, so adjust the clamping
against the last reported value accordingly.
Fixes position reporting with applied_rate < 0.0.
https://bugzilla.gnome.org/show_bug.cgi?id=738092
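A sketch of the adjusted clamping (helper and parameters hypothetical):

    #include <glib.h>

    static gint64
    clamp_position (gint64 pos, gint64 last_reported, gdouble rate,
        gdouble applied_rate)
    {
      if (rate * applied_rate < 0.0)
        return MIN (pos, last_reported);  /* positions may only decrease */
      else
        return MAX (pos, last_reported);  /* positions may only increase */
    }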
TRUE is 1, but every other non-zero value is also considered true. Comparing
for equality with TRUE would only consider 1 but not the others.
Also normalize booleans in a few places.
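For example:

    #include <glib.h>

    static void
    boolean_example (gboolean flag)
    {
      /* flag may legitimately hold any non-zero value, e.g. 2 */
      if (flag == TRUE) {
        /* wrong: only taken when flag is exactly 1 */
      }
      if (flag) {
        /* correct: taken for any non-zero value */
      }
      flag = !!flag;            /* normalize to exactly TRUE or FALSE */
    }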
Instead of checking whether our outcaps are equivalent to the previous
incaps, and in that case not setting any caps on the pad... compare
against our previous outcaps, because that's what we care about.
Fixes some cases where the outcaps became equivalent to the previous
incaps, but the previous outcaps were different, and we then sent
buffers downstream corresponding to caps we forgot to set on the pad,
resulting in crashes or image corruption.
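A sketch of the corrected check (names hypothetical):

    #include <gst/gst.h>

    static void
    maybe_set_caps (GstPad * srcpad, GstCaps * outcaps, GstCaps * prev_outcaps)
    {
      /* compare against the caps we last set on the pad, not against
       * the previous input caps */
      if (prev_outcaps == NULL || !gst_caps_is_equal (outcaps, prev_outcaps))
        gst_pad_set_caps (srcpad, outcaps);
    }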
Currently we just return FALSE, but we do have the information,
so answer the query the same way as the GstElement.query vmethod
default implementation does.
https://bugzilla.gnome.org/show_bug.cgi?id=739580
Fixes 'Attempt to unlock mutex that was not locked'
warning with newer GLibs when sink is shut down in
certain situations. Triggered by the decodebin
test_reuse_without_decoders unit test in -base
sometimes, esp. on slower machines.
Add a first_buffer boolean state flag to have baseparse do actions
before pushing data. This is used to check the caps for streamheader
buffers that are prepended to the stream, but only if the first buffer
isn't already marked with the _HEADER flag. In this case, it is assumed
that the _HEADER marked buffer is the same as the streamheader.
https://bugzilla.gnome.org/show_bug.cgi?id=735070
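A simplified sketch of the check (helper and state handling
hypothetical, not the actual baseparse code):

    #include <gst/gst.h>

    static void
    check_first_buffer (GstBuffer * buf, gboolean * first_buffer)
    {
      if (!*first_buffer)
        return;
      *first_buffer = FALSE;

      /* a buffer already flagged as _HEADER is assumed to be the
       * streamheader itself */
      if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_HEADER))
        return;

      /* otherwise, push the streamheader buffers from the caps first */
    }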
Adds API to get or peek a sub-reader of a certain size from
a given byte reader. This is useful when parsing nested chunks:
one can easily get a byte reader for a sub-chunk and make
sure one never reads beyond the sub-chunk boundary.
API: gst_byte_reader_peek_sub_reader()
API: gst_byte_reader_get_sub_reader()
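Usage sketch, assuming a hypothetical size-prefixed chunk format:

    #include <gst/base/gstbytereader.h>

    static gboolean
    parse_chunk (GstByteReader * reader)
    {
      GstByteReader sub;
      guint32 chunk_size;

      if (!gst_byte_reader_get_uint32_le (reader, &chunk_size))
        return FALSE;

      /* sub is bounded to chunk_size bytes and reader is advanced past
       * the chunk; reads on sub can never cross the chunk boundary */
      if (!gst_byte_reader_get_sub_reader (reader, &sub, chunk_size))
        return FALSE;

      /* ... parse the chunk contents from &sub ... */
      return TRUE;
    }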
Just remove one skip annotation that causes this:
** (g-ir-compiler:12458): ERROR **: Caught NULL node, parent=empty
with older g-i versions such as 1.32.1.
Adds gst_byte_reader_masked_scan_uint32_peek(), just like GstAdapter
has _peek and non-_peek versions.
Updated tests to check that the returned value is correct in the
_peek version.
API: gst_byte_reader_masked_scan_uint32_peek
https://bugzilla.gnome.org/show_bug.cgi?id=728356
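Usage sketch, scanning for a 00 00 01 start code and peeking the
matched 32-bit word:

    #include <gst/base/gstbytereader.h>

    static void
    scan_example (GstByteReader * reader)
    {
      guint32 value = 0;
      guint offset;

      offset = gst_byte_reader_masked_scan_uint32_peek (reader,
          0xffffff00, 0x00000100, 0,
          gst_byte_reader_get_remaining (reader), &value);

      if (offset != (guint) -1) {
        /* value now holds the 32 bits found at the match offset */
      }
    }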
When going to READY, it is possible that we are still pushing a frame
while our srcpad has already been set to flushing. In that case we should not
post any error on the bus but instead cleanly return FLOW_FLUSHING.
https://bugzilla.gnome.org/show_bug.cgi?id=733320
When the parser receives non-aligned packets it can push a buffer
and get a not-linked return while still leaving some data to be
parsed. This remaining data will not form a complete frame; the
subclass likely returns _OK, and baseparse would take that as the
return, while the element is actually not-linked.
This patch fixes this by storing the last flow-return from a push
and using that if a parsing operation doesn't result in data being
flushed or skipped.
https://bugzilla.gnome.org/show_bug.cgi?id=731474
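A simplified sketch of the bookkeeping (struct and names hypothetical):

    #include <gst/gst.h>

    typedef struct {
      GstFlowReturn last_push_ret;    /* updated on every gst_pad_push() */
    } ParseState;

    static GstFlowReturn
    combine_parse_result (ParseState * state, GstFlowReturn subclass_ret,
        gboolean flushed_or_skipped)
    {
      /* if nothing was flushed or skipped, the subclass's _OK would hide
       * the real downstream state, so report the last push result */
      if (subclass_ret == GST_FLOW_OK && !flushed_or_skipped)
        return state->last_push_ret;
      return subclass_ret;
    }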
Currently the scan uses the Boyer-Moore method and its performance is
good, but it can be optimized from an implementation point of view.
The original scan code is implemented with a byte array and index-based
access. In _scan_for_start_code(), the index increases from start to end
and the base address of the byte array is used for the return value.
In this case, index-based access can be replaced by pointer access,
which improves performance by removing index-related operations.
Performance is enhanced by approximately 8% on ARM-based embedded
devices. Although it seems trivial, this can affect overall performance
because _scan_for_start_code() is called very often when H.264/H.265
video is played.
In addition, the technique applies to all architectures and is also good
for readability and maintainability.
https://bugzilla.gnome.org/show_bug.cgi?id=731442
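A simplified illustration of the change (not the actual
_scan_for_start_code() code):

    #include <glib.h>

    static const guint8 *
    scan_for_start_code (const guint8 * data, guint size)
    {
      const guint8 *p, *end;

      if (size < 3)
        return NULL;

      /* index-based version:
       *   for (i = 0; i < size - 2; i++)
       *     if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1)
       *       return &data[i];
       */
      p = data;
      end = data + size - 2;
      while (p < end) {
        if (p[0] == 0 && p[1] == 0 && p[2] == 1)
          return p;           /* pointer access, no per-iteration indexing */
        p++;
      }
      return NULL;
    }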
Adds a utility struct that is capable of storing and aggregating flow
returns associated with pads.
This way all demuxers will have a standard function to use and will get
the same expected results.
Includes tests.
https://bugzilla.gnome.org/show_bug.cgi?id=709224
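Usage sketch for a demuxer source pad (the helper is hypothetical; the
pad is assumed to have been registered with gst_flow_combiner_add_pad()
beforehand):

    #include <gst/base/gstflowcombiner.h>

    static GstFlowReturn
    demux_push (GstFlowCombiner * combiner, GstPad * srcpad, GstBuffer * buf)
    {
      GstFlowReturn ret = gst_pad_push (srcpad, buf);

      /* combine with the last return of every other registered pad;
       * e.g. NOT_LINKED on this pad is ignored as long as another
       * pad is still linked */
      return gst_flow_combiner_update_flow (combiner, ret);
    }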
Buffer pool set_config() may return FALSE if the requested configuration
needed small changes. Re-get the config and try setting it again
(validating the changes first). This ensures we have a configured pool
if possible.
https://bugzilla.gnome.org/show_bug.cgi?id=727916
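A sketch of the retry logic (validation left to the caller):

    #include <gst/gst.h>

    static gboolean
    configure_pool (GstBufferPool * pool, GstStructure * config)
    {
      if (gst_buffer_pool_set_config (pool, config))
        return TRUE;

      /* set_config() took ownership and may have tweaked the config;
       * re-get it, validate the changes, and try once more */
      config = gst_buffer_pool_get_config (pool);

      /* ... check that the updated values are still acceptable ... */

      return gst_buffer_pool_set_config (pool, config);
    }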
Currently, if prepare() takes too much time, we skip the call to render().
The side effect of this is that we end up starving render(). The solution
in this patch is to always render frames that are on time before prepare()
is executed. This maximizes the number of frames we display and degrades
the rendering performance smoothly.
https://bugzilla.gnome.org/show_bug.cgi?id=729335
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
We iterate the current discont group backwards and push each GOP forwards,
starting from the last one. However, if the first buffer in the current
discont group is a keyframe, we will keep it around until next time,
which is far from ideal. Just push it.
This prevents situations where a first branch would get seeked and
receive a buffer before all branches have been seeked, and collected
would thus get called based on EOS from the previous segment.
As a consequence, during the process of seeking, don't decrease
the eospads number when a FLUSH_STOP is received.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=724571