When the parser receives non-aligned packets it can push a buffer
and get a not-linked return while still leaving some data to
be parsed. This remaining data will not form a complete frame, so
the subclass likely returns _OK and baseparse would take that
as the return, while the element is actually not-linked.
This patch fixes this by storing the last flow-return from a push
and using that if a parsing operation doesn't result in data being
flushed or skipped.
https://bugzilla.gnome.org/show_bug.cgi?id=731474
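A minimal sketch of the pattern described above, using hypothetical
helper names rather than the actual GstBaseParse fields: the result of
the most recent push is remembered and returned when a parsing pass
neither flushed nor skipped any data.

    #include <gst/gst.h>

    /* Hypothetical sketch: remember the flow return of the last
     * downstream push and fall back to it when a parse pass consumed
     * no data, so a not-linked result is not masked by _OK. */
    static GstFlowReturn last_push_ret = GST_FLOW_OK;

    static GstFlowReturn
    push_parsed_frame (GstPad *srcpad, GstBuffer *buffer)
    {
      last_push_ret = gst_pad_push (srcpad, buffer);
      return last_push_ret;
    }

    static GstFlowReturn
    combine_parse_result (GstFlowReturn subclass_ret, gsize flushed,
        gsize skipped)
    {
      /* Nothing was flushed or skipped: the data was only queued, so
       * propagate the stored push result (e.g. GST_FLOW_NOT_LINKED)
       * instead of the subclass's _OK. */
      if (subclass_ret == GST_FLOW_OK && flushed == 0 && skipped == 0)
        return last_push_ret;

      return subclass_ret;
    }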
Currently the scan uses the Boyer-Moore method and its performance is good,
but it can be optimized from an implementation point of view.
The original scan code is implemented with a byte array and index-based access.
In _scan_for_start_code(), the index increases from start to end and the
base address of the byte array is used to compute the return value.
In this case, index-based access can be replaced by pointer access, which
improves performance by removing index-related operations.
Performance is improved by approximately 8% on ARM-based embedded devices.
Although it seems trivial, it can affect the overall performance because the
_scan_for_start_code() function is called very often when H.264/H.265 video is
played.
In addition, the technique applies to all architectures and is also good for
readability and maintainability.
https://bugzilla.gnome.org/show_bug.cgi?id=731442
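A simplified sketch of the idea (not the actual parser code, and only
approximating the Boyer-Moore style skips): the scan walks a pointer
through the buffer and returns the matching address directly, avoiding
per-access index arithmetic.

    #include <glib.h>

    /* Illustrative only: find a 00 00 01 start code using pointer
     * access instead of indexing into the byte array. */
    static const guint8 *
    scan_for_start_code (const guint8 *data, gsize size)
    {
      const guint8 *p = data;
      const guint8 *end;

      if (size < 3)
        return NULL;
      end = data + size - 2;

      while (p < end) {
        if (p[2] > 1)
          p += 3;           /* no start code can begin at p, p+1 or p+2 */
        else if (p[1] != 0)
          p += 2;
        else if (p[0] != 0 || p[2] != 1)
          p++;
        else
          return p;         /* found 00 00 01, return its address */
      }
      return NULL;
    }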
Adds a utility struct that is capable of storing and aggregating flow returns
associated with pads.
This way all demuxers will have a standard function to use and get the
same expected results.
Includes tests.
https://bugzilla.gnome.org/show_bug.cgi?id=709224
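A rough usage sketch, assuming the helper looks like the GstFlowCombiner
API (gst_flow_combiner_new/add_pad/update_flow); the surrounding demuxer
code is a placeholder.

    #include <gst/gst.h>
    #include <gst/base/gstflowcombiner.h>

    /* Combine per-pad push results so that e.g. NOT_LINKED only becomes
     * the overall return when all source pads are unlinked. */
    static GstFlowCombiner *combiner;

    static void
    demux_add_stream (GstPad *srcpad)
    {
      if (combiner == NULL)
        combiner = gst_flow_combiner_new ();
      gst_flow_combiner_add_pad (combiner, srcpad);
    }

    static GstFlowReturn
    demux_push (GstPad *srcpad, GstBuffer *buf)
    {
      GstFlowReturn ret = gst_pad_push (srcpad, buf);

      /* Aggregate with the last result stored for every other pad. */
      return gst_flow_combiner_update_flow (combiner, ret);
    }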
Buffer pool set_config() may return FALSE if the requested configuration
needed small changes. Re-get the config and try setting it again (validating
the changes first). This ensures we have a configured pool if possible.
https://bugzilla.gnome.org/show_bug.cgi?id=727916
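A sketch of the retry pattern, assuming a generic element negotiating a
pool; the function name and the validation policy are placeholders.

    #include <gst/gst.h>

    static gboolean
    configure_pool (GstBufferPool *pool, GstCaps *caps, guint size,
        guint min_buffers, guint max_buffers)
    {
      GstStructure *config = gst_buffer_pool_get_config (pool);

      gst_buffer_pool_config_set_params (config, caps, size, min_buffers,
          max_buffers);

      if (gst_buffer_pool_set_config (pool, config))
        return TRUE;

      /* The pool needed small changes; re-get its adjusted config. */
      config = gst_buffer_pool_get_config (pool);
      gst_buffer_pool_config_get_params (config, NULL, &size, &min_buffers,
          &max_buffers);

      /* A real element would validate the adjusted values here (e.g.
       * check that the buffer size is still large enough) before
       * accepting them and trying again. */
      return gst_buffer_pool_set_config (pool, config);
    }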
exit() will call atexit handlers, which may try to
clean up things or wait for things to get cleaned up,
which we don't want or need. We just want to stop
and let the parent know about the failure as quickly
as possible in case fork() is used.
Fixes timeouts on assert failures in checks where
an exit handler waits for things to stop, but they
don't stop because they haven't been shut down,
and they haven't been shut down because there's no
simple way to do so on failures.
http://sourceforge.net/p/check/patches/50/
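A minimal illustration of the idea, assuming a forked test child that
has just detected a failure; the reporting step is a placeholder.

    #include <unistd.h>

    /* In the forked child: report the failure to the parent and stop
     * immediately.  _exit() skips the atexit() handlers that exit()
     * would run, so nothing can block waiting for cleanup. */
    static void
    child_fail_and_stop (int status)
    {
      /* ... placeholder: write the failure message to the pipe shared
       * with the parent process ... */
      _exit (status);
    }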
Currently, if prepare() takes too much time, we skip the call to render().
The side effect of this is that we end up starving render(). The solution
in this patch is to always render frames that are on time before prepare() is
executed. This will maximize the number of frames we display and smoothly
degrade the rendering performance.
https://bugzilla.gnome.org/show_bug.cgi?id=729335
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
Keep it simple. Likely also makes things easier for bindings,
and efficiency clearly has not been a consideration given how
the existing code handled these lists.
In order to be deterministic, multiple waiting GstClockIDs need to be
released at the same time, or else one can get into the situation where
the one released first adds itself back again before the next
waiting one is released.
Test added for new API and old tests rewritten to comply.
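A hedged sketch of how a test might use this, assuming the GstTestClock
helpers gst_test_clock_wait_for_multiple_pending_ids() and
gst_test_clock_process_id_list():

    #include <gst/check/gsttestclock.h>

    /* Release all pending clock IDs in one go so no single waiter can
     * re-register itself before the others have been woken up. */
    static void
    release_all_pending (GstTestClock *test_clock, guint expected_waiters)
    {
      GList *pending = NULL;

      /* Block until the expected number of waiters are registered ... */
      gst_test_clock_wait_for_multiple_pending_ids (test_clock,
          expected_waiters, &pending);

      /* ... then release them all at once, deterministically. */
      gst_test_clock_process_id_list (test_clock, pending);

      g_list_free_full (pending, (GDestroyNotify) gst_clock_id_unref);
    }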
This reverts commit b9313afc75.
This should be fixed in upstream libcheck instead. We want
to keep diff of our local copy to upstream libcheck
to a minimum.
We iterate the current discont group backwards and push each GOP forwards,
starting from the last one. However, if the first buffer in the current
discont group is a keyframe, we will keep it around until next time,
which is far from ideal. Just push it.
This prevents situations where a first branch would get seeked and
receive a buffer before all branches got seeked, and thus collected
would get called based on EOS from the previous segment.
As a consequence, during the process of seeking, don't decrease
the eospads number when a FLUSH_STOP is received.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=724571
GST_CHECKS can be simply "test*" to run all tests (including those that are
marked broken). Update the sparse comments a bit to explain how this works.
Don't set the size to -1 in automatic_eos mode (which also updates the
duration to -1). We only want automatic_eos mode to influence the maxsize
calculations without any side effects.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=724564
This defaults to TRUE and if it is set to FALSE it is the subclass's
responsibility to return GST_FLOW_EOS from the create() vmethod once
the stream is done.
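A sketch of the subclass side with automatic EOS disabled via
gst_base_src_set_automatic_eos (src, FALSE); my_src_is_finished() and
my_src_produce_buffer() are hypothetical helpers.

    #include <gst/base/gstbasesrc.h>

    static gboolean my_src_is_finished (GstBaseSrc *src);       /* placeholder */
    static GstBuffer *my_src_produce_buffer (GstBaseSrc *src);  /* placeholder */

    static GstFlowReturn
    my_src_create (GstBaseSrc *src, guint64 offset, guint size,
        GstBuffer **buf)
    {
      if (my_src_is_finished (src))
        return GST_FLOW_EOS;    /* basesrc will then push the EOS event */

      *buf = my_src_produce_buffer (src);
      return GST_FLOW_OK;
    }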
Store the eos event seqnum and use it when creating the
new eos event to be pushed downstream. To know if the eos
was caused by the eos events received on send_event, a
'forced_eos' flag is used so that the correct seqnum is set
on the event pushed downstream.
Useful if the application wants to check if the EOS message
was generated from its own pushed EOS or from another source
(stream really finished).
Also adds a test for this.
https://bugzilla.gnome.org/show_bug.cgi?id=722791
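A sketch of the seqnum handling described above; the element state (the
stored event and the forced_eos flag) is simplified here.

    #include <gst/gst.h>

    /* Reuse the seqnum of the EOS received via send_event() so the
     * application can match the EOS message on the bus with the event
     * it pushed. */
    static gboolean
    push_final_eos (GstPad *srcpad, GstEvent *received_eos,
        gboolean forced_eos)
    {
      GstEvent *eos = gst_event_new_eos ();

      if (forced_eos && received_eos != NULL)
        gst_event_set_seqnum (eos, gst_event_get_seqnum (received_eos));

      return gst_pad_push_event (srcpad, eos);
    }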
If on passthrough during reverse playback, do not accumulate buffers, as
baseparse will never check the DISCONT flag to push those buffers.
So just push buffers downstream as if it were forward playback.
https://bugzilla.gnome.org/show_bug.cgi?id=721941
TIME segments are being ignored and a standard initialized
segment is used instead. This causes issues such as not properly detecting
reverse playback or not clipping output based on the segment.
This seems to be a regression from one of the GstSegment/GstEvent
redesigns during the 0.10 -> 1.0 transition.
It wasn't required; instead, baseparse was using it to check the media
caps to identify whether it was handling audio or video.
The pending_segment was removed and replaced by a checked_media
boolean for more accurate naming.
https://bugzilla.gnome.org/show_bug.cgi?id=721350
A GAP event is handled as an empty buffer by sinks and they expect
to receive start-up events (like a segment) before GAP events.
This is especially important if there is a GAP at the beginning of
a stream (before any buffers), so that the segment event can be
pushed downstream before the GAP.
https://bugzilla.gnome.org/show_bug.cgi?id=721350
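A sketch of the required ordering, using a plain TIME segment purely
for illustration.

    #include <gst/gst.h>

    /* Make sure a segment event reaches the sink before the GAP event
     * so the sink can interpret the gap's timestamps. */
    static void
    push_initial_gap (GstPad *srcpad, GstClockTime start,
        GstClockTime duration)
    {
      GstSegment segment;

      gst_segment_init (&segment, GST_FORMAT_TIME);
      gst_pad_push_event (srcpad, gst_event_new_segment (&segment));

      /* The sink now has a segment and can treat the gap like an empty
       * buffer covering [start, start + duration). */
      gst_pad_push_event (srcpad, gst_event_new_gap (start, duration));
    }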
* fix typo GstBufferFlag -> GstBufferFlags
* fix typo GstFeatures -> GstCapsFeatures
* fix typo GstAllocatorParams -> GstAllocationParams
* fix typo GstContrlSources -> GstControlSource
* do not refer to gstcheck as an object
* make mentions of gtk_init() and tcase_set_timeout() not be cross-references
* gst_element_get_pad() renamed gst_element_get_static_pad()
* gst_clock_id_wait_async_full() renamed gst_clock_id_wait_async()
* _drop_element() is really gst_queue_array_drop_element()
* gst_pad_accept_caps() was removed, do not refer to it
* separate GST_META_TAG_MEMORY_STR declaration from description
* do not describe removed gst_collect_pads_collect()
* correctly link to GstElementClass' virtual set_context()
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=719614
Fix a typo in a doc string - the property is round-trip-limit, not
roundtrip-limit.
Remove a bogus GST_WARNING that can print an uninitialised variable
and is redundant anyway.
Sometimes, packets might take a very long time to return. Such packets
usually are way too late and destabilize the regression with their
obsolete data. On Wi-Fi, round-trips of over 7 seconds have been observed.
If the limit is set to a nonzero value, packets with a round-trip period
larger than the limit are ignored.
Signed-off-by: Carlos Rafael Giani <dv@pseudoterminal.org>
https://bugzilla.gnome.org/show_bug.cgi?id=712385
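A sketch of the check described above; the real code lives in the
network client clock and the names here are illustrative.

    #include <gst/gst.h>

    /* A round-trip-limit of 0 disables the check; otherwise packets
     * whose round-trip time exceeds the limit are ignored before they
     * reach the clock slaving code. */
    static gboolean
    observation_usable (GstClockTime local_send, GstClockTime local_recv,
        GstClockTime round_trip_limit)
    {
      GstClockTime round_trip = local_recv - local_send;

      if (round_trip_limit > 0 && round_trip > round_trip_limit)
        return FALSE;   /* way too late, would destabilize the regression */

      return TRUE;
    }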
Keep a rolling average of the round trip time for network clock
observations, favouring shorter round trips as being more accurate.
Don't pass any clock observation to the clock slaving if it has a
round-trip time greater than 2 times the average.
Actual shifts in the network topology will be noticed after some
time, as the rolling average incorporates the new round trip times.
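A sketch of the filtering and averaging, with the exact weights as an
assumption for illustration only.

    #include <gst/gst.h>

    /* Keep a rolling average of the round-trip time, let shorter round
     * trips pull the average faster, and skip observations whose round
     * trip is more than twice the current average. */
    static gboolean
    update_rtt_average (GstClockTime *avg_rtt, GstClockTime rtt)
    {
      if (*avg_rtt == 0)
        *avg_rtt = rtt;                         /* first observation */
      else if (rtt > 2 * *avg_rtt)
        return FALSE;                           /* ignore this observation */
      else if (rtt < *avg_rtt)
        *avg_rtt = (3 * *avg_rtt + rtt) / 4;    /* short RTT: adapt faster */
      else
        *avg_rtt = (15 * *avg_rtt + rtt) / 16;  /* long RTT: adapt slowly */

      return TRUE;
    }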