Adds API to get or peek a sub-reader of a certain size from
a given byte reader. This is useful when parsing nested chunks:
one can easily get a byte reader for a sub-chunk and be sure
never to read beyond the sub-chunk boundary.
API: gst_byte_reader_peek_sub_reader()
API: gst_byte_reader_get_sub_reader()
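A minimal sketch of parsing a nested chunk with the new API (the chunk
layout and the parse_chunk() helper are hypothetical):
  GstByteReader reader, chunk;
  guint32 chunk_size;

  gst_byte_reader_init (&reader, data, size);
  /* hypothetical layout: 32-bit big-endian size followed by the payload */
  if (gst_byte_reader_get_uint32_be (&reader, &chunk_size) &&
      gst_byte_reader_get_sub_reader (&reader, &chunk, chunk_size)) {
    /* reads on 'chunk' can never go past chunk_size bytes, while
     * 'reader' has already been advanced past the whole sub-chunk */
    parse_chunk (&chunk);
  }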
Adds gst_byte_reader_masked_scan_uint32_peek so that, just like
GstAdapter, there is both a _peek and a non-_peek version.
Upgrades the tests to check that the value returned by the _peek
version is correct.
API: gst_byte_reader_masked_scan_uint32_peek
https://bugzilla.gnome.org/show_bug.cgi?id=728356
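A small sketch of the _peek variant, here scanning for an MPEG start
code (the mask and pattern are just an example):
  guint32 value;
  guint offset;

  /* scan the remaining data for a 0x000001xx start code and also
   * retrieve the matched 32-bit word in 'value' */
  offset = gst_byte_reader_masked_scan_uint32_peek (&reader,
      0xffffff00, 0x00000100, 0,
      gst_byte_reader_get_remaining (&reader), &value);
  if (offset != (guint) -1) {
    /* 'value' holds the matched word, e.g. the start code type */
  }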
Adds a utility struct that is capable of storing and aggregating flow returns
associated with pads.
This way all demuxers can use a standard function and get the
same expected results.
Includes tests.
https://bugzilla.gnome.org/show_bug.cgi?id=709224
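A rough sketch, assuming this refers to the GstFlowCombiner API in
libgstbase (audio_pad, video_pad and buffer are assumed to exist):
  GstFlowCombiner *combiner = gst_flow_combiner_new ();
  GstFlowReturn ret;

  gst_flow_combiner_add_pad (combiner, audio_pad);
  gst_flow_combiner_add_pad (combiner, video_pad);

  /* fold the result of a push on one pad into the combined state;
   * e.g. NOT_LINKED on one pad is not fatal while another pad is
   * still linked */
  ret = gst_pad_push (video_pad, buffer);
  ret = gst_flow_combiner_update_flow (combiner, ret);

  gst_flow_combiner_free (combiner);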
Stores the last result of a gst_pad_push or a pull on the GstPad and provides
a getter and a macro to access this field.
Whenever the pad is inactive, it is set to FLUSHING.
API: gst_pad_get_last_flow_return
https://bugzilla.gnome.org/show_bug.cgi?id=709224
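For instance (the logging is just illustrative):
  /* check how the most recent data flow on this pad ended up */
  GstFlowReturn last = gst_pad_get_last_flow_return (pad);

  if (last == GST_FLOW_NOT_LINKED)
    GST_DEBUG_OBJECT (pad, "downstream is not linked");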
Currently there is no way to unlock a buffer pool other than
stopping it. This may have the effect of freeing all the buffers,
which is too heavy for a seek. This patch adds a method to enter and
leave the flushing state. As a convenience, flush_start/flush_stop
virtual methods are added so pool implementations can also unblock their
own internal poll atomically with the rest of the pool. This is fully
backward compatible with doing stop/start to actually flush the pool
(as is done in GstBaseSrc).
https://bugzilla.gnome.org/show_bug.cgi?id=727611
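A minimal sketch, assuming the new method is gst_buffer_pool_set_flushing():
  /* unblock any pending gst_buffer_pool_acquire_buffer() around a
   * flushing seek, without deallocating the pool's buffers */
  gst_buffer_pool_set_flushing (pool, TRUE);
  /* ... perform the flush ... */
  gst_buffer_pool_set_flushing (pool, FALSE);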
When we call gst_buffer_pool_set_config() the pool may return FALSE and
slightly change the parameters. This helper is useful to do the minimal
required validation before accepting the modified configuration.
https://bugzilla.gnome.org/show_bug.cgi?id=727916
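A sketch, assuming the helper is gst_buffer_pool_config_validate_params()
and that caps, size and min_bufs hold the caller's own requirements:
  if (!gst_buffer_pool_set_config (pool, config)) {
    /* the pool tweaked the config; check whether it still meets
     * our minimum requirements before giving up */
    config = gst_buffer_pool_get_config (pool);
    if (!gst_buffer_pool_config_validate_params (config, caps, size,
            min_bufs, 0)) {
      gst_structure_free (config);
      return FALSE;
    }
    gst_structure_free (config);
  }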
Events passing through #GstPads that have a running time
offset set via gst_pad_set_offset() will get their offset
adjusted according to the pad's offset.
If the event contains any information related to the
running time, this information needs to be adjusted
by this offset before use.
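A minimal illustration of setting such an offset on a pad:
  /* shift everything flowing through this pad by one second; serialized
   * events passing through will have their timing adjusted accordingly */
  gst_pad_set_offset (pad, GST_SECOND);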
This defaults to TRUE and if it is set to FALSE it is the subclass's
responsibility to return GST_FLOW_EOS from the create() vmethod once
the stream is done.
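A rough sketch, assuming this refers to gst_base_src_set_automatic_eos()
on a GstBaseSrc subclass:
  /* disable the automatic EOS handling ... */
  gst_base_src_set_automatic_eos (GST_BASE_SRC (self), FALSE);

  /* ... and later, from the subclass' create() vmethod, end the
   * stream explicitly when it is done */
  return GST_FLOW_EOS;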
Adds a variant of the _push function that doesn't check the queue limits
before adding the new item. It is useful when pushing an item to the
queue must not block the thread.
One particular scenario is when the queue is used to serialize buffers
and events that are going to be pushed from another thread. The
dataqueue should have a limit on the number of buffers to be stored to
avoid large memory consumption, but events can be considered to have
negligible impact on memory compared to buffers. So it is useful to be
able to push items that contain events into the queue even when the
queue is already full; inserting an item of negligible size shouldn't
matter.
This scenario happens in adaptive elements (dashdemux / mssdemux), as
there is a single download thread fetching buffers and putting them into
the dataqueues for the streams. This same download thread can also
generate events in some situations, such as caps changes, EOS or
internal control events. There can be a deadlock at preroll if the first
buffer fetched is large enough to fill the dataqueue and the next
iteration of the download thread decides to push an event to this same
dataqueue before fetching buffers for the other streams: if this push
blocks, the pipeline will be stuck in preroll as no more buffers will be
downloaded.
It is somewhat common practice in DASH streams to have a single
very large buffer for audio and one for video, so this will always
happen, as the download thread has to push an EOS right after
fetching the first buffer for any stream.
API: gst_data_queue_push_force
https://bugzilla.gnome.org/show_bug.cgi?id=705694
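A minimal sketch of forcing an EOS event into a full queue (the item
free helper is hypothetical):
  GstDataQueueItem *item = g_slice_new0 (GstDataQueueItem);

  item->object = GST_MINI_OBJECT (gst_event_new_eos ());
  item->visible = FALSE;  /* events don't count towards the queue limits */
  item->destroy = (GDestroyNotify) my_data_queue_item_free;  /* hypothetical */

  /* never blocks on the queue limits; only fails if the queue is flushing */
  if (!gst_data_queue_push_force (queue, item))
    item->destroy (item);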
All streams that have the same group id are supposed to be played
together, i.e. all streams inside a container file should have the
same group id but different stream ids. The group id should change
each time the stream is started, resulting in different group ids
each time a file is played, for example.
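For example, when creating stream-start events for related streams:
  /* all streams of this playback run share one group id, while each
   * stream keeps its own stream id */
  guint group_id = gst_util_group_id_next ();

  event = gst_event_new_stream_start (stream_id);
  gst_event_set_group_id (event, group_id);
  gst_pad_push_event (srcpad, event);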
API: gst_value_array_append_and_take_value
API: gst_value_list_append_and_take_value
We were already using this internally; this makes it public for code
which frequently appends values that are expensive to copy (like
structures, arrays, caps, ...).
Avoids copies of the values for users. The passed GValue will also
be 0-memset'ed for re-use.
New users can replace this kind of code:
gst_value_*_append_value(mycontainer, &myvalue);
g_value_unset(&myvalue);
by:
gst_value_*_append_and_take_value(mycontainer, &myvalue);
https://bugzilla.gnome.org/show_bug.cgi?id=701632
This function works just like gst_data_queue_pop, but it doesn't
remove the object from the queue.
Useful when inspecting multiple GstDataQueues to decide which one
to pop the next element from.
API: gst_data_queue_peek
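A minimal sketch (queue refers to an existing GstDataQueue):
  GstDataQueueItem *item;

  /* look at the head of the queue without dequeuing it */
  if (gst_data_queue_peek (queue, &item)) {
    if (GST_IS_EVENT (item->object)) {
      /* e.g. decide to service this queue first */
      gst_data_queue_pop (queue, &item);
    }
  }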
Source elements with limited bandwidth capabilities and supporting
buffering for downstream elements should set this flag when answering
a scheduling query. This is useful for the on-disk buffering scenario
of uridecodebin to avoid checking the URI protocol against a list of
hardcoded protocols.
https://bugzilla.gnome.org/show_bug.cgi?id=693484
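A sketch of a source answering a scheduling query, assuming the new flag
is GST_SCHEDULING_FLAG_BANDWIDTH_LIMITED (the surrounding query handler
is implied):
    case GST_QUERY_SCHEDULING:
      gst_query_set_scheduling (query,
          GST_SCHEDULING_FLAG_SEEKABLE | GST_SCHEDULING_FLAG_BANDWIDTH_LIMITED,
          1, -1, 0);
      gst_query_add_scheduling_mode (query, GST_PAD_MODE_PULL);
      gst_query_add_scheduling_mode (query, GST_PAD_MODE_PUSH);
      res = TRUE;
      break;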
These are meant to specify features in caps that are required
for a specific structure, for example a specific memory type
or meta.
Semantically they could be thought of as an extension of the media
type name of the structures and are handled exactly like that.
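For example, caps that only differ in their features do not intersect
(using the memory:DMABuf feature from gst-plugins-base as an illustration):
  GstCaps *sysmem = gst_caps_from_string ("video/x-raw, format=NV12");
  GstCaps *dmabuf = gst_caps_from_string ("video/x-raw(memory:DMABuf), format=NV12");

  /* same structure name and fields, but the different caps feature
   * makes them incompatible */
  g_assert (!gst_caps_can_intersect (sysmem, dmabuf));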
Elements should override GstElement::set_context() and also call
gst_element_set_context() to keep this context up-to-date with
the very latest context they internally use.
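A rough sketch of an element publishing the context it just created
internally (the context type and field are hypothetical):
  GstContext *context = gst_context_new ("my.context.type", TRUE);

  gst_structure_set (gst_context_writable_structure (context),
      "handle", G_TYPE_POINTER, handle, NULL);
  /* store it on the element so it reflects what is actually in use */
  gst_element_set_context (GST_ELEMENT (self), context);
  gst_context_unref (context);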
The duration should be re-queried via a query using the
normal path; we don't want applications to use the value
from the message itself, since it might not match what a
duration query done from the sink upstream would yield.
Also disables duration caching in GstBin. It should be
added back again at some point.
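For instance, from an application's bus handler:
    case GST_MESSAGE_DURATION_CHANGED: {
      gint64 duration;

      /* re-query instead of trusting any value carried by the message */
      if (gst_element_query_duration (pipeline, GST_FORMAT_TIME, &duration))
        g_print ("duration: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (duration));
      break;
    }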