docs: go over design docs and fix things

Remove bufferlist part, it's merged with part-buffer.txt
Wim Taymans 2011-06-06 16:11:31 +02:00
parent ba8c8bb2c8
commit f48e7920da
21 changed files with 278 additions and 585 deletions

View file

@ -22,13 +22,14 @@ API/ABI
- rethink how we handle dynamic replugging wrt segments and other events that
already got pushed and need to be pushed again. Might need GstFlowReturn from
gst_pad_push_event().
gst_pad_push_event(). FIXED in 0.11 with sticky events.
- Optimize negotiation. We currently do a get_caps() call when we link pads,
which could potentially generate a huge list of caps and all their
combinations; we need to avoid generating these huge lists by generating them
incrementally when needed. We can do this with a gst_pad_iterate_caps() call.
We also need to incrementally return intersections etc, for this.
We also need to incrementally return intersections etc., for this. Somewhat
FIXED in 0.11 with a filter on getcaps functions.
- Elements in a bin have no clue about the final state of the parent element
since the bin sets the target state on its children in small steps. This
@ -50,6 +51,7 @@ API/ABI
and another a push, the push might be busy while the block callback is done.
* maybe this name is overloaded. We need to look at some more use cases before
trying to fix this.
FIXED in 0.11 with BLOCKING probes. Not everything is implemented yet, though.
- rethink the way we do upstream renegotiation. Currently it's done with
pad_alloc but this has many issues such as only being able to suggest 1 format
@ -57,6 +59,7 @@ API/ABI
as capsfilter only know about the format, not the size). We would ideally like
to let upstream renegotiate a new format just like it did when it started.
This could, for example, easily be triggered with a RENEGOTIATE event.
FIXED in 0.11 with RECONFIGURE events.
- Remove the result format value in queries.
@ -73,8 +76,6 @@ IMPLEMENTATION
- implement BUFFERSIZE.
- implement pad_block with probes? see above.
DESIGN
~~~~~~

View file

@ -156,20 +156,3 @@ as well, so that there is a generic method for both PAUSED and PLAYING.
The same flow works as well for any chain of multiple elements and might
be implemented with a helper function in the future.
Issues
~~~~~~
When an EOS event has passed a pad and the pad is set to blocked, the block will
never happen because no data is going to flow anymore. One possibility is to
keep track of the pad's EOS state and make the block succeed immediately. This is
not yet implemented.
When dynamically reconnecting pads, some events (like NEWSEGMENT, EOS,
TAGS, ...) are not yet retransmitted to the newly connected element. It's
unclear if this can be done by core automatically by caching those events and
resending them on a relink. It might also be possible that this needs a
GstFlowReturn value from the event function, in which case the implementation
must be delayed for after 0.11, when we can break API/ABI.

View file

@ -147,4 +147,8 @@ A typical udpsink will then use something like sendmsg to send the memory region
on the network inside one UDP packet. This will further avoid having to memcpy
data into contiguous memory.
Using bufferlists, the complete array of output buffers can be pushed in one
operation to the peer element.

View file

@ -1,96 +0,0 @@
Buffer Lists
------------
GstBuffer provides a datastructure to manage:
- a continuous region of memory
- functions to copy/free the memory
- metadata associated with that memory such as timestamps and caps.
It is the primary means of transferring data between pads and elements.
GstBufferList expands on GstBuffer to allow multiple GstBuffers (conceptually
organized in a list) to be treated as multiple groups of GstBuffers. This allows
for the following extra functionality:
- A logical GstBuffer (called a group) can consist of disjoint memory regions, each with
their own copy/free and metadata. Logically the group should be treated as
one single GstBuffer.
- Multiple groups can be put into one bufferlist. This allows for a single
method call to pass multiple (logical) buffers downstream.
Use cases
~~~~~~~~~
A typical use case for multimedia pipelines is to append or remove 'headers'
from packets of data.
Generating RTP packets from h264 video
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We receive as input a GstBuffer with an encoded h264 image and we need to
create RTP packets containing this h264 data as the payload. We typically need
to fragment the h264 data into multiple packets, each with their own RTP and
payload specific header.
+-------+-------+---------------------------+--------+
input H264 buffer: | NALU1 | NALU2 | ..... | NALUx |
+-------+-------+---------------------------+--------+
|
V
+-+ +-------+ +-+ +-------+ +-+ +-------+
output bufferlist: | | | NALU1 | | | | NALU2 | .... | | | NALUx |
+-+ +-------+ +-+ +-------+ +-+ +-------+
: : : :
\-----------/ \-----------/
group 1 group 2
The output bufferlist consists of x groups, each consisting of an RTP payload header
and a subbuffer of the original input H264 buffer. Since the rtp headers and
the h264 data don't need to be contiguous in memory, we can avoid having to memcpy the
h264 data into the rtp packets.
Since we can generate a bufferlist with multiple groups, we can push all the
RTP packets for the input data to the next element in one operation.
A typical udpsink will then use something like sendmsg to send the groups on
the network inside one UDP packet. This will further avoid having to memcpy
data into contiguous memory.
API
~~~
The GstBufferList is an opaque data structure and is operated on using an
iterator. It derives from GstMiniObject so that it has basic refcounting and
copy/free functions.
The bufferlist is writable when its refcount is 1 and it's not marked as
readonly. A writable bufferlist means that elements can be added and removed
from the list but it does not mean that the actual buffers in the list are
writable.
To modify the data in the buffers of the bufferlist, both the list and the
buffer must be writable.
Methods exist for navigating the groups in the list and the buffers inside a
group.
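As an illustration, a minimal sketch of building and pushing such a list with the
0.10-era iterator API (exact signatures approximate; rtp_header, h264, offset, len
and srcpad stand in for values owned by the payloader):

  /* one group per RTP packet: an RTP header buffer followed by a
   * subbuffer of the input h264 buffer */
  GstBufferList *list = gst_buffer_list_new ();
  GstBufferListIterator *it = gst_buffer_list_iterate (list);

  gst_buffer_list_iterator_add_group (it);
  gst_buffer_list_iterator_add (it, rtp_header);
  gst_buffer_list_iterator_add (it, gst_buffer_create_sub (h264, offset, len));

  gst_buffer_list_iterator_free (it);

  /* all groups are pushed to the peer pad in one operation */
  ret = gst_pad_push_list (srcpad, list);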
Metadata
~~~~~~~~
Each of the buffers inside the bufferlist can have metadata associated with it.
The metadata of the bufferlist is always the metadata of the first buffer of the
first group in the bufferlist. This means that:
- Before pushing the list to a pad, negotiation happens with (only) the caps of
the first buffer in the list. Caps of other buffers are ignored.
- synchronisation happens on the timestamp of the first buffer in the list.
This allows for efficient (re)timestamping and re-typing (caps) of a group of
buffers without having to modify each buffer's metadata.

View file

@ -12,9 +12,6 @@ Caps are exposed on the element pads using the _get_caps() pad function.
This function describes the possible types that the pad can handle or
produce (see part-pads.txt and part-negotiation.txt).
Caps are also attached to buffers to describe the content of the data
pointed to by the buffer.
Various methods exist to work with the media types such as subtracting
or intersecting.
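For example, a minimal sketch (the caps strings are only for illustration):

  GstCaps *a = gst_caps_from_string ("video/x-raw, width=(int)[ 16, 4096 ]");
  GstCaps *b = gst_caps_from_string ("video/x-raw, width=(int)[ 320, 640 ]");

  /* intersection: the formats that both caps have in common */
  GstCaps *inter = gst_caps_intersect (a, b);

  /* subtraction: everything in a that is not covered by b */
  GstCaps *diff = gst_caps_subtract (a, b);

  gst_caps_unref (inter);
  gst_caps_unref (diff);
  gst_caps_unref (b);
  gst_caps_unref (a);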

View file

@ -99,8 +99,7 @@ Negotiation
Typical (re)negotiation of the transform element in push mode always goes from
sink to src, which triggers the following sequence:
- the sinkpad receives a buffer with new caps, this triggers the setcaps
function on the sinkpad before handing the buffer to transform.
- the sinkpad receives a new caps event.
- the transform function figures out what it can convert these caps to.
- try to see if we can configure the caps unmodified on the peer. We need to
do this because we prefer to not do anything.
@ -111,10 +110,10 @@ sink to src, this means triggers the following sequence:
We call this downstream negotiation (DN) and it goes roughly like this:
sinkpad transform srcpad
setcaps() | | |
CAPS event | | |
------------>| find_transform() | |
|------------------->| |
| | setcaps() |
| | CAPS event |
| |--------------------->|
| <configure caps> <-| |
@ -148,8 +147,7 @@ assume nothing is going to write to the buffer and we don't enforce a writable
buffer for the transform_ip function, when present.
One common function that we need for the transform element is to find the best
transform from one format (src) to another (dest). Since the function is
bidirectional, we will use the src->dest negotiation. Some requirements of this
transform from one format (src) to another (dest). Some requirements of this
function are:
- has a fixed src caps
@ -198,18 +196,14 @@ state. We can identify these steady states:
- in-place: buffers are modified in-place, this means that the input
buffer is modified to produce a new output buffer. This requires the
input buffer to be writable. If the input buffer is not writable, a new
buffer has to be allocated with pad-alloc. (SCI)
buffer has to be allocated from the bufferpool. (SCI)
sinkpad transform srcpad
chain() | | |
------------>| handle_buffer() | |
|------------------->| |
| | [!writable] |
| | pad-alloc() |
| |--------------------->|
| [caps-changed] .-| [caps-changed] |
| <reconfigure> | | setcaps() |
| '>|--------------------->|
| | alloc buffer |
| .-| |
| <transform_ip> | | |
| '>| |
@ -217,18 +211,15 @@ state. We can identify these steady states:
| |--------------------->|
| | |
- copy transform: a new output buffer is allocated with pad-alloc and data
from the input buffer is transformed into the output buffer. (SCC)
- copy transform: a new output buffer is allocated from the bufferpool
and data from the input buffer is transformed into the output buffer.
(SCC)
sinkpad transform srcpad
chain() | | |
------------>| handle_buffer() | |
|------------------->| |
| | pad_alloc() |
| |--------------------->|
| [caps-changed] .-| [caps-changed] |
| <reconfigure> | | setcaps() |
| '>|--------------------->|
| | alloc buffer |
| .-| |
| <transform> | | |
| '>| |
@ -250,11 +241,7 @@ state. We can identify these steady states:
------------>| handle_buffer() | |
|------------------->| |
| | [!writable || !size] |
| | pad-alloc |
| |--------------------->|
| [caps-changed] .-| [caps-changed] |
| <reconfigure> | | setcaps() |
| '>|--------------------->|
| | alloc buffer |
| .-| |
| <transform_ip> | | |
| '>| |
@ -267,146 +254,41 @@ state. We can identify these steady states:
the same as the case with the same-caps negotiation. (DCC)
We can immediately observe that the copy transform states will need to
allocate a buffer from a downstream element using pad-alloc. When the transform
element is receiving a non-writable buffer in the in-place state, it will also
need to perform a pad-alloc. There is no reason why the passthrough state would
perform a pad-alloc. This is important because upstream re-negotiation can only
happen when the transform uses pad-alloc for all outgoing buffers.
allocate a new buffer from the bufferpool. When the transform element is
receiving a non-writable buffer in the in-place state, it will also
need to perform an allocation. There is no reason why the passthrough state would
perform an allocation.
This steady state changes when one of the following actions occur:
- the sink pad receives new caps, this triggers the above downstream
renegotiation process, see above for the flow.
- the src pad is instructed to produce new caps because of new caps from
pad-alloc, this only happens when the transform calls pad-alloc on the
srcpad in order to produce a new output buffer.
- the transform element wants to renegotiate (because of changed properties,
for example). This essentially clears the current steady state and
triggers the downstream and upstream renegotiation process.
Parallel to the downstream negotiation process there is an upstream negotiation
process. The handling and proxying of buffer-alloc is the most complex part of the
transform element. This upstream negotiation process has 3 cases: (UN)
- upstream calls the buffer-alloc function of the transform sinkpad and this
call is proxied downstream (UNP)
- upstream calls the buffer-alloc function of the transform sinkpad, the
transform does not proxy the call but returns a buffer itself (UNU)
- the transform calls the pad-alloc function downstream to allocate a new
output buffer (but not because of a proxied buffer-alloc) (UNA)
The case where the pad-alloc is called because an output buffer must be
generated in the chain function is handled above in the copy-transform and the
in-place transform when the input buffer is not writable or the input buffer
size is smaller than the output size.
We are left with the last case (proxy an incoming pad-alloc or not). We have 2
possibilities here:
- pad-alloc is called with the same caps as are currently being handled by
the transform on the sinkcaps. Note that this will only be true when the
transform element is completely negotiated because of data processing, see
above. When the element is not yet negotiated, we proceed with the case
where the sinkcaps are different from those in the buffer-alloc.
* If the transform is using copy-transform, we don't need to proxy because
we will call pad-alloc when generating an output buffer.
sinkpad transform srcpad
buffer_alloc() | | |
--------------->| | |
| | |
|-. [same caps && | |
return default | | copy-trans] | |
<------------|<' | |
| | |
* If the transform is using in-place and insize < outsize, we proxy
the pad-alloc with the srccaps. If the caps are unmodified, we proxy
the buffer after changing the caps and size.
sinkpad transform srcpad
buffer_alloc() | | |
--------------->| | |
| [same caps && | |
| in-place] | |
|------------------->| pad_alloc() |
| |--------------------->|
| [caps unchanged] | |
return | adjust_buffer | |
<----------------------------------| |
| | |
| | |
* If the transform is using in-place and insize < outsize, we proxy
the pad-alloc with the srccaps. If the caps are modified, find the best
transform from these new caps and return a buffer of this size/caps
instead.
sinkpad transform srcpad
buffer_alloc() | | |
--------------->| | |
| [same caps && | |
| in-place] | pad-alloc() |
|------------------------------------------>|
| [caps changed] .-| |
| find_transform() | | |
return | '>| |
<----------------------------------| |
| | |
* If the transform is using in-place and insize >= outsize, we cannot proxy
the pad-alloc because the resulting buffer would be too small to return
anyway.
* If the transform is using passthrough, we can proxy the pad-alloc to the
source pad. If the caps change, find the best transform and return a
buffer of those caps and size instead.
sinkpad transform srcpad
buffer_alloc() | | |
--------------->| [same caps && | |
| passtrough] | pad-alloc() |
|------------------------------------------>|
| [caps changed] .-| |
| find_transform() | | |
return | '>| |
<----------------------------------| |
| | |
- pad-alloc is called with different caps than are currently being handled by
the transform on the sinkcaps; we have to try to negotiate a new
configuration for the transform element.
* we perform the standard way to finding a best transform using
find_transform() and we call the pad-alloc function with these caps.
If we get different caps from pad-alloc, we find the best format to
transform these to and return those caps instead.
triggers the downstream and upstream renegotiation process. This situation
also happens when a RECONFIGURE event was received on the transform srcpad.
sinkpad transform srcpad
buffer_alloc() | | |
--------------->| | |
| find_transform() | |
|------------------->| |
| | pad-alloc() |
| |--------------------->|
return | [caps unchanged] | |
<----------------------------------| |
| | |
| [caps changed] .-| |
| find_transform() | | |
return | '>| |
<----------------------------------| |
| | |
Allocation
~~~~~~~~~~
In order to perform passthrough buffer-alloc or pad-alloc, we need to be able
to get the size of the output buffer after the transform.
After the transform element is configured with caps, a bufferpool needs to be
negotiated to perform the allocation of buffers. We have 2 cases:
For passthrough buffer-alloc, this is trivial: the input size equals the output
size.
- The element is operating in passthrough, so we don't need to allocate a buffer
in the transform element.
- The element is not operating in passthrough and needs to allocate an
output buffer.
For the copy transform or the in-place transform we need an additional function to
In case 1, we don't query and configure a pool. We let upstream decide if it
wants to use a bufferpool and then we will proxy the bufferpool from downstream
to upstream.
In case 2, we query and set a bufferpool on the srcpad that will be used for
doing the allocations.
In order to perform allocation, we need to be able to get the size of the
output buffer after the transform. We need additional functions to
retrieve the size. There are two functions:
- transform_size()
@ -424,62 +306,3 @@ retrieve the size. There are two functions:
For performance reasons, the mapping between caps and size is kept in a cache.
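A rough sketch of the size transform itself (not the literal basetransform code;
the unit sizes are assumed to come from get_unit_size() for the two caps):

  /* convert a size on one pad to the corresponding size on the other pad
   * by going through the unit size of each format */
  static gboolean
  transform_size (gsize insize, gsize in_unit, gsize out_unit, gsize *outsize)
  {
    if (in_unit == 0 || insize % in_unit != 0)
      return FALSE;   /* input is not a whole number of units */

    *outsize = (insize / in_unit) * out_unit;
    return TRUE;
  }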
Issues
~~~~~~
passthrough and in-place transforms (with writable buffers) never need to
perform a pad-alloc on the srcpad. This means that if upstream negotiation
happens, the transform element will never know about it.
The transform element will therefore keep track of the allocation pattern of
the peer elements. We can see the following cases:
- upstream peer calls buffer-alloc on the sinkpad of the transform. In some
cases (see above) this call gets proxied or not.
- upstream peer does never call buffer-alloc.
We will keep state about this allocation pattern and perform the following in
each case respectively:
- Upstream calls buffer-alloc: In passthrough and (some) in-place we proxy
this call onto the downstream element. If the caps are changed, we mark
a flag that we will require a new pad-alloc for the output of the next
output buffer.
- upstream peer does not call buffer-alloc: We always perform a pad-alloc
when processing buffers. We can further optimize by only looking at the
returned caps instead of doing a full, needless buffer copy.
Use cases
~~~~~~~~~
videotestsrc ! ximagesink
- resizing happens because videotestsrc performs pad-alloc.
videotestsrc peer-alloc=0 ! ximagesink
- resizing cannot happen because videotestsrc never performs pad-alloc.
videotestsrc ! videoscale ! ximagesink
- videoscale is initially configured in passthrough mode, pad-alloc from
videotestsrc is proxied through videoscale.
- pad-alloc will renegotiate a new size in videotestsrc.
videotestsrc peer-alloc=0 ! videoscale ! ximagesink
- videoscale is initially configured in passthrough mode.
- videoscale performs pad-alloc because no buffer-alloc is called on the
sinkpad
- resizing the videosink makes videoscale perform the scaling.
Problematic
~~~~~~~~~~~
filesrc location=~/media/moveyourfeet.mov ! decodebin !
ffmpegcolorspace ! videoscale ! ffmpegcolorspace ! ximagesink -v

View file

@ -12,14 +12,18 @@ Different types of events exist to implement various functionalities.
GST_EVENT_FLUSH_START: data is to be discarded
GST_EVENT_FLUSH_STOP: data is allowed again
GST_EVENT_EOS: no more data is to be expected on a pad.
GST_EVENT_NEWSEGMENT: A new group of buffers with common start time
GST_EVENT_CAPS: Format information about the following buffers
GST_EVENT_SEGMENT: Timing information for the following buffers
GST_EVENT_TAG: Stream metadata.
GST_EVENT_BUFFERSIZE: Buffer size requirements
GST_EVENT_SINK_MESSAGE: An event turned into a message by sinks
GST_EVENT_EOS: no more data is to be expected on a pad.
GST_EVENT_QOS: A notification of the quality of service of the stream
GST_EVENT_SEEK: A seek should be performed to a new position in the stream
GST_EVENT_NAVIGATION: A navigation event.
GST_EVENT_LATENCY: Configure the latency in a pipeline
GST_EVENT_STEP: Stepping event
GST_EVENT_RECONFIGURE: stream reconfigure event
* GST_EVENT_DRAIN: Play all data downstream before returning.
@ -36,20 +40,21 @@ gst_pad_push_event() function returns NOT_LINKED.
Note that the behaviour is not influenced by a flushing pad.
FLUSH_START and FLUSH_STOP events are dropped on blocked pads.
sink pads
---------
A gst_pad_send_event() on a sinkpad will check the new event against the
existing event. If they are different, the new event is stored as a pending
event. If the events are the same, nothing changes.
When the pad is flushing, the _send_event() function returns WRONG_STATE
immediately.
A gst_pad_send_event() on a sinkpad will check the new event against the
existing event. If they are different, the old event is replaced with the new
event and the event is marked as inactive. If the events are the same, nothing
changes.
The event function is then called for all inactive events. If the function
returns success, the event is marked active, else the event is removed and set
to NULL in the array.
The event function is then called for all pending events. If the function
returns success, the pending event is copied to the active events, else the
pending event is removed and the current active event is unchanged.
This ensures that the event function is never called for flushing pads and that
the sticky array only contains events for which the event function returned
@ -60,9 +65,8 @@ pad link
--------
When linking pads, all the sticky events from the srcpad are copied to the
array on the sinkpad. All the different events are marked inactive.
The inactive events will be sent to the event function of the sinkpad on the next
event or buffer.
pending array on the sinkpad. The pending events will be sent to the event
function of the sinkpad on the next event or buffer.
FLUSH_START/STOP
@ -96,11 +100,9 @@ For elements that use the pullrange function, they send both flush events to
the upstream pads in the same way to make sure that the pullrange function
unlocks and any pending buffers are cleared in the upstream elements.
A FLUSH_STOP event will also clear any configured synchronisation information
like NEWSEGMENT events. After a FLUSH_STOP, any element that performs
synchronisation to the clock will therefore need a NEWSEGMENT event (which makes
the running_time start from 0 again) and will therefore also need a new
base_time (see part-clocks.txt and part-synchronisation.txt).
A FLUSH_START may instruct the pipeline to distribute a new base_time to
elements so that the running_time is reset to 0.
(see part-clocks.txt and part-synchronisation.txt).
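For illustration, an element flushing its source pad could do something like this
(a sketch; srcpad stands for the element's source pad, and the reset_time argument
of FLUSH_STOP is from the API as it later stabilized):

  /* unblock downstream and discard all data in flight */
  gst_pad_push_event (srcpad, gst_event_new_flush_start ());

  /* ... perform the seek or reconfiguration ... */

  /* allow dataflow again; TRUE asks elements to reset their running_time */
  gst_pad_push_event (srcpad, gst_event_new_flush_stop (TRUE));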
EOS
@ -145,43 +147,38 @@ goes to PLAYING.
A FLUSH_STOP event on an element flushes the EOS state and all pending EOS messages.
NEWSEGMENT
~~~~~~~~~~
SEGMENT
~~~~~~~
A newsegment event is sent downstream by an element to indicate that the following
A segment event is sent downstream by an element to indicate that the following
group of buffers start and end at the specified positions. The newsegment event
also contains the playback speed and the applied rate of the stream.
Since the stream time is always set to 0 at start and after a seek, a 0
point for all next buffer's timestamps has to be propagated through the
pipeline using the NEWSEGMENT event.
pipeline using the SEGMENT event.
Before sending buffers, an element must send a NEWSEGMENT event. An element is
free to refuse buffers if they were not preceded by a NEWSEGMENT event.
Before sending buffers, an element must send a SEGMENT event. An element is
free to refuse buffers if they were not preceded by a SEGMENT event.
Elements that sync to the clock should store the NEWSEGMENT start and end values
Elements that sync to the clock should store the SEGMENT start and end values
and subtract the start value from the buffer timestamp before comparing
it against the stream time (see part-clocks.txt).
An element is allowed to send out buffers with the NEWSEGMENT start time already
An element is allowed to send out buffers with the SEGMENT start time already
subtracted from the timestamp. If it does so, it needs to send a corrected
NEWSEGMENT downstream, i.e., one with start time 0.
SEGMENT downstream, i.e., one with start time 0.
A NEWSEGMENT event should be generated as soon as possible in the pipeline and
A SEGMENT event should be generated as soon as possible in the pipeline and
is usually generated by a demuxer or source. The event is generated before
pushing the first buffer and after a seek, right before pushing the new buffer.
The NEWSEGMENT event should be sent from the streaming thread and should be
The SEGMENT event should be sent from the streaming thread and should be
serialized with the buffers.
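A source or demuxer producing such an event could do roughly the following (a
sketch using the GstSegment helpers as they later stabilized; srcpad stands for the
element's source pad and the start/stop values are made up):

  GstSegment segment;

  gst_segment_init (&segment, GST_FORMAT_TIME);
  segment.start = 0;
  segment.stop  = 10 * GST_SECOND;
  segment.time  = 0;   /* stream time of the first buffer */

  /* sent from the streaming thread, before the first buffer */
  gst_pad_push_event (srcpad, gst_event_new_segment (&segment));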
Buffers should be clipped within the range indicated by the newsegment event
start and stop values. Sinks must drop buffers with timestamps out of the
indicated newsegment range.
If a newsegment arrives at an element not preceded by a flush event, the
streamtime of the pipeline will not be reset to 0 so any element that syncs
to the clock must use the stop times of the previous newsegment events to
make the buffer timestamps increasing (part-segments.txt).
indicated segment range.
TAG

View file

@ -39,7 +39,7 @@ These pads are stored in a single GList within the Element. Several counters
are kept in order to allow quicker determination of the type and properties of
a given Element.
Pads may be added to an element with _add_pad. Retrieval is via _get_pad(),
Pads may be added to an element with _add_pad. Retrieval is via _get_static_pad(),
which operates on the name of the Pad (the unique key). This means that all
Pads owned by a given Element must have unique names.
An iterator over the element's pads may be obtained with _iterate_pads.
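For example (a minimal sketch; element stands for any GstElement):

  /* look up the pad named "sink" that was added with _add_pad */
  GstPad *sinkpad = gst_element_get_static_pad (element, "sink");

  if (sinkpad != NULL) {
    /* ... use the pad ... */
    gst_object_unref (sinkpad);
  }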

View file

@ -89,12 +89,3 @@ Flags
Each object in the GStreamer object hierarchy can have flags associated with it,
which are used to describe a state or a feature of the object.
GstObject has flags to mark its lifecycle: FLOATING and DISPOSING.
Class signals
~~~~~~~~~~~~~
It is possible to know when a new object is loaded by connecting to the
GstObjectClass signal. This feature is not very much used and might be removed
at some point.

View file

@ -195,13 +195,11 @@ capture pipelines.
prerolled.
State Changes revised
~~~~~~~~~~~~~~~~~~~~~
State Changes
~~~~~~~~~~~~~
As a first step in a generic solution we propose to modify the state changes so
that no sink is set to PLAYING before it is prerolled.
In order to do this, the pipeline (at the GstBin level) keeps track of all
A Sink is never set to PLAYING before it is prerolled. In order to do this, the
pipeline (at the GstBin level) keeps track of all
elements that require preroll (the ones that return ASYNC from the state
change). These elements posted an ASYNC_START message without a matching
ASYNC_DONE message.
@ -221,18 +219,12 @@ NO_PREROLL element to PLAYING. This operation has to be performed in the
separate async state change thread (like the one currently used for going from
PAUSED->PLAYING in a non-live pipeline).
implications:
- the current async_play vmethod in basesink can be deprecated since we now
always call the state change function when going from PAUSED->PLAYING. We
keep this method however to remain backward compatible.
Latency compensation
~~~~~~~~~~~~~~~~~~~~
As an extension to the revised state changes we can perform latency calculation
and compensation before we proceed to the PLAYING state.
Latency calculation and compensation is performed before the pipeline proceeds to
the PLAYING state.
When the pipeline collected all ASYNC_DONE messages it can calculate the global
latency as follows:
@ -279,8 +271,8 @@ the same for all sinks, all sinks will render data relatively synchronised.
Flushing a playing pipeline
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using the new state change mechanism we can implement resynchronisation after an
uncontrolled FLUSH in (part of) a pipeline. Indeed, when a flush is performed on
We can implement resynchronisation after an uncontrolled FLUSH in (part of) a
pipeline in the same way. Indeed, when a flush is performed on
a PLAYING live element, a new base time must be distributed to this element.
A flush in a pipeline can happen in the following cases:

View file

@ -134,3 +134,16 @@ GST_MESSAGE_REQUEST_STATE:
are in. A typical use case would be an audio sink that requests the pipeline
to pause in order to play a higher priority stream.
GST_MESSAGE_STEP_START:
A Stepping operation has started.
GST_MESSAGE_QOS:
A buffer was dropped or an element changed its processing strategy for
Quality of Service reasons.
GST_MESSAGE_PROGRESS:
A progress message was posted. Progress messages inform the application about
the state of asynchronous operations.
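As an example of how an element posts one of these messages, a sketch for the
REQUEST_STATE case described above (sink stands for the posting element):

  /* ask the application/pipeline to bring this sink to PAUSED */
  gst_element_post_message (GST_ELEMENT (sink),
      gst_message_new_request_state (GST_OBJECT (sink), GST_STATE_PAUSED));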

View file

@ -118,15 +118,13 @@ GstMetaInfo will point to more information about the metadata and looks like thi
struct _GstMetaInfo {
GQuark api; /* api name */
GQuark impl; /* implementation name */
GType type; /* implementation type */
gsize size; /* size of the structure */
GstMetaInitFunction init_func;
GstMetaFreeFunction free_func;
GstMetaCopyFunction copy_func;
GstMetaTransformFunction transform_func;
GstMetaSerializeFunction serialize_func;
GstMetaDeserializeFunction deserialize_func;
};
api will contain a GQuark of the metadata api. A repository of registered MetaInfo
@ -136,16 +134,16 @@ register additional custom metadata.
For each implementation of api, there will thus be a unique GstMetaInfo. In the
case of metadata with a well defined API, the implementation specific init
function will setup the methods in the metadata structure.
function will setup the methods in the metadata structure. A unique GType will
be made for each implementation and stored in the type field.
Along with the metadata description we will have functions to initialize/free (and/or refcount)
a specific GstMeta instance. We also have the possibility to add a custom
transform function that can be used to modify the metadata when a transformation
happens.
We also add serialize and deserialize function for the metadata in case we need special
logic for reading and writing the metadata. This is needed for GDP payloading of the
metadata.
There are no explicit methods to serialize and deserialize the metadata. Since
each type has a GType, we can reuse the GObject transform functions for this.
The purpose of the separate MetaInfo is to not have to carry the free/init functions in
each buffer instance but to define them globally. We still want quick access to the info
@ -218,7 +216,7 @@ The following defines can usually be found in the shared .h file.
Adding metadata to a buffer can be done with the gst_buffer_add_meta() call.
This function will create new metadata based on the implementation specified by
the GstMetaInfo. It is alos possible to pass a generic pointer to the add_meta()
the GstMetaInfo. It is also possible to pass a generic pointer to the add_meta()
function that can contain parameters to initialize the new metadata fields.
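A sketch of what this looks like from an element (my_timing_meta_get_info() and
MY_TIMING_META_API_TYPE are hypothetical accessors for a custom metadata
implementation):

  /* attach an instance of the metadata; the last argument carries
   * optional implementation specific init parameters */
  GstMeta *meta = gst_buffer_add_meta (buffer, my_timing_meta_get_info (), NULL);

  /* later, look the metadata up again by its api type */
  meta = gst_buffer_get_meta (buffer, MY_TIMING_META_API_TYPE);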
Retrieving the metadata on a buffer can be done with the
@ -305,9 +303,7 @@ Serialization
When a buffer should be sent over the wire or be serialized in GDP, we need a way
to perform custom serialization and deserialization on the metadata.
For this we add the serialize and deserialize functions to the metadata info.
Possible use cases are to make sure we write out the fields with a specific size
and endianness.
For this we can use the GType transform functions.
Transformations

View file

@ -26,32 +26,18 @@ negotiation.
The basics of negotiation are as follows:
- GstCaps (see part-caps.txt) are refcounted before they
are attached to a buffer to describe the contents of the buffer.
It is possible to add a NULL caps to a buffer, this means that the
buffer type did not change relative to the previous buffer. If no
previous buffer was received by a downstream element, it is free to
discard the buffer.
- GstCaps (see part-caps.txt) are refcounted before they are pushed as
an event to describe the contents of the following buffer.
- Before receiving a buffer, an element must check if the datatype of
the buffer has changed. The element should reconfigure itself to the
new format before processing the buffer data. If the data type on
the buffer is not acceptable, the element should refuse the buffer by
- An element should reconfigure itself to the new format received as a CAPS
event before processing the following buffers. If the data type in the
caps event is not acceptable, the element should refuse the buffer by
returning an appropriate GST_FLOW_NOT_NEGOTIATED return value from the
chain function.
The core will automatically call the set_caps function for this purpose
when it is installed on the sink or source pad.
- When requesting a buffer from a bufferpool, the preferred type should
be passed to the buffer allocation function. After receiving a buffer
from a bufferpool, the datatype should be checked again.
- A bufferpool allocation function should try to allocate a buffer of the
preferred type. If there is a good reason to choose another type, the
alloc function should see if that other type is accepted by the other
element, then allocate a buffer of that type and attach the type to the
buffer before returning it.
- Upstream elements can request a format change of the stream by sending a
RECONFIGURE event upstream. Upstream elements will renegotiate a new format
when they receive a RECONFIGURE event.
The general flow for a source pad starting the negotiation.
@ -60,17 +46,14 @@ The general flow for a source pad starting the negotiation.
| accepts? |
type A |---------------->|
| yes |
|<----------------|
|< - - - - - - - -|
| |
get buffer | alloc_buf |
from pool |---------------->|
with type A | | Create buffer of type A.
| send_event() |
send CAPS |---------------->| Receive type A, reconfigure to
event A | | process type A.
| |
check type |<----------------|
and use A | |
| push |
push buffer |---------------->| Receive type A, reconfigure to
with new type| | process type A.
push buffer |---------------->| Process buffer of type A
| |
One possible implementation in pseudo code:
@ -93,20 +76,14 @@ The general flow for a source pad starting the negotiation.
if gst_pad_peer_accept_caps (srcpad, fixedcaps)
# store the caps as the negotiated caps, this will
# call the setcaps function on the pad
gst_pad_set_caps (srcpad, fixedcaps)
gst_pad_push_event (srcpad, gst_event_new_caps (fixedcaps))
break
endif
done
endif
# if the type is different, the buffer will have different caps from
# the src pad -- setcaps will get called on the pad_push
buffer = gst_pad_alloc_buffer (srcpad, 0, size, GST_PAD_CAPS (fixedcaps));
if buffer
buffer = gst_buffer_new_and_alloc (size);
[fill buffer and push]
elseif
[no buffer, either no peer or no acceptable format found]
endif
The general flow for a sink pad starting a renegotiation.
@ -116,22 +93,28 @@ The general flow for a sink pad starting a renegotiation.
| accepts? |
|<----------------| type B
| yes |
|---------------->|
|- - - - - - - - >|-.
| | | suggest B caps next
| |<'
| |
get buffer | alloc_buf |
from pool |---------------->|
with type A | | Create buffer of new type B.
| push_event() |
mark .-|<----------------| send RECONFIGURE event
renegotiate| | |
'>| |
| get_caps() |
renegotiate |---------------->|
| suggest B |
|< - - - - - - - -|
| |
| send_event() |
send CAPS |---------------->| Receive type B, reconfigure to
event B | | process type B.
| |
check type |<----------------|
and | |
reconfigure | |
| push |
push buffer |---------------->| Receive type B, reconfigure to
with new type| | process type B.
push buffer |---------------->| Process buffer of type B
| |
Use case:
@ -146,25 +129,30 @@ videotestsrc ! xvimagesink
2) When does negotiation happen?
- before srcpad does a push, it figures out a type as stated in 1), then
it calls the pad alloc function with the type. The sinkpad has to
create a buffer of that type, src fills the buffer and sends it to sink.
it pushes a caps event with the type. The sink checks the media type and
configures itself for this type.
- the source then usually does an ALLOCATION query to negotiate a bufferpool
with the sink. It then allocates a buffer from the pool and pushes it to
the sink. Since the sink accepted the caps, it can create a pool for the
format.
- since the sink stated in 1) it could accept the type, it will be able to
create a buffer of the type and handle it.
- sink checks media type of buffer and configures itself for this type.
handle it.
3) How can sink request another format?
- sink asks if new format is possible for the source.
- sink returns buffer with new type in allocfunction.
- src receives buffer with new type, reconfigures and pushes.
- sink can always select something it can create and handle since it takes
the initiative. src should be able to handle the new type since it said
it could accept it.
- sink pushes RECONFIGURE event upstream
- src receives the RECONFIGURE event and marks renegotiation
- On the next buffer push, the source renegotiates the caps and the
bufferpool. The sink will put the new preferred format high in the list
of caps it returns from its getcaps function.
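In code, the first step of 3) could look like this on the sink side (a sketch;
sinkpad stands for the sink's sink pad):

  /* ask upstream to renegotiate; the actual renegotiation happens on the
   * next buffer the source pushes */
  gst_pad_push_event (sinkpad, gst_event_new_reconfigure ());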
videotestsrc ! queue ! xvimagesink
- queue implements an allocfunction, proxying all calls to its srcpad peer.
- queue proxies all accept and getcaps to the other peer pad.
- queue contains buffers with different types.
- queue proxies the bufferpool
- queue proxies the RECONFIGURE event
- queue stores CAPS event in the queue. This means that the queue can contain
buffers with different types.
Pull-mode negotiation
@ -232,7 +220,7 @@ deadlines.
The pull thread is usually started in the PAUSED->PLAYING state change. We must
be able to complete the negotiation before this state change happens.
The time to do capsnego, then, is after _check_pull_range() has succeeded,
The time to do capsnego, then, is after the SCHEDULING query has succeeded,
but before the sink has spawned the pulling thread.
@ -240,7 +228,7 @@ Mechanism
^^^^^^^^^
The sink determines that the upstream elements support pull based scheduling by
calling gst_pad_check_pull_range().
doing a SCHEDULING query.
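A sketch of that check, using the query API as it later stabilized (sinkpad stands
for the sink's sink pad):

  gboolean can_pull = FALSE;
  GstQuery *query = gst_query_new_scheduling ();

  if (gst_pad_peer_query (sinkpad, query))
    can_pull = gst_query_has_scheduling_mode (query, GST_PAD_MODE_PULL);

  gst_query_unref (query);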
The sink initiates the negotiation process by intersecting the results
of gst_pad_get_caps() on its sink pad and its peer src pad. This is the
@ -250,8 +238,7 @@ intersection of calling get_allowed_caps() on all of its sink pads. In
this way the sink element knows the capabilities of the entire pipeline.
The sink element then fixates the resulting caps, if necessary,
resulting in the flow caps. It notifies the pipeline of the caps by
calling gst_pad_set_caps() on its sink pad. From now on, the getcaps function
resulting in the flow caps. From now on, the getcaps function
of the sinkpad will only return these fixed caps meaning that upstream elements
will only be able to produce this format.
@ -269,11 +256,3 @@ function. The state will commit to PAUSED when the first buffer is received in
the sink. This is needed to provide a consistent API to the applications that
expect ASYNC return values from sinks but it also allows us to perform the
remainder of the negotiation outside of the context of the pulling thread.
During dataflow, gst_pad_pull_range() checks the caps on the pulled
buffer. If they are different from the sink pad's caps, it will return
GST_FLOW_NOT_NEGOTIATED. Because of the low-latency requirements,
changing caps in an activated pull-mode pipeline is not supported, as it
might require e.g. the sound card to reconfigure its hardware buffers,
and start capsnego again.

View file

@ -195,10 +195,9 @@ includes:
- offset of the data: a media specific offset, this could be samples for audio or
frames for video.
- the duration of the data in time.
- the media type of the data described with caps, these are key/value pairs that
describe the media type in a unique way.
- additional flags describing special properties of the data such as
discontinuities or delta units.
- additional arbitrary metadata
When an element wishes to send a buffer to another element, it does this using one
of the pads that is linked to a pad of the other element. In the push model, a
@ -208,13 +207,13 @@ is pulled from the peer with the gst_pad_pull_range() function.
Before an element pushes out a buffer, it should make sure that the peer element
can understand the buffer contents. It does this by querying the peer element
for the supported formats and by selecting a suitable common format. The selected
format is then attached to the buffer with gst_buffer_set_caps() before pushing
out the buffer.
format is then first sent to the peer element with a CAPS event before pushing
the buffer.
When an element pad receives a buffer, it has to check if it understands the media
type of the buffer before it starts processing it. The GStreamer core does this
automatically and will call the gst_pad_set_caps() function of the element before
sending the buffer to the element.
When an element pad receives a CAPS event, it has to check if it understands the
media type. The element must refuse the following buffers if the media type
preceding them was not accepted.
Both gst_pad_push() and gst_pad_pull_range() have a return value indicating whether
the operation succeeded. An error code means that no more data should be sent
@ -222,12 +221,11 @@ to that pad. A source element that initiates the data flow in a thread typically
pauses the producing thread when this happens.
A buffer can be created with gst_buffer_new() or by requesting a usable buffer
from the peer pad using gst_pad_alloc_buffer(). Using the second method, it is
possible for the peer element to suggest the element to produce data in another
format by attaching another media type caps to the buffer.
from a buffer pool using gst_buffer_pool_acquire_buffer(). Using the second
method, it is possible for the peer element to implement a custom buffer
allocation algorithm.
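For example (a sketch; pool stands for the bufferpool negotiated with the
ALLOCATION query, if any, and size for the required buffer size):

  GstBuffer *buffer = NULL;
  GstFlowReturn ret = GST_FLOW_OK;

  if (pool != NULL)
    ret = gst_buffer_pool_acquire_buffer (pool, &buffer, NULL);  /* from the pool */
  else
    buffer = gst_buffer_new_and_alloc (size);                    /* plain allocation */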
The process of selecting a media type and attaching it to the buffers is called
caps negotiation.
The process of selecting a media type is called caps negotiation.
Caps
@ -349,14 +347,18 @@ it accepts the data from filesrc on the sinkpad and starts decoding the compress
data to raw audio samples.
The mp3 decoder figures out the samplerate, the number of channels and other audio
properties of the raw audio samples, puts the decoded samples into a Buffer,
attaches the media type caps to the buffer and pushes this buffer to the next
properties of the raw audio samples and sends out a caps event with the media type.
Alsasink then receives the caps event, inspects the caps and reconfigures
itself to process the media type.
mp3dec then puts the decoded samples into a Buffer and pushes this buffer to the next
element.
Alsasink then receives the buffer, inspects the caps and reconfigures itself to process
the buffer. Since it received the first buffer of samples, it completes the state change
to the PAUSED state. At this point the pipeline is prerolled and all elements have
samples. Alsasink is now also capable of providing a clock to the pipeline.
Alsasink receives the buffer with samples. Since it received the first buffer of
samples, it completes the state change to the PAUSED state. At this point the
pipeline is prerolled and all elements have samples. Alsasink is now also
capable of providing a clock to the pipeline.
Since alsasink is now in the PAUSED state it blocks while receiving the first buffer. This
effectively blocks both mp3dec and filesrc in their gst_pad_push().
@ -488,7 +490,7 @@ element performs the following steps.
always stop because of step 1).
3) perform the seek operation
4) send a FLUSH done event to all downstream and upstream peer elements.
5) send NEWSEGMENT event to inform all elements of the new position and to complete
5) send SEGMENT event to inform all elements of the new position and to complete
the seek.
In step 1) all downstream elements have to return from any blocking operations
@ -512,8 +514,8 @@ Since the pipeline is still PAUSED, this will preroll the next media sample in t
sinks. The application can wait for this preroll to complete by performing a
_get_state() on the pipeline.
The last step in the seek operation is then to adjust the stream time of the pipeline
to 0 and to set the pipeline back to PLAYING.
The last step in the seek operation is then to adjust the stream running_time of
the pipeline to 0 and to set the pipeline back to PLAYING.
The sequence of events in our mp3 playback example.
@ -533,8 +535,8 @@ The sequence of events in our mp3 playback example.
| 2) stop streaming
| 3) perform seek
--------------------------> 4) FLUSH done event
--------------------------> 5) NEWSEGMENT event
--------------------------> 5) SEGMENT event
| e) update stream time to 0
| e) update running_time to 0
| f) PLAY pipeline
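From the application, the whole sequence above is typically triggered with a single
flushing seek, for example (a sketch, seeking the pipeline to 5 seconds):

  gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
      GST_SEEK_TYPE_SET, 5 * GST_SECOND,
      GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);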

View file

@ -137,22 +137,27 @@ Push dataflow
All probes have the GST_PROBE_TYPE_PUSH flag set in the callbacks.
In push based scheduling, the blocking probe is called first with the DATA item.
Then the data probes are called before the peer pad chain function is called.
In push based scheduling, the blocking probe is called first with the data item.
Then the data probes are called before the peer pad chain or event function is
called.
The data probes are called before the peer pad is checked. This allows for
linking the pad in either the BLOCK or DATA probes on the srcpad.
linking the pad in either the BLOCK or DATA probes on the pad.
Before the sinkpad chain function is called, the data probes are called.
Before the peerpad chain or event function is called, the peer pad data probes
are called.
Finally, the IDLE probe is called on the srcpad after the data was sent to the
Finally, the IDLE probe is called on the pad after the data was sent to the
peer pad.
The push dataflow probe behavior is the same for buffers and bidirectional events.
srcpad sinkpad
pad peerpad
| |
gst_pad_push() | |
-------------->O |
gst_pad_push() / | |
gst_pad_push_event() | |
-------------------->O |
O |
flushing? O |
WRONG_STATE O |
@ -163,14 +168,16 @@ peer pad.
no peer? O |
NOT_LINKED O |
< - - - - - - O |
O gst_pad_chain() |
O gst_pad_chain() / |
O gst_pad_send_event() |
O------------------------------>O
O flushing? O
O WRONG_STATE O
O< - - - - - - - - - - - - - - -O
O O-> do DATA probes
O O
O O---> chainfunc
O O---> chainfunc /
O O eventfunc
O< - - - - - - - - - - - - - - -O
O |
O-> do IDLE probes |
@ -190,7 +197,9 @@ item. This allows the pad to be linked before the peer pad is resolved.
After the getrange function is called on the peer pad and there is a data item,
the DATA probes are called.
When control returns to the sinkpad, the IDLE callbacks are called.
When control returns to the sinkpad, the IDLE callbacks are called. The IDLE
callback is called without a data item so that it will also be called when there
was an error.
If there is a valid DATA item, the DATA probes are called for the item.
@ -217,7 +226,7 @@ It there is a valid DATA item, the DATA probes are called for the item.
O flow error? O
O- - - - - - - - - - - - - - - >O
O O
dp DATA probes <-O O
do DATA probes <-O O
O- - - - - - - - - - - - - - - >O
| O
| do IDLE probes <-O

View file

@ -127,7 +127,7 @@ When a seek to a certain position is requested, the demuxer/parser will
do two things (ignoring flushing and segment seeks, and simplified for
illustration purposes):
- send a newsegment event with a new start position
- send a segment event with a new start position
- start pushing data/buffers again
@ -136,15 +136,15 @@ can actually be decoded, a demuxer or parser needs to start pushing data
from a keyframe/keyunit at or before the requested seek position.
Unless requested differently (via the KEY_UNIT flag), the start of the
newsegment event should be the requested seek position.
segment event should be the requested seek position.
So by default a demuxer/parser will then start pushing data from
position DATA and send a newsegment event with start position SEG_START,
position DATA and send a segment event with start position SEG_START,
and DATA <= SEG_START.
If DATA < SEG_START, a well-behaved video decoder will start decoding frames
from DATA, but take into account the segment configured by the demuxer via
the newsegment event, and only actually output decoded video frames from
the segment event, and only actually output decoded video frames from
SEG_START onwards, dropping all decoded frames that are before the
segment start and adjusting the timestamp/duration of the buffer that
overlaps the segment start ("clipping"). A not-so-well-behaved video decoder

View file

@ -22,8 +22,8 @@ on the stream. The seek has a start time, a stop time and a processing rate.
The playback of a segment starts with a source or demuxer element pushing a
newsegment event containing the start time, stop time and rate of the segment.
The purpose of this newsegment is to inform downstream elements of the
segment event containing the start time, stop time and rate of the segment.
The purpose of this segment is to inform downstream elements of the
requested segment positions. Some elements might produce buffers that fall
outside of the segment and that might therefore be discarded or clipped.
@ -46,8 +46,8 @@ Use case: FLUSHING seek
upstream and downstream.
When avidemux starts playback of the segment from second 1 to 5, it pushes
out a newsegment with 1 and 5 as start and stop times. The stream_time in
the newsegment is also 1 as this is the position we seek to.
out a segment with 1 and 5 as start and stop times. The stream_time in
the segment is also 1 as this is the position we seek to.
The video decoder stores these values internally and forwards them to the
next downstream element (videosink, which also stores the values)
@ -64,7 +64,7 @@ Use case: FLUSHING seek
When it reaches timestamp 5, it does not decode and push frames anymore.
The video sink receives a frame of timestamp 1. It takes the start value of
the previous newsegment and applies the following (simplified) formula:
the previous segment and applies the following (simplified) formula:
render_time = BUFFER_TIMESTAMP - segment_start + element->base_time
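Filling in the numbers from this use case: the frame with timestamp 1 gives
render_time = 1 - 1 + base_time = base_time, so it is rendered immediately; the
frame with timestamp 2 is rendered one second of clock time later.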

View file

@ -8,7 +8,7 @@ In 0.8, there was some support for Sparse Streams through the use of
FILLER events. These were used to mark gaps between buffers so that downstream
elements could know not to expect any more data for that gap.
In 0.10, segment information conveyed through NEWSEGMENT events can be used
In 0.10, segment information conveyed through SEGMENT events can be used
for the same purpose.
Use cases
@ -45,9 +45,9 @@ Details
The main requirement here is to avoid stalling the pipeline between sub-title
packets; it effectively updates the minimum timestamp for that stream.
A demuxer can do this by sending an 'update' NEWSEGMENT with a new start time
A demuxer can do this by sending an 'update' SEGMENT with a new start time
to the subtitle pad. For example, every time the SCR in MPEG data
advances more than 0.5 seconds, the MPEG demuxer can issue a NEWSEGMENT with
advances more than 0.5 seconds, the MPEG demuxer can issue a SEGMENT with
(update=TRUE, start=SCR ). Downstream elements can then be aware not to
expect any data older than the new start time.
@ -57,7 +57,7 @@ Details
This technique can also be used, for example, to represent a stream of
MIDI events spaced to a clock period. When there is no event present for
a clock time, a NEWSEGMENT update can be sent in its place.
a clock time, a SEGMENT update can be sent in its place.
2) Still frame/menu support
Still frames in DVD menus are not the same, in that they do not introduce
@ -74,7 +74,7 @@ Details
if necessary due to an intervening activity (such as a user navigation)
* FLUSH the pipeline using a normal flush sequence (FLUSH_START,
chain-lock, FLUSH_STOP)
* Send a NEWSEGMENT to restart playback with the next timestamp in the
* Send a SEGMENT to restart playback with the next timestamp in the
stream.
The upstream element performing the wait must only do so when in the PLAYING
@ -90,7 +90,7 @@ Details
arriving late at the sink, and they will be discarded instead of played.
3) For audio, 3) is the same case as 1) - there is a 'gap' in the audio data
that needs to be presented, and this can be done by sending a NEWSEGMENT
that needs to be presented, and this can be done by sending a SEGMENT
update that moves the start time of the segment to the next timestamp when
data will be sent.

View file

@ -11,7 +11,7 @@ Stream objects
The following objects are to be expected in the streaming thread:
- events
- NEW_SEGMENT (NS)
- SEGMENT (S)
- EOS (EOS) *
- TAG (T)
- buffers (B) *
@ -23,25 +23,27 @@ and live sources.
Typical stream
~~~~~~~~~~~~~~
A typical stream starts with a newsegment event that marks the
A typical stream starts with a segment event that marks the
buffer timestamp range. After that buffers are sent one after the
other. After the last buffer an EOS marks the end of the stream. No
more buffers are to be processed after the EOS event.
+--+ +-++-+ +-+ +---+
|NS| |B||B| ... |B| |EOS|
+--+ +-++-+ +-+ +---+
+-+ +-++-+ +-+ +---+
|S| |B||B| ... |B| |EOS|
+-+ +-++-+ +-+ +---+
1) NEW_SEGMENT, rate, start/stop, time
1) SEGMENT, rate, start/stop, time
- marks valid buffer timestamp range (start, stop)
- marks stream_time of buffers (time). This is the stream time of buffers
with a timestamp of NS.start.
- marks playback rate (rate). This is the required playback rate.
- marks applied rate (applied_rate). This is the already applied playback
rate. (See also part-trickmodes.txt)
- marks running_time of buffers. This is the time used to synchronize
against the clock.
2) N buffers
- displayable buffers are between start/stop of the NEW_SEGMENT. Buffers
- displayable buffers are between start/stop of the SEGMENT. Buffers
outside the segment range should be dropped or clipped.
- running_time:

View file

@ -8,7 +8,7 @@ Synchronisation in a GstPipeline is achieved using the following 3 components:
- a GstClock, which is global for all elements in a GstPipeline.
- Timestamps on a GstBuffer.
- the NEW_SEGMENT event preceding the buffers.
- the SEGMENT event preceding the buffers.
A GstClock
@ -68,7 +68,7 @@ This value is monotonically increasing at the rate of the clock.
Timestamps
~~~~~~~~~~
The GstBuffer timestamps and the preceding NEW_SEGMENT event (See
The GstBuffer timestamps and the preceding SEGMENT event (See
part-streams.txt) define a transformation of the buffer timestamps to
running_time as follows:
@ -77,13 +77,13 @@ The following notation is used:
B: GstBuffer
- B.timestamp = buffer timestamp (GST_BUFFER_TIMESTAMP)
NS: NEWSEGMENT event preceding the buffers.
- NS.start: start field in the NEWSEGMENT event
- NS.stop: stop field in the NEWSEGMENT event
- NS.rate: rate field of NEWSEGMENT event
- NS.abs_rate: absolute value of rate field of NEWSEGMENT event
- NS.time: time field in the NEWSEGMENT event
- NS.accum: total accumulated time of all previous NEWSEGMENT events. This
NS: SEGMENT event preceding the buffers.
- NS.start: start field in the SEGMENT event
- NS.stop: stop field in the SEGMENT event
- NS.rate: rate field of SEGMENT event
- NS.abs_rate: absolute value of rate field of SEGMENT event
- NS.time: time field in the SEGMENT event
- NS.accum: total accumulated time of all previous SEGMENT events. This
field is kept in the GstSegment structure.
Valid buffers for synchronisation are those with B.timestamp between NS.start
@ -97,7 +97,7 @@ The following transformation to running_time exist:
else
B.running_time = (NS.stop - B.timestamp) / NS.abs_rate + NS.accum
We write B.running_time as the running_time obtained from the NEWSEGMENT event
We write B.running_time as the running_time obtained from the SEGMENT event
and the buffers of that segment.
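A small worked example (values made up): a segment with NS.start = 1s, NS.stop = 5s,
NS.abs_rate = 1.0 and NS.accum = 0 maps a buffer with B.timestamp = 3s to
B.running_time = (3 - 1) / 1.0 + 0 = 2 seconds.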
The first displayable buffer will yield a value of 0 (since B.timestamp ==
@ -120,7 +120,7 @@ As we have seen, we can get a running_time:
C.running_time = absolute_time - base_time
- using the buffer timestamp and the preceding NEWSEGMENT event as (assuming
- using the buffer timestamp and the preceding SEGMENT event as (assuming
positive playback rate):
B.running_time = (B.timestamp - NS.start) / NS.abs_rate + NS.accum
@ -154,9 +154,9 @@ the sink (See also part-clocks.txt).
For multiple streams this means that buffers with the same running_time are to
be displayed at the same time.
A demuxer must make sure that the NEWSEGMENT it emits on its output pads yields
A demuxer must make sure that the SEGMENT it emits on its output pads yields
the same running_time for buffers that should be played synchronized. This
usually means sending the same NEWSEGMENT on all pads and making sure that the
usually means sending the same SEGMENT on all pads and making sure that the
synchronized buffers have the same timestamps.
@ -172,7 +172,7 @@ It is the stream time that is used for:
- the position used in seek events/queries
- the position used to synchronize controller values
Stream time is calculated using the buffer times and the preceding NEWSEGMENT
Stream time is calculated using the buffer times and the preceding SEGMENT
event as follows:
stream_time = (B.timestamp - NS.start) * NS.abs_applied_rate + NS.time

View file

@ -72,12 +72,12 @@ One element will actually perform the seek, this is usually the demuxer or
source element. For more information on how to perform the different seek
types see part-seeking.txt.
For client side trickmode a NEW_SEGMENT event will be sent downstream with
For client side trickmode a SEGMENT event will be sent downstream with
the new rate and start/stop positions. All elements prepare themselves to
handle the rate (see below). The applied rate of the NEW_SEGMENT event will
handle the rate (see below). The applied rate of the SEGMENT event will
be set to 1.0 to indicate that no rate adjustment has been done.
For server side trick mode a NEW_SEGMENT event is sent downstream with a
For server side trick mode a SEGMENT event is sent downstream with a
rate of 1.0 and the start/stop positions. The elements will configure themselves
for normal playback speed since the server will perform the rate conversions.
The applied rate will be set to the rate that will be applied by the server. This
@ -137,16 +137,16 @@ playback speed or direction.
client side forward trickmodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The seek happens as stated above. A NEW_SEGMENT event is sent downstream with a rate
different from 1.0. Plugins receiving the NEW_SEGMENT can decide to perform the
The seek happens as stated above. A SEGMENT event is sent downstream with a rate
different from 1.0. Plugins receiving the SEGMENT can decide to perform the
rate conversion of the media data (retimestamp video frames, resample audio, ...).
If a plugin decides to resample or retimestamp, it should modify the NEW_SEGMENT with
If a plugin decides to resample or retimestamp, it should modify the SEGMENT with
a rate of 1.0 and update the applied rate so that downstream elements don't resample
again but are aware that the media has been modified.
The GStreamer base audio and video sinks will resample automatically if they receive
a NEW_SEGMENT event with a rate different from 1.0. The position reporting in the
a SEGMENT event with a rate different from 1.0. The position reporting in the
base audio and video sinks will also depend on the applied rate of the segment
information.
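For instance, an application requesting 2x client side fast forward would issue
something like this on the pipeline (a sketch):

  gst_element_seek (pipeline, 2.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
      GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);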
@ -162,10 +162,10 @@ client side backwards trickmode
For backwards playback the following rules apply:
- the rate in the NEW_SEGMENT is less than 0.0.
- the NEW_SEGMENT start position is less than the stop position, playback will
- the rate in the SEGMENT is less than 0.0.
- the SEGMENT start position is less than the stop position, playback will
however happen from stop to start in reverse.
- the time member in the NEW_SEGMENT is set to the stream time of the start
- the time member in the SEGMENT is set to the stream time of the start
position.
For plugins the following rules apply:
@ -181,12 +181,12 @@ For plugins the following rules apply:
forward continuous with the previous buffer.
- A video decoder decodes and accumulates all decoded frames. If a buffer with
a DISCONT, accumulate NEWSEGMENT or EOS is received, all accumulated frames
are sent downstream in reverse.
a DISCONT, SEGMENT or EOS is received, all accumulated frames are sent
downstream in reverse.
- An audio decoder decodes and accumulates all decoded audio. If a buffer with
a DISCONT, accumulate NEWSEGMENT or EOS is received, all accumulated audio
is sent downstream in reverse order. Some audio codecs need the previous
a DISCONT, SEGMENT or EOS is received, all accumulated audio is sent
downstream in reverse order. Some audio codecs need the previous
data buffer to decode the current one, in that case, the previous DISCONT
buffer needs to be combined with the last non-DISCONT buffer to generate the
last bit of output.
@ -201,7 +201,7 @@ For plugins the following rules apply:
- for transcoding, audio and video resamplers can be used to reverse, resample
and retimestamp the buffers. Any rate adjustments performed on the media must
be added to the applied_rate and subtracted from the rate members in the
NEWSEGMENT event.
SEGMENT event.
In SKIP mode, the same algorithm as for forward SKIP mode can be used.