<chapter id="chapter-clocks">
  <title>Clocks and synchronization in &GStreamer;</title>

  <para>
    When playing complex media, each sound and video sample must be played in a
    specific order at a specific time. For this purpose, GStreamer provides a
    synchronization mechanism.
  </para>

  <para>
    &GStreamer; provides support for the following use cases:
    <itemizedlist>
      <listitem>
        <para>
          Non-live sources with access faster than the playback rate. This is
          the case where one is reading media from a file and playing it
          back in a synchronized fashion. In this case, multiple streams need
          to be synchronized, like audio, video and subtitles.
        </para>
      </listitem>
      <listitem>
        <para>
          Capture and synchronized muxing/mixing of media from multiple live
          sources. This is a typical use case where you record audio and
          video from a microphone/camera and mux it into a file for
          storage.
        </para>
      </listitem>
      <listitem>
        <para>
          Streaming from (slow) network streams with buffering. This is the
          typical web streaming case where you access content from a streaming
          server over HTTP.
        </para>
      </listitem>
      <listitem>
        <para>
          Capture from a live source and playback to a live sink with
          configurable latency. This is used when, for example, capturing from
          a camera, applying an effect and displaying the result. It is also
          used when streaming low-latency content over a network with UDP.
        </para>
      </listitem>
      <listitem>
        <para>
          Simultaneous live capture and playback of prerecorded content.
          This is used in audio recording cases where you play previously
          recorded audio and record new samples; the purpose is to have the
          new audio perfectly in sync with the previously recorded data.
        </para>
      </listitem>
    </itemizedlist>
  </para>

  <para>
    &GStreamer; uses a <classname>GstClock</classname> object, buffer
    timestamps and a SEGMENT event to synchronize streams in a pipeline
    as we will see in the next sections.
  </para>

  <sect1 id="section-clock-time-types" xreflabel="Clock running-time">
    <title>Clock running-time</title>

    <para>
      In a typical computer, there are many sources that can be used as a
      time source, e.g., the system time, soundcards, CPU performance
      counters, etc. For this reason, there are many
      <classname>GstClock</classname> implementations available in &GStreamer;.
      The clock time doesn't always start from 0 or from some known value.
      Some clocks start counting from some known start date, others start
      counting from the last reboot, etc.
    </para>

    <para>
      A <classname>GstClock</classname> returns the
      <emphasis role="strong">absolute-time</emphasis>
      according to that clock with <function>gst_clock_get_time ()</function>.
      The absolute-time (or clock time) of a clock is monotonically increasing.
      From the absolute-time a <emphasis role="strong">running-time</emphasis>
      is calculated, which is simply the difference between the absolute-time
      and a previous snapshot of the absolute-time called the
      <emphasis role="strong">base-time</emphasis>.
      So:
    </para>

    <para>
      running-time = absolute-time - base-time
    </para>

    <para>
      A &GStreamer; <classname>GstPipeline</classname> object maintains a
      <classname>GstClock</classname> object and a base-time when it goes
      to the PLAYING state. The pipeline gives a handle to the selected
      <classname>GstClock</classname> to each element in the pipeline along
      with the selected base-time. The pipeline will select a base-time in
      such a way that the running-time reflects the total time spent in the
      PLAYING state. As a result, when the pipeline is PAUSED, the
      running-time stands still.
    </para>

    <para>
      Because all objects in the pipeline have the same clock and base-time,
      they can thus all calculate the running-time according to the pipeline
      clock.
    </para>
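
    <para>
      As an illustration only, here is a minimal sketch (not taken from the
      &GStreamer; sources) of how an application could compute the current
      running-time of a pipeline, assuming a <classname>GstElement</classname>
      pointer named pipeline that is in the PLAYING state:
    </para>
    <programlisting><![CDATA[
#include <gst/gst.h>

/* Sketch: compute the current running-time of a playing pipeline. */
static GstClockTime
get_pipeline_running_time (GstElement * pipeline)
{
  GstClock *clock;
  GstClockTime absolute, base;

  clock = gst_element_get_clock (pipeline);
  if (clock == NULL)
    return GST_CLOCK_TIME_NONE;                     /* no clock selected yet */

  absolute = gst_clock_get_time (clock);            /* absolute-time */
  base = gst_element_get_base_time (pipeline);      /* base-time */
  gst_object_unref (clock);

  return absolute - base;                           /* running-time */
}
]]></programlisting>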
  </sect1>

  <sect1 id="section-buffer-running-time" xreflabel="Buffer running-time">
    <title>Buffer running-time</title>

    <para>
      To calculate a buffer running-time, we need a buffer timestamp and
      the SEGMENT event that preceded the buffer. First we can convert
      the SEGMENT event into a <classname>GstSegment</classname> object
      and then we can use the
      <function>gst_segment_to_running_time ()</function> function to
      perform the calculation of the buffer running-time.
    </para>
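
    <para>
      The sketch below (an illustration, not part of the &GStreamer; API
      documentation) shows how a pad probe could do this: it keeps a copy of
      the most recent SEGMENT event and converts every buffer timestamp into a
      running-time. The probe is assumed to be installed with
      <function>gst_pad_add_probe ()</function> for both buffers and
      downstream events; the name <function>probe_cb</function> and the global
      segment variable are hypothetical.
    </para>
    <programlisting><![CDATA[
#include <gst/gst.h>

static GstSegment segment;      /* updated from the most recent SEGMENT event */

static GstPadProbeReturn
probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  if (GST_IS_EVENT (info->data)) {
    GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

    /* remember the segment so buffer timestamps can be converted */
    if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT)
      gst_event_copy_segment (event, &segment);
  } else if (GST_IS_BUFFER (info->data)) {
    GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER (info);
    guint64 running_time;

    /* SEGMENT events always precede buffers, so the segment is valid here */
    running_time = gst_segment_to_running_time (&segment,
        GST_FORMAT_TIME, GST_BUFFER_PTS (buffer));
    g_print ("buffer running-time: %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS (running_time));
  }

  return GST_PAD_PROBE_OK;
}
]]></programlisting>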

    <para>
      Synchronization is now a matter of making sure that a buffer with a
      certain running-time is played when the clock reaches the same
      running-time. Usually this task is done by sink elements. Sinks also
      have to take into account the latency configured in the pipeline and
      add this to the buffer running-time before synchronizing to the
      pipeline clock.
    </para>

    <para>
      Non-live sources timestamp buffers with a running-time starting
      from 0. After a flushing seek, they will produce buffers again
      from a running-time of 0.
    </para>

    <para>
      Live sources need to timestamp buffers with a running-time matching
      the pipeline running-time when the first byte of the buffer was
      captured.
    </para>
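
    <para>
      As a rough sketch of what that means in code (the helper below is
      hypothetical and only illustrative; source base classes such as
      <classname>GstBaseSrc</classname> can take care of this, for instance
      through its do-timestamp property), a live source would stamp a captured
      buffer with the running-time at the moment of capture:
    </para>
    <programlisting><![CDATA[
#include <gst/gst.h>

/* Sketch: timestamp a freshly captured buffer with the running-time of
 * the capture. Assumes "src" is part of a PLAYING pipeline, so a clock
 * and base-time have been distributed to it. */
static void
timestamp_captured_buffer (GstElement * src, GstBuffer * buffer)
{
  GstClock *clock = gst_element_get_clock (src);
  GstClockTime base_time = gst_element_get_base_time (src);

  GST_BUFFER_PTS (buffer) = gst_clock_get_time (clock) - base_time;
  gst_object_unref (clock);
}
]]></programlisting>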
  </sect1>

  <sect1 id="section-buffer-stream-time" xreflabel="Buffer stream-time">
    <title>Buffer stream-time</title>

    <para>
      The buffer stream-time, also known as the position in the stream,
      is calculated from the buffer timestamps and the preceding SEGMENT
      event. It represents the time inside the media as a value between
      0 and the total duration of the media.
    </para>

    <para>
      The stream-time is used:
      <itemizedlist>
        <listitem>
          <para>
            To report the current position in the stream with the POSITION
            query (see the sketch after this list).
          </para>
        </listitem>
        <listitem>
          <para>
            As the position used in seek events and queries.
          </para>
        </listitem>
        <listitem>
          <para>
            As the position used to synchronize controlled values.
          </para>
        </listitem>
      </itemizedlist>
    </para>
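
    <para>
      For instance, a short sketch (illustrative only, assuming an existing
      pipeline in the PLAYING state) of querying the current position and
      seeking, both of which are expressed in stream-time:
    </para>
    <programlisting><![CDATA[
#include <gst/gst.h>

/* Sketch: report the current stream-time position and seek to a new one. */
static void
report_and_seek (GstElement * pipeline)
{
  gint64 position;

  /* the POSITION query returns the stream-time, not the running-time */
  if (gst_element_query_position (pipeline, GST_FORMAT_TIME, &position))
    g_print ("position: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (position));

  /* seek to the 10 second mark in stream-time */
  gst_element_seek_simple (pipeline, GST_FORMAT_TIME,
      GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_KEY_UNIT, 10 * GST_SECOND);
}
]]></programlisting>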

    <para>
      The stream-time is never used to synchronize streams; this is only
      done with the running-time.
    </para>
  </sect1>

  <sect1 id="section-time-overview" xreflabel="Time overview">
    <title>Time overview</title>

    <para>
      Here is an overview of the various timelines used in &GStreamer;.
    </para>

    <para>
      The image below represents the different times in the pipeline when
      playing a 100ms sample and repeating the part between 50ms and
      100ms.
    </para>

    <figure float="1" id="chapter-clock-img">
      <title>&GStreamer; clock and various times</title>
      <mediaobject>
        <imageobject>
          <imagedata scale="75" fileref="images/clocks.&image;" format="&IMAGE;" />
        </imageobject>
      </mediaobject>
    </figure>

    <para>
      You can see how the running-time of a buffer always increments
      monotonically along with the clock-time. Buffers are played when their
      running-time is equal to the clock-time - base-time. The stream-time
      represents the position in the stream and jumps backwards when
      repeating.
    </para>
  </sect1>

  <sect1 id="section-clocks-providers">
    <title>Clock providers</title>

    <para>
      A clock provider is an element in the pipeline that can provide
      a <classname>GstClock</classname> object. The clock object needs to
      report an absolute-time that is monotonically increasing when the
      element is in the PLAYING state. It is allowed to pause the clock
      while the element is PAUSED.
    </para>
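
    <para>
      An application can ask an element for the clock it would provide with
      <function>gst_element_provide_clock ()</function>. The snippet below is
      only a sketch; the element passed in (for example an audio sink) is
      assumed to exist:
    </para>
    <programlisting><![CDATA[
#include <gst/gst.h>

/* Sketch: print the current time of the clock an element can provide,
 * if it is a clock provider at all. */
static void
print_provided_clock_time (GstElement * element)
{
  GstClock *clock = gst_element_provide_clock (element);

  if (clock != NULL) {
    g_print ("provided clock time: %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS (gst_clock_get_time (clock)));
    gst_object_unref (clock);
  } else {
    g_print ("element is not a clock provider\n");
  }
}
]]></programlisting>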

    <para>
      Clock providers exist because they play back media at some rate, and
      this rate is not necessarily the same as the system clock rate. For
      example, a soundcard may play back at 44.1 kHz, but that doesn't mean
      that after <emphasis>exactly</emphasis> 1 second <emphasis>according
      to the system clock</emphasis>, the soundcard has played back 44100
      samples. This is only true by approximation. In fact, the audio
      device has an internal clock based on the number of samples played
      that we can expose.
    </para>

    <para>
      If an element with an internal clock needs to synchronize, it needs
      to estimate when a given time according to the pipeline clock will
      occur according to the internal clock. To estimate this, it needs
      to slave its clock to the pipeline clock.
    </para>
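
    <para>
      One way to do this (a sketch with hypothetical variable names; an
      element that provides a clock normally handles this internally) is
      <function>gst_clock_set_master ()</function>, which keeps the slave
      clock calibrated against the master:
    </para>
    <programlisting><![CDATA[
#include <gst/gst.h>

/* Sketch: slave an internal clock to the selected pipeline clock. After
 * this, the internal clock is periodically recalibrated so that its
 * gst_clock_get_time () reports values in the master's time domain. */
static void
slave_to_pipeline_clock (GstClock * internal_clock, GstClock * pipeline_clock)
{
  if (!gst_clock_set_master (internal_clock, pipeline_clock))
    g_warning ("this clock cannot be slaved to a master clock");
}
]]></programlisting>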

    <para>
      If the pipeline clock is exactly the internal clock of an element,
      the element can skip the slaving step and directly use the pipeline
      clock to schedule playback. This can be both faster and more
      accurate.
      Therefore, generally, elements with an internal clock like audio
      input or output devices will be a clock provider for the pipeline.
    </para>

    <para>
      When the pipeline goes to the PLAYING state, it will go over all
      elements in the pipeline from sink to source and ask each element
      if it can provide a clock. The last element that can provide a
      clock will be used as the clock provider in the pipeline.
      This algorithm prefers a clock from an audio sink in a typical
      playback pipeline and a clock from source elements in a typical
      capture pipeline.
    </para>

    <para>
      There are bus messages to let you know about the clock and
      clock providers in the pipeline. You can see what clock is selected
      in the pipeline by looking at the NEW_CLOCK message on the bus.
      When a clock provider is removed from the pipeline, a CLOCK_LOST
      message is posted and the application should go to PAUSED and back
      to PLAYING to select a new clock.
    </para>
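
    <para>
      In a bus callback this could look as follows. This is only a sketch;
      the callback is assumed to have been installed with
      <function>gst_bus_add_watch ()</function> and to receive the pipeline
      as its user data:
    </para>
    <programlisting><![CDATA[
#include <gst/gst.h>

static gboolean
bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);

  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_NEW_CLOCK:{
      GstClock *clock;

      gst_message_parse_new_clock (msg, &clock);
      g_print ("pipeline is using clock %s\n", GST_OBJECT_NAME (clock));
      break;
    }
    case GST_MESSAGE_CLOCK_LOST:
      /* the clock provider went away: restart to select a new clock */
      gst_element_set_state (pipeline, GST_STATE_PAUSED);
      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      break;
    default:
      break;
  }

  return TRUE;
}
]]></programlisting>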
  </sect1>

  <sect1 id="section-clocks-latency">
    <title>Latency</title>

    <para>
      The latency is the time it takes for a sample captured at timestamp X
      to reach the sink. This time is measured against the clock in the
      pipeline. For pipelines where the only elements that synchronize against
      the clock are the sinks, the latency is always 0 since no other element
      is delaying the buffer.
    </para>

    <para>
      For pipelines with live sources, a latency is introduced, mostly because
      of the way a live source works. Consider an audio source: it will start
      capturing the first sample at time 0. If the source pushes buffers with
      44100 samples at a time at 44100 Hz, it will have collected the buffer at
      second 1. Since the timestamp of the buffer is 0 and the time of the
      clock is now >= 1 second, the sink will drop this buffer because it is
      too late. Without any latency compensation in the sink, all buffers will
      be dropped.
    </para>

    <sect2 id="section-latency-compensation">
      <title>Latency compensation</title>

      <para>
        Before the pipeline goes to the PLAYING state, it will, in addition to
        selecting a clock and calculating a base-time, calculate the latency
        in the pipeline. It does this by doing a LATENCY query on all the sinks
        in the pipeline. The pipeline then selects the maximum latency in the
        pipeline and configures this with a LATENCY event.
      </para>
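
      <para>
        An application can perform the same LATENCY query itself, for example
        to inspect the latency that was negotiated. A minimal sketch, assuming
        an existing pipeline:
      </para>
      <programlisting><![CDATA[
#include <gst/gst.h>

/* Sketch: query and print the latency configured in the pipeline. */
static void
print_latency (GstElement * pipeline)
{
  GstQuery *query = gst_query_new_latency ();

  if (gst_element_query (pipeline, query)) {
    gboolean live;
    GstClockTime min_latency, max_latency;

    gst_query_parse_latency (query, &live, &min_latency, &max_latency);
    g_print ("live: %d, minimum latency: %" GST_TIME_FORMAT "\n",
        live, GST_TIME_ARGS (min_latency));
  }

  gst_query_unref (query);
}
]]></programlisting>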

      <para>
        All sink elements will delay playback by the value in the LATENCY
        event. Since all sinks delay by the same amount of time, they will
        remain in sync relative to each other.
      </para>
    </sect2>

    <sect2 id="section-latency-dynamic">
      <title>Dynamic Latency</title>

      <para>
        Adding/removing elements to/from a pipeline or changing element
        properties can change the latency in a pipeline. An element can
        request a latency change in the pipeline by posting a LATENCY
        message on the bus. The application can then decide to query and
        redistribute a new latency or not. Changing the latency in a
        pipeline might cause visual or audible glitches and should
        therefore only be done by the application when it is acceptable.
      </para>
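
      <para>
        A minimal sketch of handling this in a bus callback (the callback name
        is hypothetical; it is assumed to be installed with
        <function>gst_bus_add_watch ()</function> and to receive the pipeline
        as user data):
      </para>
      <programlisting><![CDATA[
#include <gst/gst.h>

static gboolean
bus_latency_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);

  /* an element reported a latency change: recompute the pipeline latency
   * and redistribute it to the sinks */
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_LATENCY)
    gst_bin_recalculate_latency (GST_BIN (pipeline));

  return TRUE;
}
]]></programlisting>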
    </sect2>
  </sect1>
</chapter>