Michael Bunk 2018-04-06 11:17:20 +02:00 committed by GStreamer Marge Bot
parent 1781b26ad2
commit dc64f3e6cf
11 changed files with 66 additions and 66 deletions

View file

@ -19,7 +19,7 @@ Within the context of a given object, functions defined in that objects
header and/or source file will have their object-specific prefix
stripped. For instance, `gst_element_add_pad()` would be referred to as
simply `*add_pad()`. Note that the trailing parentheses should always be
present, but sometimes may not be. A prefixed asterisk (*) will
always tell you it's a function, however, regardless of the presence or
absence of the trailing parentheses.

View file

@ -120,7 +120,7 @@ all have their own distinct ssrc.
## GstRTPRetransmissionRequest
Custom upstream event which mainly contains the ssrc and the seqnum of
the packet which is asked to be retransmitted.
On the pipeline receiver side this event is generated by the
gstrtpjitterbuffer element. Then it is translated to a NACK to be sent
@ -135,64 +135,64 @@ gstrtpsession element when it receives a NACK from the network.
rtprtxsend keeps a history of rtp packets that it has already sent. When
it receives the event `GstRTPRetransmissionRequest` from the downstream
gstrtpsession element, it looks up the requested seqnum in its stored
packets. If the packet is present in its history, it will create an RTX
packet according to RFC 4588. Then this rtx packet is pushed to its src
pad like other packets.
rtprtxsend works in SSRC-multiplexed mode, so it always has one sink and
src pad.
### Building a retransmission packet from the original packet
An rtx packet is mostly the same as an original packet, except it has its
own `ssrc` and its own `seqnum`. That's why rtprtxsend works in
SSRC-multiplexed mode. It also means that the same session is used.
Another difference between an rtx packet and its original is that it
inserts the original seqnum (OSN: 2 bytes) at the beginning of the
payload. Also rtprtxsend builds the rtx packet without padding, to let other
elements do that. The last difference is the payload type. For now the
user has to set it through the `rtx-payload-type` property. Later it will
automatically retrieve this information from SDP. See the `fmtp` field as
specified in RFC 4588 (a=fmtp:99 apt=98): `fmtp` is the payload type of
the retransmission stream and `apt` the payload type of its associated
master stream.
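As an illustration, here is a minimal sketch of building such a packet with the `GstRTPBuffer` API; this is not the element's actual code, and the function name and parameters are illustrative:
``` c
#include <string.h>
#include <gst/rtp/gstrtpbuffer.h>

/* Sketch: build an rtx packet from the original, per RFC 4588: the same
 * payload prefixed with the 2-byte OSN, and its own ssrc, seqnum and
 * payload type in the header (other header fields omitted for brevity). */
static GstBuffer *
build_rtx_packet (GstBuffer *orig_buf, guint32 rtx_ssrc, guint16 rtx_seqnum,
    guint8 rtx_pt)
{
  GstRTPBuffer orig = GST_RTP_BUFFER_INIT, rtx = GST_RTP_BUFFER_INIT;
  GstBuffer *rtx_buf;
  guint payload_len;

  gst_rtp_buffer_map (orig_buf, GST_MAP_READ, &orig);
  payload_len = gst_rtp_buffer_get_payload_len (&orig);

  /* room for the OSN plus the original payload; no padding, no CSRCs */
  rtx_buf = gst_rtp_buffer_new_allocate (2 + payload_len, 0, 0);
  gst_rtp_buffer_map (rtx_buf, GST_MAP_WRITE, &rtx);

  gst_rtp_buffer_set_ssrc (&rtx, rtx_ssrc);
  gst_rtp_buffer_set_seq (&rtx, rtx_seqnum);
  gst_rtp_buffer_set_payload_type (&rtx, rtx_pt);
  gst_rtp_buffer_set_timestamp (&rtx, gst_rtp_buffer_get_timestamp (&orig));

  /* the original seqnum (OSN) goes in the first 2 payload bytes, then the
   * original payload follows unchanged */
  GST_WRITE_UINT16_BE (gst_rtp_buffer_get_payload (&rtx),
      gst_rtp_buffer_get_seq (&orig));
  memcpy ((guint8 *) gst_rtp_buffer_get_payload (&rtx) + 2,
      gst_rtp_buffer_get_payload (&orig), payload_len);

  gst_rtp_buffer_unmap (&rtx);
  gst_rtp_buffer_unmap (&orig);

  return rtx_buf;
}
```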
### Retransmission ssrc and seqnum
To choose `rtx_ssrc` it randomly selects a number between 0 and 2^32-1
until it is different from `master_ssrc`. `rtx_seqnum` is randomly
selected between 0 and 2^16-1.
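A minimal sketch of this selection, using GLib's random number helpers (the function names are illustrative):
``` c
#include <glib.h>

/* pick an rtx ssrc that differs from the master stream's ssrc */
static guint32
choose_rtx_ssrc (guint32 master_ssrc)
{
  guint32 ssrc;

  do
    ssrc = g_random_int ();              /* uniform over 0 .. 2^32-1 */
  while (ssrc == master_ssrc);

  return ssrc;
}

/* pick the initial rtx seqnum */
static guint16
choose_rtx_seqnum (void)
{
  return g_random_int_range (0, 65536);  /* uniform over 0 .. 2^16-1 */
}
```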
### Deeper in the stored buffer history
For the history it uses a GSequence with 2^15-1 as its maximum size,
which is reasonable as the default value is 100. It contains the packets
in reverse order they have been sent (head:newest, tail:oldest).
GSequence allows adding and removing an element in constant time (like a
queue). GSequence also allows doing a binary search when rtprtxsend
does a lookup in its history. This is important if it receives a lot of requests
or if the history is large.
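A sketch of such a lookup with `GSequence` (the item type and compare function are illustrative, and seqnum wraparound is ignored for brevity):
``` c
#include <glib.h>

typedef struct
{
  guint16 seqnum;
  /* GstBuffer *buffer; ... */
} HistoryItem;

/* order items newest-first, matching the head:newest layout described above */
static gint
history_item_compare (gconstpointer a, gconstpointer b, gpointer user_data)
{
  return (gint) ((const HistoryItem *) b)->seqnum -
      (gint) ((const HistoryItem *) a)->seqnum;
}

/* binary search for a requested seqnum: O(log n) even for a large history */
static HistoryItem *
history_find (GSequence *history, guint16 seqnum)
{
  HistoryItem key = { seqnum };
  GSequenceIter *iter;

  iter = g_sequence_lookup (history, &key, history_item_compare, NULL);

  return iter != NULL ? g_sequence_get (iter) : NULL;
}
```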
### Pending rtx packets
When looking up in its history, if the seqnum is found then it pushes the
buffer onto the tail of a GQueue. Before sending the current master
stream packet, rtprtxsend sends all the buffers which are in this
GQueue, taking care of converting them to rtx packets. This way, rtx
packets are sent in the same order they have been requested.
(`g_list_foreach` traverses the queue from head to tail.) The `GQueue` is
cleared between sending 2 master stream packets. So when this `GQueue`
contains more than one element, it means that rtprtxsend received more
than one rtx request between sending 2 master packets.
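A sketch of that flush step (the function and variable names are illustrative):
``` c
#include <gst/gst.h>

/* push all pending rtx packets, oldest request first, before the next
 * master stream packet; the queue is empty afterwards */
static void
send_pending_rtx (GQueue *pending, GstPad *srcpad)
{
  GstBuffer *rtx_buf;

  while ((rtx_buf = g_queue_pop_head (pending)) != NULL)
    gst_pad_push (srcpad, rtx_buf);  /* already converted to an rtx packet */
}
```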
### Collision
When handling a `GstRTPCollision` event, if the ssrc is its rtx ssrc then
rtprtxsend clears its history and its pending retransmission queue. Then
it chooses a `rtx_ssrc` until it's different from the master ssrc. If the
`GstRTPCollision` event does not contain its rtx ssrc, for example its
master ssrc or other, then it just forwards the event upstream, so
that it can be handled by the rtppayloader.
## Rtprtxreceive element
@ -201,7 +201,7 @@ that it can be handled by the rtppayloader.
The same rtprtxreceive instance can receive several master streams and
several retransmission streams. So it will try to dynamically associate
an rtx ssrc with its master ssrc, so that it can reconstruct the original
from the proper rtx packet.
The algorithm is based on the fact that seqnums of different streams
@ -211,23 +211,23 @@ could also be different. So that they are statistically all different at
a given time. With bad luck, the association is delayed to the next
rtx request.
The algorithm also needs to know if a given packet is an rtx packet or
not. To know this information there is the `rtx-payload-types` property.
For now the user has to configure it but later it will automatically
retrieve this information from SDP. It needs to know if the current
packet is rtx or not in order to know if it can extract the OSN from the
payload. Otherwise it would extract the OSN even on master streams which
means nothing and so it could do bad things. In theory maybe it could
work but we have this information in SDP so why not use it to avoid
bad associations.
Note that it also means that several master streams can have the same
payload type. And also several rtx streams can have the same payload
type. So the information from SDP which gives us which rtx payload type
belongs to a given master payload type is not enough to do the association
between rtx ssrc and master ssrc.
rtprtxreceive works in SSRC-multiplexed mode, so it always has one sink
and src pad.
### Deeper in the association algorithm
@ -237,20 +237,20 @@ the ssrc and the seqnum from this request.
On incoming packets, if the packet has its ssrc already associated then
it knows if the ssrc is an rtx ssrc or a master stream ssrc. If this is
an rtx packet then it reconstructs the original and pushes the result to
the src pad as if it was a master packet.
If the ssrc is not yet associated, rtprtxreceive checks the payload type.
If the packet has its payload type marked as rtx then it will extract
the OSN (original sequence number) and look up in its stored requests whether a
seqnum matches. If found, then it associates the current ssrc to the
master ssrc marked in the request. If not found it just drops the
packet. Then it removes the request from the stored requests.
If there are 2 requests with the same seqnum and different ssrc, then
the couple seqnum,ssrc is removed from the stored requests. A stored
request actually means that the couple seqnum,ssrc is stored.
If this happens the request is dropped, but it avoids making bad
associations. In this case the association is just delayed to the next
request.
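A sketch of that association step, assuming the stored requests map a seqnum to the master ssrc of the request (the data structures and names are illustrative):
``` c
#include <glib.h>

/* try to associate an unknown rtx ssrc using the OSN found in its payload;
 * returns FALSE when no stored request matches, in which case the rtx
 * packet is dropped and the association is delayed to the next request */
static gboolean
try_associate (GHashTable *stored_requests, /* seqnum -> master ssrc */
    GHashTable *ssrc_map,                   /* rtx ssrc -> master ssrc */
    guint32 rtx_ssrc, guint16 osn)
{
  gpointer master_ssrc;

  if (!g_hash_table_lookup_extended (stored_requests,
          GUINT_TO_POINTER ((guint) osn), NULL, &master_ssrc))
    return FALSE;

  g_hash_table_insert (ssrc_map, GUINT_TO_POINTER (rtx_ssrc), master_ssrc);
  g_hash_table_remove (stored_requests, GUINT_TO_POINTER ((guint) osn));

  return TRUE;
}
```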

View file

@ -113,10 +113,10 @@ Hooks (\* already implemented)
Tracers are plugin features. They have a simple API:
class init Here the tracers describe the data they will emit.
instance init Tracers attach handlers to one or more hooks using
`gst_tracing_register_hook()`. In case they are configurable, they can
read the options from the *params* property. This is the extra detail
from the environment var.
@ -146,7 +146,7 @@ to describe their format:
``` c
fmt = gst_tracer_record_new ("thread-rusage.class",
// value in the log record (order does not matter)
// *thread-id* is a *key* to relate the record to something as indicated
// by *scope* substructure
"thread-id", GST_TYPE_STRUCTURE, gst_structure_new ("scope",
"type", G_TYPE_GTYPE, G_TYPE_GUINT64,
@ -177,7 +177,7 @@ Later tracers can use the `GstTracerRecord` instance to log values efficiently:
``` c
gst_tracer_record_log (fmt, (guint64) (guintptr) thread_id, avg_cpuload);
```
Below are a few more examples for parts of tracer classes:
An optional value. Since the PTS can be GST_CLOCK_TIME_NONE and that is (-1),
we don't want to log this.
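A sketch of how such an optional field can be described, using the optional flag from `GstTracerValueFlags` (the field name and description are illustrative):
``` c
// part of a gst_tracer_record_new() call: a PTS value that may be absent
"ts", GST_TYPE_STRUCTURE, gst_structure_new ("value",
    "type", G_TYPE_GTYPE, G_TYPE_UINT64,
    "description", G_TYPE_STRING, "timestamp",
    "flags", GST_TYPE_TRACER_VALUE_FLAGS, GST_TRACER_VALUE_FLAGS_OPTIONAL,
    "min", G_TYPE_UINT64, G_GUINT64_CONSTANT (0),
    "max", G_TYPE_UINT64, G_MAXUINT64,
    NULL),
```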

View file

@ -18,9 +18,9 @@ This chapter talks about the memory-management features available to
GStreamer plugins. We will first talk about the low-level `GstMemory`
object that manages access to a piece of memory and then continue with
one of its main users, the `GstBuffer`, which is used to exchange data
between elements and with the application. We will also discuss the `GstMeta`.
This object can be placed on buffers to provide extra info about them and
their memory. We will also discuss the `GstBufferPool`, which makes it
possible to manage buffers of the same size more efficiently.
To conclude this chapter we will take a look at the `GST_QUERY_ALLOCATION`

View file

@ -71,7 +71,7 @@ If your element is exclusively loop-based, you may or may not want a
sink event function (since the element is driving the pipeline it will
know the length of the stream in advance or be notified by the flow
return value of `gst_pad_pull_range()`). In some cases even loop-based
elements may receive events from upstream though (for example audio
decoders with an id3demux or apedemux element in front of them, or
demuxers that are being fed input from sources that send additional
information about the stream in custom events, as DVD sources do).
@ -80,7 +80,7 @@ information about the stream in custom events, as DVD sources do).
Upstream events are generated by an element somewhere downstream in the
pipeline (example: a video sink may generate navigation events that
inform upstream elements about the current position of the mouse
pointer). This may also happen indirectly on request of the application,
for example, when the application executes a seek on a pipeline, this seek
request will be passed on to a sink element which will then in turn
@ -90,7 +90,7 @@ The most common upstream events are seek events, Quality-of-Service
(QoS) and reconfigure events.
An upstream event can be sent using the `gst_pad_send_event` function.
This function simply calls the default event handler of that pad. The
default event handler of pads is `gst_pad_event_default`, and it
basically sends the event to the peer of the internally linked pad. So
upstream events always arrive on the src pad of your element and are
@ -118,7 +118,7 @@ handling. Here they are :
- If you are generating some new event based on the one you received,
don't forget to gst\_event\_unref the event you received.
- Event handler functions are supposed to return TRUE or FALSE
indicating whether the event has been handled or not. Never simply return
TRUE/FALSE in that handler unless you really know that you have
handled that event (see the sketch below).
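A minimal sketch of such an event handler (the element name is illustrative):
``` c
#include <gst/gst.h>

static gboolean
gst_my_filter_sink_event (GstPad *pad, GstObject *parent, GstEvent *event)
{
  gboolean ret;

  GST_LOG_OBJECT (pad, "received %s event", GST_EVENT_TYPE_NAME (event));

  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_EOS:
      /* do element-specific cleanup here, then let the default handler
       * forward the event to the peer of the internally linked pad */
      ret = gst_pad_event_default (pad, parent, event);
      break;
    default:
      /* we did not handle it ourselves, so do not blindly return TRUE:
       * let the default handler decide */
      ret = gst_pad_event_default (pad, parent, event);
      break;
  }

  return ret;
}
```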
@ -130,7 +130,7 @@ handling. Here they are :
## All Events Together
Below follows a list of all defined events that are currently
being used, plus how they should be used/interpreted. You can check
what type a certain event is using the GST\_EVENT\_TYPE macro (or if you
need a string for debugging purposes you can use
GST\_EVENT\_TYPE\_NAME).
@ -197,7 +197,7 @@ should be sent on. The last is true for demuxers, which generally have a
byte-to-time conversion concept. Their input is usually byte-based, so
the incoming event will have an offset in byte units
(`GST_FORMAT_BYTES`), too. Elements downstream, however, expect segment
events in time units, so that they can be used to synchronize against the
pipeline clock. Therefore, demuxers and similar elements should not
forward the event, but parse it, free it and send a segment event (in
time units, `GST_FORMAT_TIME`) further downstream.
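A sketch of that pattern in a demuxer's sink event handler (pad handling and names are illustrative; a real demuxer would fill the segment from its parsed headers):
``` c
#include <gst/gst.h>

static gboolean
my_demux_sink_event (GstPad *sinkpad, GstObject *parent, GstEvent *event)
{
  GstPad *srcpad = gst_element_get_static_pad (GST_ELEMENT (parent), "src");
  gboolean ret;

  if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT) {
    GstSegment segment;

    /* do not forward the byte-based segment: parse it if needed, free it */
    gst_event_unref (event);

    /* ... and send a time-based segment downstream instead */
    gst_segment_init (&segment, GST_FORMAT_TIME);
    ret = gst_pad_push_event (srcpad, gst_event_new_segment (&segment));
  } else {
    ret = gst_pad_event_default (sinkpad, parent, event);
  }

  gst_object_unref (srcpad);

  return ret;
}
```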

View file

@ -16,11 +16,11 @@ a spin-button widget, whereas others would be better represented by a
slider widget. Such things are not possible because the UI has no actual
meaning in the application. A UI widget that represents a bitrate
property is the same as a UI widget that represents the size of a video,
as long as both are of the same `GParamSpec` type. Another problem is
that things like parameter grouping, function grouping, or parameter
coupling are not really possible.
The second problem with parameters is that they are not dynamic. In
many cases, the allowed values for a property are not fixed, but depend
on things that can only be detected at runtime. The names of inputs for
a TV card in a video4linux source element, for example, can only be

View file

@ -7,7 +7,7 @@ title: Media Types and Properties
There is a very large set of possible media types that may be used to pass
data between elements. Indeed, each new element that is defined may use
a new data format (though unless at least one other element recognises
that format, it will be useless since nothing will be
able to link with it).
In order for media types to be useful, and for systems like autopluggers to
@ -25,7 +25,7 @@ For now, the policy is simple:
- If creating a new media type, discuss it first with the other GStreamer
developers, on at least one of: IRC, mailing lists.
- Try to ensure that the name for a new format does not
conflict with anything else created already, and is not a more
generalised name than it should be. For example: "audio/compressed"
would be too generalised a name to represent audio data compressed
@ -149,7 +149,7 @@ samplerate of the contained audio stream in the header. MPEG system
streams don't. This means that an AVI stream demuxer would provide
samplerate as a property for MPEG audio streams, whereas an MPEG demuxer
would not. A decoder needing this data would require a stream parser in
between to extract this from the header or calculate it from the
stream.
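For illustration, a sketch of the caps such a parser could set once it has extracted that data from the header (the concrete values are made up):
``` c
#include <gst/gst.h>

/* caps for an MPEG-1 layer 3 audio stream with known rate and channels */
GstCaps *caps = gst_caps_new_simple ("audio/mpeg",
    "mpegversion", G_TYPE_INT, 1,
    "layer", G_TYPE_INT, 3,
    "rate", G_TYPE_INT, 44100,
    "channels", G_TYPE_INT, 2,
    NULL);
```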
### Table of Audio Types

View file

@ -67,7 +67,7 @@ the sink.
An element will have to install an event function on its source pads in
order to receive QoS events. Usually, the element will need to store the
value of the QoS event and use it in the data processing function. The
element will need to use a lock to protect these QoS values as shown in
the example below. Also make sure to pass the QoS event upstream.
@ -111,7 +111,7 @@ timestamp + jitter is also going to be late. We can thus drop all
buffers with a timestamp less than timestamp + jitter.
If the buffer duration is known, a better estimation for the next likely
timestamp to arrive in time is: timestamp + 2 \* jitter + duration.
A possible algorithm typically looks like this:
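A condensed sketch of the drop decision, assuming the element cached the timestamp and jitter from the last QoS event (names are illustrative):
``` c
#include <gst/gst.h>

/* return TRUE if the buffer should be dropped: anything earlier than the
 * estimated next-in-time timestamp would be late anyway */
static gboolean
should_drop (GstClockTime timestamp, GstClockTime qos_timestamp,
    GstClockTimeDiff qos_jitter, GstClockTime duration)
{
  GstClockTime next_in_time = qos_timestamp + 2 * qos_jitter + duration;

  return GST_CLOCK_TIME_IS_VALID (timestamp) && timestamp < next_in_time;
}
```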
@ -205,7 +205,7 @@ conditions:
- The element dropped a buffer because of QoS reasons.
- An element changed its processing strategy because of QoS reasons
(quality). This could include a decoder that decided to drop every B
frame to increase its processing speed or an effect element
that switched to a lower quality algorithm.
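A sketch of posting such a message after a QoS-motivated drop (the stats counters are illustrative; `gst_message_new_qos()` and `gst_message_set_qos_stats()` are the relevant API):
``` c
#include <gst/gst.h>

/* post a QoS message on the bus after dropping a buffer for QoS reasons */
static void
post_qos_drop_message (GstElement *element, guint64 running_time,
    guint64 stream_time, guint64 timestamp, guint64 duration,
    guint64 processed, guint64 dropped)
{
  GstMessage *msg;

  msg = gst_message_new_qos (GST_OBJECT_CAST (element), FALSE /* live */,
      running_time, stream_time, timestamp, duration);
  gst_message_set_qos_stats (msg, GST_FORMAT_BUFFERS, processed, dropped);
  gst_element_post_message (element, msg);
}
```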

View file

@ -220,7 +220,7 @@ gst_my_filter_task_func (GstElement *element)
Note that normally elements would not read the full stream before
processing tags. Rather, they would read from each sinkpad until they've
received data (since tags usually come in before the first data buffer)
and process that.

View file

@ -122,5 +122,5 @@ we will try to explain why those requirements are set.
with unseekable input streams (e.g. network sources) as well.
- Sources and sinks should be prepared to be assigned another clock
than the one they expose themselves. Always use the provided clock
for synchronization, else you'll get A/V sync issues.

View file

@ -5,7 +5,7 @@ title: Pre-made base classes
# Pre-made base classes
So far, we've been looking at low-level concepts of creating any type of
GStreamer element. Now, let's assume that all you want is to create a
simple audiosink that works exactly the same as, say, “esdsink”, or a
filter that simply normalizes audio volume. Such elements are very
general in concept and since they do nothing special, they should be
@ -31,7 +31,7 @@ in many elements. Therefore, sink elements can derive from the
functions automatically. The derived class only needs to implement a
bunch of virtual functions and will work automatically.
The base class implements much of the synchronization logic that a sink
has to perform.
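For instance, a minimal sink deriving from `GstBaseSink` only needs to override `render()` (a sketch; the type name is illustrative and pad templates are omitted):
``` c
#include <gst/base/gstbasesink.h>

typedef struct _MySink { GstBaseSink parent; } MySink;
typedef struct _MySinkClass { GstBaseSinkClass parent_class; } MySinkClass;

G_DEFINE_TYPE (MySink, my_sink, GST_TYPE_BASE_SINK);

/* called once per buffer; preroll and clock synchronization have already
 * been handled by the base class */
static GstFlowReturn
my_sink_render (GstBaseSink *bsink, GstBuffer *buffer)
{
  /* consume the buffer here */
  return GST_FLOW_OK;
}

static void
my_sink_class_init (MySinkClass *klass)
{
  GST_BASE_SINK_CLASS (klass)->render = my_sink_render;
}

static void
my_sink_init (MySink *sink)
{
}
```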
The `GstBaseSink` base-class specifies some limitations on elements,
@ -65,7 +65,7 @@ The advantages of deriving from `GstBaseSink` are numerous:
not need to know anything about the technical implementation
requirements of preroll. The base-class does all the hard work.
- Less code to write in the derived class, shared code (and thus
shared bugfixes).
There are also specialized base classes for audio and video, let's look