gst-libs/gst/rtp/README: Some new documentation

Original commit message from CVS:
2006-05-18 Philippe Kalaf <philippe.kalaf@collabora.co.uk>

* gst-libs/gst/rtp/README:
Some new documentation
* gst-libs/gst/rtp/gstrtpbuffer.h:
Added GST_RTP_PAYLOAD_DYNAMIC_STRING for use by children
* gst-libs/gst/rtp/gstbasertpaudiopayload.c:
* gst-libs/gst/rtp/gstbasertpaudiopayload.h:
New RTP audio base payloader class. Supports frame or sample based codecs.
Not enabled in Makefile.am until approved.
This commit is contained in:
Philippe Kalaf 2006-05-18 23:00:02 +00:00
parent d41ffcb767
commit 8675bc89e4
5 changed files with 182 additions and 99 deletions

View file

@ -1,3 +1,14 @@
2006-05-18 Philippe Kalaf <philippe.kalaf@collabora.co.uk>
* gst-libs/gst/rtp/README:
Some new documentation
* gst-libs/gst/rtp/gstrtpbuffer.h:
Added GST_RTP_PAYLOAD_DYNAMIC_STRING for use by children
* gst-libs/gst/rtp/gstbasertpaudiopayload.c:
* gst-libs/gst/rtp/gstbasertpaudiopayload.h:
New RTP audio base payloader class. Supports frame or sample based codecs.
Not enabled in Makefile.am until approved.
2006-05-18 Tim-Philipp Müller <tim at centricular dot net>
* tests/check/elements/alsa.c: (test_device_property_probe):

View file

@ -1,8 +1,89 @@
The RTP libraries
---------------------
GstRTPBuffer:
RTP Buffers
-----------
The Real-time Transport Protocol, as described in RFC 3550, requires the use
of special packets containing an additional RTP header of at least 12 bytes.
GStreamer provides some helper functions for creating and parsing these RTP
headers. The result is a normal #GstBuffer with an additional RTP header.

RTP buffers are usually created with gst_rtp_buffer_new_allocate() or
gst_rtp_buffer_new_allocate_len(). These functions create buffers with
preallocated memory and ensure that enough memory is allocated for the RTP
header. gst_rtp_buffer_new_allocate() is used when the payload size is known;
gst_rtp_buffer_new_allocate_len() should be used when the size of the whole
RTP buffer (RTP header + payload) is known.
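
As a minimal sketch (the 160 byte payload size is just an illustration),
allocating an outgoing packet with either function could look like this:

  #include <gst/gst.h>
  #include <gst/rtp/gstrtpbuffer.h>

  static GstBuffer *
  make_outgoing_packet (void)
  {
    GstBuffer *buf;

    /* 160 byte payload, no padding, no CSRC entries; the 12 byte RTP
     * header is allocated on top of the payload */
    buf = gst_rtp_buffer_new_allocate (160, 0, 0);

    /* equivalent call when the total packet size (12 + 160 bytes) is
     * known instead:
     * buf = gst_rtp_buffer_new_allocate_len (172, 0, 0); */

    return buf;
  }
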
When receiving RTP packets from the network, gst_rtp_buffer_new_take_data()
should be used when the user would like to parse that RTP packet. (TODO Ask
Wim what the real purpose of this function is, as it seems to simply create a
duplicate GstBuffer with the same data as the previous one.) The function
will create a new RTP buffer with the given data as the whole RTP packet.
Alternatively, gst_rtp_buffer_new_copy_data() can be used if the user wishes
to make a copy of the data before using it in the new RTP buffer. An
important function is gst_rtp_buffer_validate(), which verifies that the
buffer is a well-formed RTP buffer.
It is now possible to use all the gst_rtp_buffer_get_*() and
gst_rtp_buffer_set_*() functions to read or write the different parts of the
RTP header, such as the payload type, the sequence number or the RTP
timestamp. The user can also retrieve a pointer to the actual RTP payload
data using the gst_rtp_buffer_get_payload() function.
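
A sketch of the receiving side, assuming data/len hold a packet read from the
network (the helper name is made up, only the gst_rtp_buffer_*() calls are
real):

  #include <gst/gst.h>
  #include <gst/rtp/gstrtpbuffer.h>

  static void
  inspect_packet (gpointer data, guint len)
  {
    GstBuffer *buf;
    guint8 *payload;

    /* wrap the received data as a full RTP packet; this takes ownership,
     * use gst_rtp_buffer_new_copy_data() to keep the original data */
    buf = gst_rtp_buffer_new_take_data (data, len);

    if (!gst_rtp_buffer_validate (buf)) {
      g_warning ("received a malformed RTP packet");
      gst_buffer_unref (buf);
      return;
    }

    g_print ("pt %u, seq %u, ts %u\n",
        (guint) gst_rtp_buffer_get_payload_type (buf),
        (guint) gst_rtp_buffer_get_seq (buf),
        (guint) gst_rtp_buffer_get_timestamp (buf));

    payload = gst_rtp_buffer_get_payload (buf);
    /* ... hand the payload to the decoder ... */

    gst_buffer_unref (buf);
  }
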
A GstBuffer subclass that has extra RTP information such as timestamps and
marks. It is used for communication between the RTPSession element and the
RTP payloaders/depayloaders.
RTP Base Payloader Class (GstBaseRTPPayload)
--------------------------------------------
All RTP payloader elements (audio or video) should derive from this class.
RTP Base Audio Payloader Class (GstBaseRTPAudioPayload)
-------------------------------------------------------
This class derives from GstBaseRTPPayload.
It can be used for payloading audio codecs. It will only work with constant
bitrate codecs. It supports both frame based and sample based codecs. It
takes care of packing the audio data into RTP packets and filling in the
headers accordingly. The payloading is done based on the maximum MTU (mtu)
and the maximum time per packet (max-ptime). The general idea is to divide
large data buffers into smaller RTP packets. The RTP packet size is the
minimum of the MTU, max-ptime (if set) and the currently available data. Any
residual data is always sent in a last RTP packet (there is no minimum RTP
packet size); since this is a real time protocol, data should never be
delayed. In the case of frame based codecs, the resulting RTP packets always
contain full frames.
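
A sketch of that size decision for the frame based case (the helper and its
parameters are illustrative, not part of the class API; only
gst_rtp_buffer_calc_payload_len() is a real function):

  #include <gst/gst.h>
  #include <gst/rtp/gstrtpbuffer.h>

  static guint
  calc_frame_payload_len (guint mtu, gboolean have_ptime,
      guint ptime_max_bytes, guint available, guint frame_size)
  {
    /* payload that still fits in the MTU once the RTP header is added */
    guint len = gst_rtp_buffer_calc_payload_len (mtu, 0, 0);

    /* never exceed the number of bytes allowed by max-ptime, when set */
    if (have_ptime)
      len = MIN (len, ptime_max_bytes);

    /* and never more than what is currently available */
    len = MIN (len, available);

    /* frame based codecs may only send whole frames */
    return (len / frame_size) * frame_size;
  }
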
To use this base class, your child element needs to call either
gst_basertpaudiopayload_set_frame_based() or
gst_basertpaudiopayload_set_sample_based(). This is usually done in the
element's _init() function. Then, the child element must call either
gst_basertpaudiopayload_set_frame_options() or
gst_basertpaudiopayload_set_sample_options(). Since GstBaseRTPAudioPayload
derives from GstBaseRTPPayload, the child element must set any variables or
call/override any functions required by that base class. The child element
does not need to override any other functions specific to
GstBaseRTPAudioPayload.
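
As a sketch, a frame based child's _init() might look like the following.
The element type and the iLBC-style numbers (38 byte frames every 20 ms) are
only an illustration, the type registration boilerplate is omitted, and the
header is the one added by this commit:

  #include <gst/rtp/gstbasertpaudiopayload.h>

  typedef struct
  {
    GstBaseRTPAudioPayload audiopayload;
  } GstRTPMyPay;

  typedef struct
  {
    GstBaseRTPAudioPayloadClass parent_class;
  } GstRTPMyPayClass;

  static void
  gst_rtp_my_pay_init (GstRTPMyPay * pay, GstRTPMyPayClass * klass)
  {
    GstBaseRTPAudioPayload *payload = GST_BASE_RTP_AUDIO_PAYLOAD (pay);

    /* this payloader hands whole codec frames to the base class */
    gst_basertpaudiopayload_set_frame_based (payload);

    /* every frame lasts 20 ms and is 38 bytes long */
    gst_basertpaudiopayload_set_frame_options (payload, 20, 38);
  }

A sample based child would instead call
gst_basertpaudiopayload_set_sample_based() and
gst_basertpaudiopayload_set_sample_options() with its size per sample in
bytes.
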
This base class can be tested through its child classes. Here is an
example using the iLBC payloader (frame based).
For 20 ms mode:
GST_DEBUG="basertpaudiopayload:5" gst-launch-0.10 fakesrc sizetype=2
sizemax=114 datarate=1900 ! audio/x-iLBC, mode=20 ! rtpilbcpay
max-ptime="40000000" ! fakesink
For 30 ms mode:
GST_DEBUG="basertpaudiopayload:5" gst-launch-0.10 fakesrc sizetype=2
sizemax=150 datarate=1662 ! audio/x-iLBC, mode=30 ! rtpilbcpay
max-ptime="60000000" ! fakesink
Here is an example using the uLaw payloader (sample based).
GST_DEBUG="basertpaudiopayload:5" gst-launch-0.10 fakesrc sizetype=2
sizemax=150 datarate=8000 ! audio/x-mulaw ! rtppcmupay max-ptime="6000000" !
fakesink
RTP Base Depayloader Class (GstBaseRTPDepayload)
------------------------------------------------
All RTP depayloader elements (audio or video) should derive from this class.

View file

@ -31,9 +31,6 @@
GST_DEBUG_CATEGORY (basertpaudiopayload_debug);
#define GST_CAT_DEFAULT (basertpaudiopayload_debug)
/* let us define a minimum of 10 ms for sample based codecs */
#define GST_RTP_MIN_PTIME_MS 10
static void gst_basertpaudiopayload_finalize (GObject * object);
static GstFlowReturn
@ -85,8 +82,7 @@ static void
gst_basertpaudiopayload_init (GstBaseRTPAudioPayload * basertpaudiopayload,
GstBaseRTPAudioPayloadClass * klass)
{
basertpaudiopayload->adapter = gst_adapter_new ();
basertpaudiopayload->adapter_base_ts = 0;
basertpaudiopayload->base_ts = 0;
basertpaudiopayload->type = AUDIO_CODEC_TYPE_NONE;
@ -104,12 +100,18 @@ gst_basertpaudiopayload_finalize (GObject * object)
GstBaseRTPAudioPayload *basertpaudiopayload;
basertpaudiopayload = GST_BASE_RTP_AUDIO_PAYLOAD (object);
g_object_unref (basertpaudiopayload->adapter);
basertpaudiopayload->adapter = NULL;
GST_CALL_PARENT (G_OBJECT_CLASS, finalize, (object));
}
/**
* gst_basertpaudiopayload_set_frame_based:
* @basertpaudiopayload: a pointer to the element.
*
* Tells #GstBaseRTPAudioPayload that the child element is for a frame based
* audio codec
*
*/
void
gst_basertpaudiopayload_set_frame_based (GstBaseRTPAudioPayload *
basertpaudiopayload)
@ -123,6 +125,14 @@ gst_basertpaudiopayload_set_frame_based (GstBaseRTPAudioPayload *
basertpaudiopayload->type = AUDIO_CODEC_TYPE_FRAME_BASED;
}
/**
* gst_basertpaudiopayload_set_sample_based:
* @basertpaudiopayload: a pointer to the element.
*
* Tells #GstBaseRTPAudioPayload that the child element is for a sample based
* audio codec
*
*/
void
gst_basertpaudiopayload_set_sample_based (GstBaseRTPAudioPayload *
basertpaudiopayload)
@ -136,7 +146,15 @@ gst_basertpaudiopayload_set_sample_based (GstBaseRTPAudioPayload *
basertpaudiopayload->type = AUDIO_CODEC_TYPE_SAMPLE_BASED;
}
/* These are options that need to be set for frame based audio codecs */
/**
* gst_basertpaudiopayload_set_frame_options:
* @basertpaudiopayload: a pointer to the element.
* @frame_duration: The duration of an audio frame in milliseconds.
* @frame_size: The size of an audio frame in bytes.
*
* Sets the options for frame based audio codecs.
*
*/
void
gst_basertpaudiopayload_set_frame_options (GstBaseRTPAudioPayload
* basertpaudiopayload, gint frame_duration, gint frame_size)
@ -147,6 +165,14 @@ gst_basertpaudiopayload_set_frame_options (GstBaseRTPAudioPayload
basertpaudiopayload->frame_duration = frame_duration;
}
/**
* gst_basertpaudiopayload_set_sample_options:
* @basertpaudiopayload: a pointer to the element.
* @sample_size: Size per sample in bytes.
*
* Sets the options for sample based audio codecs.
*
*/
void
gst_basertpaudiopayload_set_sample_options (GstBaseRTPAudioPayload
* basertpaudiopayload, gint sample_size)
@ -207,9 +233,8 @@ gst_basertpaudiopayload_handle_frame_based_buffer (GstBaseRTPPayload *
frame_size = basertpaudiopayload->frame_size;
frame_duration = basertpaudiopayload->frame_duration;
/* If buffer fits on an RTP packet, let's just push it through without using
* the adapter */
/* this will check again max_ptime and max_mtu */
/* If buffer fits on an RTP packet, let's just push it through */
/* this will check against max_ptime and max_mtu */
if (!gst_basertppayload_is_filled (basepayload,
gst_rtp_buffer_calc_packet_len (GST_BUFFER_SIZE (buffer), 0, 0),
GST_BUFFER_DURATION (buffer))) {
@ -220,10 +245,6 @@ gst_basertpaudiopayload_handle_frame_based_buffer (GstBaseRTPPayload *
return ret;
}
/* TODO : would be nice if we had some property that told the payloader to put
* just 1 frame per RTP packet, for the moment we can set the ptime to 0 or
* something smaller or equal to a frame duration */
/* max number of bytes based on given ptime, has to be multiple of
* frame_duration */
if (basepayload->max_ptime != -1) {
@ -238,27 +259,17 @@ gst_basertpaudiopayload_handle_frame_based_buffer (GstBaseRTPPayload *
}
}
/* if the adapter is empty (should be), let's set the base timestamp */
if (gst_adapter_available (basertpaudiopayload->adapter) == 0) {
basertpaudiopayload->adapter_base_ts = GST_BUFFER_TIMESTAMP (buffer);
} else {
GST_ERROR_OBJECT (basertpaudiopayload,
"Adapter should be empty but is not!");
return GST_FLOW_ERROR;
}
/* let's set the base timestamp */
basertpaudiopayload->base_ts = GST_BUFFER_TIMESTAMP (buffer);
gst_adapter_push (basertpaudiopayload->adapter, buffer);
available = gst_adapter_available (basertpaudiopayload->adapter);
available = GST_BUFFER_SIZE (buffer);
data = (guint8 *) GST_BUFFER_DATA (buffer);
/* as long as we have full frames */
/* this loop will always empty the adapter till the last frame */
/* TODO Make it possible to set a minimum size per packet, this way the
* algorithm doesn't empty the adapter if there is too little data left and
* will wait until the next buffers to arrive */
/* this loop will push all available buffers till the last frame */
while (available >= frame_size) {
/* we need to see how many frames we can get based on maximum MTU, maximum
* ptime and the number of bytes available in the adapter */
* ptime and the number of bytes available */
payload_len = MIN (MIN (
/* MTU max */
(int) (gst_rtp_buffer_calc_payload_len (GST_BASE_RTP_PAYLOAD_MTU
@ -268,29 +279,22 @@ gst_basertpaudiopayload_handle_frame_based_buffer (GstBaseRTPPayload *
/* currently available */
floor (available / frame_size) * frame_size);
data =
(guint8 *) gst_adapter_peek (basertpaudiopayload->adapter, payload_len);
ret =
gst_basertpaudiopayload_push (basepayload, data, payload_len,
basertpaudiopayload->adapter_base_ts);
ret = gst_basertpaudiopayload_push (basepayload, data, payload_len,
basertpaudiopayload->base_ts);
gst_adapter_flush (basertpaudiopayload->adapter, payload_len);
gfloat ts_inc = (payload_len * frame_duration) / frame_size;
ts_inc = ts_inc * GST_MSECOND;
basertpaudiopayload->adapter_base_ts += ts_inc;
GST_DEBUG_OBJECT (basertpaudiopayload, "%f %f %d", ts_inc,
ts_inc * GST_MSECOND, (payload_len * frame_duration) / frame_size);
GST_DEBUG_OBJECT (basertpaudiopayload, "Pushing with ts %" GST_TIME_FORMAT,
GST_TIME_ARGS (basertpaudiopayload->adapter_base_ts));
basertpaudiopayload->base_ts += ts_inc;
available = gst_adapter_available (basertpaudiopayload->adapter);
available -= payload_len;
data += payload_len;
}
/* adapter should be freed by now */
/* none should be available by now */
if (available != 0) {
GST_ERROR_OBJECT (basertpaudiopayload,
"Adapter should be empty but is not!");
"The buffer size is not a multiple of the frame_size");
return GST_FLOW_ERROR;
}
@ -309,7 +313,6 @@ gst_basertpaudiopayload_handle_sample_based_buffer (GstBaseRTPPayload *
guint maxptime_octets = G_MAXUINT;
guint minptime_octets = 0;
guint sample_size;
ret = GST_FLOW_ERROR;
@ -323,9 +326,8 @@ gst_basertpaudiopayload_handle_sample_based_buffer (GstBaseRTPPayload *
}
sample_size = basertpaudiopayload->sample_size;
/* If buffer fits on an RTP packet, let's just push it through without using
* the adapter */
/* this will check again max_ptime and max_mtu */
/* If buffer fits on an RTP packet, let's just push it through */
/* this will check against max_ptime and max_mtu */
if (!gst_basertppayload_is_filled (basepayload,
gst_rtp_buffer_calc_packet_len (GST_BUFFER_SIZE (buffer), 0, 0),
GST_BUFFER_DURATION (buffer))) {
@ -340,38 +342,23 @@ gst_basertpaudiopayload_handle_sample_based_buffer (GstBaseRTPPayload *
if (basepayload->max_ptime != -1) {
maxptime_octets = basepayload->max_ptime * basepayload->clock_rate /
(sample_size * GST_SECOND);
minptime_octets = GST_RTP_MIN_PTIME_MS * basepayload->clock_rate /
(sample_size * 1000);
GST_DEBUG_OBJECT (basertpaudiopayload,
"Calculated max_octects %u and min_octets %u", maxptime_octets,
minptime_octets);
if (maxptime_octets < minptime_octets) {
GST_WARNING_OBJECT (basertpaudiopayload,
"Given ptime %d is smaller than minimum %d, replacing by %d",
maxptime_octets, minptime_octets, minptime_octets);
maxptime_octets = minptime_octets;
}
GST_DEBUG_OBJECT (basertpaudiopayload, "Calculated max_octects %u",
maxptime_octets);
}
/* if the adapter is empty (should be), let's set the base timestamp */
if (gst_adapter_available (basertpaudiopayload->adapter) == 0) {
basertpaudiopayload->adapter_base_ts = GST_BUFFER_TIMESTAMP (buffer);
GST_DEBUG_OBJECT (basertpaudiopayload, "Setting to %" GST_TIME_FORMAT,
GST_TIME_ARGS (GST_BUFFER_TIMESTAMP (buffer)));
}
/* let's set the base timestamp */
basertpaudiopayload->base_ts = GST_BUFFER_TIMESTAMP (buffer);
GST_DEBUG_OBJECT (basertpaudiopayload, "Setting to %" GST_TIME_FORMAT,
GST_TIME_ARGS (GST_BUFFER_TIMESTAMP (buffer)));
gst_adapter_push (basertpaudiopayload->adapter, buffer);
available = gst_adapter_available (basertpaudiopayload->adapter);
available = GST_BUFFER_SIZE (buffer);
data = (guint8 *) GST_BUFFER_DATA (buffer);
/* as long as we have full frames */
/* this loop will always empty the adapter till the last frame */
/* TODO Make it possible to set a minimum size per packet, this way the
* algorithm doesn't empty the adapter if there is too little data left and
* will wait until the next buffers to arrive */
while (available >= minptime_octets) {
/* this loop will use all available data until the last byte */
while (available) {
/* we need to see how many frames we can get based on maximum MTU, maximum
* ptime and the number of bytes available in the adapter */
* ptime and the number of bytes available */
payload_len = MIN (MIN (
/* MTU max */
gst_rtp_buffer_calc_payload_len (GST_BASE_RTP_PAYLOAD_MTU
@ -381,27 +368,20 @@ gst_basertpaudiopayload_handle_sample_based_buffer (GstBaseRTPPayload *
/* currently available */
available);
data =
(guint8 *) gst_adapter_peek (basertpaudiopayload->adapter, payload_len);
GST_DEBUG_OBJECT (basertpaudiopayload, "Pushing with ts %" GST_TIME_FORMAT,
GST_TIME_ARGS (basertpaudiopayload->adapter_base_ts));
ret =
gst_basertpaudiopayload_push (basepayload, data, payload_len,
basertpaudiopayload->adapter_base_ts);
ret = gst_basertpaudiopayload_push (basepayload, data, payload_len,
basertpaudiopayload->base_ts);
gst_adapter_flush (basertpaudiopayload->adapter, payload_len);
gfloat num = payload_len;
gfloat datarate = (sample_size * basepayload->clock_rate);
basertpaudiopayload->adapter_base_ts +=
basertpaudiopayload->base_ts +=
/* payload_len (bytes) * nsecs/sec / datarate (bytes*sec) */
num / datarate * GST_SECOND;
GST_DEBUG_OBJECT (basertpaudiopayload, "Calculating ts inc %f %f %f", num,
datarate, num / datarate * GST_SECOND);
GST_DEBUG_OBJECT (basertpaudiopayload, "New ts is %" GST_TIME_FORMAT,
GST_TIME_ARGS (basertpaudiopayload->adapter_base_ts));
GST_TIME_ARGS (basertpaudiopayload->base_ts));
available = gst_adapter_available (basertpaudiopayload->adapter);
available -= payload_len;
data += payload_len;
}
return ret;
@ -415,6 +395,9 @@ gst_basertpaudiopayload_push (GstBaseRTPPayload * basepayload, guint8 * data,
guint8 *payload;
GstFlowReturn ret;
GST_DEBUG_OBJECT (basepayload, "Pushing %d bytes ts %" GST_TIME_FORMAT,
payload_len, GST_TIME_ARGS (timestamp));
/* create buffer to hold the payload */
outbuf = gst_rtp_buffer_new_allocate (payload_len, 0, 0);

View file

@ -22,7 +22,6 @@
#include <gst/gst.h>
#include <gst/rtp/gstbasertppayload.h>
#include <gst/base/gstadapter.h>
G_BEGIN_DECLS
@ -32,9 +31,11 @@ typedef struct _GstBaseRTPAudioPayloadClass GstBaseRTPAudioPayloadClass;
#define GST_TYPE_BASE_RTP_AUDIO_PAYLOAD \
(gst_basertpaudiopayload_get_type())
#define GST_BASE_RTP_AUDIO_PAYLOAD(obj) \
(G_TYPE_CHECK_INSTANCE_CAST((obj),GST_TYPE_BASE_RTP_AUDIO_PAYLOAD,GstBaseRTPAudioPayload))
(G_TYPE_CHECK_INSTANCE_CAST((obj), \
GST_TYPE_BASE_RTP_AUDIO_PAYLOAD,GstBaseRTPAudioPayload))
#define GST_BASE_RTP_AUDIO_PAYLOAD_CLASS(klass) \
(G_TYPE_CHECK_CLASS_CAST((klass),GST_TYPE_BASE_RTP_AUDIO_PAYLOAD,GstBaseRTPAudioPayload))
(G_TYPE_CHECK_CLASS_CAST((klass), \
GST_TYPE_BASE_RTP_AUDIO_PAYLOAD,GstBaseRTPAudioPayload))
#define GST_IS_BASE_RTP_AUDIO_PAYLOAD(obj) \
(G_TYPE_CHECK_INSTANCE_TYPE((obj),GST_TYPE_BASE_RTP_AUDIO_PAYLOAD))
#define GST_IS_BASE_RTP_AUDIO_PAYLOAD_CLASS(obj) \
@ -50,19 +51,22 @@ struct _GstBaseRTPAudioPayload
{
GstBaseRTPPayload payload;
GstClockTime adapter_base_ts;
GstAdapter *adapter;
GstClockTime base_ts;
gint frame_size;
gint frame_duration;
gint sample_size;
AudioCodecType type;
gpointer _gst_reserved[GST_PADDING];
};
struct _GstBaseRTPAudioPayloadClass
{
GstBaseRTPPayloadClass parent_class;
gpointer _gst_reserved[GST_PADDING];
};
gboolean gst_basertpaudiopayload_plugin_init (GstPlugin * plugin);
@ -70,10 +74,12 @@ gboolean gst_basertpaudiopayload_plugin_init (GstPlugin * plugin);
GType gst_basertpaudiopayload_get_type (void);
void
gst_basertpaudiopayload_set_frame_based (GstBaseRTPAudioPayload *basertpaudiopayload);
gst_basertpaudiopayload_set_frame_based (GstBaseRTPAudioPayload
*basertpaudiopayload);
void
gst_basertpaudiopayload_set_sample_based (GstBaseRTPAudioPayload *basertpaudiopayload);
gst_basertpaudiopayload_set_sample_based (GstBaseRTPAudioPayload
*basertpaudiopayload);
void
gst_basertpaudiopayload_set_frame_options (GstBaseRTPAudioPayload

View file

@ -71,6 +71,8 @@ typedef enum
#define GST_RTP_PAYLOAD_MPV_STRING "32"
#define GST_RTP_PAYLOAD_H263_STRING "34"
#define GST_RTP_PAYLOAD_DYNAMIC_STRING "[96, 127]"
/* creating buffers */
GstBuffer* gst_rtp_buffer_new (void);
void gst_rtp_buffer_allocate_data (GstBuffer *buffer, guint payload_len,