docs/pwg/: First try to resurrect the PWG. I'm halfway integrating the mimetypes and will from there on work on both ...

Original commit message from CVS:
2004-01-26  Ronald Bultje  <rbultje@ronald.bitfreak.net>

* docs/pwg/advanced_types.xml:
* docs/pwg/intro_basics.xml:
* docs/pwg/intro_preface.xml:
* docs/pwg/pwg.xml:
* docs/pwg/titlepage.xml:
First try to resurrect the PWG. I'm halfway integrating the mimetypes
in here (docs/random/mimetypes), and will from there on work on both
updating outdated parts and adding missing parts.
That doesn't mean I'll fix it completely, but I'll try at least. ;).
Ronald S. Bultje 2004-01-26 16:43:31 +00:00
parent 92be9ac6e8
commit f0e1d945a4
6 changed files with 732 additions and 224 deletions


@@ -1,3 +1,15 @@
2004-01-26 Ronald Bultje <rbultje@ronald.bitfreak.net>
* docs/pwg/advanced_types.xml:
* docs/pwg/intro_basics.xml:
* docs/pwg/intro_preface.xml:
* docs/pwg/pwg.xml:
* docs/pwg/titlepage.xml:
First try to resurrect the PWG. I'm halfway integrating the mimetypes
in here (docs/random/mimetypes), and will from there on work on both
updating outdated parts and adding missing parts.
That doesn't mean I'll fix it completely, but I'll try at least. ;).
2004-01-26 Thomas Vander Stichele <thomas at apestaart dot org>
* gst/gsterror.h: reinstate GST_LIBRARY_ERROR_ENCODE until


@@ -85,4 +85,583 @@
<para>
</para>
</sect1>
<!-- ############ sect1 ############# -->
<sect1 id="sect1-types-definitions" xreflabel="List of Defined Types">
<title>List of Defined Types</title>
<para>
Below is a list of all the defined types in &GStreamer;. They are split
up into separate tables for audio, video, container, text and other
types, for the sake of readability. Each table may be followed by a
list of notes that apply to it. In the definition of each type,
we try to follow the types and rules defined by <ulink type="http"
url="http://www.isi.edu/in-notes/iana/assignments/media-types/media-types">
IANA</ulink> as far as possible.
</para>
<para>
Note that many of the properties are not <emphasis>required</emphasis>,
but rather <emphasis>optional</emphasis> properties. This means that
most of these properties can be extracted from the container header,
but that - in case the container header does not provide these - they
can also be extracted by parsing the stream header or the stream
content. The policy is that your element should provide the data that
it knows about by only parsing its own content, not another element's
content. Example: the AVI header provides samplerate of the contained
audio stream in the header. MPEG system streams don't. This means that
an AVI stream demuxer would provide samplerate as a property for MPEG
audio streams, whereas an MPEG demuxer would not.
</para>
<table frame="all" id="table-audio-types" xreflabel="Table of Audio Types">
<title>Table of Audio Types</title>
<tgroup cols="6" align="left" colsep="1" rowsep="1">
<colspec colnum="1" colname="cola1" colwidth="1*"/>
<colspec colnum="6" colname="cola6" colwidth="6*"/>
<spanspec spanname="fullwidth" namest="cola1" nameend="cola6"/>
<thead>
<row>
<entry>Mime Type</entry>
<entry>Description</entry>
<entry>Property</entry>
<entry>Property Type</entry>
<entry>Property Values</entry>
<entry>Property Description</entry>
</row>
</thead>
<tbody valign="top">
<!-- ############ subtitle ############# -->
<row>
<entry spanname="fullwidth">
<emphasis>All audio types.</emphasis>
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="1">audio/*</entry>
<entry morerows="1">
<emphasis>All audio types</emphasis>
</entry>
<entry>rate</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The sample rate of the data, in samples (per channel) per second.
</entry>
</row>
<row>
<entry>channels</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The number of channels of audio data.
</entry>
</row>
<!-- ############ subtitle ############# -->
<row>
<entry spanname="fullwidth">
<emphasis>All raw audio types.</emphasis>
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="3">audio/x-raw-int</entry>
<entry morerows="3">
Unstructured and uncompressed raw fixed-integer audio data.
</entry>
<entry>endianness</entry>
<entry>integer</entry>
<entry>G_LITTLE_ENDIAN (1234) or G_BIG_ENDIAN (4321)</entry>
<entry>
The order of bytes in a sample. The value G_LITTLE_ENDIAN (1234)
means <quote>little-endian</quote> (byte order is <quote>least
significant byte first</quote>). The value G_BIG_ENDIAN (4321)
means <quote>big-endian</quote> (byte order is <quote>most
significant byte first</quote>).
</entry>
</row>
<row>
<entry>signed</entry>
<entry>boolean</entry>
<entry>TRUE or FALSE</entry>
<entry>
Whether the values of the integer samples are signed or not.
Signed samples use one bit to indicate sign (negative or
positive) of the value. Unsigned samples are always positive.
</entry>
</row>
<row>
<entry>width</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
Number of bits allocated per sample.
</entry>
</row>
<row>
<entry>depth</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The number of bits used per sample. This must be less than or
equal to the width: If the depth is less than the width, the
low bits are assumed to be the ones used. For example, a width
of 32 and a depth of 24 means that each sample is stored in a
32 bit word, but only the low 24 bits are actually used.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="2">audio/x-raw-float</entry>
<entry morerows="2">
Unstructured and uncompressed raw floating-point audio data.
</entry>
<entry>endianness</entry>
<entry>integer</entry>
<entry>G_LITTLE_ENDIAN (1234) or G_BIG_ENDIAN (4321)</entry>
<entry>
The order of bytes in a sample. The value G_LITTLE_ENDIAN (1234)
means <quote>little-endian</quote> (byte order is <quote>least
significant byte first</quote>). The value G_BIG_ENDIAN (4321)
means <quote>big-endian</quote> (byte order is <quote>most
significant byte first</quote>).
</entry>
</row>
<row>
<entry>width</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The amount of bits used and allocated per sample.
</entry>
</row>
<row>
<entry>buffer-frames</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The number of frames per buffer. The reason for this property is
that the element then does not need to reuse buffers or process
data spanning multiple buffers, so this property, when used
correctly, will decrease latency. Note that some people think that
this property is very ugly, whereas others think it is vital for
the use of &GStreamer; in professional audio applications.
</entry>
</row>
<!-- ############ subtitle ############# -->
<row>
<entry spanname="fullwidth">
<emphasis>All encoded audio types.</emphasis>
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-ac3</entry>
<entry>AC-3 or A52 audio streams.</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
There are currently no specific properties defined or needed for
this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-adpcm</entry>
<entry>ADPCM Audio streams.</entry>
<entry>layout</entry>
<entry>string</entry>
<entry>
<quote>quicktime</quote>, <quote>wav</quote>,
<quote>microsoft</quote> or <quote>4xm</quote>.
</entry>
<entry>
The layout defines the packing of the samples in the stream. In
ADPCM, most formats store multiple samples per channel together.
This number of samples differs per format, hence the different
layouts. In the long term, we probably want this variable to die
and be replaced by something more descriptive, but this will do for now.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-dv</entry>
<entry>Audio as provided in a Digital Video stream.</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
There are currently no specific properties defined or needed for
this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-flac</entry>
<entry>Free Lossless Audio codec (FLAC).</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
There are currently no specific properties defined or needed for
this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-gsm</entry>
<entry>Data encoded by the GSM codec.</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
There are currently no specific properties defined or needed for
this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-alaw</entry>
<entry>A-Law Audio.</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
There are currently no specific properties defined or needed for
this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-mulaw</entry>
<entry>Mu-Law Audio.</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
There are currently no specific properties defined or needed for
this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-mace</entry>
<entry>MACE Audio (used in Quicktime).</entry>
<entry>maceversion</entry>
<entry>integer</entry>
<entry>3 or 6</entry>
<entry>
The version of the MACE audio codec used to encode the stream.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="3">audio/mpeg</entry>
<entry morerows="3">
Audio data compressed using the MPEG audio encoding scheme.
</entry>
<entry>mpegversion</entry>
<entry>integer</entry>
<entry>1, 2 or 4</entry>
<entry>
The MPEG-version used for encoding the data. The value 1 refers
to MPEG-1, -2 and -2.5 layer 1, 2 or 3. The values 2 and 4 refer
to the MPEG-AAC audio encoding schemes.
</entry>
</row>
<row>
<entry>framed</entry>
<entry>boolean</entry>
<entry>0 or 1</entry>
<entry>
A true value indicates that each buffer contains exactly one
frame. A false value indicates that frames and buffers do not
necessarily match up.
</entry>
</row>
<row>
<entry>layer</entry>
<entry>integer</entry>
<entry>1, 2, or 3</entry>
<entry>
The compression scheme layer used to compress the data
<emphasis>(only if mpegversion=1)</emphasis>.
</entry>
</row>
<row>
<entry>bitrate</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The bitrate, in bits per second. For VBR (variable bitrate)
MPEG data, this is the average bitrate.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-pn-realaudio</entry>
<entry>Real Audio data.</entry>
<entry>raversion</entry>
<entry>integer</entry>
<entry>1 or 2</entry>
<entry>
The version of the Real Audio codec used to encode the stream.
The value 1 stands for a 14.4 kbit/s stream, the value 2 for a
28.8 kbit/s stream.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-qdm2</entry>
<entry>Data encoded by the QDM version 2 codec.</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
There are currently no specific properties defined or needed for
this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-speex</entry>
<entry>Data encoded by the Speex audio codec</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
There are currently no specific properties defined or needed for
this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-vorbis</entry>
<entry>Vorbis audio data.</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
There are currently no specific properties defined or needed for
this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>audio/x-wma</entry>
<entry>Windows Media Audio</entry>
<entry>wmaversion</entry>
<entry>integer</entry>
<entry>1 or 2</entry>
<entry>
The version of the WMA codec used to encode the stream.
</entry>
</row>
</tbody>
</tgroup>
</table>
<table frame="all" id="table-video-types" xreflabel="Table of Video Types">
<title>Table of Video Types</title>
<tgroup cols="6" align="left" colsep="1" rowsep="1">
<colspec colnum="1" colname="colv1" colwidth="1*"/>
<colspec colnum="6" colname="colv6" colwidth="6*"/>
<spanspec spanname="fullwidth" namest="colv1" nameend="colv6"/>
<thead>
<row>
<entry>Mime Type</entry>
<entry>Description</entry>
<entry>Property</entry>
<entry>Property Type</entry>
<entry>Property Values</entry>
<entry>Property Description</entry>
</row>
</thead>
<tbody valign="top">
<!-- ############ subtitle ############# -->
<row>
<entry spanname="fullwidth">
<emphasis>All video types.</emphasis>
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="2">video/*</entry>
<entry morerows="2">
<emphasis>All video types</emphasis>
</entry>
<entry>width</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>The width of the video image</entry>
</row>
<row>
<entry>height</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>The height of the video image</entry>
</row>
<row>
<entry>framerate</entry>
<entry>double</entry>
<entry>greater than 0</entry>
<entry>
The (average) framerate in frames per second. Note that this
property does not in <emphasis>any</emphasis> way guarantee that
the actual framerate will come close to this value. If you need a
fixed framerate, please use an element that provides one (such as
<quote>videodrop</quote>).
</entry>
</row>
<!-- ############ subtitle ############# -->
<row>
<entry spanname="fullwidth">
<emphasis>All raw video types.</emphasis>
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>video/x-raw-yuv</entry>
<entry>YUV (or Y'CbCr) video format.</entry>
<entry>format</entry>
<entry>fourcc</entry>
<entry>
YUY2, YVYU, UYVY, Y41P, IYU2, Y42B, YV12, I420, Y41B, YUV9, YVU9,
Y800
</entry>
<entry>
The layout of the video. See <ulink type="http"
url="http://www.fourcc.org/">FourCC definition site</ulink>
for references and definitions. YUY2, YVYU and UYVY are 4:2:2
packed-pixel, Y41P is 4:1:1 packed-pixel and IYU2 is 4:4:4
packed-pixel. Y42B is 4:2:2 planar, YV12 and I420 are 4:2:0
planar, Y41B is 4:1:1 planar and YUV9 and YVU9 are 4:1:0 planar.
Y800 contains Y-samples only (black/white).
</entry>
</row>
<row>
<entry morerows="3">video/x-raw-rgb</entry>
<entry morerows="3">Red-Green-Blue (RGB) video.</entry>
<entry>bpp</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The number of bits allocated per pixel. This is usually 16, 24
or 32.
</entry>
</row>
<row>
<entry>depth</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The number of bits used per pixel by the R/G/B components. This
is usually 15, 16 or 24.
</entry>
</row>
<row>
<entry>endianness</entry>
<entry>integer</entry>
<entry>G_LITTLE_ENDIAN (1234) or G_BIG_ENDIAN (4321)</entry>
<entry>
The order of bytes in a sample. The value G_LITTLE_ENDIAN (1234)
means <quote>little-endian</quote> (byte order is <quote>least
significant byte first</quote>). The value G_BIG_ENDIAN (4321)
means <quote>big-endian</quote> (byte order is <quote>most
significant byte first</quote>). For 24 and 32 bpp, this should
always be big-endian, since the masks can express either byte order.
</entry>
</row>
<row>
<entry>red_mask, green_mask and blue_mask</entry>
<entry>integer</entry>
<entry>any</entry>
<entry>
The masks that cover all the bits used by each of the R/G/B
components. The masks should be given in the endianness specified
above. This means that for 24/32 bpp, the masks might be opposite
to the host byte order (if you are working on a little-endian
computer).
</entry>
</row>
<!-- ############ subtitle ############# -->
<row>
<entry spanname="fullwidth">
<emphasis>All encoded video types.</emphasis>
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>video/x-divx</entry>
<entry>DivX video.</entry>
<entry>divxversion</entry>
<entry>integer</entry>
<entry>3, 4 or 5</entry>
<entry>
Version of the DivX codec used to encode the stream.
</entry>
</row>
</tbody>
</tgroup>
</table>
</sect1>
</chapter>


@@ -103,16 +103,41 @@
<!-- ############ sect1 ############# -->
<sect1 id="sect1-basics-buffers" xreflabel="Buffers">
<title>Buffers</title>
<sect1 id="sect1-basics-data" xreflabel="Data, Buffers and Events">
<title>Data, Buffers and Events</title>
<para>
All streams of data in &GStreamer; are chopped up into chunks that are
passed from a source pad on one element to a sink pad on another element.
<emphasis>Buffers</emphasis> are structures used to hold these chunks of
data. Buffers can be of any size, theoretically, and they may contain any
sort of data that the two linked pads know how to handle. Normally, a
buffer contains a chunk of some sort of audio or video data that flows
from one element to another.
<emphasis>Data</emphasis> are structures used to hold these chunks of
data.
</para>
<para>
Data contains the following important items:
<itemizedlist>
<listitem>
<para>
An exact type indicating what type of data (control, content, ...)
this Data is.
</para>
</listitem>
<listitem>
<para>
A reference count indicating the number of elements currently
holding a reference to the buffer. When the buffer reference count
falls to zero, the buffer will be unlinked, and its memory will be
freed in some sense (see below for more details).
</para>
</listitem>
</itemizedlist>
</para>
<para>
There are two types of data defined: events (control) and buffers
(content).
</para>
<para>
Buffers may contain any sort of data that the two linked pads
know how to handle. Normally, a buffer contains a chunk of some sort of
audio or video data that flows from one element to another.
</para>
<para>
Buffers also contain metadata describing the buffer's contents. Some of
@@ -130,16 +155,31 @@
</listitem>
<listitem>
<para>
A <classname>GstData</classname> object describing the type of the
buffer's data.
A timestamp indicating the preferred display timestamp of the
content in the buffer.
</para>
</listitem>
</itemizedlist>
</para>
<para>
Events
contain information on the state of the stream flowing between the two
linked pads. Events will only be sent if the element explicitly supports
them, else the core will (try to) handle the events automatically. Events
are used to indicate, for example, a clock discontinuity, the end of a
media stream or that the cache should be flushed.
</para>
<para>
Events may contain several of the following items:
<itemizedlist>
<listitem>
<para>
A subtype indicating the type of the contained event.
</para>
</listitem>
<listitem>
<para>
A reference count indicating the number of elements currently
holding a reference to the buffer. When the buffer reference count
falls to zero, the buffer will be unlinked, and its memory will be
freed in some sense (see below for more details).
The other contents of the event depend on the specific event type.
</para>
</listitem>
</itemizedlist>
@@ -147,7 +187,9 @@
<para>
See the &GstLibRef; for the current implementation details of a <ulink
type="http"
url="../gstreamer/gstreamer-gstbuffer.html"><classname>GstBuffer</classname></ulink>.
url="../gstreamer/gstreamer-gstdata.html"><classname>GstData</classname></ulink>, <ulink type="http"
url="../gstreamer/gstreamer-gstbuffer.html"><classname>GstBuffer</classname></ulink> and <ulink type="http"
url="../gstreamer/gstreamer-gstevent.html"><classname>GstEvent</classname></ulink>.
</para>
<sect2 id="sect2-buffers-bufferpools" xreflabel="Buffer Allocation and
@@ -163,14 +205,17 @@
<para>
To improve the latency in a media pipeline, many &GStreamer; elements
use a <emphasis>buffer pool</emphasis> to handle buffer allocation and
unlinking. A buffer pool is a relatively large chunk of memory that
the &GStreamer; process requests early on from the operating system.
Later, when elements request memory for a new buffer, the buffer pool
can serve the request quickly by giving out a piece of the allocated
memory. This saves a call to the operating system and lowers latency.
[If it seems at this point like &GStreamer; is acting like an operating
system (doing memory management, etc.), don't worry: &GStreamer;OS isn't
due out for quite a few years!]
releasing. A buffer pool is a virtual representation of one or more
buffers of which the data is not actually allocated by &GStreamer;
itself. Examples of these include hardware framebuffer memory in video
output elements or kernel-allocated DMA memory for video capture. The
huge advantage of using these buffers instead of creating our own is
that we do not have to copy memory from one place to another, thereby
saving a noticeable number of CPU cycles. Elements should not provide
a bufferpool to decrease the number of memory allocations: the kernel
will generally take care of that - and will probably do that much more
efficiently than we ever could. Using bufferpools in this way is highly
discouraged.
</para>
<para>
Normally in a media pipeline, most filter elements in &GStreamer; deal
@@ -186,13 +231,14 @@
<!-- ############ sect1 ############# -->
<sect1 id="sect1-basics-types" xreflabel="Types and Properties">
<title>Types and Properties</title>
<title>Mimetypes and Properties</title>
<para>
&GStreamer; uses a type system to ensure that the data passed between
elements is in a recognized format. The type system is also important for
ensuring that the parameters required to fully specify a format match up
correctly when linking pads between elements. Each link that is
made between elements has a specified type.
elements is in a recognized format. The type system is also important
for ensuring that the parameters required to fully specify a format match
up correctly when linking pads between elements. Each link that is
made between elements has a specified type and optionally a set of
properties.
</para>
<!-- ############ sect2 ############# -->
@@ -201,9 +247,11 @@
<title>The Basic Types</title>
<para>
&GStreamer; already supports many basic media types. Following is a
table of the basic types used for buffers in &GStreamer;. The table
contains the name ("mime type") and a description of the type, the
properties associated with the type, and the meaning of each property.
table of a few of the basic types used for buffers in
&GStreamer;. The table contains the name ("mime type") and a
description of the type, the properties associated with the type, and
the meaning of each property. A full list of supported types is
included in <xref linkend="sect1-types-definitions"/>.
</para>
<table frame="all" id="table-basictypes" xreflabel="Table of Basic Types">
@@ -226,15 +274,15 @@
<!-- ############ type ############# -->
<row>
<entry morerows="10">audio/raw</entry>
<entry morerows="10">
Unstructured and uncompressed raw audio data.
<entry morerows="1">audio/*</entry>
<entry morerows="1">
<emphasis>All audio types</emphasis>
</entry>
<entry>rate</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The sample rate of the data, in samples per second.
The sample rate of the data, in samples (per channel) per second.
</entry>
</row>
<row>
@@ -245,43 +293,33 @@
The number of channels of audio data.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry>format</entry>
<entry>string</entry>
<entry><quote>int</quote> or <quote>float</quote></entry>
<entry>
The format in which the audio data is passed.
<entry morerows="3">audio/x-raw-int</entry>
<entry morerows="3">
Unstructured and uncompressed raw integer audio data.
</entry>
</row>
<row>
<entry>law</entry>
<entry>integer</entry>
<entry>0, 1, or 2</entry>
<entry>
(Valid only if the data is in integer format.) The law used to
describe the data. The value 0 indicates <quote>linear</quote>, 1
indicates <quote>mu&nbsp;law</quote>, and 2 indicates
<quote>A&nbsp;law</quote>.
</entry>
</row>
<row>
<entry>endianness</entry>
<entry>boolean</entry>
<entry>0 or 1</entry>
<entry>integer</entry>
<entry>G_LITTLE_ENDIAN (1234) or G_BIG_ENDIAN (4321)</entry>
<entry>
(Valid only if the data is in integer format.) The order of bytes
in a sample. The value 0 means <quote>little-endian</quote> (bytes
are least significant first). The value 1 means
<quote>big-endian</quote> (most significant byte first).
The order of bytes in a sample. The value G_LITTLE_ENDIAN (1234)
means <quote>little-endian</quote> (byte order is <quote>least
significant byte first</quote>). The value G_BIG_ENDIAN (4321)
means <quote>big-endian</quote> (byte order is <quote>most
significant byte first</quote>).
</entry>
</row>
<row>
<entry>signed</entry>
<entry>boolean</entry>
<entry>0 or 1</entry>
<entry>TRUE or FALSE</entry>
<entry>
(Valid only if the data is in integer format.) Whether the samples
are signed or not.
Whether the values of the integer samples are signed or not.
Signed samples use one bit to indicate sign (negative or
positive) of the value. Unsigned samples are always positive.
</entry>
</row>
<row>
@@ -289,8 +327,7 @@
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
(Valid only if the data is in integer format.) The number of bits
per sample.
Number of bits allocated per sample.
</entry>
</row>
<row>
@@ -298,54 +335,31 @@
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
(Valid only if the data is in integer format.) The number of bits
used per sample. This must be less than or equal to the width: If
the depth is less than the width, the low bits are assumed to be
the ones used. For example, a width of 32 and a depth of 24 means
that each sample is stored in a 32 bit word, but only the low 24
bits are actually used.
</entry>
</row>
<row>
<entry>layout</entry>
<entry>string</entry>
<entry><quote>gfloat</quote></entry>
<entry>
(Valid only if the data is in float format.) A string representing
the way in which the floating point data is represented.
</entry>
</row>
<row>
<entry>intercept</entry>
<entry>float</entry>
<entry>any, normally 0</entry>
<entry>
(Valid only if the data is in float format.) A floating point
value representing the value that the signal
<quote>centers</quote> on.
</entry>
</row>
<row>
<entry>slope</entry>
<entry>float</entry>
<entry>any, normally 1.0</entry>
<entry>
(Valid only if the data is in float format.) A floating point
value representing how far the signal deviates from the intercept.
A slope of 1.0 and an intercept of 0.0 would mean an audio signal
with minimum and maximum values of -1.0 and 1.0. A slope of
0.5 and intercept of 0.5 would represent values in the range 0.0
to 1.0.
The number of bits used per sample. This must be less than or
equal to the width: If the depth is less than the width, the
low bits are assumed to be the ones used. For example, a width
of 32 and a depth of 24 means that each sample is stored in a
32 bit word, but only the low 24 bits are actually used.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="4">audio/mp3</entry>
<entry morerows="4">
Audio data compressed using the mp3 encoding scheme.
<entry morerows="3">audio/mpeg</entry>
<entry morerows="3">
Audio data compressed using the MPEG audio encoding scheme.
</entry>
<entry>mpegversion</entry>
<entry>integer</entry>
<entry>1, 2 or 4</entry>
<entry>
The MPEG-version used for encoding the data. The value 1 refers
to MPEG-1, -2 and -2.5 layer 1, 2 or 3. The values 2 and 4 refer
to the MPEG-AAC audio encoding schemes.
</entry>
</row>
<row>
<entry>framed</entry>
<entry>boolean</entry>
<entry>0 or 1</entry>
@@ -360,7 +374,8 @@
<entry>integer</entry>
<entry>1, 2, or 3</entry>
<entry>
The compression scheme layer used to compress the data.
The compression scheme layer used to compress the data
<emphasis>(only if mpegversion=1)</emphasis>.
</entry>
</row>
<row>
@@ -368,134 +383,26 @@
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The bitrate, in kilobits per second. For VBR (variable bitrate)
mp3 data, this is the average bitrate.
</entry>
</row>
<row>
<entry>channels</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The number of channels of audio data present.
</entry>
</row>
<row>
<entry>joint-stereo</entry>
<entry>boolean</entry>
<entry>0 or 1</entry>
<entry>
If true, this implies that stereo data is stored as a combined
signal and the difference between the signals, rather than as two
entirely separate signals. If true, the <quote>channels</quote>
attribute must not be zero.
The bitrate, in bits per second. For VBR (variable bitrate)
MPEG data, this is the average bitrate.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="0">audio/x-ogg</entry>
<entry morerows="0">
Audio data compressed using the Ogg Vorbis encoding scheme.
</entry>
<entry>audio/x-vorbis</entry>
<entry>Vorbis audio data</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
FIXME: There are currently no parameters defined for this type.
There are currently no specific properties defined for this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="2">video/raw</entry>
<entry morerows="2">
Raw video data.
</entry>
<entry>fourcc</entry>
<entry>FOURCC code</entry>
<entry></entry>
<entry>
A FOURCC code identifying the format in which this data is stored.
FOURCC (Four Character Code) is a simple system to allow
unambiguous identification of a video datastream format. See
<ulink url="http://www.webartz.com/fourcc/"
type="http">http://www.webartz.com/fourcc/</ulink>
</entry>
</row>
<row>
<entry>width</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The number of pixels wide that each video frame is.
</entry>
</row>
<row>
<entry>height</entry>
<entry>integer</entry>
<entry>greater than 0</entry>
<entry>
The number of pixels high that each video frame is.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="0">video/mpeg</entry>
<entry morerows="0">
Video data compressed using an MPEG encoding scheme.
</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
FIXME: There are currently no parameters defined for this type.
</entry>
</row>
<!-- ############ type ############# -->
<row>
<entry morerows="0">video/x-msvideo</entry>
<entry morerows="0">
Video data compressed using the AVI encoding scheme.
</entry>
<entry></entry>
<entry></entry>
<entry></entry>
<entry>
FIXME: There are currently no parameters defined for this type.
</entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
</sect1>
<!-- ############ sect1 ############# -->
<sect1 id="sect1-basics-events" xreflabel="Events">
<title>Events</title>
<para>
Sometimes elements in a media processing pipeline need to know that
something has happened. An <emphasis>event</emphasis> is a special type of
data in &GStreamer; designed to serve this purpose. Events describe some
sort of activity that has happened somewhere in an element's pipeline, for
example, the end of the media stream or a clock discontinuity. Just like
any other data type, an event comes to an element on a sink pad and is
contained in a normal buffer. Unlike normal stream buffers, though, an
event buffer contains only an event, not any media stream data.
</para>
<para>
See the &GstLibRef; for the current implementation details of a <ulink
type="http"
url="../gstreamer/gstreamer-gstevent.html"><classname>GstEvent</classname></ulink>.
</para>
</sect1>
</chapter>


@@ -170,11 +170,10 @@
The remainder of this introductory part of the guide presents a short
overview of the basic concepts involved in &GStreamer; plugin development.
Topics covered include <xref linkend="sect1-basics-elements"/>, <xref
linkend="sect1-basics-pads"/>, <xref linkend="sect1-basics-buffers"/>,
<xref linkend="sect1-basics-types"/>, and <xref
linkend="sect1-basics-events"/>. If you are already familiar with this
information, you can use this short overview to refresh your memory, or
you can skip to <xref linkend="part-building"/>.
linkend="sect1-basics-pads"/>, <xref linkend="sect1-basics-data"/> and
<xref linkend="sect1-basics-types"/>. If you are already familiar with
this information, you can use this short overview to refresh your memory,
or you can skip to <xref linkend="part-building"/>.
</para>
<para>
As you can see, there is a lot to learn, so let's get started!


@@ -33,7 +33,7 @@
<!ENTITY APPENDIX_PYTHON SYSTEM "appendix_python.xml">
<!ENTITY GStreamer "<application>GStreamer</application>">
<!ENTITY GstVersion "0.4.2.2">
<!ENTITY GstVersion "0.7.4">
<!ENTITY GstAppDevMan "<emphasis>GStreamer Application Development Manual</emphasis>">
<!ENTITY GstLibRef "<emphasis>GStreamer Library Reference</emphasis>">
]>


@@ -41,6 +41,17 @@
</para>
</authorblurb>
</author>
<author>
<firstname>Ronald</firstname>
<othername>S.</othername>
<surname>Bultje</surname>
<authorblurb>
<para>
<email>rbultje@ronald.bitfreak.net</email>
</para>
</authorblurb>
</author>
</authorgroup>
<legalnotice id="legalnotice">