docs/pwg/: General placeholders for now.

Original commit message from CVS:
2004-01-28  Ronald Bultje  <rbultje@ronald.bitfreak.net>

* docs/pwg/advanced_clock.xml:
* docs/pwg/advanced_interfaces.xml:
* docs/pwg/advanced_midi.xml:
General placeholders for now.
* docs/pwg/advanced_request.xml:
Explanation about sometimes and request pads.
* docs/pwg/advanced_scheduling.xml:
Concept of bytestream, loopfunctions and schedulers.
* docs/pwg/building_boiler.xml:
Add something about plugin-init.
Ronald S. Bultje 2004-01-28 09:07:11 +00:00
parent 4317361af4
commit fd48a37fb0
7 changed files with 757 additions and 5 deletions

ChangeLog

@@ -1,3 +1,16 @@
2004-01-28 Ronald Bultje <rbultje@ronald.bitfreak.net>
* docs/pwg/advanced_clock.xml:
* docs/pwg/advanced_interfaces.xml:
* docs/pwg/advanced_midi.xml:
General placeholders for now.
* docs/pwg/advanced_request.xml:
Explanation about sometimes and request pads.
* docs/pwg/advanced_scheduling.xml:
Concept of bytestream, loopfunctions and schedulers.
* docs/pwg/building_boiler.xml:
Add something about plugin-init.
2004-01-28 Thomas Vander Stichele <thomas at apestaart dot org>
* docs/pwg/building_pads.xml:

docs/pwg/advanced_clock.xml

@@ -0,0 +1,6 @@
<chapter id="cha-advanced-clock">
<title>Clocking</title>
<para>
WRITEME
</para>
</chapter>

docs/pwg/advanced_interfaces.xml

@@ -1,6 +1,105 @@
<chapter id="cha-advanced-interfaces">
<title>Interfaces</title>
<para>
Previously, in the chapter <xref linkend="cha-building-args"/>, we
introduced the concept of GObject properties for controlling an element's
behaviour. This is very powerful, but it has two big disadvantages:
firstly, it is too generic, and secondly, it isn't dynamic.
</para>
<para>
The first disadvantage has to do with the customizability of the end-user
interface that will be built to control the element. Some properties are
more important than others. Some integer properties are better shown in a
spin-button widget, whereas others would be better represented by a slider
widget. Such things are not possible because the UI has no knowledge of
what a property actually means in the application. A UI widget that stands
for a bitrate property is the same as a UI widget that stands for the size
of a video, as long as both are of the same
<classname>GParamSpec</classname> type. Another problem, related to the
one about parameter importance, is that things like parameter grouping,
function grouping or anything else to make parameters coherent are not
really possible.
</para>
<para>
The second argument against parameters is that they are not dynamic. In
many cases, the allowed values for a property are not fixed, but depend
on things that can only be detected at run-time. The names of inputs for
a TV card in a video4linux source element, for example, can only be
retrieved from the kernel driver when we've opened the device; this only
happens when the element goes into the READY state. This means that we
cannot create an enum property type to show this to the user.
</para>
<para>
The solution to those problems is to create very specialized types of
controls for certain often-used settings. We use the concept of interfaces
to achieve this. The basis of all this is the GLib
<classname>GTypeInterface</classname> type. For each case where we think
it's useful, we've created interfaces which can be implemented by elements
at will. We've also created a small extension to
<classname>GTypeInterface</classname> (which is itself static, too) which
allows us to query for interface availability based on runtime properties.
This extension is called <classname>GstImplementsInterface</classname>.
</para>
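<para>
As a rough sketch of what implementing such an interface looks like in
code (GST_TYPE_SOME_INTERFACE and its init function are made-up
placeholders here; the actual interfaces are the topic of the sections
below), an element adds the interface to its type in its
<function>_get_type ()</function> function using the standard GObject
mechanism:
</para>
<programlisting>
static void
gst_my_filter_some_interface_init (gpointer g_iface, gpointer iface_data)
{
  /* here, you would fill in the interface's vtable, i.e. point its
   * function pointers at your element's implementations */
}

GType
gst_my_filter_get_type (void)
{
  static GType my_filter_type = 0;

  if (!my_filter_type) {
    static const GTypeInfo my_filter_info = {
      sizeof (GstMyFilterClass),
      (GBaseInitFunc) gst_my_filter_base_init,
      NULL,
      (GClassInitFunc) gst_my_filter_class_init,
      NULL,
      NULL,
      sizeof (GstMyFilter),
      0,
      (GInstanceInitFunc) gst_my_filter_init
    };
    static const GInterfaceInfo some_interface_info = {
      (GInterfaceInitFunc) gst_my_filter_some_interface_init,
      NULL,
      NULL
    };

    my_filter_type = g_type_register_static (GST_TYPE_ELEMENT,
        "GstMyFilter", &amp;my_filter_info, 0);
    g_type_add_interface_static (my_filter_type,
        GST_TYPE_SOME_INTERFACE, &amp;some_interface_info);
  }

  return my_filter_type;
}
</programlisting>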
<sect1 id="sect1-iface-general" xreflabel="How to Implement Interfaces">
<title>How to Implement Interfaces</title>
<para>
WRITEME
</para>
</sect1>
<sect1 id="sect1-iface-mixer" xreflabel="Mixer Interface">
<title>Mixer Interface</title>
<para>
WRITEME
</para>
</sect1>
<sect1 id="sect1-iface-tuner" xreflabel="Tuner Interface">
<title>Tuner Interface</title>
<para>
WRITEME
</para>
</sect1>
<sect1 id="sect1-iface-colorbalance" xreflabel="Color Balance Interface">
<title>Color Balance Interface</title>
<para>
WRITEME
</para>
</sect1>
<sect1 id="sect1-iface-propprobe" xreflabel="Property Probe Interface">
<title>Property Probe Interface</title>
<para>
WRITEME
</para>
</sect1>
<sect1 id="sect1-iface-profile" xreflabel="Profile Interface">
<title>Profile Interface</title>
<para>
WRITEME
</para>
</sect1>
<sect1 id="sect1-iface-xoverlay" xreflabel="X Overlay Interface">
<title>X Overlay Interface</title>
<para>
WRITEME
</para>
</sect1>
<sect1 id="sect1-iface-navigation" xreflabel="Navigation Interface">
<title>Navigation Interface</title>
<para>
WRITEME
</para>
</sect1>
<sect1 id="sect1-iface-tagging" xreflabel="Tagging Interface">
<title>Tagging Interface</title>
<para>
WRITEME
</para>
</sect1>
</chapter>

docs/pwg/advanced_midi.xml

@@ -0,0 +1,6 @@
<chapter id="cha-advanced-midi">
<title>MIDI</title>
<para>
WRITEME
</para>
</chapter>

docs/pwg/advanced_request.xml

@@ -1,6 +1,267 @@
<chapter id="cha-advanced-request">
<title>Request and Sometimes pads</title>
<para>
Until now, we've only dealt with pads that are always available. However,
there are also pads that are only created in some cases, or only if the
application requests the pad. The first kind is called a
<emphasis>sometimes</emphasis> pad; the second is called a
<emphasis>request</emphasis> pad. The availability of a pad (always,
sometimes or request) can be seen in a pad's template. This chapter will
discuss when each of the two is useful, how they are created and when
they should be disposed.
</para>
<sect1 id="sect1-reqpad-sometimes" xreflabel="Sometimes pads">
<title>Sometimes pads</title>
<para>
A <quote>sometimes</quote> pad is a pad that is created under certain
conditions, but not in all cases. This mostly depends on stream content:
demuxers will generally parse the stream header, decide what elementary
(video, audio, subtitle, etc.) streams are embedded inside the system
stream, and will then create a sometimes pad for each of those elementary
streams. At its own choice, it can also create more than one instance of
each of those per element instance. The only limitation is that each
newly created pad should have a unique name. Sometimes pads are disposed
when the stream data is disposed, too (i.e. when going from PAUSED to the
READY state). You should <emphasis>not</emphasis> dispose the pad on EOS,
because someone might re-activate the pipeline and seek back to before
the end-of-stream point. The stream should still stay valid after EOS, at
least until the stream data is disposed. In any case, the element is
always the owner of such a pad.
</para>
<para>
The example code below will parse a text file, where the first line is
a number (n). The next lines all start with a number (0 to n-1), which
is the number of the source pad over which the data should be sent.
</para>
<programlisting>
3
0: foo
1: bar
0: boo
2: bye
</programlisting>
<para>
The code to parse this file and create the dynamic <quote>sometimes</quote>
pads looks like this:
</para>
<programlisting>
typedef struct _GstMyFilter {
[..]
  gboolean  firstrun;
  GList    *srcpadlist;
} GstMyFilter;

static void
gst_my_filter_base_init (GstMyFilterClass *klass)
{
  GstElementClass *element_class = GST_ELEMENT_CLASS (klass);
  static GstStaticPadTemplate src_factory =
  GST_STATIC_PAD_TEMPLATE (
    "src_%02d",
    GST_PAD_SRC,
    GST_PAD_SOMETIMES,
    GST_STATIC_CAPS ("ANY")
  );
[..]
  gst_element_class_add_pad_template (element_class,
      gst_static_pad_template_get (&amp;src_factory));
[..]
}

static void
gst_my_filter_init (GstMyFilter *filter)
{
[..]
  filter->firstrun = TRUE;
  filter->srcpadlist = NULL;
}

/*
 * Get one line of data - without newline.
 */

static GstBuffer *
gst_my_filter_getline (GstMyFilter *filter)
{
  guint8 *data;
  gint n, num;

  /* max. line length is 512 characters - for safety */
  for (n = 0; n < 512; n++) {
    num = gst_bytestream_peek_bytes (filter->bs, &amp;data, n + 1);
    if (num != n + 1)
      return NULL;

    /* newline? */
    if (data[n] == '\n') {
      GstBuffer *buf = gst_buffer_new_and_alloc (n + 1);

      gst_bytestream_peek_bytes (filter->bs, &amp;data, n);
      memcpy (GST_BUFFER_DATA (buf), data, n);
      GST_BUFFER_DATA (buf)[n] = '\0';
      gst_bytestream_flush_fast (filter->bs, n + 1);

      return buf;
    }
  }

  /* no newline found within the maximum line length */
  return NULL;
}

static void
gst_my_filter_loopfunc (GstElement *element)
{
  GstMyFilter *filter = GST_MY_FILTER (element);
  GstBuffer *buf;
  GstPad *pad;
  gint num, n;

  /* parse header */
  if (filter->firstrun) {
    GstElementClass *klass;
    GstPadTemplate *templ;
    gchar *padname;

    if (!(buf = gst_my_filter_getline (filter))) {
      gst_element_error (element, STREAM, READ, (NULL),
                         ("Stream contains no header"));
      return;
    }
    num = atoi (GST_BUFFER_DATA (buf));
    gst_buffer_unref (buf);

    /* for each of the streams, create a pad */
    klass = GST_ELEMENT_GET_CLASS (filter);
    templ = gst_element_class_get_pad_template (klass, "src_%02d");
    for (n = 0; n < num; n++) {
      padname = g_strdup_printf ("src_%02d", n);
      pad = gst_pad_new_from_template (templ, padname);
      g_free (padname);

      /* here, you would set _getcaps () and _link () functions */

      gst_element_add_pad (element, pad);
      filter->srcpadlist = g_list_append (filter->srcpadlist, pad);
    }

    /* the header should only be parsed once */
    filter->firstrun = FALSE;
  }

  /* and now, simply parse each line and push over */
  if (!(buf = gst_my_filter_getline (filter))) {
    GstEvent *event = gst_event_new (GST_EVENT_EOS);
    GList *padlist;

    for (padlist = filter->srcpadlist;
         padlist != NULL; padlist = g_list_next (padlist)) {
      pad = GST_PAD (padlist->data);
      gst_event_ref (event);
      gst_pad_push (pad, GST_DATA (event));
    }
    gst_event_unref (event);
    gst_element_set_eos (element);

    return;
  }

  /* parse stream number and go beyond the ':' in the data */
  num = atoi (GST_BUFFER_DATA (buf));
  if (num >= 0 &amp;&amp; num < g_list_length (filter->srcpadlist)) {
    pad = GST_PAD (g_list_nth_data (filter->srcpadlist, num));

    /* magic buffer parsing foo */
    for (n = 0; GST_BUFFER_DATA (buf)[n] != ':' &amp;&amp;
                GST_BUFFER_DATA (buf)[n] != '\0'; n++) ;
    if (GST_BUFFER_DATA (buf)[n] != '\0') {
      GstBuffer *sub;

      /* create a subbuffer that starts right after the ':'. The reason
       * that we don't just forward the data pointer is because the
       * pointer is no longer the start of an allocated block of memory,
       * but just a pointer to a position somewhere in the middle of it.
       * That cannot be freed upon disposal, so we'd either crash or have
       * a memleak. Creating a subbuffer is a simple way to solve that. */
      sub = gst_buffer_create_sub (buf, n + 1, GST_BUFFER_SIZE (buf) - n - 1);
      gst_pad_push (pad, GST_DATA (sub));
    }
  }
  gst_buffer_unref (buf);
}
</programlisting>
<para>
Note that we use a lot of checks everywhere to make sure that the content
in the file is valid. This has two purposes: first, the file could be
erroneous, in which case we prevent a crash. The second and most important
reason is that, in extreme cases, the file could be used maliciously to
cause undefined behaviour in the plugin, which might lead to security
issues. <emphasis>Always</emphasis> assume that the file could be used to
do bad things.
</para>
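<para>
As a purely illustrative sketch of such defensive parsing (the helper name
and the choice of <function>strtol ()</function> are our own here, not
something &GStreamer; requires), the plain <function>atoi ()</function>
calls above could be replaced by a checked variant that rejects lines that
do not start with a valid, in-range stream number:
</para>
<programlisting>
/* needs stdlib.h (strtol) and errno.h; returns TRUE and fills *num only
 * if the line starts with a number in the range [0, num_pads) */
static gboolean
gst_my_filter_parse_stream_number (const gchar *line,
                                   gint         num_pads,
                                   gint        *num)
{
  gchar *end = NULL;
  glong val;

  errno = 0;
  val = strtol (line, &amp;end, 10);
  if (end == line || errno != 0 || val < 0 || val >= num_pads)
    return FALSE;   /* not a number, or out of range */

  *num = (gint) val;

  return TRUE;
}
</programlisting>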
</sect1>
<sect1 id="sect1-reqpad-request" xreflabel="Request pads">
<title>Request pads</title>
<para>
<quote>Request</quote> pads are similar to sometimes pads, except that
request pads are created on demand by something outside of the element,
rather than by something inside the element. This concept is often used in
muxers, where one sink pad will be requested for each elementary stream
that is to be placed in the output system stream. It can also be used in
elements with a variable number of input or output pads, such as the
<classname>tee</classname> (multi-output), <classname>switch</classname>
or <classname>aggregator</classname> (both multi-input) elements. At the
time of writing this, it is unclear to me who is responsible for cleaning
up the created pad and how or when that should be done. Below is a simple
example of an aggregator based on request pads.
</para>
<programlisting>
static GstPad *gst_my_filter_request_new_pad (GstElement     *element,
                                              GstPadTemplate *templ,
                                              const gchar    *name);

static void
gst_my_filter_base_init (GstMyFilterClass *klass)
{
  GstElementClass *element_class = GST_ELEMENT_CLASS (klass);
  static GstStaticPadTemplate sink_factory =
  GST_STATIC_PAD_TEMPLATE (
    "sink_%d",
    GST_PAD_SINK,
    GST_PAD_REQUEST,
    GST_STATIC_CAPS ("ANY")
  );
[..]
  gst_element_class_add_pad_template (element_class,
      gst_static_pad_template_get (&amp;sink_factory));
}

static void
gst_my_filter_class_init (GstMyFilterClass *klass)
{
  GstElementClass *element_class = GST_ELEMENT_CLASS (klass);
[..]
  element_class->request_new_pad = gst_my_filter_request_new_pad;
}

static GstPad *
gst_my_filter_request_new_pad (GstElement     *element,
                               GstPadTemplate *templ,
                               const gchar    *name)
{
  GstPad *pad;
  GstMyFilterInputContext *context;

  context = g_new0 (GstMyFilterInputContext, 1);
  pad = gst_pad_new_from_template (templ, name);
  gst_pad_set_private_data (pad, context);

  /* normally, you would set _link () and _getcaps () functions here */

  gst_element_add_pad (element, pad);

  return pad;
}
</programlisting>
<para>
The <function>_loop ()</function> function is the same as the one given
previously in <xref linkend="sect1-loopfn-multiinput"/>.
</para>
</sect1>
</chapter>

docs/pwg/advanced_scheduling.xml

@@ -1,15 +1,361 @@
<chapter id="cha-loopbased-sched">
<title>How scheduling works</title>
<para>
Scheduling is, in short, a method for making sure that every element gets
called once in a while to process data and prepare data for the next
element. Likewise, a kernel has a scheduler for processes, and in a way
your brain is a very complex scheduler too.
Randomly calling elements' chain functions won't bring us far, however, so
you'll understand that the schedulers in &GStreamer; are a bit more complex
than that. Still, as a start, it's a nice picture.
</para>
<para>
&GStreamer; currently provides two schedulers: a <emphasis>basic</emphasis>
scheduler and an <emphasis>optimal</emphasis> scheduler. As the name
suggests, the basic scheduler (<quote>basic</quote>) is an unoptimized, but
very complete and simple scheduler. The optimal scheduler
(<quote>opt</quote>), on the other hand, is optimized for media processing,
but therefore also more complex.
</para>
<para>
Note that schedulers only operate on one thread. If your pipeline contains
multiple threads, each thread will run with a separate scheduler. That is
the reason why two elements running in different threads need a queue-like
element (a <classname>DECOUPLED</classname> element) in between them.
</para>
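<para>
To illustrate, here is an application-side sketch (not element code; the
element and file names are just examples, and it assumes the core
<classname>thread</classname> and <classname>queue</classname> elements)
of how such a DECOUPLED element ends up on the thread boundary:
</para>
<programlisting>
GstElement *pipeline, *thread, *source, *decoder, *queue, *sink;

/* the main pipeline holds the source and the decoder */
pipeline = gst_pipeline_new ("pipeline");
source = gst_element_factory_make ("filesrc", "source");
g_object_set (G_OBJECT (source), "location", "file.mp3", NULL);
decoder = gst_element_factory_make ("mad", "decoder");

/* the audio output runs in its own thread; the queue element sits on
 * the thread boundary and decouples the two schedulers */
thread = gst_element_factory_make ("thread", "audio-thread");
queue = gst_element_factory_make ("queue", "queue");
sink = gst_element_factory_make ("osssink", "sink");

gst_bin_add_many (GST_BIN (thread), queue, sink, NULL);
gst_bin_add_many (GST_BIN (pipeline), source, decoder, thread, NULL);
gst_element_link_many (source, decoder, queue, NULL);
gst_element_link (queue, sink);

gst_element_set_state (pipeline, GST_STATE_PLAYING);
</programlisting>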
<sect1 id="sect1-sched-basic" xreflabel="The Basic Scheduler">
<title>The Basic Scheduler</title>
<para>
The <emphasis>basic</emphasis> scheduler assumes that each element is its
own process. We don't use UNIX processes or POSIX threads for this,
however; instead, we use so-called <emphasis>co-threads</emphasis>.
Co-threads are threads that run besides each other, but only one is active
at a time. The advantage of co-threads over normal threads is that they're
lightweight. The disadvantage is that neither UNIX nor POSIX provides such
a thing, so we need to include our own co-thread stack for this to run.
</para>
<para>
The task of the scheduler here is to control which co-thread runs at what
time. A well-written scheduler based on co-threads will let an element run
until it outputs one piece of data. Upon pushing one piece of data to the
next element, it will let the next element run, and so on. Whenever a
running element requires data from the previous element, the scheduler will
switch to that previous element and run that element until it has provided
data for use in the next element.
</para>
<para>
This method of running elements as needed has the disadvantage that a lot
of data will often be queued in between two elements, as one element
has provided data that the other element hasn't actually used yet. These
stores of in-between data are called <emphasis>bufpens</emphasis>, and
they can be visualized as a lightweight <quote>queue</quote>.
</para>
<para>
Note that since every element runs in its own (co-)thread, this scheduler
is rather heavy on your system for larger pipelines.
</para>
</sect1>
<sect1 id="sect1-sched-opt" xreflabel="The Optimal Scheduler">
<title>The Optimal Scheduler</title>
<para>
The <emphasis>optimal</emphasis> scheduler takes advantage of the fact that
several elements can be linked together in one thread, with one element
controlling the other. This works as follows: in a series of chain-based
elements, each element has a function that accepts one piece of data, and
it calls a function that provides one piece of data to the next element.
The optimal scheduler will make sure that the <function>gst_pad_push ()</function>
function of the first element <emphasis>directly</emphasis> calls the
chain-function of the second element. This significantly decreases the
latency in a pipeline. It takes similar advantage of other possibilities
of short-cutting the data path from one element to the next.
</para>
<para>
The disadvantage of the optimal scheduler is that it is not fully
implemented. It is also poorly documented; for most developers, the opt
scheduler is one big black box. Features that are not implemented
include pad-unlinking within a group while running and pad-selecting
(i.e. waiting for data to arrive on a list of pads). It also can't really
cope with multi-input/-output elements (with the elements linked to each
of these inputs and outputs running in the same thread) right now.
</para>
<para>
Some of our developers are intending to write a new scheduler, similar to
the optimal scheduler (but better documented and more completely
implemented).
</para>
</sect1>
</chapter>
<chapter id="cha-loopbased-loopfn">
<title>How a loopfunc works</title>
<para>
A <function>_loop ()</function> function is a function that is called by
the scheduler, but without providing data to the element. Instead, the
element becomes responsible for acquiring its own data, and it is still
responsible for sending data over its source pads. This method
noticeably complicates scheduling; you should only write loop-based
elements when you need to. Normally, chain-based elements are preferred.
Examples of elements that <emphasis>have</emphasis> to be loop-based are
elements with multiple sink pads. Since the scheduler will push data into
the pads as it comes (and this might not be synchronous), you will easily
get asynchronous data on both pads, which means that the data that arrives
on the first pad has a different display timestamp than the data arriving
on the second pad at the same time. To get around these issues, you should
write such elements in a loop-based form. Other elements that are
<emphasis>easier</emphasis> to write in a loop-based form than in a
chain-based form are demuxers and parsers. It is not required to write such
elements in a loop-based form, though.
</para>
<para>
Below is an example of the easiest loop-function that one can write:
</para>
<programlisting>
static void gst_my_filter_loopfunc (GstElement *element);

static void
gst_my_filter_init (GstMyFilter *filter)
{
[..]
  gst_element_set_loopfunc (GST_ELEMENT (filter), gst_my_filter_loopfunc);
[..]
}

static void
gst_my_filter_loopfunc (GstElement *element)
{
  GstMyFilter *filter = GST_MY_FILTER (element);
  GstData *data;

  /* acquire data */
  data = gst_pad_pull (filter->sinkpad);

  /* send data */
  gst_pad_push (filter->srcpad, data);
}
</programlisting>
<para>
Obviously, this specific example has no advantage at all over a chain-based
element, so you should never write elements like this in practice. However,
it's a good introduction to the concept.
</para>
<sect1 id="sect1-loopfn-multiinput" xreflabel="Multi-Input Elements">
<title>Multi-Input Elements</title>
<para>
Elements with multiple sink pads need to take manual control over their
input to assure that the input is synchronized. The following example
code could (should) be used in an aggregator, i.e. an element that takes
input from multiple streams and sends it out intermingled. Not really
useful in practice, but a good example, again.
</para>
<programlisting>
typedef struct _GstMyFilterInputContext {
  gboolean   eos;
  GstBuffer *lastbuf;
} GstMyFilterInputContext;

[..]

static void
gst_my_filter_init (GstMyFilter *filter)
{
  GstElementClass *klass = GST_ELEMENT_GET_CLASS (filter);
  GstMyFilterInputContext *context;

  filter->sinkpad1 = gst_pad_new_from_template (
      gst_element_class_get_pad_template (klass, "sink"), "sink_1");
  context = g_new0 (GstMyFilterInputContext, 1);
  gst_pad_set_private_data (filter->sinkpad1, context);
[..]
  filter->sinkpad2 = gst_pad_new_from_template (
      gst_element_class_get_pad_template (klass, "sink"), "sink_2");
  context = g_new0 (GstMyFilterInputContext, 1);
  gst_pad_set_private_data (filter->sinkpad2, context);
[..]
  gst_element_set_loopfunc (GST_ELEMENT (filter),
                            gst_my_filter_loopfunc);
}

[..]

static void
gst_my_filter_loopfunc (GstElement *element)
{
  GstMyFilter *filter = GST_MY_FILTER (element);
  GList *padlist;
  GstMyFilterInputContext *first_context = NULL;

  /* Go over each sink pad, update the cache if needed, handle EOS
   * or non-responding streams and see which data we should handle
   * next. */
  for (padlist = gst_element_get_padlist (element);
       padlist != NULL; padlist = g_list_next (padlist)) {
    GstPad *pad = GST_PAD (padlist->data);
    GstMyFilterInputContext *context = gst_pad_get_private_data (pad);

    if (GST_PAD_IS_SRC (pad))
      continue;

    while (GST_PAD_IS_USABLE (pad) &amp;&amp;
           !context->eos &amp;&amp; !context->lastbuf) {
      GstData *data = gst_pad_pull (pad);

      if (GST_IS_EVENT (data)) {
        /* We handle events immediately */
        GstEvent *event = GST_EVENT (data);

        switch (GST_EVENT_TYPE (event)) {
          case GST_EVENT_EOS:
            context->eos = TRUE;
            gst_event_unref (event);
            break;
          case GST_EVENT_DISCONTINUOUS:
            g_warning ("HELP! How do I handle this?");
            /* fall-through */
          default:
            gst_pad_event_default (pad, event);
            break;
        }
      } else {
        /* We store the buffer to handle synchronization below */
        context->lastbuf = GST_BUFFER (data);
      }
    }

    /* synchronize streams by always using the earliest buffer */
    if (context->lastbuf) {
      if (!first_context) {
        first_context = context;
      } else {
        if (GST_BUFFER_TIMESTAMP (context->lastbuf) <
            GST_BUFFER_TIMESTAMP (first_context->lastbuf))
          first_context = context;
      }
    }
  }

  /* If we handle no data at all, we're at the end-of-stream, so
   * we should signal EOS. */
  if (!first_context) {
    gst_pad_push (filter->srcpad, GST_DATA (gst_event_new (GST_EVENT_EOS)));
    gst_element_set_eos (element);
    return;
  }

  /* So we do have data! Let's forward that to our source pad. */
  gst_pad_push (filter->srcpad, GST_DATA (first_context->lastbuf));
  first_context->lastbuf = NULL;
}
</programlisting>
<para>
Note that a loop-function is allowed to return. Better yet, a loop
function <emphasis>has to</emphasis> return so the scheduler can
let other elements run (this is particularly true for the optimal
scheduler). Whenever the scheduler feels right, it will call the
loop-function of the element again.
</para>
</sect1>
<sect1 id="sect1-loopfn-bytestream" xreflabel="The Bytestream Object">
<title>The Bytestream Object</title>
<para>
A second type of element that wants to be loop-based is the so-called
bytestream element. Until now, we've only dealt with elements that
receive or pull full buffers of a random size from other elements. Often,
however, it is desirable to have control over the stream at a byte level,
such as in stream parsers or demuxers. It is possible to manually pull
buffers and merge them until a certain size; it is much easier, however,
to use bytestream, which wraps this behaviour.
</para>
<para>
Bytestream-using elements are usually stream parsers or demuxers. For
now, we will take a parser as an example. Demuxers require some more
magic that will be dealt with later in this guide:
<xref linkend="cha-advanced-request"/>. The goal of this parser will be
to parse a text-file and to push each line of text as a separate buffer
over its source pad.
</para>
<programlisting>
static void
gst_my_filter_loopfunc (GstElement *element)
{
  GstMyFilter *filter = GST_MY_FILTER (element);
  gint n, num;
  guint8 *data;

  for (n = 0; ; n++) {
    num = gst_bytestream_peek_bytes (filter->bs, &amp;data, n + 1);
    if (num != n + 1) {
      GstEvent *event = NULL;
      guint remaining;

      gst_bytestream_get_status (filter->bs, &amp;remaining, &amp;event);
      if (event) {
        if (GST_EVENT_TYPE (event) == GST_EVENT_EOS) {
          /* end-of-file */
          gst_pad_push (filter->srcpad, GST_DATA (event));
          gst_element_set_eos (element);
          return;
        }
        gst_event_unref (event);
      }

      /* failed to read - throw error and bail out */
      gst_element_error (element, STREAM, READ, (NULL), (NULL));

      return;
    }

    /* check if the last character is a newline */
    if (data[n] == '\n') {
      GstBuffer *buf = gst_buffer_new_and_alloc (n + 1);

      /* read the line of text without newline - then flush the newline */
      gst_bytestream_peek_data (filter->bs, &amp;data, n);
      memcpy (GST_BUFFER_DATA (buf), data, n);
      GST_BUFFER_DATA (buf)[n] = '\0';
      gst_bytestream_flush_fast (filter->bs, n + 1);

      g_print ("Pushing '%s'\n", GST_BUFFER_DATA (buf));
      gst_pad_push (filter->srcpad, GST_DATA (buf));

      return;
    }
  }
}

static GstElementStateReturn
gst_my_filter_change_state (GstElement *element)
{
  GstMyFilter *filter = GST_MY_FILTER (element);

  switch (GST_STATE_TRANSITION (element)) {
    case GST_STATE_READY_TO_PAUSED:
      filter->bs = gst_bytestream_new (filter->sinkpad);
      break;
    case GST_STATE_PAUSED_TO_READY:
      gst_bytestream_destroy (filter->bs);
      break;
    default:
      break;
  }

  if (GST_ELEMENT_CLASS (parent_class)->change_state)
    return GST_ELEMENT_CLASS (parent_class)->change_state (element);

  return GST_STATE_SUCCESS;
}
</programlisting>
<para>
In the above example, you'll notice how bytestream handles buffering of
data for you. The result is that you can handle the same data multiple
times. Event handling in bytestream is currently sort of
<emphasis>wacky</emphasis>, but it works quite well. The one big
disadvantage of bytestream is that it <emphasis>requires</emphasis>
the element to be loop-based. In the long term, we hope to have a version
of bytestream that is usable from chain-based elements, too.
</para>
</sect1>
</chapter>
<chapter id="cha-loopbased-secnd">

docs/pwg/building_boiler.xml

@@ -343,6 +343,27 @@ gst_my_filter_base_init (GstMyFilterClass *klass)
Also, in this function, any supported element type in the plugin should
be registered.
</para>
<programlisting>
static gboolean
plugin_init (GstPlugin *plugin)
{
  return gst_element_register (plugin, "my_filter",
                               GST_RANK_NONE,
                               GST_TYPE_MY_FILTER);
}

GST_PLUGIN_DEFINE (
  GST_VERSION_MAJOR,
  GST_VERSION_MINOR,
  "my_filter",
  "My filter plugin",
  plugin_init,
  VERSION,
  "LGPL",
  "GStreamer",
  "http://gstreamer.net/"
)
</programlisting>
<para>
Note that the information returned by the plugin_init() function will be
cached in a central registry. For this reason, it is important that the