<chapter id="chapter-dataaccess">
<title>Pipeline manipulation</title>
<para>
This chapter will discuss how you can manipulate your pipeline in several
ways from your application. Parts of this chapter are very
low-level, so be aware that you'll need some programming knowledge
and a good understanding of &GStreamer; before you start reading this.
</para>
<para>
Topics that will be discussed here include how you can insert data into
a pipeline from your application, how to read data from a pipeline,
how to manipulate the pipeline's speed, length and starting point, and how
to listen to a pipeline's data processing.
</para>
<sect1 id="section-using-probes">
<title>Using probes</title>
<para>
Probing is best envisioned as a pad listener. Technically, a probe is
nothing more than a callback that can be attached to a pad.
You can attach a probe using <function>gst_pad_add_probe ()</function>.
Similarly, <function>gst_pad_remove_probe ()</function> removes the
callback again. The probe notifies you of any activity
that happens on the pad, like buffers, events and queries. You can
define what kind of notifications you are interested in when you
add the probe.
</para>
<para>
The probe can notify you of the following activity on pads:
</para>
<itemizedlist>
<listitem>
<para>
A buffer is pushed or pulled. Specify the
GST_PAD_PROBE_TYPE_BUFFER flag when registering the probe. Because the
pad can be scheduled in different ways, you can also
specify which scheduling mode you are interested in with the
optional GST_PAD_PROBE_TYPE_PUSH and GST_PAD_PROBE_TYPE_PULL
flags.
</para>
<para>
You can use this probe to inspect, modify or drop the buffer.
See <xref linkend="section-data-probes"/>.
</para>
</listitem>
<listitem>
<para>
A bufferlist is pushed. Use the GST_PAD_PROBE_TYPE_BUFFER_LIST flag
when registering the probe.
</para>
</listitem>
<listitem>
<para>
An event travels over a pad. Use the GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM
and GST_PAD_PROBE_TYPE_EVENT_UPSTREAM flags to select downstream
and upstream events. There is also a convenience
GST_PAD_PROBE_TYPE_EVENT_BOTH flag to be notified of events going both
upstream and downstream. By default, flush events do not cause
a notification. You need to explicitly enable GST_PAD_PROBE_TYPE_EVENT_FLUSH
to receive callbacks from flushing events. Events are only
ever notified in push mode.
</para>
<para>
You can use this probe to inspect, modify or drop the event.
</para>
</listitem>
<listitem>
<para>
A query travels over a pad. Use the GST_PAD_PROBE_TYPE_QUERY_DOWNSTREAM
and GST_PAD_PROBE_TYPE_QUERY_UPSTREAM flags to select downstream
and upstream queries. The convenience GST_PAD_PROBE_TYPE_QUERY_BOTH
can also be used to select both directions. Query probes will be
notified twice: once when the query travels upstream/downstream and
once when the query result is returned. You can select in which stage
the callback will be called with the GST_PAD_PROBE_TYPE_PUSH and
GST_PAD_PROBE_TYPE_PULL flags, which select the moment the query is performed
and the moment the query result is returned, respectively.
</para>
<para>
You can use this probe to inspect or modify the query. You can also
answer the query in the probe callback by placing the result value
in the query and by returning GST_PAD_PROBE_DROP from the
callback.
</para>
</listitem>
<listitem>
<para>
In addition to notifying you of dataflow, you can also ask the
probe to block the dataflow when the callback returns. This is
called a blocking probe and is activated by specifying the
GST_PAD_PROBE_TYPE_BLOCK flag. You can use this flag with the
other flags to only block dataflow on selected activity. A pad
becomes unblocked again if you remove the probe or when you return
GST_PAD_PROBE_REMOVE from the callback. You can let only the
currently blocked item pass by returning GST_PAD_PROBE_PASS
from the callback; the pad will block again on the next item.
</para>
<para>
Blocking probes are used to temporarily block pads because they
are unlinked or because you are going to unlink them. If the
dataflow were not blocked, the pipeline would go into an error
state when data is pushed on an unlinked pad. We will see how
to use blocking probes to partially preroll a pipeline.
See also <xref linkend="section-preroll-probes"/>.
</para>
</listitem>
<listitem>
<para>
Be notified when no activity is happening on a pad. You install
this probe with the GST_PAD_PROBE_TYPE_IDLE flag. You can specify
GST_PAD_PROBE_TYPE_PUSH and/or GST_PAD_PROBE_TYPE_PULL to
only be notified depending on the pad scheduling mode.
The IDLE probe is also a blocking probe in that it will not let
any data pass on the pad for as long as the IDLE probe is
installed.
</para>
<para>
You can use idle probes to dynamically relink a pad. We will see
how to use idle probes to replace an element in the pipeline.
See also <xref linkend="section-dynamic-pipelines"/>.
</para>
</listitem>
</itemizedlist>
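<para>
As a quick illustration of what a probe callback can look like, the
following minimal sketch installs an event probe that simply prints the
type of every downstream event passing over a pad. It assumes
<quote>pad</quote> is a pad you obtained from one of your elements; it is
a fragment, not a complete application.
</para>
<programlisting>
<![CDATA[
static GstPadProbeReturn
event_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

  /* just inspect the event; return GST_PAD_PROBE_DROP here to drop it */
  g_print ("Got %s event on pad %s:%s\n",
      GST_EVENT_TYPE_NAME (event), GST_DEBUG_PAD_NAME (pad));

  return GST_PAD_PROBE_OK;
}

  /* somewhere in your setup code */
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      event_probe_cb, NULL, NULL);
]]>
</programlisting>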
<sect2 id="section-data-probes">
<title>Data probes</title>
<para>
Data probes allow you to be notified when there is data passing
over a pad. When adding the probe, specify the GST_PAD_PROBE_TYPE_BUFFER
and/or GST_PAD_PROBE_TYPE_BUFFER_LIST flag.
</para>
<para>
Data probes run in the pipeline's streaming thread context, so callbacks
should try not to block and should generally avoid doing anything unusual, since
this could have a negative impact on pipeline performance or, in case
of bugs, cause deadlocks or crashes. More precisely, one should usually
not call any GUI-related functions from within a probe callback, nor try
to change the state of the pipeline. An application may, however, post custom
messages on the pipeline's bus to communicate with the main
application thread and have it do things like stop the pipeline.
</para>
<para>
In any case, most common buffer operations
that elements can do in <function>_chain ()</function> functions can
be done in probe callbacks as well. The example below gives a short
impression of how to use them.
</para>
<programlisting>
<!-- example-begin probe.c -->
<![CDATA[
#include <gst/gst.h>
static GstPadProbeReturn
cb_have_data (GstPad *pad,
GstPadProbeInfo *info,
gpointer user_data)
{
gint x, y;
GstMapInfo map;
guint16 *ptr, t;
GstBuffer *buffer;
buffer = GST_PAD_PROBE_INFO_BUFFER (info);
buffer = gst_buffer_make_writable (buffer);
/* Making a buffer writable can fail (for example if it
* cannot be copied and is used more than once)
*/
if (buffer == NULL)
return GST_PAD_PROBE_OK;
/* Mapping a buffer can fail (non-writable) */
if (gst_buffer_map (buffer, &map, GST_MAP_WRITE)) {
ptr = (guint16 *) map.data;
/* invert data */
for (y = 0; y < 288; y++) {
for (x = 0; x < 384 / 2; x++) {
t = ptr[384 - 1 - x];
ptr[384 - 1 - x] = ptr[x];
ptr[x] = t;
}
ptr += 384;
}
gst_buffer_unmap (buffer, &map);
}
GST_PAD_PROBE_INFO_DATA (info) = buffer;
return GST_PAD_PROBE_OK;
}
gint
main (gint argc,
gchar *argv[])
{
GMainLoop *loop;
GstElement *pipeline, *src, *sink, *filter, *csp;
GstCaps *filtercaps;
GstPad *pad;
/* init GStreamer */
gst_init (&argc, &argv);
loop = g_main_loop_new (NULL, FALSE);
/* build */
pipeline = gst_pipeline_new ("my-pipeline");
src = gst_element_factory_make ("videotestsrc", "src");
if (src == NULL)
g_error ("Could not create 'videotestsrc' element");
filter = gst_element_factory_make ("capsfilter", "filter");
g_assert (filter != NULL); /* should always exist */
csp = gst_element_factory_make ("videoconvert", "csp");
if (csp == NULL)
g_error ("Could not create 'videoconvert' element");
sink = gst_element_factory_make ("xvimagesink", "sink");
if (sink == NULL) {
sink = gst_element_factory_make ("ximagesink", "sink");
if (sink == NULL)
g_error ("Could not create neither 'xvimagesink' nor 'ximagesink' element");
}
gst_bin_add_many (GST_BIN (pipeline), src, filter, csp, sink, NULL);
gst_element_link_many (src, filter, csp, sink, NULL);
filtercaps = gst_caps_new_simple ("video/x-raw",
"format", G_TYPE_STRING, "RGB16",
"width", G_TYPE_INT, 384,
"height", G_TYPE_INT, 288,
"framerate", GST_TYPE_FRACTION, 25, 1,
NULL);
g_object_set (G_OBJECT (filter), "caps", filtercaps, NULL);
gst_caps_unref (filtercaps);
pad = gst_element_get_static_pad (src, "src");
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
(GstPadProbeCallback) cb_have_data, NULL, NULL);
gst_object_unref (pad);
/* run */
gst_element_set_state (pipeline, GST_STATE_PLAYING);
/* wait until it's up and running or failed */
if (gst_element_get_state (pipeline, NULL, NULL, -1) == GST_STATE_CHANGE_FAILURE) {
g_error ("Failed to go into PLAYING state");
}
g_print ("Running ...\n");
g_main_loop_run (loop);
/* exit */
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
return 0;
}
]]>
<!-- example-end probe.c -->
</programlisting>
<para>
Compare that output with the output of <quote>gst-launch-1.0
videotestsrc ! xvimagesink</quote>, just so you know what you're
looking for.
</para>
<para>
Strictly speaking, a pad probe callback is only allowed to modify the
buffer content if the buffer is writable. Whether this is the case or
not depends a lot on the pipeline and the elements involved. Often
enough, this is the case, but sometimes it is not, and if it is not
then unexpected modification of the data or metadata can introduce
bugs that are very hard to debug and track down. You can check if a
buffer is writable with <function>gst_buffer_is_writable ()</function>.
Since you can pass back a different buffer than the one passed in,
it is a good idea to make the buffer writable in the callback function
with <function>gst_buffer_make_writable ()</function>.
</para>
<para>
Pad probes are best suited for looking at data as it passes through
the pipeline. If you need to modify data, you are better off writing your
own GStreamer element. Base classes like GstAudioFilter, GstVideoFilter or
GstBaseTransform make this fairly easy.
</para>
<para>
If you just want to inspect buffers as they pass through the pipeline,
you don't even need to set up pad probes. You could also just insert
an identity element into the pipeline and connect to its "handoff"
signal. The identity element also provides a few useful debugging tools
like the "dump" property or the "last-message" property (the latter is
enabled by passing the '-v' switch to gst-launch and by setting the
silent property on the identity to FALSE).
</para>
</sect2>
<sect2 id="section-preroll-probes">
<title>Play a region of a media file</title>
<para>
In this example we will show you how to play back a region of
a media file. The goal is to only play the part of a file
from 2 seconds to 5 seconds and then EOS.
</para>
<para>
As a first step we will set a uridecodebin element to the PAUSED
state and make sure that we block all the source pads that are
created. When all the source pads are blocked, we have data on
all source pads and we say that the uridecodebin is prerolled.
</para>
<para>
In a prerolled pipeline we can ask for the duration of the media
and we can also perform seeks. We are interested in performing a
seek operation on the pipeline to select the range of media
that we are interested in.
</para>
<para>
After we configure the region we are interested in, we can link
the sink element, unblock the source pads and set the pipeline to
the playing state. You will see that exactly the requested
region is played by the sink before it goes to EOS.
</para>
<para>
What follows is an example application that loosely follows this
algorithm.
</para>
<programlisting>
<!-- example-begin blockprobe.c -->
<![CDATA[
#include <gst/gst.h>
static GMainLoop *loop;
static volatile gint counter;
static GstBus *bus;
static gboolean prerolled = FALSE;
static GstPad *sinkpad;
static void
dec_counter (GstElement * pipeline)
{
if (prerolled)
return;
if (g_atomic_int_dec_and_test (&counter)) {
/* all probes blocked and no-more-pads signaled, post
* message on the bus. */
prerolled = TRUE;
gst_bus_post (bus, gst_message_new_application (
GST_OBJECT_CAST (pipeline),
gst_structure_new_empty ("ExPrerolled")));
}
}
/* called when a source pad of uridecodebin is blocked */
static GstPadProbeReturn
cb_blocked (GstPad *pad,
GstPadProbeInfo *info,
gpointer user_data)
{
GstElement *pipeline = GST_ELEMENT (user_data);
if (prerolled)
return GST_PAD_PROBE_REMOVE;
dec_counter (pipeline);
return GST_PAD_PROBE_OK;
}
/* called when uridecodebin has a new pad */
static void
cb_pad_added (GstElement *element,
GstPad *pad,
gpointer user_data)
{
GstElement *pipeline = GST_ELEMENT (user_data);
if (prerolled)
return;
g_atomic_int_inc (&counter);
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
(GstPadProbeCallback) cb_blocked, pipeline, NULL);
/* try to link to the video pad */
gst_pad_link (pad, sinkpad);
}
/* called when uridecodebin has created all pads */
static void
cb_no_more_pads (GstElement *element,
gpointer user_data)
{
GstElement *pipeline = GST_ELEMENT (user_data);
if (prerolled)
return;
dec_counter (pipeline);
}
/* called when a new message is posted on the bus */
static void
cb_message (GstBus *bus,
GstMessage *message,
gpointer user_data)
{
GstElement *pipeline = GST_ELEMENT (user_data);
switch (GST_MESSAGE_TYPE (message)) {
case GST_MESSAGE_ERROR:
g_print ("we received an error!\n");
g_main_loop_quit (loop);
break;
case GST_MESSAGE_EOS:
g_print ("we reached EOS\n");
g_main_loop_quit (loop);
break;
case GST_MESSAGE_APPLICATION:
{
if (gst_message_has_name (message, "ExPrerolled")) {
/* it's our message */
g_print ("we are all prerolled, do seek\n");
gst_element_seek (pipeline,
1.0, GST_FORMAT_TIME,
GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE,
GST_SEEK_TYPE_SET, 2 * GST_SECOND,
GST_SEEK_TYPE_SET, 5 * GST_SECOND);
gst_element_set_state (pipeline, GST_STATE_PLAYING);
}
break;
}
default:
break;
}
}
gint
main (gint argc,
gchar *argv[])
{
GstElement *pipeline, *src, *csp, *vs, *sink;
/* init GStreamer */
gst_init (&argc, &argv);
loop = g_main_loop_new (NULL, FALSE);
if (argc < 2) {
g_print ("usage: %s <uri>", argv[0]);
return -1;
}
/* build */
pipeline = gst_pipeline_new ("my-pipeline");
bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
gst_bus_add_signal_watch (bus);
g_signal_connect (bus, "message", (GCallback) cb_message,
pipeline);
src = gst_element_factory_make ("uridecodebin", "src");
if (src == NULL)
g_error ("Could not create 'uridecodebin' element");
g_object_set (src, "uri", argv[1], NULL);
csp = gst_element_factory_make ("videoconvert", "csp");
if (csp == NULL)
g_error ("Could not create 'videoconvert' element");
vs = gst_element_factory_make ("videoscale", "vs");
if (vs == NULL)
g_error ("Could not create 'videoscale' element");
sink = gst_element_factory_make ("autovideosink", "sink");
if (sink == NULL)
g_error ("Could not create 'autovideosink' element");
gst_bin_add_many (GST_BIN (pipeline), src, csp, vs, sink, NULL);
/* can't link src yet, it has no pads */
gst_element_link_many (csp, vs, sink, NULL);
sinkpad = gst_element_get_static_pad (csp, "sink");
/* for each pad block that is installed, we will increment
* the counter. for each pad block that is signaled, we
* decrement the counter. When the counter is 0 we post
* an app message to tell the app that all pads are
* blocked. Start with 1 that is decremented when no-more-pads
* is signaled to make sure that we only post the message
* after no-more-pads */
g_atomic_int_set (&counter, 1);
g_signal_connect (src, "pad-added",
(GCallback) cb_pad_added, pipeline);
g_signal_connect (src, "no-more-pads",
(GCallback) cb_no_more_pads, pipeline);
gst_element_set_state (pipeline, GST_STATE_PAUSED);
g_main_loop_run (loop);
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (sinkpad);
gst_object_unref (bus);
gst_object_unref (pipeline);
g_main_loop_unref (loop);
return 0;
}
]]>
<!-- example-end blockprobe.c -->
</programlisting>
<para>
Note that we use a custom application message to signal the
main thread that the uridecodebin is prerolled. The main thread
will then issue a flushing seek to the requested region. The
flush will temporarily unblock the pads and reblock them when
new data arrives again. We detect this second block to remove
the probes. Then we set the pipeline to PLAYING and it should
play from 2 to 5 seconds, then EOS and the application exits.
</para>
</sect2>
</sect1>
<sect1 id="section-data-spoof">
<title>Manually adding or removing data from/to a pipeline</title>
<para>
Many people have expressed the wish to use their own sources to inject
data into a pipeline. Some people have also expressed the wish to grab
the output of a pipeline and take care of the actual output inside
their application. While both of these methods are strongly
discouraged, &GStreamer; offers support for them.
<emphasis>Beware! You need to know what you are doing.</emphasis> Since
you don't have any support from a base class you need to thoroughly
understand state changes and synchronization. If it doesn't work,
there are a million ways to shoot yourself in the foot. It's always
better to simply write a plugin and have the base class manage it.
See the Plugin Writer's Guide for more information on this topic. Also
see the next section, which will explain how to embed plugins statically
in your application.
</para>
<para>
There are two possible elements that you can use for the above-mentioned
purposes. Those are called <quote>appsrc</quote> (an imaginary source)
and <quote>appsink</quote> (an imaginary sink). The same method applies
to each of those elements. Here, we will discuss how to use those
elements to insert (using appsrc) or grab (using appsink) data from a
pipeline, and how to configure negotiation.
</para>
<para>
Both appsrc and appsink provide two sets of API. One API uses standard
GObject (action) signals and properties. The same API is also
available as a regular C API. The C API is more performant but
requires you to link to the app library in order to use the elements.
</para>
<sect2 id="section-spoof-appsrc">
<title>Inserting data with appsrc</title>
<para>
First we look at some examples for appsrc, which lets you insert data
into the pipeline from the application. Appsrc has some configuration
options that define how it will operate. You should decide on the
following configuration options:
</para>
<itemizedlist>
<listitem>
<para>
Whether the appsrc will operate in push or pull mode. The stream-type
property can be used to control this. A stream-type of
<quote>random-access</quote> will activate pull mode scheduling
while the other stream-types activate push mode.
</para>
</listitem>
<listitem>
<para>
The caps of the buffers that appsrc will push out. This needs to
be configured with the caps property. The caps must be set to
fixed caps and will be used to negotiate a format downstream.
</para>
</listitem>
<listitem>
<para>
Whether the appsrc operates in live mode or not. This can be configured
with the is-live property. When operating in live-mode it is
important to configure the min-latency and max-latency in appsrc.
The min-latency should be set to the amount of time it takes between
capturing a buffer and when it is pushed inside appsrc.
In live mode, you should timestamp the buffers with the pipeline
running-time when the first byte of the buffer was captured before
feeding them to appsrc. You can let appsrc do the timestamping with
the do-timestamp property (but then the min-latency must be set
to 0 because it timestamps based on the running-time when the buffer
entered appsrc).
</para>
</listitem>
<listitem>
<para>
The format of the SEGMENT event that appsrc will push. The format
has implications for how the running-time of the buffers will
be calculated so you must be sure you understand this. For
live sources you probably want to set the format property to
GST_FORMAT_TIME. For non-live sources it depends on the media type
that you are handling. If you plan to timestamp the buffers, you
should probably use the GST_FORMAT_TIME format, otherwise
GST_FORMAT_BYTES might be appropriate.
</para>
</listitem>
<listitem>
<para>
If appsrc operates in random-access mode, it is important to configure
the size property of appsrc with the number of bytes in the stream.
This will allow downstream elements to know the size of the media and
allows them to seek to the end of the stream when needed.
</para>
</listitem>
</itemizedlist>
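<para>
The following minimal sketch shows how these properties might be
configured on an appsrc that operates in push mode with timed video
buffers. The concrete caps and latency values are only examples, not
requirements:
</para>
<programlisting>
<![CDATA[
  GstCaps *caps;

  /* fixed caps describing what we are going to push */
  caps = gst_caps_new_simple ("video/x-raw",
      "format", G_TYPE_STRING, "RGB16",
      "width", G_TYPE_INT, 384,
      "height", G_TYPE_INT, 288,
      "framerate", GST_TYPE_FRACTION, 25, 1,
      NULL);

  g_object_set (G_OBJECT (appsrc),
      "caps", caps,
      "stream-type", 0,          /* 0 = stream (push mode) */
      "format", GST_FORMAT_TIME, /* SEGMENT in time format */
      "is-live", TRUE,           /* we produce data in real time */
      "do-timestamp", TRUE,      /* let appsrc timestamp for us */
      "min-latency", (gint64) 0, /* required when do-timestamp is used */
      NULL);
  gst_caps_unref (caps);
]]>
</programlisting>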
<para>
The main way of handing data to appsrc is by using the function
<function>gst_app_src_push_buffer ()</function> or by emitting the
push-buffer action signal. This will put the buffer onto a queue from
which appsrc will read in its streaming thread. It is important
to note that data transport will not happen from the thread that
performed the push-buffer call. Also note that
<function>gst_app_src_push_buffer ()</function> takes ownership of the
buffer, while the push-buffer action signal does not, so you must unref
the buffer yourself after emitting it.
</para>
<para>
The <quote>max-bytes</quote> property controls how much data can be
queued in appsrc before appsrc considers the queue full. A filled
internal queue will always emit the <quote>enough-data</quote>
signal, which signals the application that it should stop pushing
data into appsrc. The <quote>block</quote> property will cause appsrc to
block the push-buffer method until free space becomes available again.
</para>
<para>
When the internal queue is running out of data, the
<quote>need-data</quote> signal is emitted, which signals the application
that it should start pushing more data into appsrc.
</para>
<para>
In addition to the <quote>need-data</quote> and <quote>enough-data</quote>
signals, appsrc can emit the <quote>seek-data</quote> signal when the
<quote>stream-type</quote> property is set to <quote>seekable</quote>
or <quote>random-access</quote>. The signal argument will contain the
new desired position in the stream expressed in the unit set with the
<quote>format</quote> property. After receiving the seek-data signal,
the application should push buffers from the new position.
</para>
<para>
When the last byte is pushed into appsrc, you must call
<function>gst_app_src_end_of_stream ()</function> to make it send
an EOS downstream.
</para>
<para>
These signals allow the application to operate appsrc in push and
pull mode as will be explained next.
</para>
<sect3 id="section-spoof-appsrc-push">
<title>Using appsrc in push mode</title>
<para>
When appsrc is configured in push mode (stream-type is stream or
seekable), the application repeatedly calls the push-buffer method
with a new buffer. Optionally, the queue size in the appsrc can be
controlled with the enough-data and need-data signals by respectively
stopping/starting the push-buffer calls. The value of the
min-percent property defines how empty the internal appsrc queue
needs to be before the need-data signal will be fired. You can set
this to some value >0 to avoid completely draining the queue.
</para>
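<para>
A minimal sketch of this throttling, assuming an application-defined
<function>push_one_buffer ()</function> helper of your own that pushes a
single buffer into appsrc, could look like this:
</para>
<programlisting>
<![CDATA[
static gboolean feeding = FALSE;

/* called when appsrc wants more data; start feeding */
static void
start_feed (GstElement *appsrc, guint length, gpointer user_data)
{
  feeding = TRUE;
}

/* called when the internal queue is full; stop feeding */
static void
stop_feed (GstElement *appsrc, gpointer user_data)
{
  feeding = FALSE;
}

  /* in your setup code */
  g_signal_connect (appsrc, "need-data", G_CALLBACK (start_feed), NULL);
  g_signal_connect (appsrc, "enough-data", G_CALLBACK (stop_feed), NULL);

  /* the application then only calls push_one_buffer () while
   * the feeding flag is TRUE */
]]>
</programlisting>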
<para>
When the stream-type is set to seekable, don't forget to implement
a seek-data callback.
</para>
<para>
Use this model when implementing various network protocols or
hardware devices.
</para>
</sect3>
<sect3 id="section-spoof-appsrc-pull">
<title>Using appsrc in pull mode</title>
<para>
In the pull model, data is fed to appsrc from the need-data signal
handler. You should push exactly the number of bytes requested in the
need-data signal. You are only allowed to push fewer bytes when you are
at the end of the stream.
</para>
<para>
Use this model for file access or other randomly accessible sources.
</para>
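<para>
The sketch below illustrates this model with a plain
<classname>FILE</classname> handle that the application opened and
passed as user_data when connecting the need-data signal. A real
random-access source would also connect the seek-data signal and
reposition the file there; that part is omitted here.
</para>
<programlisting>
<![CDATA[
static void
on_need_data (GstElement *appsrc, guint length, gpointer user_data)
{
  FILE *input = user_data;
  GstBuffer *buffer;
  GstMapInfo map;
  GstFlowReturn ret;
  size_t bytes_read;

  buffer = gst_buffer_new_allocate (NULL, length, NULL);
  gst_buffer_map (buffer, &map, GST_MAP_WRITE);
  bytes_read = fread (map.data, 1, length, input);
  gst_buffer_unmap (buffer, &map);

  if (bytes_read == 0) {
    /* end of file: send EOS instead of a buffer */
    gst_buffer_unref (buffer);
    g_signal_emit_by_name (appsrc, "end-of-stream", &ret);
    return;
  }

  /* we may have read less than requested at the end of the stream */
  gst_buffer_set_size (buffer, bytes_read);

  /* the push-buffer action signal does not take ownership, so unref */
  g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
  gst_buffer_unref (buffer);
}
]]>
</programlisting>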
</sect3>
<sect3 id="section-spoof-appsrc-ex">
<title>Appsrc example</title>
<para>
This example application will generate black/white (it switches
every second) video to an Xv-window output by using appsrc as a
source with caps to force a format. We use a colorspace
conversion element to make sure that we feed the right format to
your X server. We configure a video stream with a variable framerate
(0/1) and we set the timestamps on the outgoing buffers in such
a way that we play 2 frames per second.
</para>
<para>
Note how we use the pull mode method of feeding new buffers into
appsrc from the need-data callback, even though appsrc is running in push mode.
</para>
<programlisting>
<!-- example-begin appsrc.c -->
<![CDATA[
#include <gst/gst.h>
static GMainLoop *loop;
static void
cb_need_data (GstElement *appsrc,
guint unused_size,
gpointer user_data)
{
static gboolean white = FALSE;
static GstClockTime timestamp = 0;
GstBuffer *buffer;
guint size;
GstFlowReturn ret;
size = 384 * 288 * 2;
buffer = gst_buffer_new_allocate (NULL, size, NULL);
/* this makes the image black/white */
gst_buffer_memset (buffer, 0, white ? 0xff : 0x0, size);
white = !white;
GST_BUFFER_PTS (buffer) = timestamp;
GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale_int (1, GST_SECOND, 2);
timestamp += GST_BUFFER_DURATION (buffer);
g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
gst_buffer_unref (buffer);
if (ret != GST_FLOW_OK) {
/* something wrong, stop pushing */
g_main_loop_quit (loop);
}
}
gint
main (gint argc,
gchar *argv[])
{
GstElement *pipeline, *appsrc, *conv, *videosink;
/* init GStreamer */
gst_init (&argc, &argv);
loop = g_main_loop_new (NULL, FALSE);
/* setup pipeline */
pipeline = gst_pipeline_new ("pipeline");
appsrc = gst_element_factory_make ("appsrc", "source");
conv = gst_element_factory_make ("videoconvert", "conv");
videosink = gst_element_factory_make ("xvimagesink", "videosink");
/* setup */
g_object_set (G_OBJECT (appsrc), "caps",
gst_caps_new_simple ("video/x-raw",
"format", G_TYPE_STRING, "RGB16",
"width", G_TYPE_INT, 384,
"height", G_TYPE_INT, 288,
"framerate", GST_TYPE_FRACTION, 0, 1,
NULL), NULL);
gst_bin_add_many (GST_BIN (pipeline), appsrc, conv, videosink, NULL);
gst_element_link_many (appsrc, conv, videosink, NULL);
/* setup appsrc */
g_object_set (G_OBJECT (appsrc),
"stream-type", 0,
"format", GST_FORMAT_TIME, NULL);
g_signal_connect (appsrc, "need-data", G_CALLBACK (cb_need_data), NULL);
/* play */
gst_element_set_state (pipeline, GST_STATE_PLAYING);
g_main_loop_run (loop);
/* clean up */
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (GST_OBJECT (pipeline));
g_main_loop_unref (loop);
return 0;
}
]]>
<!-- example-end appsrc.c -->
</programlisting>
</sect3>
</sect2>
<sect2 id="section-spoof-appsink">
<title>Grabbing data with appsink</title>
<para>
Appsink is a little easier to use than appsrc. It also supports
a pull- and a push-based model of getting data from the pipeline.
</para>
<para>
The normal way of retrieving samples from appsink is by using the
<function>gst_app_sink_pull_sample()</function> and
<function>gst_app_sink_pull_preroll()</function> methods or by using
the <quote>pull-sample</quote> and <quote>pull-preroll</quote>
signals. These methods block until a sample becomes available in the
sink, or until the sink is shut down or reaches EOS.
</para>
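<para>
A minimal sketch of this blocking model, assuming <quote>sink</quote> is
an appsink element in a pipeline that was set to PLAYING, could look
like this:
</para>
<programlisting>
<![CDATA[
  GstSample *sample = NULL;

  while (TRUE) {
    /* blocks until a sample is available, EOS is reached or
     * the appsink is shut down */
    g_signal_emit_by_name (sink, "pull-sample", &sample);
    if (sample == NULL)
      break;

    /* ... inspect gst_sample_get_buffer (sample) and
     * gst_sample_get_caps (sample) here ... */

    gst_sample_unref (sample);
  }
]]>
</programlisting>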
<para>
Appsink will internally use a queue to collect buffers from the
streaming thread. If the application is not pulling samples fast
enough, this queue will consume a lot of memory over time. The
<quote>max-buffers</quote> property can be used to limit the queue
size. The <quote>drop</quote> property controls whether the
streaming thread blocks or if older buffers are dropped when the
maximum queue size is reached. Note that blocking the streaming thread
can negatively affect real-time performance and should be avoided.
</para>
<para>
If a blocking behaviour is not desirable, setting the
<quote>emit-signals</quote> property to TRUE will make appsink emit
the <quote>new-sample</quote> and <quote>new-preroll</quote> signals
when a sample can be pulled without blocking.
</para>
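<para>
As a sketch of this signal-based model (again assuming
<quote>sink</quote> is an appsink), the application would enable
emit-signals and handle new-sample, pulling the sample from within the
callback:
</para>
<programlisting>
<![CDATA[
static GstFlowReturn
on_new_sample (GstElement *sink, gpointer user_data)
{
  GstSample *sample = NULL;

  /* a sample is ready, pulling it will not block */
  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (sample == NULL)
    return GST_FLOW_ERROR;

  /* ... process the sample here ... */

  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

  /* in your setup code */
  g_object_set (sink, "emit-signals", TRUE, NULL);
  g_signal_connect (sink, "new-sample", G_CALLBACK (on_new_sample), NULL);
]]>
</programlisting>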
<para>
The <quote>caps</quote> property on appsink can be used to control
the formats that appsink can receive. This property can contain
non-fixed caps; the format of the pulled samples can be obtained by
getting the sample caps.
</para>
<para>
If one of the pull-preroll or pull-sample methods returns NULL, the
appsink is stopped or in the EOS state. You can check for the EOS state
with the <quote>eos</quote> property or with the
<function>gst_app_sink_is_eos()</function> method.
</para>
<para>
The eos signal can also be used to be informed when the EOS state is
reached to avoid polling.
</para>
<para>
Consider configuring the following properties in the appsink:
</para>
<itemizedlist>
<listitem>
<para>
The <quote>sync</quote> property if you want to have the sink
base class synchronize the buffer against the pipeline clock
before handing you the sample.
</para>
</listitem>
<listitem>
<para>
Enable Quality-of-Service with the <quote>qos</quote> property.
If you are dealing with raw video frames and let the base class
synchronize on the clock, it might be a good idea to also let
the base class send QOS events upstream.
</para>
</listitem>
<listitem>
<para>
The caps property that contains the accepted caps. Upstream elements
will try to convert the format so that it matches the configured
caps on appsink. You must still check the
<classname>GstSample</classname> to get the actual caps of the
buffer.
</para>
</listitem>
</itemizedlist>
<sect3 id="section-spoof-appsink-ex">
<title>Appsink example</title>
<para>
What follows is an example of how to capture a snapshot of a video
stream using appsink.
</para>
<programlisting>
<!-- example-begin appsink.c -->
<![CDATA[
#include <gst/gst.h>
#ifdef HAVE_GTK
#include <gtk/gtk.h>
#endif
#include <stdlib.h>
#define CAPS "video/x-raw,format=RGB,width=160,pixel-aspect-ratio=1/1"
int
main (int argc, char *argv[])
{
GstElement *pipeline, *sink;
gint width, height;
GstSample *sample;
gchar *descr;
GError *error = NULL;
gint64 duration, position;
GstStateChangeReturn ret;
gboolean res;
GstMapInfo map;
gst_init (&argc, &argv);
if (argc != 2) {
g_print ("usage: %s <uri>\n Writes snapshot.png in the current directory\n",
argv[0]);
exit (-1);
}
/* create a new pipeline */
descr =
g_strdup_printf ("uridecodebin uri=%s ! videoconvert ! videoscale ! "
" appsink name=sink caps=\"" CAPS "\"", argv[1]);
pipeline = gst_parse_launch (descr, &error);
if (error != NULL) {
g_print ("could not construct pipeline: %s\n", error->message);
g_clear_error (&error);
exit (-1);
}
/* get sink */
sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
/* set to PAUSED to make the first frame arrive in the sink */
ret = gst_element_set_state (pipeline, GST_STATE_PAUSED);
switch (ret) {
case GST_STATE_CHANGE_FAILURE:
g_print ("failed to play the file\n");
exit (-1);
case GST_STATE_CHANGE_NO_PREROLL:
/* for live sources, we need to set the pipeline to PLAYING before we can
* receive a buffer. We don't do that yet */
g_print ("live sources not supported yet\n");
exit (-1);
default:
break;
}
/* This can block for up to 5 seconds. If your machine is really overloaded,
* it might time out before the pipeline prerolled and we generate an error. A
* better way is to run a mainloop and catch errors there. */
ret = gst_element_get_state (pipeline, NULL, NULL, 5 * GST_SECOND);
if (ret == GST_STATE_CHANGE_FAILURE) {
g_print ("failed to play the file\n");
exit (-1);
}
/* get the duration */
gst_element_query_duration (pipeline, GST_FORMAT_TIME, &duration);
if (duration != -1)
/* we have a duration, seek to 5% */
position = duration * 5 / 100;
else
/* no duration, seek to 1 second, this could EOS */
position = 1 * GST_SECOND;
/* seek to a position in the file. Most files have a black first frame so
* by seeking to somewhere else we have a bigger chance of getting something
* more interesting. An optimisation would be to detect black images and then
* seek a little more */
gst_element_seek_simple (pipeline, GST_FORMAT_TIME,
GST_SEEK_FLAG_KEY_UNIT | GST_SEEK_FLAG_FLUSH, position);
/* get the preroll buffer from appsink, this blocks until appsink really
* prerolls */
g_signal_emit_by_name (sink, "pull-preroll", &sample, NULL);
/* if we have a buffer now, convert it to a pixbuf. It's possible that we
* don't have a buffer because we went EOS right away or had an error. */
if (sample) {
GstBuffer *buffer;
GstCaps *caps;
GstStructure *s;
/* get the snapshot buffer format now. We set the caps on the appsink so
* that it can only be an rgb buffer. The only thing we have not specified
* on the caps is the height, which is dependent on the pixel-aspect-ratio
* of the source material */
caps = gst_sample_get_caps (sample);
if (!caps) {
g_print ("could not get snapshot format\n");
exit (-1);
}
s = gst_caps_get_structure (caps, 0);
/* we need to get the final caps on the buffer to get the size */
res = gst_structure_get_int (s, "width", &width);
res |= gst_structure_get_int (s, "height", &height);
if (!res) {
g_print ("could not get snapshot dimension\n");
exit (-1);
}
/* create pixmap from buffer and save, gstreamer video buffers have a stride
* that is rounded up to the nearest multiple of 4 */
buffer = gst_sample_get_buffer (sample);
/* Mapping a buffer can fail (non-readable) */
if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
#ifdef HAVE_GTK
GdkPixbuf *pixbuf;

pixbuf = gdk_pixbuf_new_from_data (map.data,
GDK_COLORSPACE_RGB, FALSE, 8, width, height,
GST_ROUND_UP_4 (width * 3), NULL, NULL);
/* save the pixbuf */
gdk_pixbuf_save (pixbuf, "snapshot.png", "png", &error, NULL);
#endif
gst_buffer_unmap (buffer, &map);
}
gst_sample_unref (sample);
} else {
g_print ("could not make snapshot\n");
}
/* cleanup and exit */
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
exit (0);
}
]]>
<!-- example-end appsink.c -->
</programlisting>
</sect3>
</sect2>
</sect1>
<sect1 id="section-spoof-format">
<title>Forcing a format</title>
<para>
Sometimes you'll want to set a specific format, for example a video
size and format or an audio bitsize and number of channels. You can
do this by forcing a specific <classname>GstCaps</classname> on
the pipeline, which is possible by using
<emphasis>filtered caps</emphasis>. You can set filtered caps on
a link by using the <quote>capsfilter</quote> element in between the
two elements, and specifying a <classname>GstCaps</classname> as
<quote>caps</quote> property on this element. It will then
only allow types matching that specified capability set for
negotiation. See also <xref linkend="section-caps-filter"/>.
</para>
<sect2 id="section-dynamic-format">
<title>Changing format in a PLAYING pipeline</title>
<para>
It is also possible to dynamically change the format in a pipeline
while PLAYING. This can simply be done by changing the caps
property on a capsfilter. The capsfilter will send a RECONFIGURE
event upstream that will make the upstream element attempt to
renegotiate a new format and allocator. This only works if
the upstream element is not using fixed caps on the source pad.
</para>
<para>
Below is an example of how you can change the caps of a pipeline
while in the PLAYING state:
</para>
<programlisting>
<!-- example-begin dynformat.c -->
<![CDATA[
#include <stdlib.h>
#include <gst/gst.h>
#define MAX_ROUND 100
int
main (int argc, char **argv)
{
GstElement *pipe, *filter;
GstCaps *caps;
gint width, height;
gint xdir, ydir;
gint round;
GstMessage *message;
gst_init (&argc, &argv);
pipe = gst_parse_launch_full ("videotestsrc ! capsfilter name=filter ! "
"ximagesink", NULL, GST_PARSE_FLAG_NONE, NULL);
g_assert (pipe != NULL);
filter = gst_bin_get_by_name (GST_BIN (pipe), "filter");
g_assert (filter);
width = 320;
height = 240;
xdir = ydir = -10;
for (round = 0; round < MAX_ROUND; round++) {
gchar *capsstr;
g_print ("resize to %dx%d (%d/%d) \r", width, height, round, MAX_ROUND);
/* we prefer our fixed width and height but allow other dimensions to pass
* as well */
capsstr = g_strdup_printf ("video/x-raw, width=(int)%d, height=(int)%d",
width, height);
caps = gst_caps_from_string (capsstr);
g_free (capsstr);
g_object_set (filter, "caps", caps, NULL);
gst_caps_unref (caps);
if (round == 0)
gst_element_set_state (pipe, GST_STATE_PLAYING);
width += xdir;
if (width >= 320)
xdir = -10;
else if (width < 200)
xdir = 10;
height += ydir;
if (height >= 240)
ydir = -10;
else if (height < 150)
ydir = 10;
message =
gst_bus_poll (GST_ELEMENT_BUS (pipe), GST_MESSAGE_ERROR,
50 * GST_MSECOND);
if (message) {
g_print ("got error \n");
gst_message_unref (message);
}
}
g_print ("done \n");
gst_object_unref (filter);
gst_element_set_state (pipe, GST_STATE_NULL);
gst_object_unref (pipe);
return 0;
}
]]>
<!-- example-end dynformat.c -->
</programlisting>
<para>
Note how we use <function>gst_bus_poll()</function> with a
small timeout to get messages and also introduce a short
sleep.
</para>
<para>
It is possible to set multiple caps on the capsfilter, separated
with a <quote>;</quote>. The capsfilter will try to renegotiate to the
first possible format from the list.
</para>
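<para>
A minimal sketch, reusing the <quote>filter</quote> capsfilter from the
example above:
</para>
<programlisting>
<![CDATA[
  GstCaps *caps;

  /* prefer 640x480, but fall back to 320x240 if that cannot be negotiated */
  caps = gst_caps_from_string ("video/x-raw, width=(int)640, height=(int)480; "
      "video/x-raw, width=(int)320, height=(int)240");
  g_object_set (filter, "caps", caps, NULL);
  gst_caps_unref (caps);
]]>
</programlisting>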
</sect2>
</sect1>
<sect1 id="section-dynamic-pipelines">
<title>Dynamically changing the pipeline</title>
<para>
In this section we talk about some techniques for dynamically
modifying the pipeline. We are talking specifically about changing
the pipeline while it is in the PLAYING state without interrupting
the flow.
</para>
<para>
There are some important things to consider when building dynamic
pipelines:
</para>
<itemizedlist>
<listitem>
<para>
When removing elements from the pipeline, make sure that there
is no dataflow on unlinked pads because that will cause a fatal
pipeline error. Always block source pads (in push mode) or
sink pads (in pull mode) before unlinking pads.
See also <xref linkend="section-dynamic-changing"/>.
</para>
</listitem>
<listitem>
<para>
When adding elements to a pipeline, make sure to put the element
into the right state, usually the same state as the parent, before
allowing dataflow into the element. When an element is newly created,
it is in the NULL state and will return an error when it
receives data.
See also <xref linkend="section-dynamic-changing"/>.
</para>
</listitem>
<listitem>
<para>
When adding elements to a pipeline, &GStreamer; will by default
set the clock and base-time on the element to the current values
of the pipeline. This means that the element will be able to
construct the same pipeline running-time as the other elements
in the pipeline: sinks will synchronize buffers
like the other sinks in the pipeline and sources produce
buffers with a running-time that matches the other sources.
</para>
</listitem>
<listitem>
<para>
When unlinking elements from an upstream chain, always make sure
to flush any queued data in the element by sending an EOS event
down the element's sink pad(s) and by waiting until the EOS leaves
the element (with an event probe).
</para>
<para>
If you do not do this, you will lose the data which is buffered
by the unlinked element. This can result in simple frame loss
(one or more video frames, several milliseconds of audio). However,
if you remove a muxer (and in some cases an encoder or similar elements)
from the pipeline, you risk getting a corrupted file which cannot be
played properly, as some relevant metadata (header, seek/index tables, internal
sync tags) will not be stored or updated properly.
</para>
<para>
See also <xref linkend="section-dynamic-changing"/>.
</para>
</listitem>
<listitem>
<para>
A live source will produce buffers with a running-time matching the
current running-time of the pipeline.
</para>
<para>
A pipeline without a live source produces buffers with a
running-time starting from 0. Likewise, after a flushing seek,
those pipelines reset the running-time back to 0.
</para>
<para>
The running-time can be changed with
<function>gst_pad_set_offset ()</function>, as shown in the sketch
after this list. It is important to
know the running-time of the elements in the pipeline in order
to maintain synchronization.
</para>
</listitem>
<listitem>
<para>
Adding elements might change the state of the pipeline. Adding a
non-prerolled sink, for example, brings the pipeline back to the
prerolling state. Removing a non-prerolled sink might allow the
pipeline to change to the PAUSED or PLAYING state.
</para>
<para>
Adding a live source cancels the preroll stage and puts the pipeline
in the PLAYING state. Adding a live source or other live elements
might also change the latency of a pipeline.
</para>
<para>
Adding or removing elements to the pipeline might change the clock
selection of the pipeline. If the newly added element provides a clock,
it might be worth changing the clock in the pipeline to the new
clock. If, on the other hand, the element that provides the clock
for the pipeline is removed, a new clock has to be selected.
</para>
</listitem>
<listitem>
<para>
Adding and removing elements might cause upstream or downstream
elements to renegotiate caps and/or allocators. You don't really
need to do anything from the application; plugins largely
adapt themselves to the new pipeline topology in order to optimize
their formats and allocation strategy.
</para>
<para>
What is important is that when you add, remove or change elements
in the pipeline, it is possible that the pipeline needs to
negotiate a new format and this can fail. Usually you can fix this
by inserting the right converter elements where needed.
See also <xref linkend="section-dynamic-changing"/>.
</para>
</listitem>
</itemizedlist>
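<para>
The following is a minimal sketch of the running-time adjustment
mentioned above: it offsets the source pad of a newly added element
(here called <quote>srcpad</quote>) so that its buffers line up with the
pipeline's current running-time. It assumes the pipeline is PLAYING and
has selected a clock.
</para>
<programlisting>
<![CDATA[
  GstClock *clock;
  GstClockTime running_time;

  /* current running-time = current clock time - base-time */
  clock = gst_element_get_clock (pipeline);
  running_time = gst_clock_get_time (clock) -
      gst_element_get_base_time (pipeline);
  gst_object_unref (clock);

  /* shift the timestamps of everything flowing out of srcpad */
  gst_pad_set_offset (srcpad, running_time);
]]>
</programlisting>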
<para>
&GStreamer; offers support for doing almost any dynamic pipeline
modification, but it requires you to know a few details before
you can do this without causing pipeline errors. In the following
sections we will demonstrate a couple of typical use-cases.
</para>
<sect2 id="section-dynamic-changing">
<title>Changing elements in a pipeline</title>
<para>
In the next example we look at the following chain of elements:
</para>
<programlisting>
            - ----.      .----------.      .---- -
         element1 |      | element2 |      | element3
                src -> sink       src -> sink
            - ----'      '----------'      '---- -
</programlisting>
<para>
We want to replace element2 with element4 while the pipeline is in
the PLAYING state. Let's say that element2 is a visualization and
that you want to switch the visualization in the pipeline.
</para>
<para>
We can't just unlink element2's sinkpad from element1's source
pad because that would leave element1's source pad
unlinked and would cause a streaming error in the pipeline when
data is pushed on the source pad.
The technique is to block the dataflow from element1's source pad
before we replace element2 with element4, and then resume dataflow
as shown in the following steps:
</para>
<itemizedlist>
<listitem>
<para>
Block element1's source pad with a blocking pad probe. When the
pad is blocked, the probe callback will be called.
</para>
</listitem>
<listitem>
<para>
Inside the block callback nothing is flowing between element1
and element2 and nothing will flow until unblocked.
</para>
</listitem>
<listitem>
<para>
Unlink element1 and element2.
</para>
</listitem>
<listitem>
<para>
Make sure data is flushed out of element2. Some elements might
internally keep some data; you need to make sure not to lose data
by forcing it out of element2. You can do this by pushing EOS into
element2, like this:
</para>
<itemizedlist>
<listitem>
<para>
Put an event probe on element2's source pad.
</para>
</listitem>
<listitem>
<para>
Send EOS to element2's sinkpad. This makes sure that all the
data inside element2 is forced out.
</para>
</listitem>
<listitem>
<para>
Wait for the EOS event to appear on element2's source pad.
When the EOS is received, drop it and remove the event
probe.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Unlink element2 and element3. You can now also remove element2
from the pipeline and set the state to NULL.
</para>
</listitem>
<listitem>
<para>
Add element4 to the pipeline, if not already added. Link element4
and element3. Link element1 and element4.
</para>
</listitem>
<listitem>
<para>
Make sure element4 is in the same state as the rest of the elements
in the pipeline. It should be at least in the PAUSED state before
it can receive buffers and events.
</para>
</listitem>
<listitem>
<para>
Unblock element1's source pad probe. This will let new data into
element4 and continue streaming.
</para>
</listitem>
</itemizedlist>
<para>
The above algorithm works when the source pad is blocked, i.e. when
there is dataflow in the pipeline. If there is no dataflow, there is
also no point in changing the element (just yet), so this algorithm can
be used in the PAUSED state as well.
</para>
<para>
Let us show you how this works with an example. This example changes the
video effect on a simple pipeline every second.
</para>
<programlisting>
<!-- example-begin effectswitch.c -->
<![CDATA[
#include <gst/gst.h>
static gchar *opt_effects = NULL;
#define DEFAULT_EFFECTS "identity,exclusion,navigationtest," \
"agingtv,videoflip,vertigotv,gaussianblur,shagadelictv,edgetv"
static GstPad *blockpad;
static GstElement *conv_before;
static GstElement *conv_after;
static GstElement *cur_effect;
static GstElement *pipeline;
static GQueue effects = G_QUEUE_INIT;
static GstPadProbeReturn
event_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
GMainLoop *loop = user_data;
GstElement *next;
if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_DATA (info)) != GST_EVENT_EOS)
return GST_PAD_PROBE_PASS;
gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));
/* push current effect back into the queue */
g_queue_push_tail (&effects, gst_object_ref (cur_effect));
/* take next effect from the queue */
next = g_queue_pop_head (&effects);
if (next == NULL) {
GST_DEBUG_OBJECT (pad, "no more effects");
g_main_loop_quit (loop);
return GST_PAD_PROBE_DROP;
}
g_print ("Switching from '%s' to '%s'..\n", GST_OBJECT_NAME (cur_effect),
GST_OBJECT_NAME (next));
gst_element_set_state (cur_effect, GST_STATE_NULL);
/* remove unlinks automatically */
GST_DEBUG_OBJECT (pipeline, "removing %" GST_PTR_FORMAT, cur_effect);
gst_bin_remove (GST_BIN (pipeline), cur_effect);
GST_DEBUG_OBJECT (pipeline, "adding %" GST_PTR_FORMAT, next);
gst_bin_add (GST_BIN (pipeline), next);
GST_DEBUG_OBJECT (pipeline, "linking..");
gst_element_link_many (conv_before, next, conv_after, NULL);
gst_element_set_state (next, GST_STATE_PLAYING);
cur_effect = next;
GST_DEBUG_OBJECT (pipeline, "done");
return GST_PAD_PROBE_DROP;
}
static GstPadProbeReturn
pad_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
GstPad *srcpad, *sinkpad;
GST_DEBUG_OBJECT (pad, "pad is blocked now");
/* remove the probe first */
gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));
/* install new probe for EOS */
srcpad = gst_element_get_static_pad (cur_effect, "src");
gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BLOCK |
GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM, event_probe_cb, user_data, NULL);
gst_object_unref (srcpad);
/* push EOS into the element, the probe will be fired when the
* EOS leaves the effect and it has thus drained all of its data */
sinkpad = gst_element_get_static_pad (cur_effect, "sink");
gst_pad_send_event (sinkpad, gst_event_new_eos ());
gst_object_unref (sinkpad);
return GST_PAD_PROBE_OK;
}
static gboolean
timeout_cb (gpointer user_data)
{
gst_pad_add_probe (blockpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
pad_probe_cb, user_data, NULL);
return TRUE;
}
static gboolean
bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
GMainLoop *loop = user_data;
switch (GST_MESSAGE_TYPE (msg)) {
case GST_MESSAGE_ERROR:{
GError *err = NULL;
gchar *dbg;
gst_message_parse_error (msg, &err, &dbg);
gst_object_default_error (msg->src, err, dbg);
g_clear_error (&err);
g_free (dbg);
g_main_loop_quit (loop);
break;
}
default:
break;
}
return TRUE;
}
int
main (int argc, char **argv)
{
GOptionEntry options[] = {
{"effects", 'e', 0, G_OPTION_ARG_STRING, &opt_effects,
"Effects to use (comma-separated list of element names)", NULL},
{NULL}
};
GOptionContext *ctx;
GError *err = NULL;
GMainLoop *loop;
GstElement *src, *q1, *q2, *effect, *filter1, *filter2, *sink;
gchar **effect_names, **e;
ctx = g_option_context_new ("");
g_option_context_add_main_entries (ctx, options, NULL);
g_option_context_add_group (ctx, gst_init_get_option_group ());
if (!g_option_context_parse (ctx, &argc, &argv, &err)) {
g_print ("Error initializing: %s\n", err->message);
g_clear_error (&err);
g_option_context_free (ctx);
return 1;
}
g_option_context_free (ctx);
if (opt_effects != NULL)
effect_names = g_strsplit (opt_effects, ",", -1);
else
effect_names = g_strsplit (DEFAULT_EFFECTS, ",", -1);
for (e = effect_names; e != NULL && *e != NULL; ++e) {
GstElement *el;
el = gst_element_factory_make (*e, NULL);
if (el) {
g_print ("Adding effect '%s'\n", *e);
g_queue_push_tail (&effects, el);
}
}
pipeline = gst_pipeline_new ("pipeline");
src = gst_element_factory_make ("videotestsrc", NULL);
g_object_set (src, "is-live", TRUE, NULL);
filter1 = gst_element_factory_make ("capsfilter", NULL);
gst_util_set_object_arg (G_OBJECT (filter1), "caps",
"video/x-raw, width=320, height=240, "
"format={ I420, YV12, YUY2, UYVY, AYUV, Y41B, Y42B, "
"YVYU, Y444, v210, v216, NV12, NV21, UYVP, A420, YUV9, YVU9, IYU1 }");
q1 = gst_element_factory_make ("queue", NULL);
blockpad = gst_element_get_static_pad (q1, "src");
conv_before = gst_element_factory_make ("videoconvert", NULL);
effect = g_queue_pop_head (&effects);
cur_effect = effect;
conv_after = gst_element_factory_make ("videoconvert", NULL);
q2 = gst_element_factory_make ("queue", NULL);
filter2 = gst_element_factory_make ("capsfilter", NULL);
gst_util_set_object_arg (G_OBJECT (filter2), "caps",
"video/x-raw, width=320, height=240, "
"format={ RGBx, BGRx, xRGB, xBGR, RGBA, BGRA, ARGB, ABGR, RGB, BGR }");
sink = gst_element_factory_make ("ximagesink", NULL);
gst_bin_add_many (GST_BIN (pipeline), src, filter1, q1, conv_before, effect,
conv_after, q2, sink, NULL);
gst_element_link_many (src, filter1, q1, conv_before, effect, conv_after,
q2, sink, NULL);
gst_element_set_state (pipeline, GST_STATE_PLAYING);
loop = g_main_loop_new (NULL, FALSE);
gst_bus_add_watch (GST_ELEMENT_BUS (pipeline), bus_cb, loop);
g_timeout_add_seconds (1, timeout_cb, loop);
g_main_loop_run (loop);
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
return 0;
}
]]>
<!-- example-end effectswitch.c -->
</programlisting>
<para>
Note how we added videoconvert elements before and after the effect.
This is needed because some elements might operate in different
colorspaces than other elements. By inserting the conversion elements
you ensure that the right format can be negotiated at any time.
</para>
</sect2>
</sect1>
</chapter>