docs: update element example pipelines

- gst-launch -> gst-launch-1.0
- use autoaudiosink and autovideosink more often
- review pipeline examples and descriptions
Tim-Philipp Müller 2015-05-09 22:33:26 +01:00
parent bf683b9a6c
commit ec5c93f169
35 changed files with 118 additions and 154 deletions

@@ -24,13 +24,13 @@
 * SECTION:element-alsasink
 * @see_also: alsasrc
 *
- * This element renders raw audio samples using the ALSA api.
+ * This element renders raw audio samples using the ALSA audio API.
 *
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v filesrc location=sine.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! alsasink
- * ]| Play an Ogg/Vorbis file.
+ * gst-launch-1.0 -v uridecodebin uri=file:///path/to/audio.ogg ! audioconvert ! audioresample ! autoaudiosink
+ * ]| Play an Ogg/Vorbis file and output audio via ALSA.
 * </refsect2>
 */

@@ -28,7 +28,7 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v alsasrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=alsasrc.ogg
+ * gst-launch-1.0 -v alsasrc ! queue ! audioconvert ! vorbisenc ! oggmux ! filesink location=alsasrc.ogg
 * ]| Record from a sound card using ALSA and encode to Ogg/Vorbis.
 * </refsect2>
 */

@@ -28,8 +28,8 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v filesrc location=test.ogg ! oggdemux ! vorbisdec ! audioconvert ! alsasink
- * ]| Decodes the vorbis audio stored inside an ogg container.
+ * gst-launch-1.0 -v filesrc location=test.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! autoaudiosink
+ * ]| Decodes a vorbis audio stream stored inside an ogg container and plays it.
 * </refsect2>
 */

@@ -22,55 +22,6 @@
 * Boston, MA 02110-1301, USA.
 */
-/**
- * SECTION:element-textoverlay
- * @see_also: #GstTextRender, #GstClockOverlay, #GstTimeOverlay, #GstSubParse
- *
- * This plugin renders text on top of a video stream. This can be either
- * static text or text from buffers received on the text sink pad, e.g.
- * as produced by the subparse element. If the text sink pad is not linked,
- * the text set via the "text" property will be rendered. If the text sink
- * pad is linked, text will be rendered as it is received on that pad,
- * honouring and matching the buffer timestamps of both input streams.
- *
- * The text can contain newline characters and text wrapping is enabled by
- * default.
- *
- * <refsect2>
- * <title>Example launch lines</title>
- * |[
- * gst-launch -v videotestsrc ! textoverlay text="Room A" valign=top halign=left ! xvimagesink
- * ]| Here is a simple pipeline that displays a static text in the top left
- * corner of the video picture
- * |[
- * gst-launch -v filesrc location=subtitles.srt ! subparse ! txt. videotestsrc ! timeoverlay ! textoverlay name=txt shaded-background=yes ! xvimagesink
- * ]| Here is another pipeline that displays subtitles from an .srt subtitle
- * file, centered at the bottom of the picture and with a rectangular shading
- * around the text in the background:
- * <para>
- * If you do not have such a subtitle file, create one looking like this
- * in a text editor:
- * |[
- * 1
- * 00:00:03,000 --> 00:00:05,000
- * Hello? (3-5s)
- *
- * 2
- * 00:00:08,000 --> 00:00:13,000
- * Yes, this is a subtitle. Don&apos;t
- * you like it? (8-13s)
- *
- * 3
- * 00:00:18,826 --> 00:01:02,886
- * Uh? What are you talking about?
- * I don&apos;t understand (18-62s)
- * ]|
- * </para>
- * </refsect2>
- */
+/* FIXME: alloc segment as part of instance struct */
 #ifdef HAVE_CONFIG_H
 #include <config.h>
 #endif

@@ -31,10 +31,10 @@
 * <refsect2>
 * <title>Example launch lines</title>
 * |[
- * gst-launch -v videotestsrc ! clockoverlay ! xvimagesink
- * ]| Display the current time in the top left corner of the video picture
+ * gst-launch-1.0 -v videotestsrc ! clockoverlay ! autovideosink
+ * ]| Display the current wall clock time in the top left corner of the video picture
 * |[
- * gst-launch -v videotestsrc ! clockoverlay halign=right valign=bottom text="Edge City" shaded-background=true ! videoconvert ! ximagesink
+ * gst-launch-1.0 -v videotestsrc ! clockoverlay halignment=right valignment=bottom text="Edge City" shaded-background=true font-desc="Sans, 36" ! videoconvert ! autovideosink
 * ]| Another pipeline that displays the current time with some leading
 * text in the bottom right corner of the video picture, with the background
 * of the text being shaded in order to make it more legible on top of a

@@ -40,11 +40,11 @@
 * <refsect2>
 * <title>Example launch lines</title>
 * |[
- * gst-launch -v videotestsrc ! textoverlay text="Room A" valign=top halign=left ! xvimagesink
+ * gst-launch-1.0 -v videotestsrc ! textoverlay text="Room A" valignment=top halignment=left font-desc="Sans, 72" ! autovideosink
 * ]| Here is a simple pipeline that displays a static text in the top left
 * corner of the video picture
 * |[
- * gst-launch -v filesrc location=subtitles.srt ! subparse ! txt. videotestsrc ! timeoverlay ! textoverlay name=txt shaded-background=yes ! xvimagesink
+ * gst-launch-1.0 -v filesrc location=subtitles.srt ! subparse ! txt. videotestsrc ! timeoverlay ! textoverlay name=txt shaded-background=yes ! autovideosink
 * ]| Here is another pipeline that displays subtitles from an .srt subtitle
 * file, centered at the bottom of the picture and with a rectangular shading
 * around the text in the background:
@@ -66,15 +66,6 @@
 * Uh? What are you talking about?
 * I don&apos;t understand (18-62s)
 * ]|
- * One can also feed arbitrary live text into the element:
- * |[
- * gst-launch fdsrc fd=0 ! text/x-raw,format=utf8 ! txt. videotestsrc ! \
- * textoverlay name=txt shaded-background=yes font-desc="Serif 40" wait-text=false ! \
- * xvimagesink
- * ]| This shows new text as entered on the terminal (stdin). This is not suited
- * for subtitles as the test overlay is not timed. Subtitles should use
- * timestamped formats. For the above use case one can also read the text from
- * the application as set the #GstTextOverlay:text property.
 * </para>
 * </refsect2>
 */

@@ -34,7 +34,7 @@
 * <refsect2>
 * <title>Example launch lines</title>
 * |[
- * gst-launch -v filesrc location=subtitles.srt ! subparse ! textrender ! xvimagesink
+ * gst-launch-1.0 -v filesrc location=subtitles.srt ! subparse ! textrender ! videoconvert ! autovideosink
 * ]|
 * </refsect2>
 */

@@ -30,11 +30,10 @@
 *
 * <refsect2>
 * |[
- * gst-launch -v videotestsrc ! timeoverlay ! xvimagesink
- * ]| Display the time stamps in the top left
- * corner of the video picture.
+ * gst-launch-1.0 -v videotestsrc ! timeoverlay ! autovideosink
+ * ]| Display the time stamps in the top left corner of the video picture.
 * |[
- * gst-launch -v videotestsrc ! timeoverlay halign=right valign=bottom text="Stream time:" shaded-background=true ! xvimagesink
+ * gst-launch-1.0 -v videotestsrc ! timeoverlay halignment=right valignment=bottom text="Stream time:" shaded-background=true font-desc="Sans, 24" ! autovideosink
 * ]| Another pipeline that displays the time stamps with some leading
 * text in the bottom right corner of the video picture, with the background
 * of the text being shaded in order to make it more legible on top of a

@@ -32,9 +32,9 @@
 * <refsect2>
 * <title>Example pipeline</title>
 * |[
- * gst-launch -v filesrc location=videotestsrc.ogg ! oggdemux ! theoradec ! xvimagesink
- * ]| This example pipeline will decode an ogg stream and decodes the theora video. Refer to
- * the theoraenc example to create the ogg file.
+ * gst-launch-1.0 -v filesrc location=videotestsrc.ogg ! oggdemux ! theoradec ! videoconvert ! videoscale ! autovideosink
+ * ]| This example pipeline will demux an ogg stream and decode the theora video in it.
+ * Refer to the theoraenc example to create the ogg file.
 * </refsect2>
 */

@@ -45,7 +45,7 @@
 * <refsect2>
 * <title>Example pipeline</title>
 * |[
- * gst-launch -v videotestsrc num-buffers=1000 ! theoraenc ! oggmux ! filesink location=videotestsrc.ogg
+ * gst-launch-1.0 -v videotestsrc num-buffers=500 ! video/x-raw,width=1280,height=720 ! queue ! progressreport ! theoraenc ! oggmux ! filesink location=videotestsrc.ogg
 * ]| This example pipeline will encode a test video source to theora muxed in an
 * ogg container. Refer to the theoradec documentation to decode the create
 * stream.

@@ -43,11 +43,11 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v filesrc location=video.ogg ! oggdemux ! theoraparse ! fakesink
+ * gst-launch-1.0 -v filesrc location=video.ogg ! oggdemux ! theoraparse ! fakesink
 * ]| This pipeline shows that the streamheader is set in the caps, and that each
 * buffer has the timestamp, duration, offset, and offset_end set.
 * |[
- * gst-launch filesrc location=video.ogg ! oggdemux ! theoraparse \
+ * gst-launch-1.0 filesrc location=video.ogg ! oggdemux ! theoraparse \
 * ! oggmux ! filesink location=video-remuxed.ogg
 * ]| This pipeline shows remuxing. video-remuxed.ogg might not be exactly the same
 * as video.ogg, but they should produce exactly the same decoded data.

@@ -24,12 +24,14 @@
 * This element decodes a Vorbis stream to raw float audio.
 * <ulink url="http://www.vorbis.com/">Vorbis</ulink> is a royalty-free
 * audio codec maintained by the <ulink url="http://www.xiph.org/">Xiph.org
- * Foundation</ulink>.
+ * Foundation</ulink>. As it outputs raw float audio you will often need to
+ * put an audioconvert element after it.
+ *
 *
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v filesrc location=sine.ogg ! oggdemux ! vorbisdec ! audioconvert ! alsasink
+ * gst-launch-1.0 -v filesrc location=sine.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! autoaudiosink
 * ]| Decode an Ogg/Vorbis. To create an Ogg/Vorbis file refer to the documentation of vorbisenc.
 * </refsect2>
 */

@@ -29,12 +29,12 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v audiotestsrc wave=sine num-buffers=100 ! audioconvert ! vorbisenc ! oggmux ! filesink location=sine.ogg
+ * gst-launch-1.0 -v audiotestsrc wave=sine num-buffers=100 ! audioconvert ! vorbisenc ! oggmux ! filesink location=sine.ogg
 * ]| Encode a test sine signal to Ogg/Vorbis. Note that the resulting file
 * will be really small because a sine signal compresses very well.
 * |[
- * gst-launch -v alsasrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=alsasrc.ogg
- * ]| Record from a sound card using ALSA and encode to Ogg/Vorbis.
+ * gst-launch-1.0 -v autoaudiosrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=alsasrc.ogg
+ * ]| Record from a sound card and encode to Ogg/Vorbis.
 * </refsect2>
 */
 #ifdef HAVE_CONFIG_H

@@ -36,11 +36,11 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v filesrc location=sine.ogg ! oggdemux ! vorbisparse ! fakesink
+ * gst-launch-1.0 -v filesrc location=sine.ogg ! oggdemux ! vorbisparse ! fakesink
 * ]| This pipeline shows that the streamheader is set in the caps, and that each
 * buffer has the timestamp, duration, offset, and offset_end set.
 * |[
- * gst-launch filesrc location=sine.ogg ! oggdemux ! vorbisparse \
+ * gst-launch-1.0 filesrc location=sine.ogg ! oggdemux ! vorbisparse \
 * ! oggmux ! filesink location=sine-remuxed.ogg
 * ]| This pipeline shows remuxing. sine-remuxed.ogg might not be exactly the same
 * as sine.ogg, but they should produce exactly the same decoded data.

@@ -37,8 +37,8 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v filesrc location=foo.ogg ! oggdemux ! vorbistag ! oggmux ! filesink location=bar.ogg
- * ]| This element is not useful with gst-launch, because it does not support
+ * gst-launch-1.0 -v filesrc location=foo.ogg ! oggdemux ! vorbistag ! oggmux ! filesink location=bar.ogg
+ * ]| This element is not useful with gst-launch-1.0, because it does not support
 * setting the tags on a #GstTagSetter interface. Conceptually, the element
 * will usually be used in this order though.
 * </refsect2>

@@ -29,10 +29,14 @@
 * The adder currently mixes all data received on the sinkpads as soon as
 * possible without trying to synchronize the streams.
 *
+ * Check out the audiomixer element in gst-plugins-bad for a better-behaving
+ * audio mixing element: It will sync input streams correctly and also handle
+ * live inputs properly.
+ *
 * <refsect2>
 * <title>Example launch line</title>
 * |[
- * gst-launch audiotestsrc freq=100 ! adder name=mix ! audioconvert ! alsasink audiotestsrc freq=500 ! mix.
+ * gst-launch-1.0 audiotestsrc freq=100 ! adder name=mix ! audioconvert ! autoaudiosink audiotestsrc freq=500 ! mix.
 * ]| This pipeline produces two sine waves mixed together.
 * </refsect2>
 */
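
For comparison, the same two-tone mix can be run through the audiomixer element that the text above recommends; this is a sketch and assumes audiomixer (from gst-plugins-bad) is installed:

  # audiomixer is used as a drop-in replacement for adder here; element availability is assumed
  gst-launch-1.0 audiotestsrc freq=100 ! audiomixer name=mix ! audioconvert ! autoaudiosink audiotestsrc freq=500 ! mix.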

@@ -26,18 +26,21 @@
 *
 * Audioconvert converts raw audio buffers between various possible formats.
 * It supports integer to float conversion, width/depth conversion,
- * signedness and endianness conversion and channel transformations.
+ * signedness and endianness conversion and channel transformations
+ * (ie. upmixing and downmixing), as well as dithering and noise-shaping.
 *
 * <refsect2>
 * <title>Example launch line</title>
 * |[
- * gst-launch -v -m audiotestsrc ! audioconvert ! audio/x-raw,format=S8,channels=2 ! level ! fakesink silent=TRUE
+ * gst-launch-1.0 -v -m audiotestsrc ! audioconvert ! audio/x-raw,format=S8,channels=2 ! level ! fakesink silent=TRUE
 * ]| This pipeline converts audio to 8-bit. The level element shows that
 * the output levels still match the one for a sine wave.
 * |[
- * gst-launch -v -m audiotestsrc ! audioconvert ! vorbisenc ! fakesink silent=TRUE
+ * gst-launch-1.0 -v -m uridecodebin uri=file:///path/to/audio.flac ! audioconvert ! vorbisenc ! oggmux ! filesink location=audio.ogg
 * ]| The vorbis encoder takes float audio data instead of the integer data
- * generated by audiotestsrc.
+ * output by most other audio elements. This pipeline decodes a FLAC audio file
+ * (or any other audio file for which decoders are installed) and re-encodes
+ * it into an Ogg/Vorbis audio file.
 * </refsect2>
 */
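
As a further illustration of the channel transformations mentioned above, a stereo-to-mono downmix can be forced with a capsfilter after audioconvert. This is a sketch: the file path is a placeholder and the selected audio sink is assumed to accept mono audio.

  # /path/to/stereo.ogg is a placeholder for any stereo audio file with an installed decoder
  gst-launch-1.0 uridecodebin uri=file:///path/to/stereo.ogg ! audioconvert ! audio/x-raw,channels=1 ! autoaudiosink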

@@ -51,9 +51,15 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v alsasrc ! audiorate ! wavenc ! filesink location=alsa.wav
- * ]| Capture audio from an ALSA device, and turn it into a perfect stream
+ * gst-launch-1.0 -v autoaudiosrc ! audiorate ! audioconvert ! wavenc ! filesink location=alsa.wav
+ * ]| Capture audio from the sound card and turn it into a perfect stream
 * for saving in a raw audio file.
+ * |[
+ * gst-launch-1.0 -v uridecodebin uri=file:///path/to/audio.file ! audiorate ! audioconvert ! wavenc ! filesink location=alsa.wav
+ * ]| Decodes an audio file and transforms it into a perfect stream for saving
+ * in a raw audio WAV file. Without the audio rate, the timing might not be
+ * preserved correctly in the WAV file in case the decoded stream is jittery
+ * or there are samples missing.
 * </refsect2>
 */

@@ -36,9 +36,10 @@
 * <refsect2>
 * <title>Example launch line</title>
 * |[
- * gst-launch -v filesrc location=sine.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! audio/x-raw, rate=8000 ! alsasink
- * ]| Decode an Ogg/Vorbis downsample to 8Khz and play sound through alsa.
+ * gst-launch-1.0 -v uridecodebin uri=file:///path/to/audio.ogg ! audioconvert ! audioresample ! audio/x-raw, rate=8000 ! autoaudiosink
+ * ]| Decode an audio file and downsample it to 8Khz and play sound.
 * To create the Ogg/Vorbis file refer to the documentation of vorbisenc.
+ * This assumes there is an audio sink that will accept/handle 8kHz audio.
 * </refsect2>
 */

@@ -25,11 +25,11 @@
 * <refsect2>
 * <title>Example launch line</title>
 * |[
- * gst-launch audiotestsrc ! audioconvert ! alsasink
+ * gst-launch-1.0 audiotestsrc ! audioconvert ! autoaudiosink
 * ]| This pipeline produces a sine with default frequency, 440 Hz, and the
 * default volume, 0.8 (relative to a maximum 1.0).
 * |[
- * gst-launch audiotestsrc wave=2 freq=200 ! audioconvert ! tee name=t ! queue ! alsasink t. ! queue ! libvisual_lv_scope ! videoconvert ! xvimagesink
+ * gst-launch-1.0 audiotestsrc wave=2 freq=200 ! tee name=t ! queue ! audioconvert ! autoaudiosink t. ! queue ! audioconvert ! libvisual_lv_scope ! videoconvert ! autovideosink
 * ]| In this example a saw wave is generated. The wave is shown using a
 * scope visualizer from libvisual, allowing you to visually verify that
 * the saw wave is correct.

@@ -48,15 +48,15 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v filesrc location=input.xyz ! giosink location=file:///home/joe/out.xyz
+ * gst-launch-1.0 -v filesrc location=input.xyz ! giosink location=file:///home/joe/out.xyz
 * ]| The above pipeline will simply copy a local file. Instead of giosink,
 * we could just as well have used the filesink element here.
 * |[
- * gst-launch -v filesrc location=foo.mp3 ! mad ! flacenc ! giosink location=smb://othercomputer/foo.flac
- * ]| The above pipeline will re-encode an mp3 file into FLAC format and store
+ * gst-launch-1.0 -v uridecodebin uri=file:///path/to/audio.file ! audioconvert ! flacenc ! giosink location=smb://othercomputer/foo.flac
+ * ]| The above pipeline will re-encode an audio file into FLAC format and store
 * it on a remote host using the Samba protocol.
 * |[
- * gst-launch -v audiotestsrc num-buffers=100 ! vorbisenc ! oggmux ! giosink location=file:///home/foo/bar.ogg
+ * gst-launch-1.0 -v audiotestsrc num-buffers=100 ! vorbisenc ! oggmux ! giosink location=file:///home/foo/bar.ogg
 * ]| The above pipeline will encode a 440Hz sine wave to Ogg Vorbis and stores
 * it in the home directory of user foo.
 * </refsect2>
@@ -65,7 +65,7 @@
 /* FIXME: We would like to mount the enclosing volume of an URL
 * if it isn't mounted yet but this is possible async-only.
 * Unfortunately this requires a running main loop from the
- * default context and we can't guarantuee this!
+ * default context and we can't guarantee this!
 *
 * We would also like to do authentication while mounting.
 */

@@ -43,18 +43,18 @@
 * <refsect2>
 * <title>Example launch lines</title>
 * |[
- * gst-launch -v giosrc location=file:///home/joe/foo.xyz ! fakesink
+ * gst-launch-1.0 -v giosrc location=file:///home/joe/foo.xyz ! fakesink
 * ]| The above pipeline will simply read a local file and do nothing with the
 * data read. Instead of giosrc, we could just as well have used the
 * filesrc element here.
 * |[
- * gst-launch -v giosrc location=smb://othercomputer/foo.xyz ! filesink location=/home/joe/foo.xyz
+ * gst-launch-1.0 -v giosrc location=smb://othercomputer/foo.xyz ! filesink location=/home/joe/foo.xyz
 * ]| The above pipeline will copy a file from a remote host to the local file
 * system using the Samba protocol.
 * |[
- * gst-launch -v giosrc location=http://music.foobar.com/demo.mp3 ! mad ! audioconvert ! audioresample ! alsasink
+ * gst-launch-1.0 -v giosrc location=smb://othercomputer/demo.mp3 ! decodebin ! audioconvert ! audioresample ! autoaudiosink
 * ]| The above pipeline will read and decode and play an mp3 file from a
- * web server using the http protocol.
+ * SAMBA server.
 * </refsect2>
 */

@@ -199,18 +199,18 @@
 * <refsect2>
 * <title>Examples</title>
 * |[
- * gst-launch -v playbin uri=file:///path/to/somefile.avi
+ * gst-launch-1.0 -v playbin uri=file:///path/to/somefile.mp4
 * ]| This will play back the given AVI video file, given that the video and
 * audio decoders required to decode the content are installed. Since no
- * special audio sink or video sink is supplied (not possible via gst-launch),
- * playbin will try to find a suitable audio and video sink automatically
- * using the autoaudiosink and autovideosink elements.
+ * special audio sink or video sink is supplied (via playbin's audio-sink or
+ * video-sink properties) playbin will try to find a suitable audio and
+ * video sink automatically using the autoaudiosink and autovideosink elements.
 * |[
- * gst-launch -v playbin uri=cdda://4
+ * gst-launch-1.0 -v playbin uri=cdda://4
 * ]| This will play back track 4 on an audio CD in your disc drive (assuming
 * the drive is detected automatically by the plugin).
 * |[
- * gst-launch -v playbin uri=dvd://
+ * gst-launch-1.0 -v playbin uri=dvd://
 * ]| This will play back the DVD in your disc drive (assuming
 * the drive is detected automatically by the plugin).
 * </refsect2>
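
The audio-sink and video-sink properties mentioned above can also be set from the command line; a sketch that pins specific sinks (the sink element names and the file path are only examples, any compatible sink element should work):

  # xvimagesink/alsasink are example sinks; the URI is a placeholder
  gst-launch-1.0 playbin uri=file:///path/to/somefile.mp4 video-sink=xvimagesink audio-sink=alsasink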

@@ -29,8 +29,8 @@
 * <refsect2>
 * <title>Examples</title>
 * |[
- * gst-launch -v filesrc location=test.mkv ! matroskademux name=demux ! "video/x-h264" ! queue2 ! decodebin ! subtitleoverlay name=overlay ! videoconvert ! autovideosink demux. ! "subpicture/x-dvd" ! queue2 ! overlay.
- * ]| This will play back the given Matroska file with h264 video and subpicture subtitles.
+ * gst-launch-1.0 -v filesrc location=test.mkv ! matroskademux name=demux ! video/x-h264 ! queue ! decodebin ! subtitleoverlay name=overlay ! videoconvert ! autovideosink demux. ! subpicture/x-dvd ! queue ! overlay.
+ * ]| This will play back the given Matroska file with h264 video and dvd subpicture style subtitles.
 * </refsect2>
 */
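
subtitleoverlay is also what playbin uses internally to render subtitles; for the common case of a video file plus an external subtitle file, a playbin launch line with the suburi property usually gives the same result without building the pipeline by hand (a sketch, both URIs are placeholders):

  # movie and subtitle URIs are placeholders
  gst-launch-1.0 playbin uri=file:///path/to/movie.mkv suburi=file:///path/to/subtitles.srt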

@@ -30,8 +30,9 @@
 * # server:
 * nc -l -p 3000
 * # client:
- * gst-launch fdsrc fd=1 ! tcpclientsink port=3000
- * ]| everything you type in the client is shown on the server
+ * gst-launch-1.0 fdsrc fd=1 ! tcpclientsink port=3000
+ * ]| everything you type in the client is shown on the server (fd=1 means
+ * standard input which is the command line input file descriptor)
 * </refsect2>
 */

@@ -30,7 +30,7 @@
 * # server:
 * nc -l -p 3000
 * # client:
- * gst-launch tcpclientsrc port=3000 ! fdsink fd=2
+ * gst-launch-1.0 tcpclientsrc port=3000 ! fdsink fd=2
 * ]| everything you type in the server is shown on the client
 * </refsect2>
 */

@@ -26,9 +26,9 @@
 * <title>Example launch line</title>
 * |[
 * # server:
- * gst-launch fdsrc fd=1 ! tcpserversink port=3000
+ * gst-launch-1.0 fdsrc fd=1 ! tcpserversink port=3000
 * # client:
- * gst-launch tcpclientsrc port=3000 ! fdsink fd=2
+ * gst-launch-1.0 tcpclientsrc port=3000 ! fdsink fd=2
 * ]|
 * </refsect2>
 */

@@ -28,9 +28,9 @@
 * <title>Example launch line</title>
 * |[
 * # server:
- * gst-launch tcpserversrc port=3000 ! fdsink fd=2
+ * gst-launch-1.0 tcpserversrc port=3000 ! fdsink fd=2
 * # client:
- * gst-launch fdsrc fd=1 ! tcpclientsink port=3000
+ * gst-launch-1.0 fdsrc fd=1 ! tcpclientsink port=3000
 * ]|
 * </refsect2>
 */

@@ -28,8 +28,10 @@
 * <refsect2>
 * <title>Example launch line</title>
 * |[
- * gst-launch -v videotestsrc ! video/x-raw,format=\(string\)YUY2 ! videoconvert ! ximagesink
- * ]|
+ * gst-launch-1.0 -v videotestsrc ! video/x-raw,format=YUY2 ! videoconvert ! autovideosink
+ * ]| This will output a test video (generated in YUY2 format) in a video
+ * window. If the video sink selected does not support YUY2 videoconvert will
+ * automatically convert the video to a format understood by the video sink.
 * </refsect2>
 */

@@ -55,13 +55,16 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v filesrc location=videotestsrc.ogg ! oggdemux ! theoradec ! videorate ! video/x-raw,framerate=15/1 ! xvimagesink
- * ]| Decode an Ogg/Theora file and adjust the framerate to 15 fps before playing.
- * To create the test Ogg/Theora file refer to the documentation of theoraenc.
+ * gst-launch-1.0 -v uridecodebin uri=file:///path/to/video.ogg ! videoconvert ! videoscale ! videorate ! video/x-raw,framerate=15/1 ! autovideosink
+ * ]| Decode a video file and adjust the framerate to 15 fps before playing.
+ * To create a test Ogg/Theora file refer to the documentation of theoraenc.
 * |[
- * gst-launch -v v4l2src ! videorate ! video/x-raw,framerate=25/2 ! theoraenc ! oggmux ! filesink location=recording.ogg
+ * gst-launch-1.0 -v v4l2src ! videorate ! video/x-raw,framerate=25/2 ! theoraenc ! oggmux ! filesink location=recording.ogg
 * ]| Capture video from a V4L device, and adjust the stream to 12.5 fps before
 * encoding to Ogg/Theora.
+ * |[
+ * gst-launch-1.0 -v uridecodebin uri=file:///path/to/video.ogg ! videoconvert ! videoscale ! videorate ! video/x-raw,framerate=1/5 ! jpegenc ! multifilesink location=snapshot-%05d.jpg
+ * ]| Decode a video file and save a snapshot every 5 seconds as consecutively numbered jpeg files.
 * </refsect2>
 */

@@ -34,15 +34,14 @@
 * <refsect2>
 * <title>Example pipelines</title>
 * |[
- * gst-launch -v filesrc location=videotestsrc.ogg ! oggdemux ! theoradec ! videoconvert ! videoscale ! ximagesink
- * ]| Decode an Ogg/Theora and display the video using ximagesink. Since
- * ximagesink cannot perform scaling, the video scaling will be performed by
- * videoscale when you resize the video window.
+ * gst-launch-1.0 -v filesrc location=videotestsrc.ogg ! oggdemux ! theoradec ! videoconvert ! videoscale ! autovideosink
+ * ]| Decode an Ogg/Theora and display the video. If the video sink chosen
+ * cannot perform scaling, the video scaling will be performed by videoscale
+ * when you resize the video window.
 * To create the test Ogg/Theora file refer to the documentation of theoraenc.
 * |[
- * gst-launch -v filesrc location=videotestsrc.ogg ! oggdemux ! theoradec ! videoscale ! video/x-raw, width=50 ! xvimagesink
- * ]| Decode an Ogg/Theora and display the video using xvimagesink with a width
- * of 50.
+ * gst-launch-1.0 -v filesrc location=videotestsrc.ogg ! oggdemux ! theoradec ! videoconvert ! videoscale ! video/x-raw,width=100 ! autovideosink
+ * ]| Decode an Ogg/Theora and display the video with a width of 100.
 * </refsect2>
 */

@@ -28,8 +28,8 @@
 * <refsect2>
 * <title>Example launch line</title>
 * |[
- * gst-launch -v videotestsrc pattern=snow ! ximagesink
- * ]| Shows random noise in an X window.
+ * gst-launch-1.0 -v videotestsrc pattern=snow ! video/x-raw,width=1280,height=720 ! autovideosink
+ * ]| Shows random noise in a video window.
 * </refsect2>
 */

@@ -30,7 +30,7 @@
 * <refsect2>
 * <title>Example launch line</title>
 * |[
- * gst-launch -v -m audiotestsrc ! volume volume=0.5 ! level ! fakesink silent=TRUE
+ * gst-launch-1.0 -v -m audiotestsrc ! volume volume=0.5 ! level ! fakesink silent=TRUE
 * ]| This pipeline shows that the level of audiotestsrc has been halved
 * (peak values are around -6 dB and RMS around -9 dB) compared to
 * the same pipeline without the volume element.

@@ -74,14 +74,14 @@
 * <refsect2>
 * <title>Examples</title>
 * |[
- * gst-launch -v videotestsrc ! queue ! ximagesink
+ * gst-launch-1.0 -v videotestsrc ! queue ! ximagesink
 * ]| A pipeline to test reverse negotiation. When the test video signal appears
 * you can resize the window and see that scaled buffers of the desired size are
 * going to arrive with a short delay. This illustrates how buffers of desired
 * size are allocated along the way. If you take away the queue, scaling will
 * happen almost immediately.
 * |[
- * gst-launch -v videotestsrc ! navigationtest ! videoconvert ! ximagesink
+ * gst-launch-1.0 -v videotestsrc ! navigationtest ! videoconvert ! ximagesink
 * ]| A pipeline to test navigation events.
 * While moving the mouse pointer over the test signal you will see a black box
 * following the mouse pointer. If you press the mouse button somewhere on the
@@ -89,7 +89,7 @@
 * the button and a red one where you released it. (The navigationtest element
 * is part of gst-plugins-good.)
 * |[
- * gst-launch -v videotestsrc ! video/x-raw, pixel-aspect-ratio=(fraction)4/3 ! videoscale ! ximagesink
+ * gst-launch-1.0 -v videotestsrc ! video/x-raw, pixel-aspect-ratio=(fraction)4/3 ! videoscale ! ximagesink
 * ]| This is faking a 4/3 pixel aspect ratio caps on video frames produced by
 * videotestsrc, in most cases the pixel aspect ratio of the display will be
 * 1/1. This means that videoscale will have to do the scaling to convert

@@ -73,17 +73,20 @@
 * <refsect2>
 * <title>Examples</title>
 * |[
- * gst-launch -v videotestsrc ! xvimagesink
+ * gst-launch-1.0 -v videotestsrc ! xvimagesink
 * ]| A pipeline to test hardware scaling.
 * When the test video signal appears you can resize the window and see that
- * video frames are scaled through hardware (no extra CPU cost).
+ * video frames are scaled through hardware (no extra CPU cost). By default
+ * the image will never be distorted when scaled, instead black borders will
+ * be added if needed.
 * |[
- * gst-launch -v videotestsrc ! xvimagesink force-aspect-ratio=true
- * ]| Same pipeline with #GstXvImageSink:force-aspect-ratio property set to true
- * You can observe the borders drawn around the scaled image respecting aspect
- * ratio.
+ * gst-launch-1.0 -v videotestsrc ! xvimagesink force-aspect-ratio=false
+ * ]| Same pipeline with #GstXvImageSink:force-aspect-ratio property set to
+ * false. You can observe that no borders are drawn around the scaled image
+ * now and it will be distorted to fill the entire frame instead of respecting
+ * the aspect ratio.
 * |[
- * gst-launch -v videotestsrc ! navigationtest ! xvimagesink
+ * gst-launch-1.0 -v videotestsrc ! navigationtest ! xvimagesink
 * ]| A pipeline to test navigation events.
 * While moving the mouse pointer over the test signal you will see a black box
 * following the mouse pointer. If you press the mouse button somewhere on the
@@ -95,15 +98,14 @@
 * position. This also handles borders correctly, limiting coordinates to the
 * image area
 * |[
- * gst-launch -v videotestsrc ! video/x-raw, pixel-aspect-ratio=(fraction)4/3 ! xvimagesink
+ * gst-launch-1.0 -v videotestsrc ! video/x-raw, pixel-aspect-ratio=4/3 ! xvimagesink
 * ]| This is faking a 4/3 pixel aspect ratio caps on video frames produced by
 * videotestsrc, in most cases the pixel aspect ratio of the display will be
 * 1/1. This means that XvImageSink will have to do the scaling to convert
 * incoming frames to a size that will match the display pixel aspect ratio
- * (from 320x240 to 320x180 in this case). Note that you might have to escape
- * some characters for your shell like '\(fraction\)'.
+ * (from 320x240 to 320x180 in this case).
 * |[
- * gst-launch -v videotestsrc ! xvimagesink hue=100 saturation=-100 brightness=100
+ * gst-launch-1.0 -v videotestsrc ! xvimagesink hue=100 saturation=-100 brightness=100
 * ]| Demonstrates how to use the colorbalance interface.
 * </refsect2>
 */