mirror of
https://github.com/matthew1000/gstreamer-cheat-sheet.git
synced 2024-11-23 08:41:00 +00:00
Fix Markdown
This commit is contained in:
parent 8b06f2f834
commit 5bf6e94f79
13 changed files with 25 additions and 25 deletions
basics.md | 10
@ -8,7 +8,7 @@ These examples assume that the bash variable `SRC` is set to a video file (e.g. a
export SRC=/home/me/videos/test.mp4
```
### Play a video (with audio)
The magical element `playbin` can play anything:
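As an illustrative sketch (not part of the original page), the `playbin` one-liner can be built up like this; the path is an assumed placeholder for any local video file:

```shell
# Assumed placeholder path; point SRC at any local video file
SRC=/home/me/videos/test.mp4
CMD="gst-launch-1.0 playbin uri=file://$SRC"
# Printed rather than executed, so the sketch works without GStreamer installed
echo "$CMD"
```

Run the printed command verbatim on a machine with GStreamer installed.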
@ -40,13 +40,13 @@ which could also have been done as:
gst-launch-1.0 -v filesrc location="$SRC" ! decodebin ! autovideosink
```
### Play just the audio from a video
```
gst-launch-1.0 -v uridecodebin uri="file://$SRC" ! autoaudiosink
```

### Visualise the audio:

```
gst-launch-1.0 filesrc location=$SRC ! qtdemux name=demux demux.audio_0 ! queue ! decodebin ! audioconvert ! wavescope ! autovideosink
@ -70,7 +70,7 @@ gst-launch-1.0 -v filesrc location="$SRC" ! decodebin ! videoconvert ! vertigotv
Try also ‘rippletv’, ‘streaktv’, ‘radioactv’, ‘optv’, ‘quarktv’, ‘revtv’, ‘shagadelictv’, ‘warptv’ (I like), ‘dicetv’, ‘agingtv’ (great), ‘edgetv’ (could be great on real stuff)
### Add a clock
```
gst-launch-1.0 -v filesrc location="$SRC" ! decodebin ! clockoverlay font-desc="Sans, 48" ! videoconvert ! autovideosink
@ -113,7 +113,7 @@ gst-launch-1.0 filesrc location=$SRC ! \
autovideosink
```
### Play an MP3 audio file
Set the environment variable `$AUDIO_SRC` to be the location of the MP3 file. Then:
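The command itself is elided by this diff; a plausible sketch, assuming `playbin` handles an MP3 the same way it handles video (the path is a placeholder):

```shell
# Assumed placeholder; point AUDIO_SRC at any local MP3 file
AUDIO_SRC=/home/me/audio/test.mp3
CMD="gst-launch-1.0 playbin uri=file://$AUDIO_SRC"
echo "$CMD"   # printed rather than executed
```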
@ -46,7 +46,7 @@ The `pngenc` element can create a single PNG:
gst-launch-1.0 videotestsrc ! pngenc ! filesink location=foo.png
```
### Capture an image as JPEG
The `jpegenc` element can create a single JPEG:
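By analogy with the `pngenc` line above, a hedged sketch (output filename assumed; `num-buffers=1` stops the test source after one frame):

```shell
# Mirrors the pngenc example; foo.jpg is an assumed filename
CMD='gst-launch-1.0 videotestsrc num-buffers=1 ! jpegenc ! filesink location=foo.jpg'
echo "$CMD"   # printed rather than executed
```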
@ -40,7 +40,7 @@ You can also compile and install yourself - see (https://gstreamer.freedesktop.o
The GStreamer APT packages are excellent.
For a good install command, see `https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c`.
For a good install command, see (https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c).
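For orientation only, an install command in the spirit of that page might look like the following; the package names are assumptions for Debian/Ubuntu, so check the linked page for the authoritative list:

```shell
# Assumed Debian/Ubuntu package names; verify against the linked install page
PKGS="gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav"
echo "sudo apt-get install $PKGS"
```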
mixing.md | 10
@ -98,7 +98,7 @@ gst-launch-1.0 \
Use the `audiomixer` element to mix audio. It replaces the `adder` element, which struggles under some circumstances (according to the [GStreamer 1.14 release notes](https://gstreamer.freedesktop.org/releases/1.14/)).
### Mix two (or more) test audio streams
Here we use two different frequencies (tones):
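The pipeline itself is elided by the diff; a sketch of mixing two `audiotestsrc` tones with `audiomixer` (the frequencies are arbitrary choices):

```shell
# Two tones into one audiomixer; 400/600 Hz are assumed values
CMD='gst-launch-1.0 audiomixer name=mix ! audioconvert ! autoaudiosink audiotestsrc freq=400 ! mix. audiotestsrc freq=600 ! mix.'
echo "$CMD"   # printed rather than executed
```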
@ -110,7 +110,7 @@ gst-launch-1.0 \
```
### Mix two test streams, dynamically
[This Python example](python_examples/audio_dynamic_add.py) shows a dynamic equivalent of this example - the second test source is only mixed when the user presses Enter.
@ -127,7 +127,7 @@ gst-launch-1.0 \
```
### Mix a test stream with an MP3 file
Because the audio streams are from different sources, they must each be passed through `audioconvert`.
@ -139,7 +139,7 @@ gst-launch-1.0 \
```
## Mixing video & audio together
### Mix two fake video sources and two fake audio sources
@ -256,7 +256,7 @@ gst-launch-1.0 \
demux2. ! queue2 ! decodebin ! videoconvert ! videoscale ! video/x-raw,width=320,height=180 ! videomix.
```
## Fading
It's often nicer to fade between sources than to abruptly cut between them. This can be done with both video (temporarily blending using an alpha channel) and audio (lowering the volume of one whilst raising it on another).
It's not possible to do this on the command line: although `alpha` and `volume` can be set, they can only be set to discrete values.
@ -83,13 +83,13 @@ gst-launch-1.0 \
rtpmp2tdepay ! decodebin name=decoder ! autoaudiosink decoder. ! autovideosink
```
### How to receive with VLC
To receive a UDP stream, an `sdp` file is required. An example can be found at https://gist.github.com/nebgnahz/26a60cd28f671a8b7f522e80e75a9aa5
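For illustration, a minimal `sdp` file for an H.264 RTP stream might look like this; the port, payload type, and addresses are assumptions and must match the sending pipeline:

```shell
# Writes a hypothetical SDP description; values must match your sender
cat > stream.sdp <<'EOF'
v=0
o=- 0 0 IN IP4 127.0.0.1
s=GStreamer UDP stream
c=IN IP4 127.0.0.1
t=0 0
m=video 5000 RTP/AVP 96
a=rtpmap:96 H264/90000
EOF
echo "open stream.sdp with VLC to receive"
```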
## TCP
### Audio via TCP
To send a test stream:
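The command is elided by the diff; one hedged possibility is Ogg/Vorbis over `tcpserversink` (encoder, mux, and port are all assumptions):

```shell
# Assumed encoder/mux/port; any container the receiver understands would do
CMD='gst-launch-1.0 audiotestsrc ! audioconvert ! vorbisenc ! oggmux ! tcpserversink host=127.0.0.1 port=7001'
echo "$CMD"   # printed rather than executed
```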
@ -22,7 +22,7 @@ gst-launch-1.0 videotestsrc ! queue ! autovideosink
Queues add latency, so the general advice is not to add them unless you need them.
## Queue2
Confusingly, `queue2` is not a replacement for `queue`. It's not obvious when to use one or the other.
rtmp.md | 2
@ -144,7 +144,7 @@ gst-launch-1.0 videotestsrc is-live=true ! \
voaacenc bitrate=96000 ! audio/mpeg ! aacparse ! audio/mpeg, mpegversion=4 ! mux.
```
### Send a file over RTMP
Audio & video:
@ -77,7 +77,7 @@ I've used `proxysink` and `proxysrc` to split larger pipelines into smaller ones
Unlike _inter_ below, _proxy_ will keep timing in sync. This is great if it's what you want, but if you want pipelines to have their own timing, it might not be right for your needs.
### gstproxy documentation
* Introduced by the blog mentioned above (http://blog.nirbheek.in/2018/02/decoupling-gstreamer-pipelines.html)
* Example code on proxysrc here: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad-plugins/html/gst-plugins-bad-plugins-proxysrc.html
@ -108,7 +108,7 @@ A slightly more interesting example can be found at
* that when the video ends, the other pipelines continue.
## inter (intervideosink/intervideosrc and their audio & subtitle counterparts)
The 'inter' versions of 'proxy' are dumber. They don't attempt to sync timings. But this can be useful if you want pipelines to be more independent. (Pros and cons on this discussed [here](http://gstreamer-devel.966125.n4.nabble.com/How-to-connect-intervideosink-and-intervideosrc-for-IPC-pipelines-td4684567.html).)
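Since the two halves don't share timing, the _inter_ elements can even be demonstrated as two independent pipelines in a single launch line; this sketch uses an assumed channel name:

```shell
# 'demo' is an assumed channel name linking the two pipeline halves
CMD='gst-launch-1.0 videotestsrc ! intervideosink channel=demo intervideosrc channel=demo ! videoconvert ! autovideosink'
echo "$CMD"   # printed rather than executed
```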
srt.md | 2
@ -20,7 +20,7 @@ And create a receiver client like this:
gst-launch-1.0 -v srtclientsrc uri="srt://127.0.0.1:8888" ! decodebin ! autovideosink
```
## Server receiving the AV
To have the server receiving, rather than sending, swap 'srtclientsrc' for 'srtserversrc'.
Likewise, to have the client sending rather than receiving, swap 'srtserversink' for 'srtclientsink'.
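Putting both swaps together, a client-sends/server-receives pair might read as follows (URI, port, and encoder are assumptions):

```shell
# Assumed URI/encoder; the client pushes, the server pulls
SEND='gst-launch-1.0 videotestsrc ! x264enc ! mpegtsmux ! srtclientsink uri=srt://127.0.0.1:8888'
RECV='gst-launch-1.0 srtserversrc uri=srt://127.0.0.1:8888 ! decodebin ! autovideosink'
echo "$SEND"
echo "$RECV"
```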
tee.md | 6
@ -2,7 +2,7 @@
This page describes the `tee` element, which allows audio & video streams to be sent to more than one place.
## Tee to two local video outputs
Here's a simple example that shows the video test source twice (using `autovideosink`).
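As a sketch, the full pipeline presumably tees one test source into two identical branches of the kind this page uses elsewhere:

```shell
# Tee one test source into two identical display branches
CMD='gst-launch-1.0 videotestsrc ! tee name=t t. ! queue ! videoconvert ! autovideosink t. ! queue ! videoconvert ! autovideosink'
echo "$CMD"   # printed rather than executed
```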
@ -14,7 +14,7 @@ gst-launch-1.0 \
t. ! queue ! videoconvert ! autovideosink
```
## Tee to two different video outputs
Here's an example that sends video to both `autovideosink` and a TCP server (`tcpserversink`).
Note how `async=false` is required on both sinks, because the encoding step on the TCP branch takes longer, and so the timing will be different.
@ -44,7 +44,7 @@ gst-launch-1.0 videotestsrc ! \
t. ! queue ! x264enc ! mpegtsmux ! tcpserversink port=7001 host=127.0.0.1 recover-policy=keyframe sync-method=latest-keyframe
```
## Tee on inputs
You can also use `tee` in order to do multiple things with inputs. This example combines two audio visualisations:
@ -60,7 +60,7 @@ You can change the *volume* by setting the `volume` property between `0` and `1`
gst-launch-1.0 audiotestsrc volume=0.1 ! autoaudiosink
```
### White noise
Set `wave` to `white-noise`:
@ -70,7 +70,7 @@ gst-launch-1.0 audiotestsrc wave="white-noise" ! autoaudiosink
There are variations (e.g. _red noise_) - see the [docs](https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-audiotestsrc.html) for a complete list.
### Silence
If you need an audio stream with nothing in it:
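The command is elided by the diff; presumably it sets `wave=silence`, by analogy with the white-noise example above:

```shell
# wave=silence produces a valid audio stream containing silence
CMD='gst-launch-1.0 audiotestsrc wave=silence ! autoaudiosink'
echo "$CMD"   # printed rather than executed
```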
@ -5,7 +5,7 @@ The [`filesink`](https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstre
Note that, when using the command-line, the `-e` parameter ensures the output file is correctly completed on exit.
### Write to an mp4 file
This example creates a test video (animated ball moving, with clock), and writes it as an MP4 file.
Also added is an audio test source - a short beep every second.
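A sketch consistent with that description; the encoders and filename are assumptions, and `-e` sends EOS on exit so the MP4 is finalised properly:

```shell
# Assumed encoders (x264enc/voaacenc) and filename; -e finalises the file
CMD='gst-launch-1.0 -e videotestsrc pattern=ball ! clockoverlay ! x264enc ! mp4mux name=mux ! filesink location=out.mp4 audiotestsrc wave=ticks ! voaacenc ! mux.'
echo "$CMD"   # printed rather than executed
```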