Measures the audio latency between the source pad and the sink pad by
outputting periodic ticks on the source pad and measuring how long they
take to arrive on the sink pad.
Very useful for quantifying latency improvements in audio pipelines.
This plugin was particularly useful during development of the
low-latency features of the wasapi plugin.
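A minimal usage sketch (the pipeline below and the print-latency
property are assumptions for illustration, not part of this change):

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *pipeline;

    gst_init (&argc, &argv);
    /* Ticks produced on audiolatency's source pad go out through the
     * speakers; the microphone feeds them back into its sink pad,
     * closing the measurement loop. */
    pipeline = gst_parse_launch (
        "autoaudiosrc ! audiolatency print-latency=true ! autoaudiosink",
        NULL);
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_main_loop_run (g_main_loop_new (NULL, FALSE));
    return 0;
  }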
https://bugzilla.gnome.org/show_bug.cgi?id=793839
This keep-it-simple plugin is useful when you want to pipe arbitrary
data to a different pipeline within the same process. Some advantages
over appsink/appsrc, the inter elements, etc:
* Ease of use. Buffers, events, and caps are transmitted as-is without
copying or serialization.
* Enables zerocopy (especially DMABUF) transparently without any
special-casing.
* Enables usage with sinks or elements that are unreliable and may
throw errors and need re-initialization, such as a network sink, a
USB device sink (v4l2), etc.
* Transmits arbitrary data, not just audio/video/subs
* Can easily implement 1 producer pipeline -> N dynamic consumer
pipelines within a single process when combined with the `tee`
element.
All queries, events, buffers, and buffer lists are proxied. State
changes, clocks, and base times for the two pipelines are independent,
since the upstream and downstream halves remain separate pipelines.
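A rough sketch of the intended wiring; the element choices around
proxysink/proxysrc are just an example, the link is made through the
proxysrc "proxysink" property:

  GstElement *pipe_a, *pipe_b, *psink, *psrc;

  /* Producer and consumer live in the same process but in separate
   * pipelines. */
  pipe_a = gst_parse_launch (
      "audiotestsrc is-live=true ! proxysink name=psink", NULL);
  pipe_b = gst_parse_launch ("proxysrc name=psrc ! autoaudiosink", NULL);

  psink = gst_bin_get_by_name (GST_BIN (pipe_a), "psink");
  psrc = gst_bin_get_by_name (GST_BIN (pipe_b), "psrc");
  /* Link the two pipelines by pointing proxysrc at the proxysink. */
  g_object_set (psrc, "proxysink", psink, NULL);

  /* Each pipeline keeps its own state, clock and base time. */
  gst_element_set_state (pipe_b, GST_STATE_PLAYING);
  gst_element_set_state (pipe_a, GST_STATE_PLAYING);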
https://bugzilla.gnome.org/show_bug.cgi?id=788200
SDPs are generated and consumed according to the W3C PeerConnection API
available from https://www.w3.org/TR/webrtc/
The SDP is either created initially from the connected sink pads/attached
transceivers, as in the case of generating an offer, or intersected with
the connected sink pads/attached transceivers, as in the case of creating
an answer. In both cases, the RTP-payloaded streams sent by the peer are
exposed as separate src pads.
The implementation supports trickle ICE, RTCP muxing, and reduced-size RTCP.
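As an illustration, a rough sketch of the offer side driven by
webrtcbin's signals (error handling and the application's own
signalling channel are omitted):

  #include <gst/gst.h>
  #define GST_USE_UNSTABLE_API
  #include <gst/webrtc/webrtc.h>

  static void
  on_offer_created (GstPromise * promise, GstElement * webrtc)
  {
    GstWebRTCSessionDescription *offer = NULL;
    const GstStructure *reply = gst_promise_get_reply (promise);

    gst_structure_get (reply, "offer",
        GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &offer, NULL);
    gst_promise_unref (promise);

    /* Apply the offer locally, then send offer->sdp to the peer over
     * the application's signalling channel. */
    g_signal_emit_by_name (webrtc, "set-local-description", offer, NULL);
    gst_webrtc_session_description_free (offer);
  }

  static void
  on_negotiation_needed (GstElement * webrtc, gpointer user_data)
  {
    GstPromise *promise = gst_promise_new_with_change_func (
        (GstPromiseChangeFunc) on_offer_created, webrtc, NULL);

    g_signal_emit_by_name (webrtc, "create-offer", NULL, promise);
  }

  /* Hooked up with:
   * g_signal_connect (webrtc, "on-negotiation-needed",
   *     G_CALLBACK (on_negotiation_needed), NULL); */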
With contributions from:
Nirbheek Chauhan <nirbheek@centricular.com>
Mathieu Duponchelle <mathieu@centricular.com>
Edward Hervey <edward@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=792523
For libsrtp 1, add defines that translate the new namespaced identifiers
to the old unnamespaced ones. Also move the code for setting and getting
a stream's ROC into two compat functions that match libsrtp2's API.
It seems that libsrtp2 properly supports changing the ROC without having
to touch the sequence numbers afterwards, given that srtp_set_stream_roc
sets a pending_roc field, so the entire roc_changed dance should not be
needed anymore. The compat functions for libsrtp 1 just contain our
pre-existing hacks, however, so the dance is still needed there.
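For illustration, a compressed sketch of what the compat layer looks
like in spirit (the guard name and the selection of defines here are
examples, not the full list):

  #ifndef HAVE_SRTP2
  /* Map libsrtp2's namespaced identifiers onto libsrtp 1's old names. */
  #define srtp_err_status_t    err_status_t
  #define srtp_err_status_ok   err_status_ok
  #define srtp_err_status_fail err_status_fail
  #define srtp_crypto_policy_t crypto_policy_t

  /* Plus two compat functions mirroring libsrtp2's API, e.g.
   *   srtp_err_status_t srtp_set_stream_roc (srtp_t s, uint32_t ssrc,
   *       uint32_t roc);
   *   srtp_err_status_t srtp_get_stream_roc (srtp_t s, uint32_t ssrc,
   *       uint32_t *roc);
   * whose bodies contain the pre-existing libsrtp 1 ROC hacks. */
  #endif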
libsrtp2 has no means of discovering the streams in the session, so to
create the stats structure we need to iterate over our own set of SSRCs.
For this we also need to re-add the previously removed ssrcs_set to the
encoder.
https://bugzilla.gnome.org/show_bug.cgi?id=776901
This plugin is useful when you want to pipe arbitrary data to
a different pipeline within the same process. Buffers, events, and caps
are transmitted as-is without copying or manipulation.
SRT[0] is an open source transport technology[1] that optimizes
streaming performance across unpredictable networks.
Although SRT is based on UDP, it behaves like a connection-oriented
protocol. However, being the server or the client does not determine
whether an endpoint sends or receives, so the following pairs of source
and sink elements are introduced (a usage sketch follows the list below):
- srtserversink: SRT server to feed SRT stream
- srtclientsrc: SRT client to get SRT stream from srtserversink
- srtclientsink: SRT client to send SRT stream
- srtserversrc: SRT server to receive the SRT stream from srtclientsink
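A sketch of how one sender/receiver pair might be wired up; the uri
property and the srt://host:port form are assumptions here:

  /* Sender process: serve an MPEG-TS stream over SRT. */
  GstElement *tx = gst_parse_launch (
      "videotestsrc ! x264enc tune=zerolatency ! mpegtsmux "
      "! srtserversink uri=srt://:7001", NULL);

  /* Receiver process: fetch the stream from the server. */
  GstElement *rx = gst_parse_launch (
      "srtclientsrc uri=srt://127.0.0.1:7001 ! tsdemux "
      "! h264parse ! avdec_h264 ! autovideosink", NULL);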
[0] https://github.com/Haivision/srt
[1] http://www.srtalliance.org/
https://bugzilla.gnome.org/show_bug.cgi?id=785730
Add support for parsing linear timecode (LTC) from
an audio source using libltc
https://github.com/x42/libltc
The user can now choose between 3 different and independently
running timecode sources. The old override-existing property
has been replaced by timecode-source.
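A hedged sketch of selecting the LTC source; the element name
timecodestamper and the "ltc" value nick are assumptions, only the
timecode-source property name comes from this change:

  GstElement *stamper = gst_element_factory_make ("timecodestamper", NULL);

  /* Pick the LTC-from-audio timecode source by its enum nick. */
  gst_util_set_object_arg (G_OBJECT (stamper), "timecode-source", "ltc");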
https://bugzilla.gnome.org/show_bug.cgi?id=784295
OpenJPEG 2.3 installs its headers to /usr/include/openjpeg-2.3. However,
since libopenjp2.pc seems to provide the right includedir CFLAGS at
least since version 2.1, instead of adding yet another version check,
just remove the subdir and the check for 2.2.
https://bugzilla.gnome.org/show_bug.cgi?id=788703
OpenJPEG 2.2 has some API changes and thus ships its headers in a new
include path. Add a configure check (to both meson and autoconf) to
detect the newer version of OpenJPEG and add conditional includes.
Fix the autoconf test for OpenJPEG 2.1: it checked for HAVE_OPENJPEG,
which was always set, even for 2.0.
https://bugzilla.gnome.org/show_bug.cgi?id=786250
ipcpipeline1 is a very simple test that shows a short videotestsrc fragment.
ipc-play is a clone of gst-play that splits the pipeline into two
processes, running the source & demuxer in the master process
and the decoders & sinks in the slave.
These elements allow splitting a pipeline across several processes,
with communication done by the ipcpipelinesink and ipcpipelinesrc
elements. The main use case is to split a playback pipeline into
a process that runs networking, parser & demuxer and another process
that runs the decoder & sink, for security reasons.
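Roughly, the wiring looks like this (the fdin/fdout property names are
assumed here; the fork/exec plumbing between the processes is omitted):

  #include <sys/socket.h>

  int to_slave[2], to_master[2];

  /* One socketpair per direction; the fds are handed to the elements. */
  socketpair (AF_UNIX, SOCK_STREAM, 0, to_slave);
  socketpair (AF_UNIX, SOCK_STREAM, 0, to_master);

  /* Master process: networking/parser/demuxer ends in ipcpipelinesink. */
  GstElement *ipcsink = gst_element_factory_make ("ipcpipelinesink", NULL);
  g_object_set (ipcsink, "fdin", to_master[0], "fdout", to_slave[0], NULL);

  /* Slave process: decoders/sinks start from ipcpipelinesrc fed by the
   * other ends of the socketpairs. */
  GstElement *ipcsrc = gst_element_factory_make ("ipcpipelinesrc", NULL);
  g_object_set (ipcsrc, "fdin", to_slave[1], "fdout", to_master[1], NULL);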
https://bugzilla.gnome.org/show_bug.cgi?id=752214
The QTKit framework has long been deprecated in favour of the
AVFoundation framework, and we already have avfvideosrc, which provides
the same functionality.
https://bugzilla.gnome.org/show_bug.cgi?id=782078
Don't hide build behind --enable-experimental. Our goal is to not
autoplug it for now, so let's just always build it if the dependencies
are there and hide autoplugging enablement behind an env var.
This element transforms a given number of input channels into a given number of
output channels according to a given transformation matrix. The matrix
coefficients must be between -1 and 1. In the auto mode, input/output channels
are automatically negotiated and the transformation matrix is a truncated or
zero-padded identity matrix.
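For example, a stereo-to-mono downmix could be configured like this,
assuming the matrix property is a GstValueArray of per-output-channel
rows of doubles:

  GstElement *mix = gst_element_factory_make ("audiomixmatrix", NULL);
  GValue matrix = G_VALUE_INIT, row = G_VALUE_INIT, coef = G_VALUE_INIT;

  g_object_set (mix, "in-channels", 2, "out-channels", 1, NULL);

  /* One row per output channel, one coefficient per input channel:
   * out[0] = 0.5 * in[0] + 0.5 * in[1] */
  g_value_init (&matrix, GST_TYPE_ARRAY);
  g_value_init (&row, GST_TYPE_ARRAY);
  g_value_init (&coef, G_TYPE_DOUBLE);
  g_value_set_double (&coef, 0.5);
  gst_value_array_append_value (&row, &coef);
  gst_value_array_append_value (&row, &coef);
  gst_value_array_append_value (&matrix, &row);
  g_object_set_property (G_OBJECT (mix), "matrix", &matrix);

  g_value_unset (&coef);
  g_value_unset (&row);
  g_value_unset (&matrix);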
https://bugzilla.gnome.org/show_bug.cgi?id=777376
If they were not ported after 4+ years, it seems unlikely that anybody
is ever going to need them again. They're still in the git history if
needed.
https://bugzilla.gnome.org/show_bug.cgi?id=774530
This is useful e.g. if audio buffers should be exactly the duration of a
video frame, or if audio buffers should never be too large because of
latency constraints.
The element takes a fractional buffer duration, to allow working
with e.g. 1001/30000 as the output duration. It accumulates the rounding
errors in the buffer durations and compensates for them by making some
buffers one sample larger than the others.
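As a worked example (assuming a 48000 Hz stream): an output duration of
1001/30000 s corresponds to 48000 * 1001 / 30000 = 1601.6 samples, so
buffers carry either 1601 or 1602 samples; over any run of five buffers,
three need the extra sample for the total to stay exact
(2 * 1601 + 3 * 1602 = 8008 = 5 * 1601.6).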
https://bugzilla.gnome.org/show_bug.cgi?id=774689
libkms should not be used, because it imposes limitations on the DRM
API, especially regarding bpp and stride. Instead the DRM IOCTL should
be used directly.
Switch from libkms to the IOCTL interface. Set bpp and height for
framebuffer allocation to properly handle planar video formats.
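As a sketch of the direction (not the exact kmssink code): allocating a
dumb buffer for a planar format such as NV12 by scaling the height
instead of going through libkms; the bpp/height choice here is an
assumption for NV12:

  #include <stdint.h>
  #include <string.h>
  #include <xf86drm.h>
  #include <drm_mode.h>

  static int
  create_dumb_nv12 (int fd, uint32_t width, uint32_t height,
      struct drm_mode_create_dumb *arg)
  {
    memset (arg, 0, sizeof (*arg));
    arg->bpp = 8;                  /* one byte per sample, per plane */
    arg->width = width;
    arg->height = height * 3 / 2;  /* luma plane plus half-height chroma */
    return drmIoctl (fd, DRM_IOCTL_MODE_CREATE_DUMB, arg);
  }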
https://bugzilla.gnome.org/show_bug.cgi?id=773473
Signed-off-by: Víctor Jáquez <vjaquez@igalia.com>
This was used by MSN Messenger in prehistoric times; it's safe
to say no one needs or wants this any more these days. For
decoding old recordings there's still a decoder in ffmpeg.
https://bugzilla.gnome.org/show_bug.cgi?id=597616
It only offers one metric for now, "dssim", available if
https://github.com/pornel/dssim was installed on the system
at the time the plugin was compiled.
The Spearman correlation for dssim against the TID2008 dataset
is 0.81, versus 0.70 for the old ssim implementation, and
it runs 15 times faster.
https://bugzilla.gnome.org/show_bug.cgi?id=751324