ipcpipeline1 is a very simple test that shows a short videotestsrc fragment.
ipc-play is a clone of gst-play that splits the pipeline into two
processes, running the source & demuxer in the master process
and the decoders & sinks in the slave.
These elements allow splitting a pipeline across several processes,
with communication done by the ipcpipelinesink and ipcpipelinesrc
elements. The main use case is to split a playback pipeline into
a process that runs networking, parser & demuxer and another process
that runs the decoder & sink, for security reasons.
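As a rough sketch of such a split (the fdin/fdout property names and the
exact elements are illustrative; the file descriptors would come from a
socketpair() created by the application before spawning the slave):

  (master)  souphttpsrc ! qtdemux ! ipcpipelinesink fdin=<fd> fdout=<fd>
  (slave)   ipcpipelinesrc fdin=<fd> fdout=<fd> ! decodebin ! autovideosink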
https://bugzilla.gnome.org/show_bug.cgi?id=752214
The QTKit framework has long been deprecated in favour of the AVFoundation
framework, and we already have avfvideosrc, which provides the same
functionality.
https://bugzilla.gnome.org/show_bug.cgi?id=782078
Don't hide build behind --enable-experimental. Our goal is to not
autoplug it for now, so let's just always build it if the dependencies
are there and hide autoplugging enablement behind an env var.
This element transforms a given number of input channels into a given number of
output channels according to a given transformation matrix. The matrix
coefficients must be between -1 and 1. In the auto mode, input/output channels
are automatically negotiated and the transformation matrix is a truncated or
zero-padded identity matrix.
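As an untested illustration (property names and the exact matrix syntax
should be double-checked with gst-inspect-1.0 audiomixmatrix), a 4-to-2
downmix that simply keeps the first two channels could look like:

  gst-launch-1.0 audiotestsrc ! audio/x-raw,channels=4 ! \
      audiomixmatrix in-channels=4 out-channels=2 \
      matrix="<<(double)1, (double)0, (double)0, (double)0>, \
              <(double)0, (double)1, (double)0, (double)0>>" ! \
      audio/x-raw,channels=2 ! autoaudiosink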
https://bugzilla.gnome.org/show_bug.cgi?id=777376
If they were not ported after 4+ years it seems unlikely that anybody is
ever going to need them again. They're still in the GIT history if
needed.
https://bugzilla.gnome.org/show_bug.cgi?id=774530
This is useful e.g. if audio buffers should be exactly the duration of a
video frame, or if audio buffers should never be too large because of
latency constraints.
The element takes a fractional buffer duration, to allow working
with e.g. 1001/30000 as the output duration. It accumulates the rounding
errors in the buffer durations and compensates for them by making some
buffers one sample larger than the others.
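A minimal sketch of that compensation (not the element's actual code,
just the idea), assuming a 48000 Hz stream and a 1001/30000 output
duration, i.e. an exact size of 1601.6 samples per buffer:

  /* integer samples per buffer: 48000 * 1001 / 30000 = 1601 */
  samples = rate * dur_n / dur_d;
  /* accumulate the leftover fraction, in units of 1/dur_d samples */
  error += rate * dur_n - samples * dur_d;
  if (error >= dur_d) {
    samples += 1;             /* emit a 1602-sample buffer to catch up */
    error -= dur_d;
  }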
https://bugzilla.gnome.org/show_bug.cgi?id=774689
libkms should not be used, because it imposes limitations on the DRM
API, especially regarding bpp and stride. Instead the DRM IOCTL should
be used directly.
Switch from libkms to the IOCTL interface. Set bpp and height for
framebuffer allocation to properly handle planar video formats.
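For reference, the direct allocation this refers to boils down to the
dumb-buffer ioctl (a rough sketch of the kernel interface, not the
element's actual code; field values are illustrative):

  struct drm_mode_create_dumb creq = { 0 };
  creq.width  = width;
  creq.height = extended_height;  /* tall enough to cover all planes */
  creq.bpp    = bpp;              /* e.g. 8 for planar YUV, 32 for XRGB */
  if (drmIoctl (fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq) < 0)
    return FALSE;
  /* creq.handle, creq.pitch and creq.size describe the new buffer */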
https://bugzilla.gnome.org/show_bug.cgi?id=773473
Signed-off-by: Víctor Jáquez <vjaquez@igalia.com>
This was used by MSN Messenger in prehistoric times; it's safe
to say no one needs or wants this any more these days. For
decoding old recordings there's still a decoder in ffmpeg.
https://bugzilla.gnome.org/show_bug.cgi?id=597616
It only offers one metric for now, "dssim", available if
https://github.com/pornel/dssim was installed on the system
at the time the plugin was compiled.
The Spearman correlation for dssim against the TID2008 dataset
is 0.81, versus 0.70 for the old ssim implementation, and
it runs 15 times faster.
https://bugzilla.gnome.org/show_bug.cgi?id=751324
Simplify the PKG_CHECK_MODULES checks related to Wayland to avoid the
confusion of NOT_FOUND cases when there are 3 nested checks. Group those
3 checks together since there are no conditions specific to each one.
Thanks to https://ci.gstreamer.net/ for alerting us to the problem.
https://bugzilla.gnome.org/show_bug.cgi?id=773927
If the NOT_FOUND part of the PKG_CHECK_MODULES check is not written, it
defaults to an error. Add the else clause of this check as
HAVE_WAYLAND="no"
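i.e. something along the lines of (module names and versions are only
illustrative here):

  PKG_CHECK_MODULES(WAYLAND, wayland-client wayland-protocols wayland-egl,
    [HAVE_WAYLAND="yes"], [HAVE_WAYLAND="no"])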
https://bugzilla.gnome.org/show_bug.cgi?id=773927
- xcb is supposedly thread-safe!
videotestsrc ! glimagesink no longer spuriously results in a
'call XInitThreads()' error. However, if anybody else is using X11,
then XInitThreads() still needs to be called, and multiple glimagesinks
still need XInitThreads().
Everything still takes libX11 handles as they are compatible with the xcb
variants. Unfortunately we cannot move fully over to xcb due to GLX being
entirely based on Xlib. It's also impossible to transform an xcb_connection
into a Display, which means we still require X11 handles.
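For an application that uses X11 itself, the well-known pattern this
refers to is simply calling XInitThreads() before any other Xlib (or
GStreamer) call, e.g.:

  #include <X11/Xlib.h>
  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    XInitThreads ();        /* must come before any other Xlib usage */
    gst_init (&argc, &argv);
    /* ... build and run the pipeline ... */
    return 0;
  }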
ttml was recently added but it won't compile unless libxml2 version 2.9.2
or later is available. In that version the first parameter of xmlGetProp
switched to being a const. In previous versions the compiler complains
about passing a const value to a non const argument.
The compiler would complain about an include directory that didn't
exist because QPA_INCLUDE_PATH gets subst-ed regardless
(and if it didn't, we'd have just an empty -I argument).
https://bugzilla.gnome.org/show_bug.cgi?id=767553
This DSP library can be used to enhance the voice signal for real-time
communication calls. It implements multiple filters, such as noise reduction,
high-pass filter, echo cancellation, automatic gain control, etc.
The webrtcdsp element can be used alone, or together with the
webrtcechoprobe if echo cancellation is enabled. The echo probe should
be placed as close as possible to the audio sink, while the DSP is
generally placed close to the audio capture. For local testing, one can
use an echo loop pipeline like the following:
autoaudiosrc ! webrtcdsp ! webrtcechoprobe ! autoaudiosink
This pipeline should produce a single echo rather than a repeated echo.
These elements work only if they are placed in the same top-level pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=767800
The dc1394src is a PushSrc element for IIDC cameras based on libdc1394.
The implementation from the 0.x series is defective:
caps negotiation does not work, and some video formats
provided by the camera are not supported.
Refactor the code to port it to 1.X and enhance the support
for the full set of video options of IIDC cameras:
- The IIDC specification includes a set of camera video modes
(video format, frame size, and frame rates).
They do not map perfectly to GStreamer formats, but those that
do not match are very rare (if used at all by any camera).
In addition, although the specification includes a raw format,
some cameras use mono video formats to capture in Bayer format.
Map corresponding video modes to GStreamer formats in capabilities,
allowing both gray raw and Bayer video formats for mono video modes.
- The specification includes scalable video modes (Format7),
where the frame size and rate can be set to arbitrary values
(within the limits of the camera and the bus transport).
Allow the use of such mode, using the frame size and rate
from the negotiated caps, and set the camera frame rate
adjusting the packet size as in:
<http://damien.douxchamps.net/ieee1394/libdc1394/faq/#How_do_I_set_the_frame_rate>
The scalable modes also allow for a custom ROI offset.
Support for it can be easily added later using properties.
- Camera operation using libdc1394 is as follows (see the libdc1394
sketch near the end of this message):
1. Enumerate cameras on the system and open the camera
identified by the enumeration index or by a GUID (a 64-bit hex code).
2. Query the video formats supported by the camera.
3. Configure the camera for the desired video format.
4. Setup the capture resources for the configured video format
and start the camera transmission.
5. Capture frames from the camera and release them when not used.
6. Stop the camera transmission and clear the capture resources.
7. Close the camera freeing its resources.
Do steps 2 and 3 when getting and setting the caps respectively.
Ideally 4 and 6 would be done when going from PAUSED to PLAYING
and vice versa, but since caps might not be set yet, the video mode
is not properly configured, leaving the camera in a broken state.
Hence, set up capture and start transmission in the set caps method,
and consequently clear the capture and stop the transmission
when going from PAUSED to READY (instead of PLAYING to PAUSED).
Symmetrically, open the camera when going from READY to PAUSED,
allowing the camera caps to be probed in the negotiation stage.
Implement that using the `start` and `stop` methods of `GstBaseSrc`,
instead of the `change-state` method of `GstElement`.
Stop the camera before setting new caps and restart it afterwards
to handle caps reconfiguration while in PLAYING (this has no effect
if the camera is not started).
- Create buffers copying the bytes of the captured frames.
Alternatively, the buffers could just wrap the bytes of the frames,
releasing the frame in the buffer's destroy notify function,
if all buffers were destroyed before going from PLAYING to PAUSED.
- No timestamp or offset is set when creating buffers.
Timestamping is delegated to the parent class GstBaseSrc,
by calling `gst_base_src_set_live` with TRUE, `gst_base_src_set_format`
with GST_FORMAT_TIME and `gst_base_src_set_do_timestamp` with TRUE.
Captured frames have a timestamp field with the system time
at the completion of the transmission of the frame,
but it is not certain that this comes from a monotonic clock,
and it seems to be left NULL on Windows.
- Use GUID and unit properties to select the camera to operate on.
The camera number used in version 0.X does not uniquely identify
the device (it depends on the set of cameras currently detected).
Since the GUID is a 64-bit identifier (like a MAC address),
handle it with a string property containing its hexadecimal representation.
For practicality, operate on the first camera available if the GUID
is null (default) and match any camera unit number if unit is -1.
Alternatively, the GUID could be handled with an unsigned 64-bit
integer property, using `0xffffffffffffffff` as the default value
to select the first camera available (it is not a valid GUID value).
- Keep name `GstDc1394` and prefix `gst_dc1394` as in version 0.X,
although `GstDC1394Src` and `gst_dc1394_src` are more descriptive.
- Adjust build files to reenable the compilation of the plugin.
Remove dc1394 from the list of unported plugins in configure.ac.
Add the missing flags and libraries to Makefile.
Use `$()` for variable substitution, as many plugins do,
although other plugins use `@@` instead.
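The libdc1394 calls behind the camera operation steps listed above are
roughly the following (a condensed sketch without error handling or list
freeing, not the element's actual code):

  dc1394_t *ctx = dc1394_new ();
  dc1394camera_list_t *list;
  dc1394_camera_enumerate (ctx, &list);                         /* 1 */
  dc1394camera_t *cam = dc1394_camera_new (ctx, list->ids[0].guid);
  dc1394video_modes_t modes;
  dc1394_video_get_supported_modes (cam, &modes);               /* 2 */
  dc1394_video_set_mode (cam, DC1394_VIDEO_MODE_640x480_MONO8); /* 3 */
  dc1394_capture_setup (cam, 4, DC1394_CAPTURE_FLAGS_DEFAULT);  /* 4 */
  dc1394_video_set_transmission (cam, DC1394_ON);
  dc1394video_frame_t *frame;
  dc1394_capture_dequeue (cam, DC1394_CAPTURE_POLICY_WAIT, &frame); /* 5 */
  dc1394_capture_enqueue (cam, frame);
  dc1394_video_set_transmission (cam, DC1394_OFF);              /* 6 */
  dc1394_capture_stop (cam);
  dc1394_camera_free (cam);                                     /* 7 */
  dc1394_free (ctx);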
https://bugzilla.gnome.org/show_bug.cgi?id=763026
In OpenBSD there is no "actual" librt that programs can link with;
instead, the system/base libc provides the functions one would
customarily find there.
https://bugzilla.gnome.org/show_bug.cgi?id=766441
The port was trivial, and according to the NEWS file nothing else has changed,
but it is possible that other API was changed without proper notification.
OpenJPEG upstream has shipped a pkg-config file for the past 4 years, and all
distros should be shipping it by now.
https://bugzilla.gnome.org/show_bug.cgi?id=766213
Some platforms provide an old version of GLES2/gl2.h and GLES2/gl2ext.h that
will fail when including GLES3/gl3.h due to missing typedefs.
Seen on the RPi.
This patch will enable the import of dmabufs into a KMS buffer using
the PRIME kernel interface.
If the driver does not support prime import, the method is skipped.
It has been tested with a Freescale I.MX6 board.
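The import path referred to here essentially comes down to turning the
dmabuf fd into a GEM handle and wrapping it in a framebuffer (a rough
sketch, not the element's actual code; plane setup is simplified to a
single plane):

  uint32_t gem_handle, fb_id;
  if (drmPrimeFDToHandle (drm_fd, dmabuf_fd, &gem_handle) < 0)
    return FALSE;                       /* driver has no PRIME import */
  uint32_t handles[4] = { gem_handle };
  uint32_t pitches[4] = { stride };
  uint32_t offsets[4] = { 0 };
  drmModeAddFB2 (drm_fd, width, height, fourcc,
                 handles, pitches, offsets, &fb_id, 0);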
https://bugzilla.gnome.org/show_bug.cgi?id=761059
This is a simple video sink that uses the libdrm/libkms API to render frames.
The element uses planes to render through drmModeSetPlane().
It has been tested on an Exynos4412 board and on a Freescale I.MX6 board.
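Assuming the element is registered as kmssink, a quick way to try it on a
board with a KMS driver (run from a console, with no display server
holding the DRM device) is something like:

  gst-launch-1.0 videotestsrc ! kmssink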
https://bugzilla.gnome.org/show_bug.cgi?id=761059
configure.ac was changed to work with the new versions of OpenCV 3.x.
A new include was added in gstopencvutils.cpp because it contains
the previous one. This keeps compatibility with 2.4.
https://bugzilla.gnome.org/show_bug.cgi?id=760473
Properly separate the files, as we will not have only one single base class
for all elements as we used to with 0.10; instead, the same way it is done
with ladspa, we subclass GstAudioFilter, GstBaseSrc, etc.
https://bugzilla.gnome.org/show_bug.cgi?id=678207
It's exposed in the public API, so hiding it in an AC_DEFINE for config.h only
works when building libgstgl itself. Attempting to use libgstgl (especially
on EGL platforms) will throw a compilation error.
The plugin doesn't need the wayland-scanner package to be built
or run; it only needs the wayland-scanner program at compile time.
When cross-compiling, build systems might not have the wayland-scanner
package for the target system as it is a developer's tool, while it should
still be possible to use wayland-scanner from the host system.
This patch fixes it by not requiring the wayland-scanner package but
just the binary itself.
Note that the check is done outside of the PKG_CHECK_MODULES
as it doesn't work inside of it.
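In practice that amounts to locating the binary with something like the
following (illustrative only, not necessarily the exact check used):

  AC_PATH_PROG([wayland_scanner], [wayland-scanner])
  if test "x$wayland_scanner" = "x"; then
    AC_MSG_ERROR([wayland-scanner is required to build this plugin])
  fi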
https://bugzilla.gnome.org/show_bug.cgi?id=752688
Just check whether LATM is defined which is only available
in 2.7 and later. Allows us to simplify the configure check
a little and we can get rid of some hackish workarounds for
problems with earlier version headers.
It's useful enough already to be used in other elements for audio aggregation;
let's give people the opportunity to use it and give it some API testing.
https://bugzilla.gnome.org/show_bug.cgi?id=760733
GCC automatically disables redundant-declaration warnings for system headers.
As soon as we start using a non-system-installed mesa, we would start
having issues. The test for both wasn't setting any flags, so it would
work but then fail at runtime.
This is fixed by disabling that GCC warning in the code (only where
needed). The test is also fixed to avoid the false positive we had.
The videoframe-audiolevel element acts like a synchronized audio/video "level"
element. For each video frame, it posts a level-style message containing the
RMS value of the corresponding audio frames. This element needs both video and
audio to pass through it. Furthermore, it needs a queue after its video
source.
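A hedged topology sketch (pad selection is left to gst-launch here and
should be verified with gst-inspect-1.0 videoframe-audiolevel; note the
queue on the video branch):

  gst-launch-1.0 videotestsrc ! videoframe-audiolevel name=l \
      audiotestsrc ! l. \
      l. ! queue ! videoconvert ! autovideosink \
      l. ! audioconvert ! autoaudiosink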
https://bugzilla.gnome.org/show_bug.cgi?id=748259
New subclass with a similar behaviour to the old liveadder, but
a slightly different API, as the latency is in nanoseconds, not
milliseconds. Also, the new liveadder has an effective latency that
is latency + output-buffer-duration. In practice, just setting a non-zero
latency with the new audiomixer gives you the right behaviour in 99% of the
cases.
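For instance (values are arbitrary and only illustrate the unit change,
assuming the property keeps its old name of "latency"), 50 ms of latency
is now expressed as 50000000 nanoseconds:

  gst-launch-1.0 audiotestsrc is-live=true ! liveadder latency=50000000 ! autoaudiosink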