When outputting more than two channels, a channel-mask has to be
specified in the output caps.
We follow the same heuristic as in other cases: when downstream
does not specify a channel-mask, we use that of the first
configured pad, and if there was none, we generate a fallback
mask.
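A minimal sketch of that heuristic (the function and variable names
are hypothetical; gst_audio_channel_get_fallback_mask() is the
existing audio library helper):

    static guint64
    pick_channel_mask (GstStructure * downstream_s, guint64 first_pad_mask,
        gint channels)
    {
      guint64 mask = 0;

      /* Prefer a channel-mask fixed by downstream... */
      if (gst_structure_get (downstream_s, "channel-mask",
              GST_TYPE_BITMASK, &mask, NULL) && mask != 0)
        return mask;

      /* ...then the mask of the first configured pad... */
      if (first_pad_mask != 0)
        return first_pad_mask;

      /* ...and finally a generated fallback mask. */
      return gst_audio_channel_get_fallback_mask (channels);
    }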
https://bugzilla.gnome.org/show_bug.cgi?id=794257
gudev/gudev.h is included unconditionally, so only enable
the gbm backend if gudev was actually found. This also
matches the meson build behaviour.
Should fix build on GNOME SDK builder.
The various id3v2 specs handle the extended header sizes differently
(because hey, it wouldn't be fun otherwise).
http://id3.org/id3v2.3.0 states:
"Where the 'Extended header size', currently 6 or 10 bytes, excludes
itself."
http://id3.org/id3v2.4.0-structure states:
Extended header size 4 * %0xxxxxxx
Number of flag bytes $01
Extended Flags $xx
Where the 'Extended header size' is the size of the whole extended
header, stored as a 32 bit synchsafe integer. An extended header can
thus never have a size of fewer than six bytes.
So in id3v2.4.0 it's the *whole* extended header size (a-la ISOBMFF
atom), whereas in id3v2.3.0 it's the extended header size *excluding*
those 4 initial bytes.
And for other versions, god knows..
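In code, the difference boils down to something like this
(read_id3_uint32() is a hypothetical big-endian reader; for
id3v2.4 the value is additionally a synchsafe integer):

    guint32 ext_size = read_id3_uint32 (data);
    guint32 ext_hdr_total;

    if (version_major == 4) {
      /* id3v2.4: the stored size covers the whole extended header,
       * including the 4 size bytes themselves. */
      ext_hdr_total = ext_size;
    } else {
      /* id3v2.3: the stored size excludes the 4 size bytes, so the
       * header occupies 4 more bytes than the stored value says. */
      ext_hdr_total = ext_size + 4;
    }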
Fixes regression introduced in commit da607005.
https://bugzilla.gnome.org/show_bug.cgi?id=792983
Allow sub-classes to change pad templates to
support other texture targets, and bind input textures
accordingly.
When setting the caps, also store the texture target.
By default, glfilter only reports 2D texture targets
in the default caps, but sub-classes can change that
and it would be nice if they could easily find out
which texture targets were negotiated.
This adds 2 fields to the public struct, but since
it's unreleased -base API, it's not an ABI break.
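For example, a sub-class could read the negotiated input target in
its set_caps() handler, along these lines (a sketch; only the two
new struct fields come from this change):

    static gboolean
    my_filter_set_caps (GstGLFilter * filter, GstCaps * incaps,
        GstCaps * outcaps)
    {
      /* The negotiated targets are stored on the filter, so the
       * sub-class can look up the matching GL enum to bind its
       * input texture correctly. */
      guint gl_target =
          gst_gl_texture_target_to_gl (filter->in_texture_target);

      GST_DEBUG_OBJECT (filter, "input texture target: 0x%x", gl_target);
      return TRUE;
    }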
This prevents cross compilation errors like:
usr/include/xf86drm.h:40:10: fatal error: drm.h: No such file or directory
These errors occur because gstgldisplay_gbm.h includes xf86drm.h.
https://bugzilla.gnome.org/show_bug.cgi?id=793837
It is the only thing gst_pb_utils_init() does, and it could be
called automatically from the places in pbutils where it is needed.
After 1.14 we should deprecate gst_pb_utils_init().
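The call sites could then use the usual GLib once-init pattern,
roughly like this (the helper and guard names are hypothetical):

    static gsize pb_utils_once = 0;

    static void
    gst_pb_utils_ensure_init (void)
    {
      if (g_once_init_enter (&pb_utils_once)) {
        gst_pb_utils_init ();
        g_once_init_leave (&pb_utils_once, 1);
      }
    }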
https://bugzilla.gnome.org/show_bug.cgi?id=793611
When doing a 3D/multiview transformation and rescaling to
match the output window size, the resulting PAR may
not match the input any more and needs recalculating,
or else the GstSample reported to client-draw has the
wrong PAR.
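The recalculation amounts to preserving the display aspect ratio at
the new size; a minimal sketch (variable names are placeholders):

    gint dar_n, dar_d, par_n, par_d;

    /* DAR of the source picture: (w * par_n) : (h * par_d) */
    gst_util_fraction_multiply (in_width, in_height, in_par_n, in_par_d,
        &dar_n, &dar_d);

    /* PAR that preserves that DAR at the output window size */
    gst_util_fraction_multiply (dar_n, dar_d, out_height, out_width,
        &par_n, &par_d);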
The current GstVideoRegionOfInterestMeta API allows elements to detect
and name ROIs, but says nothing about how this information is
meant to be consumed by downstream elements.
Typically, encoders may want to tweak their encoding settings for a
given ROI to increase or decrease its quality.
Each encoder has its own set of settings, so that's not something
that can be standardized.
This patch adds encoder-specific parameters to the meta which can be
used to configure the encoding of a specific ROI.
A typical use case would be: source ! roi-detector ! encoder
with a buffer probe on the encoder sink pad set by the application.
Thanks to the probe, the application will be able to tell the encoder
how this specific region should be encoded.
Users could also develop their own ROI detectors meant to be used
with a specific encoder, directly attaching the encoder parameters
when detecting the ROI.
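A sketch of such a probe (the structure name and the delta-qp field
are hypothetical; which parameters make sense is encoder-specific):

    static GstPadProbeReturn
    roi_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
      GstVideoRegionOfInterestMeta *meta;
      gpointer state = NULL;

      buf = gst_buffer_make_writable (buf);
      while ((meta = (GstVideoRegionOfInterestMeta *)
              gst_buffer_iterate_meta_filtered (buf, &state,
                  GST_VIDEO_REGION_OF_INTEREST_META_API_TYPE))) {
        gst_video_region_of_interest_meta_add_param (meta,
            gst_structure_new ("roi/my-encoder",
                "delta-qp", G_TYPE_INT, -10, NULL));
      }

      GST_PAD_PROBE_INFO_DATA (info) = buf;
      return GST_PAD_PROBE_OK;
    }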
https://bugzilla.gnome.org/show_bug.cgi?id=793338
test_negotiation would occasionally time out, for unknown reasons.
Simplify the test setup and get rid of the main loop, buses, and
notify signals. With this I can no longer easily reproduce the
timeout. Fingers crossed.
Performance optimisation: Keep track of whether the streaming
thread or the application thread is waiting on the GCond for
more space or new data, and only signal the GCond if someone
is actually waiting. This avoids unnecessary syscalls and thus
context switches.
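The pattern looks roughly like this (field names are hypothetical;
everything runs under the object mutex):

    /* Waiter (e.g. the streaming thread waiting for free space): */
    priv->waiting = TRUE;
    while (queue_is_full (priv))
      g_cond_wait (&priv->cond, &priv->mutex);
    priv->waiting = FALSE;

    /* Signaller: only issue the wakeup syscall if someone can
     * actually be woken up. */
    if (priv->waiting)
      g_cond_signal (&priv->cond);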
These are very much artificial, of course, but one has to
measure something. The appsink one contains lots of buffer
creation/free overhead, while the appsrc one does not.
When trying to create a Wayland display, it may fail because there
is no actual display to connect to. In this case NULL is returned,
but the created instance is not freed.
This patch unrefs the failed display.
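Roughly (a sketch with abbreviated field and function names):

    display = g_object_new (GST_TYPE_GL_DISPLAY_WAYLAND, NULL);
    display->display = wl_display_connect (name);
    if (!display->display) {
      GST_ERROR ("Failed to connect to Wayland display");
      gst_object_unref (display);  /* previously leaked */
      return NULL;
    }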
https://bugzilla.gnome.org/show_bug.cgi?id=793483