The matrices were in the wrong order. The conversion matrix should be
  XYZ_TO_RGB_output * RGB_TO_XYZ_input * input_RGB
but it was
  RGB_TO_XYZ_input * XYZ_TO_RGB_output * input_RGB
This function might be revisited with a different channel position mapping
while the audio source goes into playback, so the reorder flag needs to be
reset before the checks happen.
Instead, initialize the map infos, etc. to NULL like gst_buffer_map()
would do on a zero-sized buffer.
This fixes a crash in audioresample when the first output buffer
contains zero samples.
There was a typo in the extension name which resulted in the modifiers
never being set when doing DMABuf import. That triggered the modifiers
lookup in the Intel driver, which was in fact hiding bugs in the gldownload
to glupload path when doing DMABuf.
Note that this change breaks the following pipeline on Intel and
some other drivers:
gltestsrc ! gldownload ! video/x-raw\(memory:DMABuf\) ! glimagesink
A fix for this was added to Mesa recently:
Related to https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1338
Fixes 5d0e191710
We don't support modifiers and that would result in a bad image being
displayed. Note that this was fixed recently in Mesa MR 1338; prior to
that, the reported modifier was always 0, which makes this change a
no-op.
Fixes #441
Related to https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1338
It's either this or replacing all the object lock usage in gldisplay
with a recursive mutex, which would not be backwards compatible.
The failure case is effectively:
1. The user has locked the display object lock
2. A glcontext loses its last ref and attempts to quit the window
3. gst_gl_window_quit() attempts to remove the window from the display
4. gst_gl_display_remove_window attempts to take the display object lock
The only concern with changing the locking for the window list in the
display is that gst_gl_display_create_window() has documentation requiring
the object lock to be held which must continue to work correctly.
Returning a transfer-none value for a value protected by a lock is not
thread safe, as the reference could disappear before the caller can take
its own reference.
Following the [design document], encodebin needs to handle sources that
output multiple streams. For that purpose, and to keep things simple,
we ensure that a single segment is output to the encoders by using
an `identity single-segment=true` at the beginning of each stream chain.
Added API to enable or disable the use of that new feature.
Added support in the encoding profile parser for that new property,
keeping backward compatibility.
[design document]: https://gstreamer.freedesktop.org/documentation/additional/design/encoding.html?gi-language=c#rendering-timelines
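As a rough illustration only (the actual wiring is internal to encodebin),
the effect is equivalent to placing the identity in front of each encoder:
  ... ! identity single-segment=true ! <encoder> ! ...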
I'm going to use this new API in gst-omx so an encoder can request
v4l2src to produce buffers matching the encoder stride and slice heights,
preventing copies of incoming buffers.
Especially for interlaced input, make sure to
a) never mix both fields
b) never read lines after the end of the input frame
c) allocate enough space in the temporary lines to not write outside
the allocated memory area
This fixes various memory corruptions and rescaling artefacts.
Previously, we only posted QoS messages when frame_drop() was
called, but not in finish_frame() when QoS triggered a late push.
This should fix applications that try to account for dropped
frames. We also emit a warning on drops so it's clearer what is
happening.
By adding this field, buffer producers can now explicitly set the exact
geometry of planes, allowing users to easily know the padded size and
height of each plane.
GstVideoMeta is always heap allocated by GStreamer itself so we can
safely extend it.
When using gst_video_info_align() the user had no easy way to retrieve the
padded size and height of each plane.
This can easily be implemented in fill_planes() as it's already called
in align() with the padded height.
Ideally we'd add a plane_size field to GstVideoInfo but the remaining
padding is too small so that would be an ABI break.
Fix #618
We want to round up when halving the height.
I do have a test for this but it relies on my new video-align tests so
it's part of the next commit. Recording the fix separately in case we want
to backport it to the stable branch.
Similar to gst_video_info_from_caps(), which allows encoded video formats,
don't make gst_audio_info_from_caps() error out on encoded audio formats.
Since gst_audio_info_set_format() supports encoded formats, the current
behavior is inconsistent.
We need to provide twice as many lines as usual to the scaling function
as every second line will be skipped.
Without this we read from random memory and produce colorful output and
crashes.
Without this, scaling e.g. interlaced UYVY causes corrupted output with
lines as follows: f1 f1 f2 f2, i.e. two lines of each field and only
then the other field.
watch->messages_bytes is not decreased when the write operation
from the backlog is only partly successful.
This commit decreases watch->messages_bytes for the successfully
sent messages.
Fixes #679
Y210 is a 10-bit YUY2, so we can reuse the YUY2 shaders, but the GL format
is set to RG16.
Sample pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,format=Y210 ! glimagesink
NV16/NV61 is basically the same as NV12/NV21 with a higher chroma resolution.
Since only the size of the UV plane/texture is different, the same shaders are used as for NV12/NV21.
This is done by reusing `gst_gl_memory_setup_buffer`, avoiding code
duplication.
Without a VideoMeta, mapping those buffers leads to GstBuffer mapping the
buffer in system memory even when specifying the GL flags (through the
buffer merging mechanism), making the result totally broken.
The newly exposed vmethods are pause, resume, stop and clear_all.
The existing reset vmethod is deprecated.
The audio sink will fall back to calling reset if pause or stop
are not provided, and will fall back to calling start if
resume is not provided. There is no default clear_all
implementation.
Existing audio sinks continue to work as before.
This change is useful for sinks that need to distinguish
between a pause and a stop (currently both are handled
by a reset) and is needed for
https://bugzilla.gnome.org/show_bug.cgi?id=788362 and
https://bugzilla.gnome.org/show_bug.cgi?id=788361
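A minimal sketch of how a subclass might wire up the new vfuncs, assuming
they are plain members of the sink's class structure named after the vfuncs
above (check the released headers for the exact layout):
  static void
  my_audio_sink_class_init (MyAudioSinkClass * klass)
  {
    GstAudioSinkClass *sink_class = GST_AUDIO_SINK_CLASS (klass);

    /* Provide pause/stop so the base class no longer falls back to reset. */
    sink_class->pause = my_audio_sink_pause;
    sink_class->resume = my_audio_sink_resume;
    sink_class->stop = my_audio_sink_stop;
  }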
Universal Windows Platform apps are not allowed to use LoadLibrary to
load arbitrary DLLs from the filesystem. They can only use
LoadPackagedLibrary to load DLLs that have been packaged with the app
as assets.
See also: https://gitlab.freedesktop.org/gstreamer/gstreamer/merge_requests/190
If the remainder is not evenly divisible by 4, we'd miss the check
for zero and continue the loop until crashing. Change the branch
to take into account negatives as well.
This more closely matches the SSE loop.
We are using ARC to cleanup after ourselves.
../gst-libs/gst/gl/cocoa/gstglwindow_cocoa.m:159:20: error: unused variable 'queue' [-Werror,-Wunused-variable]
dispatch_queue_t queue = (__bridge_transfer dispatch_queue_t) window->priv->gl_queue;
^
Matroskademux will send a gap event when the lag between video and audio
is over 3 seconds. audiodecoder needs to handle the gap event and set
default output caps. Previously only the audio info was set while the
output caps were ignored, which caused an assertion failure.
We need to fill the output caps in gst_audio_decoder_negotiate_default_caps()
with the negotiated caps to avoid a critical warning being printed when they
are checked later.
This is needed for using GstGL with ANGLE as the GLES implementation
in Universal Windows Platform apps that use the Windows Runtime
(WinRT) instead of Win32, which is deprecated and not allowed in
Windows Store apps.
This has been tested with Servo on the Microsoft HoloLens 2, and seems
to work quite well.
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
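For illustration, a signal registered this way might look as follows (the
signal name and parameter types here are made up):
  signals[SIG_EXAMPLE] =
      g_signal_new ("example", G_TYPE_FROM_CLASS (klass),
          G_SIGNAL_RUN_LAST, 0, NULL, NULL,
          NULL /* marshaller: GLib picks an optimized built-in va marshaller */,
          G_TYPE_NONE, 1, G_TYPE_INT);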
Simple addition for supporting an EXT_platform_device typed display.
It's a special kind of display type (part of the EGL specification)
which has no window at all.
To use EGLDevice explicitly, set the environment variable GST_GL_WINDOW=egl-device.
See also https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_platform_device.txt
* Fix typo
s/nunormalized/normalized/g
* Update GstVideoMasteringDisplayInfo description
The values are not arrays.
* Add missing newline between arguments description and
detailed comment.
The gltestsrc element was refactored to inherit from this base class which
handles the GL context. The sub-class only needs to implement the gl_start,
gl_stop and fill_gl_memory vfuncs, along with properly advertising the GL APIs
it supports through the supported_gl_api GstGLBaseSrc class attribute.
The caps and thus the video info have preference. If the field order is
set in there then it applies to all frames.
This works around issues where the tff field order is only set in the
caps but not additionally in the buffer flags.
Commit c71dd72b "gl/wayland: fix glib mainloop integration" was overeager
in removing the poll result test from the check function. This caused
dispatch to be called even if no new events are available on the
Wayland connection, which in turn would wake up the glib mainloop,
causing effectively a tight loop without ever blocking on the poll.
Fixes #603
basedepayload generates its own segment in a pretty unconventional
manner, relying on information in the caps such as npt-start or
npt-stop, usually set by rtspsrc.
In ONVIF mode, rtspsrc will generate the correct segment and this
logic in rtpbasedepayload will not be needed; this commit allows
rtspsrc to signal that through the caps.
While we can convert between all formats apart from the rate, we
actually need to make sure that we comply with a) the rate of the first
configured pad and b) also all the allowed rates from downstream.
We were previously only fixating the rate in the getcaps
implementation when downstream was requiring a discrete value,
causing negotiation to fail when upstream was capable of rate
conversion, but not made aware that it had to occur.
Instead of fixating the rate, we can simply update our sink
template caps with whatever GValue the downstream caps are holding
as their rate field.
Allows negotiation to successfully complete with pipelines such as:
audiotestsrc ! audio/x-raw, rate=48000 ! audioresample ! audiomixer name=m ! \
audio/x-raw, rate={800, 1000} ! autoaudiosink \
audiotestsrc ! audio/x-raw, rate=44100 ! audioresample ! m.
... also known as ITU-T H.273.
Until now, the conversion has been handled per plugin. That causes a lot
of code duplication, and some plugins might not be updated with newly
introduced color{matrix,transfer,primaries} enum values.
Handling it centrally instead removes such code duplication and keeps
plugins up to date.
The extmap attribute allows mapping RTP extension header IDs to
well-known RTP extension header specifications. See RFC8285 for details.
We store the extmap attribute either as a string in the caps
extmap-X=extensionname
where X is the integer extension header ID, or as 3-tuple of strings
extmap-X=<direction,extensionname,extensionattributes>
where direction or extensionattributes are allowed to be the empty
string.
Both formats are allowed because usually only the extension name is
given and it's much simpler to handle in caps.
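For illustration, caps following this convention might look like this (the
IDs and extension URIs are examples only):
  application/x-rtp, extmap-1=(string)"urn:ietf:params:rtp-hdrext:ssrc-audio-level"
  application/x-rtp, extmap-2=(string)< "recvonly", "urn:ietf:params:rtp-hdrext:toffset", "" >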
We use this property in gst_gl_display_egl_from_gl_display, to set
foreign_display for the new GstGLDisplayEGL instance. This fixes a
problem where gst_gl_display_egl_finalize calls eglTerminate on a
pre-existing EGL connection.
It seems that eglCreatePlatformWindowSurfaceEXT is failing (with
EGL_BAD_ALLOC) because it thinks an EGL surface has already been created
for the wl_egl_window. The reason is that the "driver_private" field of
the wl_egl_window is getting clobbered by the function
wl_proxy_set_queue().
Since a wl_egl_window is not a wl_proxy, it shouldn't be passed to
wl_proxy_set_queue(). It just wraps a wl_surface (which is a wl_proxy).
And it looks like the queue for that surface is getting set earlier on
in the function anyway.
See https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/621#note_184582
body_offset means that that much data has already been written.
Without this patch n_vectors sometimes becomes one more than it should,
and then there will be a vector with a random size, causing
writev_bytes to fail with a "Bad address" error.
This patch fixes the following critical warning:
CRITICAL **: 11:33:32.843: Unknown GL format 0x0 provided
It would happen during the setup of a second pipeline involving the DMABuf
uploader, typically with a v4l2src element. The warning was raised because the
uploader had a cached EGLImage already filled but the formats were not
synchronized accordingly.
The "field-order" is related for all interlace_mode modes except the
"progressive" mode. So instead of or'ing each mode we can use the
already supported GST_VIDEO_INFO_IS_INTERLACED macro.
This makes a pipeline below works:
little endian:
gst-launch-1.0 videotestsrc ! video/x-raw,format=P010_10LE ! glimagesink
big endian:
gst-launch-1.0 videotestsrc ! video/x-raw,format=P010_10BE ! glimagesink
gst_meta_api_type_register() assumes that the last tags element is NULL, but it wasn't.
==17422==ERROR: AddressSanitizer: global-buffer-overflow on address 0x7f4e2a67c998 at pc 0x7f4e2a0c92ac bp 0x7ffcc41f80b0 sp 0x7ffcc41f80a0
READ of size 8 at 0x7f4e2a67c998 thread T0
#0 0x7f4e2a0c92ab in gst_meta_api_type_register ../subprojects/gstreamer/gst/gstmeta.c:94
#1 0x7f4e2a5582c3 in gst_video_afd_meta_api_get_type ../subprojects/gst-plugins-base/gst-libs/gst/video/video-anc.c:1146
#2 0x404c7c in invoke_get_type (/home/ubuntu/gst-build/build/tmp-introspect5gv1rovo/GstVideo-1.0+0x404c7c)
#3 0x406b5c in dump_irepository (/home/ubuntu/gst-build/build/tmp-introspect5gv1rovo/GstVideo-1.0+0x406b5c)
#4 0x407089 in main (/home/ubuntu/gst-build/build/tmp-introspect5gv1rovo/GstVideo-1.0+0x407089)
#5 0x7f4e295b4b6a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26b6a)
#6 0x404479 in _start (/home/ubuntu/gst-build/build/tmp-introspect5gv1rovo/GstVideo-1.0+0x404479)
0x7f4e2a67c998 is located 40 bytes to the left of global variable 'tags' defined in '../subprojects/gst-plugins-base/gst-libs/gst/video/video-anc.c:1232:25' (0x7f4e2a67c9c0) of size 24
0x7f4e2a67c998 is located 0 bytes to the right of global variable 'tags' defined in '../subprojects/gst-plugins-base/gst-libs/gst/video/video-anc.c:1141:25' (0x7f4e2a67c980) of size 24
SUMMARY: AddressSanitizer: global-buffer-overflow ../subprojects/gstreamer/gst/gstmeta.c:94 in gst_meta_api_type_register
Add a max-reorder property to make the old hard-coded reordering limit of
100 configurable. It's particularly useful in some scenarios to set
max-reorder=0 to disable the behavior that makes the depayloader drop
packets.
Note that although the default value is 100, the default limit has
increased by one because of the changed if-test. This was done to
make the max-reorder value more intuitive. See tests.
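For example (the depayloader element is chosen arbitrarily here), dropping
of reordered packets can be disabled with:
  ... ! rtph264depay max-reorder=0 ! ...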
Instead of checking if the requested GL API is GLES2 (because ANY can
be set) the string is matched with the GLES2 prefix, and if so, then
the string is offset.
RFC 7826 recommends (but does not require) starting at 0,
but at least one known server implementation fails to copy
request sequence numbers <1 into responses due to an
incorrect null check.
The server known to exhibit this behavior is the Parrot
Streaming Server, serving video from their UAV devices.
A fix has been submitted upstream as well:
https://github.com/Parrot-Developers/librtsp/pull/2
The Parrot developers are known to have tested with LibVLC.
In Wireshark debugging, LibVLC appears to start with a CSeq
of 2, which is likely why this bug went unnoticed.
This reverts 487595a7d6, which set this to 0 citing the
RFC. The switch to 0 was thus a recent one; it's therefore
possible server implementors relied on the previous
GStreamer client behavior in their tests as well.
Fixes #624.
Since we started depending on GLib 2.44, we can be sure this macro is
defined (it will be a no-op on compilers that don't support it). For
plugins we should just start using `G_DECLARE_FINAL_TYPE` which means we
no longer need the macro there, but for most types in base/gst-libs we
don't want to break ABI, which means it's better to just keep it like it
is (and use the `#ifdef` instead).
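A sketch of what the plugin-side declaration then looks like (the type names
are hypothetical):
  #define GST_TYPE_MY_FILTER (gst_my_filter_get_type ())
  G_DECLARE_FINAL_TYPE (GstMyFilter, gst_my_filter, GST, MY_FILTER, GstElement)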
The problem is that GObject Introspection does not understand the const
gfloat matrix[16] parameter as an array of gfloats but as just
one gfloat.
To fix this I added the annotation to the parameter
descriptions.
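A sketch of the added annotation (the parameter name is illustrative):
  * @matrix: (array fixed-size=16): the 4x4 conversion matrix packed as 16 floats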
This came up in the case where v4l2 sets caps with colorimetry=NULL, and
then tries to parse back the colorimetry, causing a crash in
gst_video_get_colorimetry() because of g_str_equal(). We fix this by
making sure the only caller of the function never calls it with a null
colorimetry string.
SMPTE ST 2084 transfer characteristics (a.k.a. ITU-R BT.2100-1 perceptual quantization, PQ)
is used for various HDR standards.
With ST 2084, we can represent BT 2100 (Rec. 2100). BT 2100 defines
various aspects of HDR such as resolution, transfer functions, matrix and
primaries. It uses the BT2020 color space (primaries and matrix) with the PQ or HLG
transfer functions.
The code for this is mostly lifted from audiobuffersplit, it
allows use cases such as keeping the buffers output by compositor
on one branch and audiomixer on another perfectly aligned, by
requiring the compositor to output a n/d frame rate, and setting
output-buffer-duration to d/n on the audiomixer.
The old output-buffer-duration property now simply maps to its
fractional counterpart, the last set property wins.
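A rough sketch of such a pipeline; the fractional property name used here
(output-buffer-duration-fraction) is an assumption, check the element's
property list:
  ... compositor ! video/x-raw,framerate=30/1 ! ...
  ... audiomixer output-buffer-duration-fraction=1/30 ! ...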
Packed 10 bits for each of the R, G and B channels, with a 2-bit alpha
channel in the MSBs.
This format is mapped to Windows' DXGI_FORMAT_R10G10B10A2_UNORM format which is
required for 10-bit HDR rendering.
Note that this RGB10A2_LE format is the R/B channel-swapped version of BGR10A2_LE.
... if the subclass didn't update the values. Note that the mastering-display-info
and content-light-level might be updated with user-defined values (e.g., an encoding option).
Introduce HDR signalling methods
* GstVideoMasteringDisplayInfo: Representing display color volume info.
Defined by SMPTE ST 2086
* GstVideoContentLightLevel: Representing content light level specified in
CEA-861.3, Appendix A.
Closes https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/400
By using strtoul(), invalid values will get mapped to MAXULONG and we
would have to check errno. They won't get mapped to 0.
To solve this, use the signed g_ascii_strtoll(). This will map errors to
0 or G_MAXINT64 or G_MININT64, and the valid range for GstDateTime is >
0 and <= 9999 so we can directly check for this here.
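A minimal sketch of the approach (field and variable names are illustrative):
  gint64 year = g_ascii_strtoll (year_str, NULL, 10);
  if (year <= 0 || year > 9999)
    return NULL;  /* errors map to 0, G_MAXINT64 or G_MININT64, all rejected here */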
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/384
As part of commit 808e7127, we prefixed the `GstWlWindow`'s `shell`
field with wl_, to differentiate it from the other types of shells a
Wayland compositor might support. However, this is apparently a struct
that we expose to our users, so changing it means we have an API break.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/592
Add the possibility to limit the Content-Length.
Define an appropriate request size limit and reject requests exceeding
the limit (413 Request Entity Too Large)
When the glupload element renegotiates the caps, set_caps will reset the
method_impl to NULL, but the method will be kept. transform_caps then tries
to use the method_impl to transform the caps because a method is set,
and will segfault.
Make rtspconnection a little more strict with regard to RFC 2326.
Make sure that CSeq is in every RTSP message and that CSeq is valid.
Also break the build_next loop if any parsing fails, by acting on
the builder->status code.
video-anc.h:100: Error: GstVideo: identifier not found on the first line:
* Active Format Description (AFD) support
^
video-anc.h:207: Error: GstVideo: identifier not found on the first line:
* Bar data support
^
video-anc.h:228: Warning: GstVideo: "@top_bar_flag" parameter unexpected at this location:
* @top_bar_flag : flag indicating presence of top bar field
^
This is inconsistent with other add_meta methods such as
gst_buffer_add_video_meta, which will return NULL without
logging when gst_video_info_set_format fails.
It is up to the caller to check the return value of the
function, and log if appropriate.
It's invalid to have 'interlace-mode=alternate' without the Interlaced caps
feature as well.
Modify gst_video_info_from_caps() to reject such cases so we can easily
spot them in buggy elements.
gst_gl_memory_setup_buffer() was marked as introspectable=0
anyway, so might just as well mark it as '(skip)' and suppress
the warning. Reason is the (element-type gpointer) on wrapped_data.
gstglmemory.c:1426: Warning: GstGL: gst_gl_memory_setup_buffer: argument wrapped_data: Missing (element-type) annotation
gstglmemory.c:1426: Warning: GstGL: gst_gl_memory_setup_buffer: argument wrapped_data: Missing (element-type) annotation
egl/gstegl.h:40: Warning: GstGL: symbol='EGL_EGLEXT_PROTOTYPES': Unknown namespace for symbol 'EGL_EGLEXT_PROTOTYPES'
gstaudiometa.c:382: Warning: GstAudio: gst_buffer_add_audio_meta: return value: Invalid non-constant return of bare structure or union; register as boxed type or (skip)
The function rtcp_packet_min_length() returns a length for each known type
and -1 for unknown types. This change fixes the test accordingly and silences
the following warning.
gstrtcpbuffer.c:567:12: error: comparison of constant -1 with expression of type 'GstRTCPType' is always false
[-Werror,-Wtautological-constant-out-of-range-compare]
if (type == -1)
Fix the following warnings by adding casts.
gstdiscoverer.c:1801:17: error: format specifies type 'unsigned long' but the argument has type 'off_t' (aka 'long long') [-Werror,-Wformat]
location, file_status.st_size, file_status.st_mtime);
^~~~~~~~~~~~~~~~~~~
gstdiscoverer.c:1801:38: error: format specifies type 'long long' but the argument has type '__darwin_time_t' (aka 'long') [-Werror,-Wformat]
location, file_status.st_size, file_status.st_mtime);
https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/570
Before a gap event is pushed downstream a segment event must be pushed
since the gap event can cause packet concealment downstream and hence
data flow. Since concealment before receiving any data packets usually
doesn't make any sense, the gap event is not sent downstream.
Alternatively one could generate a default caps and segment event, but
no need to complicate things until it's proven necessary.
https://bugzilla.gnome.org/show_bug.cgi?id=773104
https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/301
The former code allowed an attacker to create a heap overflow by
sending a longer than allowed session id in a response and including a
semicolon to change the maximum length. With this change, the parser
will never go beyond 512 bytes.
Using a single condition variable for synchronization across all GL
messages is very slow on Windows and uses up to 20% CPU usage in some
workloads due to lock contention and false broadcasts.
Using per-message event handles reduces the CPU usage to negligible
amounts despite having to allocate a new event handle for each
message.
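A minimal sketch of the per-message pattern (Win32-only, with illustrative
helper names, not the actual gstgl code):
  typedef struct {
    void (*func) (gpointer data);
    gpointer data;
    HANDLE done;                          /* per-message auto-reset event */
  } Message;

  /* sender side */
  msg->done = CreateEvent (NULL, FALSE, FALSE, NULL);
  queue_push (gl_thread_queue, msg);      /* hypothetical hand-off to the GL thread */
  WaitForSingleObject (msg->done, INFINITE);
  CloseHandle (msg->done);

  /* GL thread side */
  msg->func (msg->data);
  SetEvent (msg->done);                   /* wakes only the one waiting sender */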
Implement the prepare and check functions according to the
documentation by returning TRUE when events should be dispatched
via the dispatch function.
As wl_display_read_events never blocks we can call it unconditionally
without looking at the poll status.
This simplifies the implementation and gets rid of a race where the
mainloop could get blocked due to nobody actually reading the events
from the wayland connection.
The ->skip_buffer implementation in videoaggregator replicates
the behaviour of the aggregate method to determine whether a
buffer can be skipped
(https://bugzilla.gnome.org/show_bug.cgi?id=781928).
This fixes a typo that made it so the start time of the buffer
was calculated against the output segment, not the segment of
the relevant sinkpad, which caused buffers to be skipped when
for example a sinkpad had received a segment whose base had
been modified by a pad offset somewhere along the way.
This simply makes the calculation of the buffer start time
identical to the calculation in aggregate()
Doing so involves retrieving the current viewport from OpenGL which, as
with any glGet operation, is expensive.
This means that the various sinks need to reset the viewport on draw.
In the process, fix resizing on cocoa.
If we only ever make it to READY, transform_caps can create an
internal convert object that will never be freed by basetransform's
stop vmethod (PAUSED->READY).
This allows us to output audio samples without discarding
any input frames, which is useful for some formats/codecs
(e.g. the MonkeysAudio decoder implementation in ffmpeg
which might return e.g. 16 output buffers for an
input buffer for certain files).
In the past decoder implementations just concatenated
the returned audio buffers until a full frame had been
decoded, but that's no longer possible to do efficiently
when the decoder returns audio samples in non-interleaved
layout.
Allowing subframes to be output before the entire input
frame is decoded can also be useful to decrease startup
latency/delay.
https://gitlab.freedesktop.org/gstreamer/gst-libav/issues/49
The use of mediump as a precision specifier in GLSL shaders has limited
resolution, and when used for texture coordinates it may become inaccurate
for texture sizes over 1024.
The function fill_bytes could sometimes return a value greater than zero
and at the same time set the GError.
The function read_bytes calls fill_bytes in a while loop. In the special
case above it would call fill_bytes with the error already set,
resulting in "GError set over the top of a previous GError".
Solve this by clearing the GError when the return value is greater than zero.
The caller of read_bytes takes actions depending on the error type, e.g.
with EWOULDBLOCK (GST_RTSP_EINTR) gst_rtsp_source_dispatch_read will try
to read the missing bytes again.
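A sketch of the fix (variable names are illustrative, not the exact
rtspconnection code):
  res = fill_bytes (conn, buffer + offset, size - offset, block, &err);
  if (res > 0)
    g_clear_error (&err);   /* partial success: don't stack a GError on the next call */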
https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/445
gst_video_decoder_negotiate_default_caps() is meant to pick a default output
format when we need one earlier because of an incoming GAP.
It tries to use the input caps as a base if available and falls back to a
default format (I420 1280x720@30) for the missing fields.
But the framerate and pixel-aspect-ratio were not explicitly passed to
gst_video_decoder_set_output_state(), which relies solely on the input format
as reference to get the framerate and pixel-aspect-ratio.
So there is no need to manually handle those two fields as
gst_video_decoder_set_output_state() will already use the ones from
upstream if available, and they will be ignored anyway if they are not.
This also prevents confusing debugging output where we claim to use a
specific framerate while actually none was set.
gstrtspconnection.c: In function ‘writev_bytes’:
gstrtspconnection.c:1348:10: error: ‘res’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
return res;
^
When using multichannel audio data and the channels need to be reordered,
audio data is not copied correctly because the destination address of
memcpy is wrong.
For example, the following command
$ gst-launch-1.0 pulsesrc ! audio/x-raw,channels=6,format=S16LE ! filesink location=test.raw
will reproduce this issue if there is 6-ch audio input device.
This commit fixes that.
The detailed process of this issue is as follows:
1. gst-launch-1.0 calls gst_pulsesrc_prepare (gst-plugins-good/ext/pulse/pulsesrc.c)
1466 gst_pulsesrc_prepare (GstAudioSrc * asrc, GstAudioRingBufferSpec * spec)
1467 {
(skip...)
1480 {
1481 GstAudioRingBufferSpec s = *spec;
1482 const pa_channel_map *m;
1483
1484 m = pa_stream_get_channel_map (pulsesrc->stream);
1485 gst_pulse_channel_map_to_gst (m, &s);
1486 gst_audio_ring_buffer_set_channel_positions (GST_AUDIO_BASE_SRC
1487 (pulsesrc)->ringbuffer, s.info.position);
1488 }
In my environment, after line 1485 is processed, position of spec and s are
spec->info.position[0] = 0
spec->info.position[1] = 1
spec->info.position[2] = 2
spec->info.position[3] = 6
spec->info.position[4] = 7
spec->info.position[5] = 8
s.info.position[0] = 0
s.info.position[1] = 6
s.info.position[2] = 2
s.info.position[3] = 1
s.info.position[4] = 7
s.info.position[5] = 8
The values of spec->info.positions equal
GST_AUDIO_BASE_SRC(pulsesrc)->ringbuffer->spec->info.positions.
2. gst_audio_ring_buffer_set_channel_positions calls
gst_audio_get_channel_reorder_map.
3. Arguments of gst_audio_get_channel_reorder_map are
from = s.info.position
to = GST_AUDIO_BASE_SRC(pulsesrc)->ringbuffer->spec->info.positions
At the end of this function, reorder_map is set to
reorder_map[0] = 0
reorder_map[1] = 3
reorder_map[2] = 2
reorder_map[3] = 1
reorder_map[4] = 4
reorder_map[5] = 5
4. Go back to gst_audio_ring_buffer_set_channel_positions and
2065 buf->need_reorder = TRUE;
is processed.
5. Finally, in gst_audio_ring_buffer_read,
1821 if (need_reorder) {
(skip...)
1829 memcpy (data + i * bpf + reorder_map[j] * bps, ptr + j * bps, bps);
is processed and causes this issue.
Otherwise we would return EOF if nothing was written in any case, even
if this was actually a case of TIMEOUT or EWOULDBLOCK for example.
Thanks to Edward Hervey for debugging and finding this issue.
Fixes 2 problems:
1) The number of unmapped memories does not always match the number of mapped
ones in dispatch_write().
2) When dispatch_write() is dispatched a second time after an incomplete write,
already set offsets will not be taken into account, thus corrupt RTP data will
be sent.
This makes it unnecessary for callers to first merge together all
memories, and it allows API like GstRTSPConnection to write them out
without first copying all memories together or using writev()-style API
to write multiple memories out in one go.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/370
g_source_remove() works only for a GSource which was attached
to default GMainContext, but the GSource might be attached to
custom context depending on how gst_discoverer_start() was called.
Whatever the attached context was, g_source_destroy() can clean it up.
Make consistent with what autotools puts into enabled_gl_apis
variable. Autotools puts 'gl' in there instead of 'opengl'.
This would cause problems when building -bad glmixers plugin
in meson against a -base that was built with autotools.
See https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/871
Otherwise surface_width/surface_height stored in GstGLWindowPrivate
isn't changed; sometimes an unnecessary reconfigure event is then sent on
the sinkpad, resulting in upstream reconfiguring.
Example pipeline:
gst-launch-1.0 videotestsrc ! msdkvpp ! glimagesink
The start_time and end_time in this context have already
been adjusted for the input's rate by converting them to running
time above. What is needed afterwards is to compare these
with the output's start/stop running time, which also takes
into account the rate, so we are comparing equal things.
Multiplying these with the output's rate here is only breaking
this logic. In most cases the input and output rate is the same,
so this multiplication effectively reverses the rate adjustment
that happened while converting to running time, which is why
we see the video playing with the original rate in tests.
Fixes #541
Binding the vertex array to 0 will unbind everything else already.
In the previous order older versions of the Intel GL driver caused
errors to be printed for every single call when disabling the vertex
attrib arrays after binding the vertex array to 0.
We make an allocator for temporary lines and then use this for all
the steps in the conversion that can do in-place processing.
Keep track of the number of lines each step needs and use this to
allocate the right number of lines.
Previously we would not always allocate enough lines and we would
end up with conversion errors as lines would be reused prematurely.
Fixes #350
ISO 14496-3 defines that audioObjectType 5 is a special case that
indicates SBR is present and that an additional field has to be
parsed to find the true audioObjectType.
There are two ways of signaling SBR within an AAC stream - implicit
and explicit (see [1] section 4.2). When explicit signaling is used,
the presence of SBR data is signaled by means of the SBR
audioObjectType in the AudioSpecificConfig data.
Normally the sample rate is specified by an index into a
table of common sample rates. However index 0x0f is a special case
that indicates that the next 24 bits contain the real sample rate.
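A rough sketch of the two escape cases, using a hypothetical bit-reader
helper rather than the actual parser code:
  guint aot = read_bits (br, 5);                     /* audioObjectType */
  guint rate_idx = read_bits (br, 4);
  guint rate = (rate_idx == 0x0f) ? read_bits (br, 24)
                                  : sample_rate_table[rate_idx];
  guint channels = read_bits (br, 4);

  if (aot == 5) {
    /* explicit SBR signalling: extension sample rate, then the real AOT */
    guint ext_idx = read_bits (br, 4);
    guint ext_rate = (ext_idx == 0x0f) ? read_bits (br, 24)
                                       : sample_rate_table[ext_idx];
    aot = read_bits (br, 5);
  }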
[1] https://www.telosalliance.com/support/A-closer-look-into-MPEG-4-High-Efficiency-AAC
Fixes #39
Checking the address distance between the given begin/end sequences
doesn't make sense; they are output parameters.
This fixes a weird failure of libs_rtp on Windows.
Code in g_return_*() must not have side effects, as it
might be compiled out if -DG_DISABLE_CHECKS is used, in
which case we would read garbage off the stack.
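An illustrative example of the pattern being avoided (not the actual code in
question):
  /* Bad: the read disappears with -DG_DISABLE_CHECKS, leaving 'val' uninitialized */
  g_return_val_if_fail (gst_byte_reader_get_uint8 (&br, &val), FALSE);

  /* Good: perform the side effect unconditionally, then check the result */
  ok = gst_byte_reader_get_uint8 (&br, &val);
  g_return_val_if_fail (ok, FALSE);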
It breaks all the calculations. While it can make sense during
initialization, there's very little API that can be called with such
timecodes without ending up with wrong results.
The old API would only assert or return an invalid timecode; the new API
returns a boolean or NULL. We can't change the existing API
unfortunately, but we can at least deprecate it.
audioconvert's passthrough status can no longer be determined
strictly from input / output caps equality, as a mix-matrix can
now be specified.
We now call gst_base_transform_set_passthrough dynamically, based
on the return from the new gst_audio_converter_is_passthrough()
API, which takes the mix matrix into account.
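A sketch of the dynamic decision (the surrounding variable names are
illustrative):
  gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (audioconvert),
      gst_audio_converter_is_passthrough (audioconvert->convert));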
The presence of the `key-mgmt` attribute will set the mikey appropriately.
We therefore don't need to check the return value (which will
be overwritten afterwards).
CEA608_IN_CEA708_RAW is the same format as CEA708_RAW. Its only
difference is that it must contain only CEA608, and a format like this
does not exist in practice. In practice every element that handles raw
cc_data triplets must check each triplet for their actual content and
handle them accordingly.
For CC-only streams a parser could signal the existence of CEA608 and/or
CEA708 inside the caps but for metas this can only potentially be
signalled via the ALLOCATION query for negotiation purposes.
A separate format for this is not very useful and instead it should be a
format qualifier.
CEA608_S334_1A is the format defined by SMPTE S334-1 Annex A and which
is used for transferring CEA608 over SDI instead of CEA708 CDP packets.
According to RFC3611, the extended report blocks in XR packet can
have variable length. To visit each block, the iterator should look
into block header. Once XR type is extracted, users can parse the
detailed information by given functions.
Loss/Duplicate RLE
The Loss RLE and the Duplicate RLE have the same format, so
they can share parsers. For the unit test, a randomly generated
pseudo packet is used.
Packet Receipt Times
The packet receipt times report block has a list of receipt
times which are in [begin_seq, end_seq).
Receiver Reference Time parser for XR packets
The receiver reference time has an ntptime, which is a 64-bit type.
DLRR
The DLRR report block consists of sub-blocks which have an ssrc, last RR,
and delay since last RR. The number of sub-blocks should be calculated
from the block length.
Statistics Summary
The Statistics Summary report block provides fixed length
information.
VoIP Metrics
VoIP Metrics consists of several metrics, even though they are in
a single report block. Data retrieving functions are added per metric.
https://bugzilla.gnome.org/show_bug.cgi?id=789822
Non-direct dmabuf uploads, just as direct dmabuf uploads, create EGL
images and thus GL textures of the same width as the imported image.
The input dmabuf line stride is not relevant to the resulting texture
in both cases.
This fixes the case where non-direct uploads of input dmabufs with line
stride larger than the width will for example cause glcolorconvert to
sample only the left part (width * bytes per pixel / stride) of the
image, causing a horizontally stretched and cropped output image.
Pull in video frame fields into local variables. Without this the
compiler must assume that they could've changed on every use and read
them from memory again.
This reduces the inner loop from 6 memory reads per pixels to 4, and the
number of writes stays at 3.
gst_rtsp_connection_send() adds the Authorization header to the request.
If this function is being called multiple times with the same request
it will add one more Authorization header every time.
To fix this issue, do not append a new Authorization header on
top of existing ones. Remove any existing Authorization headers first
and then add the new one.
Fixes gst-plugins-good#425
If multiple DRM connectors are connected, currently the first one is
picked. Improve this by adding an environment variable that allows for
choosing a connector by name. The connector names have been made
compatible with the modetest/modeprint DRM utilities.
Related to #490
* List all connectors, modes, and encoders, even after picking one
* Add missing DRM_MODE_CONNECTOR_DPI string for logging and improve
existing strings
* Make sure the names match modetest/modeprint from the DRM utilities
Related to #490
If we use the main loop it might happen that the caller (e.g. our unit
test) already shut down the loop once the result was received and in
that case the pipeline would never ever be shut down (and our unit test
would hang).
This adds a few missing gst_object_unref calls for the opengl context in
gstglwindow_gbm_egl.c, as well as the missing close call for the
drm node fd in gst_gl_display_gbm_finalize.
While this creates a circular reference between the pipeline and the
context, this ensures that the context stays alive for as long as any
callbacks could be called on it. The circular reference is broken once
the conversion is finished (or error, or timeout), which will then cause
everything to be freed.
Previously it was possible that a callback could be called on the
context right after it was freed already.
Also use only a single context structure, the second structure does not
simplify anything and duplicates storage.
This patch adds API to the audio decoder base class for setting arbitrary
caps on the source pad. Previously only caps converted from audio info were
possible. This is particularly useful when a subclass wants to set caps
features for an audio decoder producing metadata.
The Y210 format was added in the middle of the formats enum and list,
introducing an ABI break.
This issue was detected thanks to the gstreamer-rs test harness.
We assume here the same data format for the user data as for the
DID/SDID: 10 bits with parity in the upper 2 bits. In theory some
standards could define this differently and even have full 10 bits of
user data but there does not seem to be a single such standard after
all these years.
Currently in Python it would become a signed 64 bit value but should
actually be an unsigned 32 bit value with all bits set.
This is the same problem as with GST_MESSAGE_TYPE_ANY.
See https://bugzilla.gnome.org/show_bug.cgi?id=732633
Rather than letting gst_gl_memory_setup_buffer guess the GL format used
for an eglimage after importing a dmabuf, be explicit about it. This
fixes issues where dmabuf import may have used another format than
gst_gl_format_from_video_info would guess on the basis of the available
GL extensions.
In particular, on etnaviv gst_gl_format_from_video_info would
assume a luminance + alpha GL format is used for YUY2, but the dmabuf
import will always use RG88, which causes images to end up somewhat pink
when displayed on the screen.
When importing an EGL image from a dmabuf, gst_gl_format_from_video_info
was used to work out what the resulting GL format will be. Unfortunately that
will only work if the conventional format and the chosen DRM fourcc for
the format match up.
On etnaviv platforms there is no support for GL_EXT_texture_rg, so the
GL format chosen for YUY2 ends up being GST_GL_LUMINANCE_ALPHA. However
DRM does not do luminance + alpha as it's a legacy GL thing, so the
dmabuf import ends up using DRM_FORMAT_GR88.
To fix this, tie the DRM_FORMAT and the GL format together so they
always match up.
There is new code that ensures that we renegotiate after an
uploader transition if the negotiated caps have changed.
The problem is that the raw uploader will not really try to
fixate the input caps, but instead return a subset with
only the supported target texture.
This had two effects: raw uploads were always renegotiated
once, and the raw upload unit test was now failing as it didn't
expect a renegotiation.
As this is a valid case, simply relax the gst_caps_is_equal() check
and use gst_caps_is_subset() instead.
https://bugzilla.gnome.org/show_bug.cgi?id=783521
The direct dmabuf upload does color conversion, so when it transforms
the caps, it replaces the format with all formats found through the
format query. When this uploader can't be used, it makes the upstream
source pick an unsupported format.
To fix this, we only append the caps with a list of formats. So the
source will only pick one of these formats if the downstream preferred
format is not supported. A negotiation failure after this would be
normal.
This fixes pipelines without a glcolorconvert element.
https://bugzilla.gnome.org/show_bug.cgi?id=783521
The idea is that some GPUs (like the Vivante series) can actually
perform the YUV->RGB conversion internally, so no custom conversion
shaders are needed. To make use of this feature, we need an additional
uploader that can import DMABUF FDs and also directly pass the pixel
format, relying on the GPU to do the conversion.
Based on patches from Nicolas Dufresne <nicolas.dufresne@collabora.com> and
Carlos Rafael Giani <dv@pseudoterminal.org>.
https://bugzilla.gnome.org/show_bug.cgi?id=783521
If an upload method is selected then use it exclusively in transform_caps().
Also, reconfigure if the current caps don't match the current upload
method.
https://bugzilla.gnome.org/show_bug.cgi?id=783521
This should not be necessary, but currently not all plugins that provide
dmabuf memory announce this with caps features, e.g. v4l2.
The static caps already contain the system memory. It didn't break before
because other upload methods provide the necessary transformation.
https://bugzilla.gnome.org/show_bug.cgi?id=783521
Reconfigure will trigger a set_caps which clears the upload method.
Remember the method in this case and start with it.
Wrap around once to try all methods if necessary.
https://bugzilla.gnome.org/show_bug.cgi?id=783521
The colorspace conversion happens during the upload so the necessary hints
must be provided to ensure that the conversion works correctly.
At least the Mesa Intel driver will create a texture without error but
produces an incorrect result. Use eglQueryDmaBufModifiersEXT() to check if
non-external upload is supported for the given format.
Based on a patch from Carlos Rafael Giani <dv@pseudoterminal.org>.
https://bugzilla.gnome.org/show_bug.cgi?id=783521
gst_gl_memory_setup_buffer() was not properly using the number
of pointers to wrap. This also fixes the validation, as we
only support 1 wrapper per view, or num_planes * views wrappers.
https://bugzilla.gnome.org/show_bug.cgi?id=783521
rtsp_connection_send takes care of adding those already,
and some reverse proxies such as nginx will reject the request
altogether if the Authorization header is present twice,
even with the same value.
https://bugzilla.gnome.org/show_bug.cgi?id=797272
Add a source-info property that will read/write meta to the buffers
about RTP source information. The GstRTPSourceMeta can be used to
transport information about the origin of a buffer, e.g. the sources
that are included in a mixed audio buffer.
A new function gst_rtp_base_payload_allocate_output_buffer() is added
for payloaders to use to allocate the output RTP buffer with the correct
number of CSRCs according to the meta and fill it.
RTPSourceMeta does not make sense on RTP buffers since the information
is in the RTP header. So the payloader will strip the meta from the
output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=761947
Using the correct blend modes for each case, or converting to
premultiplied in the very unlikely case that separate blend modes are
unavailable on ancient OpenGL hardware.
It was only checking for GST_IS_CAPS, which would fail if the new
restriction caps was NULL, even though the documentation says NULL is
accepted as valid input.