With commit "basepayload: Expose onvif-no-rate-control property" the rtp
timestamp changed behaviour when rate control is disabled.
When disabling rate control, we must take care of the stream time to
avoid the timestamps to begin from zero again.
This simply implies not trying to "prepare" those buffers,
as mapping an empty buffer to a video frame does not make
much sense.
This also adds a simple test in compositor that performs
some trivial checking of the handling of gap events. The test
was not failing before, but an error was logged; this is
no longer the case.
Fixes #717
If there's no known value in the best caps then the functions to convert
them to strings will return NULL. Having the fields not in the caps is
not a problem; having them with a NULL value, however, will cause
negotiation failures.
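For illustration only, the defensive pattern is roughly the following (the
interlace-mode field and its converter are just examples, not necessarily the
fields touched here):

  const gchar *mode_str = gst_video_interlace_mode_to_string (mode);

  /* Only add the field when the value is known; a NULL string here
   * would make negotiation fail. */
  if (mode_str != NULL)
    gst_caps_set_simple (caps, "interlace-mode", G_TYPE_STRING, mode_str, NULL);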
Save the push/push_list helper flow return and, in case of failure, return it
in the process function. This allows forwarding the downstream flow return
even if the subclass is using the push/push_list helper.
In prepare_frame, the target info (conversion_info) being unchanged is
not enough to decide that the converter does not need updating, as the
vpad info may have changed as well.
Fixes #714
Posting any message to the parent window seems to be pointless and might
even break the parent window.
Regardless of the posting, the parent window can catch mouse events,
and any keyboard events will be handled by the parent window by default.
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/634
This marker is optional; its name refers to the RTP marker bit. This marker
can be used to reduce latency in various use cases. With the split between
finish_frame() and finish_subframe() we will now be able to identify
the last subframe with no latency.
In order to detail the use of GST_BUFFER_FLAG_MARKER in a video
use case, the flag GST_VIDEO_BUFFER_FLAG_MARKER has been introduced
with proper documentation clarifying the marker's role.
Introduce a new API so encoders can split the encoding in subframes.
This can be useful to reduce the overall latency as we no longer need to
wait for the full frame to be encoded to start decoding or sending it.
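A rough sketch of the subclass side (assuming the new call is
gst_video_encoder_finish_subframe(); the loop and variable names are
illustrative only):

  for (i = 0; i < n_subframes; i++) {
    frame->output_buffer = subframe_buffers[i];

    if (i == n_subframes - 1) {
      /* last subframe: mark it and finish the whole frame */
      GST_BUFFER_FLAG_SET (frame->output_buffer, GST_VIDEO_BUFFER_FLAG_MARKER);
      ret = gst_video_encoder_finish_frame (encoder, frame);
    } else {
      ret = gst_video_encoder_finish_subframe (encoder, frame);
    }

    if (ret != GST_FLOW_OK)
      break;
  }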
The matrices were in the wrong order.
Instead of the conversion matrix being
  XYZ_TO_RGB_output * RGB_TO_XYZ_input * input_RGB
it was
  RGB_TO_XYZ_input * XYZ_TO_RGB_output * input_RGB
This function might be revisited with a different channel position mapping
while the audio source goes into play, so the reorder flag needs to be reset
before the checks happen.
Instead, initialize the map infos, etc. to NULL like gst_buffer_map()
would do on a zero-sized buffer.
This fixes a crash in audioresample if the first output buffer would
contain zero samples.
There was a typo in the extension name which resulted in the modifiers
never being set when doing DMABuf import. That triggered the modifiers
lookup in the Intel driver, which was in fact hiding bugs in the gldownload
to glupload path when doing DMABuf.
Note, this change breaks the following pipeline on Intel and
some other drivers:
gltestsrc ! gldownload ! video/x-raw\(memory:DMABuf\) ! glimagesink
A fix for this was added to Mesa recently:
Related to https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1338
Fixes 5d0e191710
We don't support modifiers and that would result in a bad image being
displayed. Note that this was fixed recently in Mesa MR 1138; prior to
that, the reported modifier was always 0, which makes this change a
no-op.
Fixes #441
Related to https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1338
It's either this or replacing all the object lock usage in gldisplay
with a recursive mutex, which is not backwards compatible.
The failure case is effectively:
1. The user has locked the display object lock
2. a glcontext loses its last ref and attempts to quit the window
3. gst_gl_window_quit() attempts to remove the window from the display
4. gst_gl_display_remove_window attempts to take the display object lock
The only concern with changing the locking for the window list in the
display is that gst_gl_display_create_window() has documentation requiring
the object lock to be held which must continue to work correctly.
Returning a transfer-none value for a value guarded by a lock is not
thread-safe, as the reference could disappear before the caller can take
its own reference.
Following the [design document], encodebin needs to handle sources that
output multiple streams. For that purpose, and to make it simpler,
we ensure that a single segment is output to the encoders by using
an `identity single-segment=true` at the beginning of the stream chains.
Added API to enable or disable the use of that new feature.
Added support in the encoding profile parser for that new property,
keeping backward compatibility.
[design document]: https://gstreamer.freedesktop.org/documentation/additional/design/encoding.html?gi-language=c#rendering-timelines
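As a sketch, enabling the behaviour from code could look like this (assuming
the new setter is gst_encoding_profile_set_single_segment()):

  /* profile and encodebin obtained elsewhere */
  gst_encoding_profile_set_single_segment (profile, TRUE);
  g_object_set (encodebin, "profile", profile, NULL);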
I'm going to use this new API in gst-omx so an encoder can request
v4l2src to produce buffers matching the encoder stride and slice heights,
preventing copies of incoming buffers.
Especially for interlaced input make sure to
a) never mix both fields
b) never read lines after the end of the input frame
c) allocate enough space in the temporary lines to not write outside
the allocated memory area
This fixes various memory corruptions and rescaling artefacts.
Until now, we only posted QoS messages when frame_drop() was
called, but not in finish_frame() when QoS triggered a late push.
This should fix applications that try to account for the dropped
frames. We also emit a warning on drops so it's clearer what is
happening.
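Applications can then pick these up from the bus; a rough sketch inside a
bus message handler:

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_QOS) {
    guint64 processed = 0, dropped = 0;

    /* accumulate how many frames were processed vs. dropped */
    gst_message_parse_qos_stats (msg, NULL, &processed, &dropped);
    g_print ("processed %" G_GUINT64_FORMAT ", dropped %" G_GUINT64_FORMAT "\n",
        processed, dropped);
  }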
By adding this field, buffer producers can now explicitly set the exact
geometry of planes, allowing users to easily know the padded size and
height of each plane.
GstVideoMeta is always heap allocated by GStreamer itself so we can
safely extend it.
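A producer-side sketch (assuming the accompanying helpers are
gst_video_meta_set_alignment() and gst_video_meta_get_plane_size()):

  GstVideoMeta *meta = gst_buffer_get_video_meta (buffer);
  GstVideoAlignment align;
  gsize plane_size[GST_VIDEO_MAX_PLANES];

  gst_video_alignment_reset (&align);
  align.padding_bottom = 16;   /* e.g. height padded up to a macroblock */

  /* record the exact plane geometry on the meta ... */
  gst_video_meta_set_alignment (meta, align);

  /* ... so consumers can query the padded size of each plane */
  gst_video_meta_get_plane_size (meta, plane_size);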
When using gst_video_info_align() the user had no easy way to retrieve the
padded size and height of each plane.
This can easily be implemented in fill_planes() as it's already called
in align() with the padded height.
Ideally we'd add a plane_size field to GstVideoInfo but the remaining
padding is too small so that would be an ABI break.
Fix #618
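For illustration, retrieving them could look like this (assuming the new
variant is gst_video_info_align_full()):

  GstVideoInfo info;
  GstVideoAlignment align;
  gsize plane_size[GST_VIDEO_MAX_PLANES];

  gst_video_info_set_format (&info, GST_VIDEO_FORMAT_NV12, 320, 240);
  gst_video_alignment_reset (&align);
  align.padding_bottom = 16;

  /* same as gst_video_info_align() but also returns each plane's padded size */
  if (gst_video_info_align_full (&info, &align, plane_size))
    g_print ("plane 0 size: %" G_GSIZE_FORMAT "\n", plane_size[0]);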
We want to round up when halving the height.
I do have a test for this but it relies on my new video-align tests so
it's part of the next commit. Recording the fix separately in case we want to
backport this fix to the stable branch.
Similar to gst_video_info_from_caps(), which allows encoded video formats,
don't error out in gst_audio_info_from_caps() with an encoded audio format.
Because gst_audio_info_set_format() supports encoded format, current
behavior does not seem to be consistent.
We need to provide twice as many lines as usual to the scaling function
as every second line would be skipped.
Without this we read from random memory and produce colorful output and
crashes.
Without this, scaling e.g. interlaced UYVY causes corrupted output with
lines as follows: f1 f1 f2 f2, i.e. two lines of each field and only
then the other field.
The watch->messages_bytes is not decreased when the write operation
from the backlog is only partly successful.
This commit decreases the watch->messages_bytes for the successfully
sent messages.
Fixes #679
Y210 is a 10-bit YUY2, so we may re-use the YUY2 shaders but the GL format
is set to RG16.
Sample pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,format=Y210 ! glimagesink
NV16/NV61 is basically the same as NV12/NV21 with a higher chroma resolution.
Since only the size of the UV plane/texture is different, the same shaders are used as for NV12/NV21.
This is done by reusing `gst_gl_memory_setup_buffer`, avoiding
duplicated code.
Without a VideoMeta, mapping those buffers leads to GstBuffer mapping the
buffer in system memory even when specifying the GL flags (through the
buffer merging mechanism), making the result totally broken.
The newly exposed vmethods are pause, resume, stop and clear_all.
The existing reset vmethod is deprecated.
The audio sink will fall back to calling reset if pause or stop
are not provided, and will fall back to calling start if
resume is not provided. There is no default clear_all
implementation.
Existing audio sinks continue to work as before.
This change is useful for sinks that need to distinguish
between a pause and a stop (currently both are handled
by a reset) and is needed for https://bugzilla.gnome.org/show_bug.cgi?id=788362
and https://bugzilla.gnome.org/show_bug.cgi?id=788361
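A skeleton of how a sink could opt into the new callbacks (the
my_audio_sink_* names are placeholders, implemented elsewhere):

  static void
  my_audio_sink_class_init (MyAudioSinkClass * klass)
  {
    GstAudioSinkClass *sink_class = GST_AUDIO_SINK_CLASS (klass);

    /* distinct callbacks instead of the single, now deprecated, reset */
    sink_class->pause = my_audio_sink_pause;
    sink_class->resume = my_audio_sink_resume;
    sink_class->stop = my_audio_sink_stop;
  }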
Universal Windows Platform apps are not allowed to use LoadLibrary to
load arbitrary DLLs from the filesystem. They can only use
LoadPackagedLibrary to load DLLs that have been packaged with the app
as assets.
See also: https://gitlab.freedesktop.org/gstreamer/gstreamer/merge_requests/190
If the remainder is not evenly divisible by 4, we'd miss the check
for zero and continue the loop until crashing. Change the branch
to take into account negatives as well.
This more closely matches the SSE loop.
We are using ARC to clean up after ourselves.
../gst-libs/gst/gl/cocoa/gstglwindow_cocoa.m:159:20: error: unused variable 'queue' [-Werror,-Wunused-variable]
dispatch_queue_t queue = (__bridge_transfer dispatch_queue_t) window->priv->gl_queue;
^
Matroskademux will send a gap event when the lag between video and audio is over 3 seconds.
audiodecoder needs to handle the gap event and set default output caps.
Only the audio info was set, while the output caps were ignored, which caused an assertion failure.
We need to fill the output caps in gst_audio_decoder_negotiate_default_caps() with the
negotiated caps to avoid a critical warning being printed when they are checked later.
This is needed for using GstGL with ANGLE as the GLES implementation
in Universal Windows Platform apps that use the Windows Runtime
(WinRT) instead of Win32, which is deprecated and not allowed in
Windows Store apps.
This has been tested with Servo on the Microsoft HoloLens 2, and seems
to work quite well.
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
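For illustration (the signal name and parameter are made up), the
registration then simply passes NULL where the marshaller used to go:

  signals[SIG_SAMPLE] =
      g_signal_new ("sample", G_TYPE_FROM_CLASS (klass),
          G_SIGNAL_RUN_LAST, 0, NULL, NULL,
          NULL /* GLib picks the marshaller and its va variant */,
          G_TYPE_NONE, 1, GST_TYPE_BUFFER);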
Simple addition for supporting the EXT_platform_device display type.
It's a special kind of display type (part of the EGL specification)
which has no window at all.
To use EGLDevice explicitly, set the environment variable GST_GL_WINDOW=egl-device
See also https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_platform_device.txt
* Fix typo
s/nunormalized/normalized/g
* Update GstVideoMasteringDisplayInfo description
The values are not arrays.
* Add missing newline between arguments description and
detailed comment.
The gltestsrc element was refactored to inherit from this base class which
handles the GL context. The sub-class only needs to implement the gl_start,
gl_stop and fill_gl_memory vfuncs, along with properly advertising the GL APIs
it supports through the supported_gl_api GstGLBaseSrc class attribute.
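A subclass skeleton might look roughly like this (the my_gl_src_* names are
placeholders, implemented elsewhere):

  static void
  my_gl_src_class_init (MyGLSrcClass * klass)
  {
    GstGLBaseSrcClass *base_class = (GstGLBaseSrcClass *) klass;

    base_class->supported_gl_api = GST_GL_API_OPENGL3 | GST_GL_API_GLES2;
    base_class->gl_start = my_gl_src_gl_start;
    base_class->gl_stop = my_gl_src_gl_stop;
    base_class->fill_gl_memory = my_gl_src_fill_memory;
  }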
The caps and thus the video info have preference. If the field order is
set in there then it applies to all frames.
This works around issues where the tff field order is only set in the
caps but not additionally in the buffer flags.