It's possible for a buffer to be within the segment proper, but not
within the "valid" part we're playing, which only covers what comes
after the segment's 'offset'. In that case, the running-times of the
buffer start and buffer stop will be GST_CLOCK_TIME_NONE, and we'd
better not schedule playback that far in the future.
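In code the guard looks roughly like this (generic names, not the
exact patch):

  GstClockTime run_start, run_stop;

  run_start = gst_segment_to_running_time (segment, GST_FORMAT_TIME,
      GST_BUFFER_PTS (buf));
  run_stop = gst_segment_to_running_time (segment, GST_FORMAT_TIME,
      GST_BUFFER_PTS (buf) + GST_BUFFER_DURATION (buf));

  if (!GST_CLOCK_TIME_IS_VALID (run_start) ||
      !GST_CLOCK_TIME_IS_VALID (run_stop)) {
    /* inside the segment but outside the part we play: don't try to
     * schedule this at GST_CLOCK_TIME_NONE, i.e. "never" */
  }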
This commit modifies the GstVideoMasteringDisplayInfo and
GstVideoContentLightLevel structs so that each value matches the
hdr_metadata_infoframe struct of the Linux DRM headers and the
DXGI_HDR_METADATA_HDR10 struct on Windows.
Each value is therefore no longer a fraction but a normalized integer
as per the CTA-861.G spec. The unit of each value is also consistent
with the H.264 and H.265 specifications, the hdr_metadata_infoframe
struct on Linux and the DXGI_HDR_METADATA_HDR10 struct on Windows.
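For reference, the resulting layout looks roughly like this (a sketch
based on the description above, not a verbatim copy of the header;
chromaticity coordinates are in increments of 0.00002, luminance uses
the integer units of CTA-861.G / H.265 rather than fractions):

  typedef struct {
    guint16 x;   /* normalized x coordinate, 0.00002 per step */
    guint16 y;   /* normalized y coordinate, 0.00002 per step */
  } GstVideoMasteringDisplayInfoCoordinates;

  struct GstVideoMasteringDisplayInfo {
    GstVideoMasteringDisplayInfoCoordinates display_primaries[3];
    GstVideoMasteringDisplayInfoCoordinates white_point;
    guint32 max_display_mastering_luminance;
    guint32 min_display_mastering_luminance;
  };

  struct GstVideoContentLightLevel {
    guint16 max_content_light_level;         /* MaxCLL, in cd/m^2 */
    guint16 max_frame_average_light_level;   /* MaxFALL, in cd/m^2 */
  };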
[38/1301] Generating GstVideo-1.0.gir with a custom command.
../subprojects/gst-plugins-base/gst-libs/gst/video/gstvideoaggregator.c:231: Error: GstVideo: identifier not found on the first line:
*
^
Because the color value is stored in the MSB, we can reuse the
Y210 code for P012_LE / P012_BE
Sample pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,format=Y212_LE ! glimagesink
This can be used to have compositor display either the background
or a stream on a lower zorder after a live input stream freezes
for a certain amount of time, for example because of network
issues.
gst_gl_window_quit() will attempt to send a message, but it will be
called from GstGLContext's finalize handler, so the weak ref that backs
gst_gl_window_get_context() will return NULL as it has already been
cleared. We need that context in send_message_async to decide whether
to run the provided callback immediately or to queue it in GCD.
Without this fix, it is possible that outbuf is not initialized, which
will result in a segfault when calling gst_buffer_replace (&outbuf, NULL).
In addition, the patch fixes a potential memory leak in the restart path.
The segfault can be reproduced by the pipeline below:
GST_GL_PLATFORM=egl \
gst-launch-1.0 videotestsrc ! msdkh265enc ! msdkh265dec ! \
'video/x-raw(memory:DMABuf)' ! glimagesink
In the situation where the direct dmabuf path is chosen but the texture
format is unsupported, this makes the accept step fail rather than
continue and then fail at the upload stage. It may also be necessary to
reconfigure after falling back from the direct to the non-direct dmabuf
upload path.
The wordlen ("length") MUST represent the total "number of 32-bit words
in the extension, excluding the four-octet extension header" (RFC 3550).
There are cases where already existing padding is reused for adding
the new extension, so the wordlen must be updated whenever the newly
added extension increases it.
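Expressed as a (hypothetical) helper, the relationship is:

  /* total bytes of extension data -> value of the "length" field:
   * round up to a 32-bit boundary and count words, excluding the
   * four-octet extension header itself (RFC 3550, section 5.3.1) */
  static guint16
  ext_data_to_wordlen (gsize ext_data_bytes)
  {
    return (ext_data_bytes + 3) / 4;
  }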
This patch introduces a new API to send and parse mouse scroll events. Mouse
event coordinates are sent relative to the display space of the related output
area. This is usually the size in pixels of the window associated with the
element implementing the GstNavigation interface.
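Presumably usage looks roughly like this (a sketch; "sink", "event" and
the coordinates stand in for whatever the application has at hand):

  /* sending side: forward one vertical scroll step at window
   * coordinates (x, y) to the element implementing GstNavigation */
  gst_navigation_send_mouse_scroll_event (GST_NAVIGATION (sink),
      x, y, 0.0, 1.0);

  /* receiving side: parse the coordinates and deltas back out of
   * the navigation event */
  gdouble px, py, dx, dy;
  if (gst_navigation_event_parse_mouse_scroll_event (event,
          &px, &py, &dx, &dy)) {
    /* handle the scroll */
  }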
This patch introduces a property which, if set to FALSE, prevents RTP
basepayloader from scaling the RTP time when a segment's rate is not
equal to 1.0. The specification is ambiguous on this subject and some
clients expect the timestamps not to be scaled.
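Assuming the property in question is "scale-rtptime" (the name is not
stated in this message), an application would disable the scaling with
something like:

  /* "pay" is any RTP payloader derived from GstRTPBasePayload */
  g_object_set (pay, "scale-rtptime", FALSE, NULL);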
This allows us to remove races when setting the wl_queue on wayland
objects with wl_proxy_set_queue() as each created object is created with
the queue already set.
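One way libwayland-client makes this possible is the proxy wrapper
pattern, roughly (a sketch, with "display" and "queue" assumed to exist
already):

  struct wl_display *display_wrapper;
  struct wl_registry *registry;

  /* every proxy created through the wrapper inherits our queue at
   * creation time, so no events can land on the default queue first */
  display_wrapper = wl_proxy_create_wrapper (display);
  wl_proxy_set_queue ((struct wl_proxy *) display_wrapper, queue);
  registry = wl_display_get_registry (display_wrapper);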
We can also move all our initialization code into the window as we
can retrieve all wayland objects from each window instance. This
removes a possible race when integrating with external APIs, as we
would always attempt to immediately retrieve a small set of wayland objects.
That is no longer the case with the objects from each window instance.
Automatic negotiation of texture-target=external-oes does not work
without separating the external-oes support out of the DirectDmabuf
uploader into a separate DirectDmabufExternal uploader.
gst_gl_upload_transform_caps() is missing a NULL pointer check in case
the current upload method's transform_caps returns a NULL pointer. In
the following loop over all upload methods, NULL pointer return values
are already handled correctly.
Some drivers support directly importing DMA buffers in some formats into
external-oes textures only, for example because the hardware contains
native YUV samplers.
Note that in these cases colorimetry can only be passed as hints and
there is no feedback whether the driver supports the required YUV
encoding matrix and quantization range.
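Concretely, EGL_EXT_image_dma_buf_import only provides hint attributes
for this, e.g. (values picked for illustration, "attribs"/"i" assumed):

  /* the driver is free to ignore these hints and there is no way to
   * query which YUV matrix / range it actually applied */
  attribs[i++] = EGL_YUV_COLOR_SPACE_HINT_EXT;
  attribs[i++] = EGL_ITU_REC709_EXT;
  attribs[i++] = EGL_SAMPLE_RANGE_HINT_EXT;
  attribs[i++] = EGL_YUV_NARROW_RANGE_EXT;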
Allow creating EGL images from DMA buffers in formats that the driver
only supports for the external-oes texture target.
Pass the intended texture target to gst_egl_image_from_dmabuf_direct so
that _gst_egl_image_check_dmabuf_direct can decide whether to create an
EGL image for a format that can only be targeted at external-oes
textures by the driver. Allow creating GstGLMemoryEGL objects from these
DMA buffers.
The GST_VIDEO_BUFFER_FLAG_TOP_FIELD flag is a superset of
GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD, as both are defined in terms of
other flags. As a result, we can't use GST_BUFFER_FLAG_IS_SET() to
check for those flags.
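A sketch of the kind of check that does work (the macro name here is
just for illustration):

  /* since TOP_FIELD contains all the bits of BOTTOM_FIELD, a plain
   * GST_BUFFER_FLAG_IS_SET (buf, GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD)
   * would also be TRUE for a top-field buffer; compare the full
   * mask instead */
  #define BUFFER_IS_BOTTOM_FIELD(buf) \
    ((GST_BUFFER_FLAGS (buf) & GST_VIDEO_BUFFER_FLAG_TOP_FIELD) == \
        GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD)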
The ANGLE_surface_d3d_render_to_back_buffer extension is only available
with Microsoft's fork of ANGLE. Note that Microsoft's ANGLE repository
has been deprecated.
Previously we would simply use the callbacks without any locking at
all, while using the object lock for setting them. Nothing prevented
new callbacks from being set in the meantime, potentially calling a
callback with already-freed user_data.
To prevent this, move the callbacks into a reference-counted struct and
use the appsrc/appsink mutex to protect access to it, which is already
used in all functions calling the callbacks anyway.
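A minimal sketch of that pattern (not the exact code), shown here for
the appsink side:

  typedef struct {
    gint ref_count;
    GstAppSinkCallbacks callbacks;
    gpointer user_data;
    GDestroyNotify notify;
  } Callbacks;

  static Callbacks *
  callbacks_ref (Callbacks * cb)
  {
    g_atomic_int_inc (&cb->ref_count);
    return cb;
  }

  static void
  callbacks_unref (Callbacks * cb)
  {
    if (!g_atomic_int_dec_and_test (&cb->ref_count))
      return;
    if (cb->notify)
      cb->notify (cb->user_data);
    g_free (cb);
  }

  /* callers take the appsink mutex, ref the current Callbacks, drop
   * the mutex and invoke the callback on their own reference, so a
   * concurrent gst_app_sink_set_callbacks() can never free the
   * user_data from under them */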
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/729
By setting the extension ID for TWCC (Transport-Wide Congestion Control),
the payloader will embed sequence numbers as an RTP header extension
according to https://tools.ietf.org/html/draft-holmer-rmcat-transport-wide-cc-extensions-01#section-2
The negotiation of this being enabled with downstream elements
is done with caps, reflecting the way this is communicated using SDP.
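As an illustration of what such caps carry (the extension ID 5 is
arbitrary here, and the field mirrors the SDP a=extmap attribute):

  GstCaps *caps;

  caps = gst_caps_new_simple ("application/x-rtp",
      "extmap-5", G_TYPE_STRING,
      "http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01",
      NULL);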
With the commit "basepayload: Expose onvif-no-rate-control property",
the RTP timestamp behaviour changed when rate control is disabled.
When disabling rate control, we must take the stream time into account
to avoid the timestamps starting from zero again.
This simply implies not trying to "prepare" those buffers,
as mapping an empty buffer to a video frame does not make
much sense.
This also adds a simple test in compositor that performs
some trivial checking of the handling of gap events. The test
was not failing before, but an error was logged; this is
no longer the case.
Fixes #717
If there's no known value in the best caps then the functions to convert
them to strings will return NULL. Having the fields not in the caps is
not a problem, having them with a NULL value however will cause
negotiation failures.
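I.e. the safe pattern is to only set a field when the string conversion
succeeded, e.g. (colorimetry used as an example here):

  gchar *color_str = gst_video_colorimetry_to_string (&cinfo);

  if (color_str != NULL) {
    gst_caps_set_simple (caps, "colorimetry", G_TYPE_STRING,
        color_str, NULL);
    g_free (color_str);
  }
  /* otherwise simply leave the field out, which negotiates fine */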
Save the push/push_list helper flow return and, in case of failure,
return it from the process function. This allows forwarding the
downstream flow return even if the subclass is using the push/push_list
helpers.
In prepare_frame, the target info (conversion_info) being unchanged is
not enough to decide against updating the converter, as the vpad info
may have changed as well.
Fixes #714
Posting any message to the parent window seems pointless and might even
break it.
Regardless of the posting, the parent window can catch mouse events,
and any keyboard events will be handled by the parent window by default.
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/634
This marker is optional; its name refers to the RTP marker bit. This
marker can be used to reduce latency in various use cases. With the
split between finish_frame() and finish_subframe() we are now able to
identify the last subframe with no added latency.
In order to detail the use of GST_BUFFER_FLAG_MARKER in a video
use case, the flag GST_VIDEO_BUFFER_FLAG_MARKER has been introduced
with proper documentation clarifying the marker's role.
Introduce a new API so encoders can split the encoding into subframes.
This can be useful to reduce the overall latency as we no longer need to
wait for the full frame to be encoded to start decoding or sending it.
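A rough sketch of how an encoder could use it (assuming one output
buffer per subframe; this is not taken from a real encoder):

  static GstFlowReturn
  push_encoded_slice (GstVideoEncoder * enc, GstVideoCodecFrame * frame,
      GstBuffer * slice, gboolean is_last)
  {
    frame->output_buffer = slice;

    if (!is_last)
      /* pushes the subframe downstream immediately, the frame
       * itself stays pending */
      return gst_video_encoder_finish_subframe (enc, frame);

    /* the last subframe completes the frame as usual */
    return gst_video_encoder_finish_frame (enc, frame);
  }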
The matrices were in the wrong order.
Instead of the conversion matrix being

  XYZ_TO_RGB_output * RGB_TO_XYZ_input * input_RGB

it was

  RGB_TO_XYZ_input * XYZ_TO_RGB_output * input_RGB
This function might be revisited with a different channel position
mapping while the audio source goes into playing, so the reorder flag
needs to be reset before the checks happen.
Instead, initialize the map infos, etc. to NULL, like gst_buffer_map()
would do on a zero-sized buffer.
This fixes a crash in audioresample if the first output buffer would
contain zero samples.
There was a typo in the extension name which resulted in the modifiers
never being set when doing DMABuf import. That triggered the modifiers
lookup in the Intel driver, which was in fact hiding bugs in the
gldownload to glupload path when doing DMABuf.
Note that this change breaks the following pipeline on Intel and
some other drivers:
gltestsrc ! gldownload ! video/x-raw\(memory:DMABuf\) ! glimagesink
A fix for this was added to Mesa recently:
Related to https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1338
Fixes 5d0e191710