-base plugins will always be found in the build directory, and
core plugins will be found either also via the build directory
(if both core and -base are built as subprojects) or by getting the
pluginsdir via pkg-config if core is installed.
The GST_PLUGIN_LOADING_WHITELIST env var makes sure we only pick up
plugins from core and -base, and -base plugins only from the build
directory.
There is no reason to look for -base plugins in the install dir.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/582>
If two elements on different threads attempt to query each other for an
OpenGL context, the locking may result in a deadlock.
We need to unlock each element's context_lock while querying another
element for the OpenGL context, so that the other element can take the
lock when it in turn queries us for an OpenGL context.
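A minimal sketch of the pattern, with hypothetical names (MyGLElement,
query_peer_for_gl_context() and the context_lock field stand in for the
element-specific code):

```c
#include <gst/gst.h>

/* Hypothetical element state; the real elements keep an equivalent
 * context_lock and cached context in their private structs. */
typedef struct {
  GstElement parent;
  GMutex context_lock;
  GstObject *other_context;     /* stands in for a GstGLContext */
} MyGLElement;

/* Stub standing in for the element-specific peer query (in the real
 * code this sends a context query to the peer element). */
static GstObject *
query_peer_for_gl_context (MyGLElement * self)
{
  (void) self;
  return NULL;
}

static gboolean
ensure_gl_context (MyGLElement * self)
{
  GstObject *context;
  gboolean ret;

  g_mutex_lock (&self->context_lock);
  if (self->other_context != NULL) {
    g_mutex_unlock (&self->context_lock);
    return TRUE;
  }

  /* Drop our lock before querying the peer: the peer may be querying
   * us at the same time and would otherwise block on this mutex
   * forever. */
  g_mutex_unlock (&self->context_lock);
  context = query_peer_for_gl_context (self);
  g_mutex_lock (&self->context_lock);

  /* Another thread may have filled in the context while we were
   * unlocked. */
  if (self->other_context == NULL)
    self->other_context = context;
  else if (context != NULL)
    gst_object_unref (context);

  ret = (self->other_context != NULL);
  g_mutex_unlock (&self->context_lock);
  return ret;
}
```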
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/642>
This commit modifies the GstVideoMasteringDisplayInfo and
GstVideoContentLightLevel structs so that each value more closely matches
the hdr_metadata_infoframe struct of the Linux DRM header and the
DXGI_HDR_METADATA_HDR10 struct on Windows.
Each value is therefore no longer a fraction but a normalized value as
per the CTA 861.G spec.
The unit of each value is also consistent with the H.264 and H.265
specifications, the hdr_metadata_infoframe struct on Linux and the
DXGI_HDR_METADATA_HDR10 struct on Windows.
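As a rough sketch (the struct and field names below are illustrative,
modelled on the CTA 861.G / H.265 layout rather than copied from the
exact GStreamer headers), the normalized representation looks roughly
like this:

```c
#include <glib.h>

/* Illustrative only: normalized HDR metadata in the spirit of
 * hdr_metadata_infoframe / DXGI_HDR_METADATA_HDR10. */
typedef struct {
  guint16 x, y;                 /* chromaticity, 0.00002 unit */
} PrimaryCoordinates;

typedef struct {
  PrimaryCoordinates display_primaries[3];
  PrimaryCoordinates white_point;
  guint32 max_display_mastering_luminance;  /* 0.0001 cd/m^2 unit */
  guint32 min_display_mastering_luminance;  /* 0.0001 cd/m^2 unit */
} MasteringDisplayInfo;

typedef struct {
  guint16 max_content_light_level;          /* cd/m^2 */
  guint16 max_frame_average_light_level;    /* cd/m^2 */
} ContentLightLevel;
```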
The wordlen ("length") MUST represent the total "number of 32-bit words
in the extension, excluding the four-octet extension header" (rfc3550).
There are cases where already existing padding is reused when adding a
new extension, so the wordlen must be updated if the newly added
extension causes it to increase.
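A minimal sketch of the intended computation (helper names and variables
are illustrative):

```c
#include <glib.h>

/* Illustrative: recompute the "length" field of the RTP header
 * extension after appending new extension data into (possibly reused)
 * padding. Per RFC 3550 it counts 32-bit words, excluding the 4-byte
 * extension header itself. */
static guint16
compute_extension_wordlen (gsize extension_data_bytes)
{
  /* Round up to a whole number of 32-bit words. */
  return (guint16) ((extension_data_bytes + 3) / 4);
}

/* Only grow the advertised length; reused padding may already be
 * covered by the current wordlen. */
static guint16
updated_wordlen (guint16 current_wordlen, gsize new_data_bytes)
{
  guint16 needed = compute_extension_wordlen (new_data_bytes);

  return MAX (current_wordlen, needed);
}
```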
This patch introduces a new API to send and parse mouse scroll events. Mouse
event coordinates are sent relative to the display space of the related output
area. This is usually the size in pixels of the window associated with the
element implementing the GstNavigation interface.
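A hedged usage sketch, assuming the new entry points follow the existing
GstNavigation send/parse pattern with per-axis deltas (the exact
function names and signatures should be checked against the navigation
headers):

```c
#include <gst/video/navigation.h>

/* Sketch: send a scroll event at window coordinates (x, y) with
 * per-axis deltas through the GstNavigation interface. */
static void
send_scroll (GstNavigation * nav, gdouble x, gdouble y,
    gdouble delta_x, gdouble delta_y)
{
  gst_navigation_send_mouse_scroll_event (nav, x, y, delta_x, delta_y);
}

/* Sketch: parse the event on the receiving side. */
static gboolean
handle_navigation_event (GstEvent * event)
{
  gdouble x, y, dx, dy;

  if (gst_navigation_event_get_type (event) !=
      GST_NAVIGATION_EVENT_MOUSE_SCROLL)
    return FALSE;

  if (gst_navigation_event_parse_mouse_scroll_event (event, &x, &y,
          &dx, &dy))
    g_print ("scroll at %.0f,%.0f delta %.1f,%.1f\n", x, y, dx, dy);

  return TRUE;
}
```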
The GST_VIDEO_BUFFER_FLAG_TOP_FIELD flag is a superset of
GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD, as both are defined as combinations
of other flags. As a result we can't use GST_BUFFER_FLAG_IS_SET() to
check for those flags.
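A minimal sketch of the mask check that works for such composite flags:

```c
#include <gst/video/video.h>

/* Because GST_VIDEO_BUFFER_FLAG_TOP_FIELD is a combination of flags
 * that overlaps GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD, check that all of
 * its bits are set rather than using GST_BUFFER_FLAG_IS_SET(). */
static gboolean
buffer_is_top_field (GstBuffer * buf)
{
  guint flags = GST_BUFFER_FLAGS (buf);

  return (flags & GST_VIDEO_BUFFER_FLAG_TOP_FIELD) ==
      GST_VIDEO_BUFFER_FLAG_TOP_FIELD;
}

static gboolean
buffer_is_bottom_field (GstBuffer * buf)
{
  guint flags = GST_BUFFER_FLAGS (buf);

  /* Bottom field: the single-field bit is set but the full top-field
   * combination is not. */
  return (flags & GST_VIDEO_BUFFER_FLAG_TOP_FIELD) ==
      GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD;
}
```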
By setting the extension ID for TWCC (Transport-Wide Congestion Control),
the payloader will embed sequence numbers as an RTP header extension
according to https://tools.ietf.org/html/draft-holmer-rmcat-transport-wide-cc-extensions-01#section-2
The negotiation of enabling this with downstream elements is done with
caps, reflecting the way this is communicated using SDP.
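For illustration only (the caps field layout is an assumption based on
how SDP extmap attributes are usually mirrored onto RTP caps, and the
extension ID 5 is arbitrary), the negotiated caps would carry something
like:

```c
#include <gst/gst.h>

/* Illustrative: advertise the TWCC sequence-number header extension on
 * RTP caps, mirroring the SDP "extmap" attribute. The ID is arbitrary
 * and only needs to match what the peer expects. */
static GstCaps *
make_twcc_caps (void)
{
  return gst_caps_new_simple ("application/x-rtp",
      "extmap-5", G_TYPE_STRING,
      "http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01",
      NULL);
}
```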
With commit "basepayload: Expose onvif-no-rate-control property" the RTP
timestamp behaviour changed when rate control is disabled.
When disabling rate control, we must take the stream time into account
to avoid the timestamps starting from zero again.
This simply implies not trying to "prepare" those buffers,
as mapping an empty buffer to a video frame does not make
much sense.
This also adds a simple test in compositor that performs some trivial
checking of the handling of gap events. The test was not failing before,
but an error was logged; this is no longer the case.
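A minimal sketch of the idea (the function shape is simplified and
hypothetical, not the actual compositor code):

```c
#include <gst/video/video.h>

/* Sketch: skip "preparing" zero-sized buffers produced for gap events;
 * mapping an empty buffer as a video frame is meaningless. */
static gboolean
prepare_frame (GstBuffer * buf, GstVideoInfo * info,
    GstVideoFrame * out_frame)
{
  if (gst_buffer_get_size (buf) == 0)
    return FALSE;               /* nothing to prepare for a gap buffer */

  return gst_video_frame_map (out_frame, info, buf, GST_MAP_READ);
}
```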
Fixes #717
This validates that the base class properly saves and returns the flow
return value received when the gst_rtp_base_depay_push/push_list()
helpers are used.
This is not set anywhere, and it's pretty clear the pipeline in
question has not been tested in a long time. Disable the test with a
FIXME; it needs to be rewritten to not use real output devices.
overlaycomposition.c:276:5: warning: implicit declaration of function 'exit' [-Wimplicit-function-declaration]
overlaycomposition.c(263): warning C4090: 'initializing': different 'const' qualifiers
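For reference, the fixes are of this kind (a hedged sketch of the two
warning classes, not the exact example code):

```c
#include <stdlib.h>     /* declares exit(), fixing the implicit
                         * declaration warning */

static void
on_error (void)
{
  exit (1);
}

/* C4090: keep the const qualifier when taking a pointer to const
 * data. */
static void
use_description (const char *description)
{
  const char *name = description;   /* was: char *name = description; */

  (void) name;
}
```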
Previously this would've only set discont=TRUE and then for all future
buffers simply returned immediately.
Instead we also need to
a) drain previous input until its buffer time
b) update next_ts and base_ts accordingly for the gap
c) actually store the new buffer after the gap so it can be used in
the future and so the old buffer before the gap is gone
Also update the unit test accordingly so that it actually tests for this
behaviour. Previously it only tested that after the gap we got no output
at all.
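A rough sketch of the intended flow (handle_gap_buffer(), drain_until()
and the base_ts/next_ts bookkeeping below are illustrative, not the
element's actual code):

```c
#include <gst/gst.h>

/* Stub standing in for the element's own draining of already-buffered
 * input up to the given time. */
static GstFlowReturn
drain_until (GstElement * self, GstClockTime until)
{
  (void) self;
  (void) until;
  return GST_FLOW_OK;
}

/* Illustrative only: on a discontinuity, drain what we already have,
 * shift the timestamp bookkeeping across the gap, and keep the new
 * buffer for future processing instead of returning early. */
static GstFlowReturn
handle_gap_buffer (GstElement * self, GstBuffer * buf,
    GstClockTime * base_ts, GstClockTime * next_ts, GstBuffer ** stored)
{
  GstClockTime buf_pts = GST_BUFFER_PTS (buf);
  GstFlowReturn ret;

  /* a) drain previously received input up to its buffer time */
  ret = drain_until (self, *next_ts);
  if (ret != GST_FLOW_OK)
    return ret;

  /* b) move base_ts/next_ts forward across the gap */
  *base_ts = buf_pts;
  *next_ts = buf_pts;

  /* c) keep the new post-gap buffer and drop the old pre-gap one */
  gst_buffer_replace (stored, buf);

  return GST_FLOW_OK;
}
```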
By adding this field, buffer producers can now explicitly set the exact
geometry of planes, allowing users to easily know the padded size and
height of each plane.
GstVideoMeta is always heap allocated by GStreamer itself so we can
safely extend it.
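A hedged sketch of how a producer might fill in the plane geometry,
assuming the new field is a GstVideoAlignment exposed through a
gst_video_meta_set_alignment()-style setter (the setter name is an
assumption):

```c
#include <gst/video/video.h>

/* Sketch: a producer records the padding it applied so consumers can
 * derive the exact padded size/height of every plane from the meta
 * alone. The setter name is an assumption. */
static void
attach_aligned_meta (GstBuffer * buf, GstVideoInfo * info,
    GstVideoAlignment align)
{
  GstVideoMeta *meta;

  meta = gst_buffer_add_video_meta_full (buf, GST_VIDEO_FRAME_FLAG_NONE,
      GST_VIDEO_INFO_FORMAT (info), GST_VIDEO_INFO_WIDTH (info),
      GST_VIDEO_INFO_HEIGHT (info), GST_VIDEO_INFO_N_PLANES (info),
      info->offset, info->stride);

  gst_video_meta_set_alignment (meta, align);
}
```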
When using gst_video_info_align() users had no easy way to retrieve the
padded size and height of each plane.
This can easily be implemented in fill_planes() as it's already called
in align() with the padded height.
Ideally we'd add a plane_size field to GstVideoInfo but the remaining
padding is too small so that would be an ABI break.
Fix #618
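A hedged usage sketch, assuming an align_full()-style variant that
returns the per-plane sizes alongside the aligned info (the exact
name and signature are assumptions here, since GstVideoInfo itself
cannot grow):

```c
#include <gst/video/video.h>

/* Sketch: align the video info and report the padded size of each
 * plane, assuming a gst_video_info_align_full()-style call. */
static void
print_padded_plane_sizes (GstVideoInfo * info, GstVideoAlignment * align)
{
  gsize plane_size[GST_VIDEO_MAX_PLANES];
  guint i;

  if (!gst_video_info_align_full (info, align, plane_size))
    return;

  for (i = 0; i < GST_VIDEO_INFO_N_PLANES (info); i++)
    g_print ("plane %u: padded size %" G_GSIZE_FORMAT " bytes\n",
        i, plane_size[i]);
}
```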
When checking the behaviour of live seeking on audiomixer or
adder we don't *really* need real audio devices. audiotestsrc
in live mode is enough to test the behaviour of those elements.
This also avoids people repeatedly wasting hours trying to figure out
whether a failure is due to their code or not.
This is done by reusing `gst_gl_memory_setup_buffer`, avoiding code
duplication.
Without a VideoMeta, mapping those buffers leads to GstBuffer mapping
the buffer into system memory even when specifying the GL flags (through
the buffer merging mechanism), making the result totally broken.
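A hedged sketch of why the VideoMeta matters here (the helper name is
illustrative):

```c
#include <gst/video/video.h>

/* Sketch: ensure GL-backed buffers carry a GstVideoMeta so that mapping
 * them as video frames goes through the meta's map functions instead of
 * falling back to a plain system-memory map of the GstBuffer. */
static void
ensure_video_meta (GstBuffer * buf, GstVideoInfo * info)
{
  if (gst_buffer_get_video_meta (buf) != NULL)
    return;

  gst_buffer_add_video_meta_full (buf, GST_VIDEO_FRAME_FLAG_NONE,
      GST_VIDEO_INFO_FORMAT (info), GST_VIDEO_INFO_WIDTH (info),
      GST_VIDEO_INFO_HEIGHT (info), GST_VIDEO_INFO_N_PLANES (info),
      info->offset, info->stride);
}
```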