Conceptually identical to the "present" signal of d3d11videosink.
This signal is emitted with the current render target
(i.e., the swapchain backbuffer) and command queue, so that a signal
handler can record GPU commands for an overlay image or blend
an image onto the render target.
In addition to D3D12 resources, the videosink will pass
D3D11 and D2D resources depending on the "overlay-mode"
property, so that the signal handler can render using its
preferred/required DirectX API.
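A minimal sketch of how an application might hook this up, assuming the
signal is named "overlay" and hands the resources back as raw pointers
(the exact parameter list is defined by the element, so check its docs):

```
#include <gst/gst.h>
#include <d3d12.h>

/* Parameter list assumed for illustration only. */
static void
on_overlay (GstElement * sink, gpointer render_target,
    gpointer command_queue, gpointer user_data)
{
  ID3D12Resource *backbuffer = (ID3D12Resource *) render_target;
  ID3D12CommandQueue *queue = (ID3D12CommandQueue *) command_queue;

  /* Record command lists that draw/blend the overlay into the
   * backbuffer and execute them on 'queue'. */
}

/* At setup time: */
/* g_signal_connect (sink, "overlay", G_CALLBACK (on_overlay), NULL); */
```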
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838>
From the spec (chapter 34, v1.3.283):
```
UNORM: the components are unsigned normalized values in the range [0, 1]
SRGB: the R, G and B components are unsigned normalized values that
represent values using sRGB nonlinear encoding, while the A component
(if one exists) is a regular unsigned normalized value
```
The difference lies in the storage encoding: the first is aimed at
image transfers, while the second is for shaders, mostly at the
swapchain stage of the pipeline, where the conversion is done
automatically if needed [1].
As far as I have checked, other frameworks (FFmpeg, GTK+) use
exclusively UNORM formats when importing or exporting images from/to
Vulkan, while SRGB formats are ignored.
My conclusion is that Vulkan formats relate to how bits are stored in
memory rather than to their transfer functions (colorimetry).
This patch makes two interrelated changes:
1. In both gst_vulkan_format_from_video_info() and
gst_vulkan_format_from_video_info_2(), it reorders certain color format
maps so that the UNORM formats are tried first, when comparing their
usage, and the SRGB ones are checked later (see the sketch below).
2. It removes the code that checks for colorimetry in
gst_vulkan_format_from_video_info_2(), since colorimetry is not storage
related.
1. https://community.khronos.org/t/noob-difference-between-unorm-and-srgb/106132/7
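To illustrate the reordering in (1), this is the shape of the idea, not
the actual table in the patch: for each video format, the UNORM
candidate is now listed, and therefore matched, before its SRGB sibling:

```
#include <vulkan/vulkan.h>

/* Illustrative only: candidates for an 8-bit RGBA video format. */
static const VkFormat rgba_candidates[] = {
  VK_FORMAT_R8G8B8A8_UNORM, /* tried first: plain storage encoding */
  VK_FORMAT_R8G8B8A8_SRGB,  /* checked later: shader-side nonlinear decode */
};
```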
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6797>
Add pitch tests with different forward and backward playback rates.
These tests depend on the libSoundTouch version to validate the buffer
checksums. The expected checksums correspond to libSoundTouch 2.3.2;
use the `--force-fallback-for=soundtouch` meson option to build against
the same version.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6247>
Since we don't want to expose the video decoding API outside of
GStreamer, the header is removed from installation and both source
files are renamed as -private.
The header must remain in gst-libs because it is referred to by
GstVulkanQueue, which is the decoder factory.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6723>
Add a new property in order to notify users of device-removed status.
Once device-removed status is detected, the application should release
all ID3D12Device objects corresponding to the adapter, including the
GstD3D12Device object. Otherwise, a D3D12CreateDevice() call for the
adapter will fail.
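As a sketch, an application could watch the property through GObject
notification; the property name used here is hypothetical:

```
/* "device-removed-reason" is a placeholder property name. */
static void
on_device_removed (GObject * device, GParamSpec * pspec,
    gpointer user_data)
{
  /* Release every object referencing this adapter, including the
   * GstD3D12Device itself, so that a later D3D12CreateDevice() call
   * for the adapter can succeed. */
}

static void
watch_device (GstObject * device)
{
  g_signal_connect (device, "notify::device-removed-reason",
      G_CALLBACK (on_device_removed), NULL);
}
```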
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6699>
It seems that when D3D11CreateDevice collides in time
with other D3D11 calls, in particular the process of
creating a shader, it can corrupt memory in the driver.
The D3D11 spec doesn't seem to require any thread safety from
D3D11CreateDevice. Following MSDN, it is supposed to be called
at the beginning of the process, while GStreamer calls it with each
new pipeline.
Such driver crashes were frequently reproducible on an
Intel UHD 630 machine.
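A minimal sketch of the serialization idea (not the exact code in this
MR): funnel every D3D11CreateDevice() call through one process-wide
lock so it cannot overlap other D3D11 work:

```
#include <d3d11.h>
#include <glib.h>

static GMutex create_lock;

static HRESULT
create_device_locked (IDXGIAdapter * adapter, UINT flags,
    ID3D11Device ** device, ID3D11DeviceContext ** context)
{
  HRESULT hr;

  /* Serialize device creation process-wide. */
  g_mutex_lock (&create_lock);
  hr = D3D11CreateDevice (adapter, D3D_DRIVER_TYPE_UNKNOWN, NULL, flags,
      NULL, 0, D3D11_SDK_VERSION, device, NULL, context);
  g_mutex_unlock (&create_lock);

  return hr;
}
```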
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6686>
We suspect that it's not thread-safe to just create and
destroy the device from any thread, particularly because
of D3D11CreateDevice, which is not documented as thread-safe.
While D3D11CreateDevice is usually protected from the outside
by gst_d3d11_ensure_element_data, it can still race
with the Release() method of another device.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6686>
../subprojects/gst-plugins-bad/tests/check/libs/gstlibscpp.cc:41:
fatal error: gst/mpegts/gstmpegts-enumtypes.h: No such file or directory
We could pass only the needed deps to the libscpp test, but that gets
messier to maintain, so let's add it for consistency.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6643>
There was a potential busy loop: when we took data from the internal
ccbuffer, we were not resetting which field had written data. This
meant that the next time data was retrieved from the ccbuffer, it was
always taken from field 0 and never from field 1.
This only affects usage of cc_buffer_take_separated(), which is only
used by the cdp->raw cea608 path (see the sketch below).
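Roughly, the fix looks like this (struct and field names here are
illustrative, not the actual ccbuffer internals):

```
/* Hypothetical per-field take; the point is the reset at the end. */
static guint
cc_buffer_take_field (CCBuffer * buffer, guint field, guint8 * cc_data)
{
  guint len = buffer->field_len[field];

  memcpy (cc_data, buffer->field_data[field], len);
  /* The missing reset: without it, field 0 always looked freshly
   * written and field 1 was never drained. */
  buffer->field_written[field] = FALSE;
  buffer->field_len[field] = 0;

  return len;
}
```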
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6423>
Removal of (valid) cea608 padding is available on the input side of
ccconverter and is configurable on cccombiner. cccombiner can now
configure whether valid or invalid cea608 padding is produced and, for
valid padding, how long after valid non-padding data to keep sending
valid padding.
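For example (property names below are placeholders; check
gst-inspect-1.0 cccombiner for the real ones), the configuration has
this shape:

```
/* Hypothetical property names, for illustration only. */
g_object_set (cccombiner,
    "cea608-padding-valid", TRUE,                    /* placeholder */
    "cea608-valid-padding-timeout", 10 * GST_SECOND, /* placeholder */
    NULL);
```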
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6300>
The pool currently defaults to performing a layout transition to
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, with some special exceptions for
video usages. This may not be a legal transition depending on the usage.
Provide an API to explicitly control the initial image layout.
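Sketch of the intended usage, assuming the initial layout is passed
through the image buffer pool's allocation-params config setter
(verify the exact signature against the released API):

```
GstStructure *config = gst_buffer_pool_get_config (pool);

/* Request an explicit initial layout instead of the
 * TRANSFER_DST_OPTIMAL default; the trailing initial-access value is
 * assumed to be part of the same API addition. */
gst_vulkan_image_buffer_pool_config_set_allocation_params (config,
    VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
    VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT,
    VK_IMAGE_LAYOUT_UNDEFINED, 0);
gst_buffer_pool_set_config (pool, config);
```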
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5881>
Interlaced MJPEG is a big hack. Most of the streams we've found come
from old AVID tools. There are two methods to detect an interlaced
stream: either the container reports a height bigger than (or double)
the image height in the SOF, or it is signalled by an APP0 marker.
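The container-height heuristic boils down to a comparison like this (a
sketch, not the decoder's literal code):

```
/* The container reports (about) twice the height found in the JPEG
 * SOF: each buffer then carries one field, so treat the stream as
 * interlaced. */
if (container_height >= 2 * sof_height)
  interlaced = TRUE;
```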
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5838>
Serialize every GstMeta that supports serialization into the NEW_BUFFER
payload. This is especially important for GstVideoMeta in the case of
multiplanar buffers, or if stride!=width.
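Roughly what happens per buffer, assuming the GstMeta serialization API
introduced in 1.24 (treat gst_meta_serialize_simple() as an assumption
here):

```
gpointer state = NULL;
GstMeta *meta;
GByteArray *payload = g_byte_array_new ();

/* Append every serializable meta to the NEW_BUFFER payload. */
while ((meta = gst_buffer_iterate_meta (buffer, &state))) {
  if (!gst_meta_serialize_simple (meta, payload))
    GST_DEBUG ("meta %s is not serializable",
        g_type_name (meta->info->api));
}
```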
Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5355>
- GstAnalyticRelationMeta is a base class for analytics
meta. It is able to store analytics results (GstAnalyticRelatableMtd)
and describe the relations between analysis results.
- GstAnalyticRelationMeta also contains an algorithm able to explore
analysis-result relations using a BFS.
- Relations (edges) between analysis results (vertices) are stored in
an adjacency matrix, which allows quickly identifying whether two
analysis results are related and by which relation. It also works for
indirect relations and can provide the path of analysis results by
which two analysis results are related.
- One allocation per buffer to store analysis results. Here we rely on
the application to guess how much space will be required to store all
analysis results. This is something that could be improved
significantly, but it's a starting point.
- Define common analysis results (classification, object detection,
tracking) that are subclasses of GstAnalyticRelatableMtd. They also
serve as examples of how to extend GstAnalyticRelatableMtd to have new
result types benefit from the mechanism to express relations with other
analysis results (see the sketch after this list).
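A sketch of the intended usage (function names follow the analytics
library as merged, which spells the types GstAnalytics*; treat the
exact signatures as assumptions):

```
#include <gst/analytics/analytics.h>

GstAnalyticsRelationMeta *rmeta =
    gst_buffer_add_analytics_relation_meta (buffer);
GstAnalyticsODMtd od_mtd;
GstAnalyticsClsMtd cls_mtd;

/* Store an object-detection result and a classification result... */
gst_analytics_relation_meta_add_od_mtd (rmeta,
    g_quark_from_string ("car"), 10, 20, 100, 50, 0.9f, &od_mtd);
gst_analytics_relation_meta_add_one_cls_mtd (rmeta, 0.8f,
    g_quark_from_string ("vehicle"), &cls_mtd);

/* ...then record the relation (an edge in the adjacency matrix). */
gst_analytics_relation_meta_set_relation (rmeta,
    GST_ANALYTICS_REL_TYPE_RELATE_TO, od_mtd.id, cls_mtd.id);
```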
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4962>
A payload of 0x80 0x80 means that it's padding. It's not a good idea to
throw this away though, because of the cc_valid field.
According to CEA 10-B section 25.2.1, if cc_valid is zero, the run-in
clock and start bit should not be generated. In practice, this means
that any closed captions will be erased and the end-user's TV will show
that captions are not available for this stream. This might have
undesired consequences: e.g., we were just showing a long line of
captions and we disable it before the user has had time to read it, or
you can't enable closed captions during silence/music intervals.
We cannot reliably detect whether there's a currently-silent closed
caption stream or just nothing, but we have this information coming
from upstream, so we can at least not discard it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5508>
This pair of elements, inspired by shmsink/shmsrc, sends unix file
descriptors (e.g. memfd, dmabuf) from one sink to multiple source
elements in other processes.
unixfdsink proposes a memfd/shm allocator, which allows, for example,
videotestsrc to write directly into memories that can be transferred to
other processes without copying (see the sketch below).
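Typical wiring, as a sketch ("socket-path" is the property name I'd
expect from the shmsink/shmsrc heritage; verify with gst-inspect-1.0):

```
/* Producer process */
GstElement *producer = gst_parse_launch (
    "videotestsrc ! unixfdsink socket-path=/tmp/video.sock", NULL);

/* Each consumer process */
GstElement *consumer = gst_parse_launch (
    "unixfdsrc socket-path=/tmp/video.sock ! videoconvert ! "
    "autovideosink", NULL);
```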
Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5328>
Sending an EOS event is actually really bad because rtpbin doesn't
handle that very well. It was only being used as a way to notify
webrtcbin to check if re-negotiation is needed.
We don't need that anymore, since changing the direction is enough to
notify webrtcbin to check for re-negotiation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5045>
There's no reason to release the GstMemory manually at all.
If we do release the GstMemory, the corresponding GstBuffer will be
discarded by the GstBufferPool base class because its size has changed
to zero.
The actual cause of the heavy CPU usage, in the case of a fixed-size
pool (i.e., a decoder output buffer pool) where we remove the GstMemory
from the GstBuffer, is that the GstBufferPool base class busy-waits in
acquire_buffer() for some reason. That needs to be investigated, though
discarding and re-allocating every GstBuffer is not ideal anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4935>
Adding a Direct3D11-backed Qt6 QML video sink element, qml6d3d11sink.
Implementation details are similar to the qt6 plugin in -good,
but there are a few notable differences:
* qml6d3d11sink accepts all GstD3D11-supported video formats (e.g., NV12).
* The scene graph (owned by qml6d3d11sink) will hold a dedicated,
shareable RGBA texture which belongs to Qt6's Direct3D11 device,
instead of sharing GStreamer's own texture with Qt6.
* All rendering operations will be done using GStreamer's Direct3D11
device. Specifically, the upstream texture will be copied (in the RGBA
case) or converted to the above-mentioned shareable Qt6 texture.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3707>
Allow a project to use gstreamer-full as a static library
and link it to create a binary without dependencies.
Introduce the option 'gst-full-target-type' to
select the build type, dynamic (default) or static.
In a gstreamer-full static build configuration, gstreamer (gst.c)
needs the symbol gst_init_static_plugins, which is defined
in gstreamer-full.
All the tests and examples link against gstreamer, but the
symbol gst_init_static_plugins is only defined in the gstreamer-full
library. gstreamer-full cannot be built first, as it needs to know
which plugins will be built.
One option would be to build all the examples and tests after
gstreamer-full, as the tools are.
Also disable the tools build in subprojects, as they will be built at
the end of the build process.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4128>
Depending on the exact output format, 0x00 may be a better default for
padding than 0x80. 0x00 is the recommended padding value when used in
CDP (and cc_data) but not when used in s334-1a. See CTA-708-E 4.3.5
and SMPTE 334-1-2007 5.3.2.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4578>
On macOS with Homebrew, the openssl library is not
properly detected with pkg-config,
so disable the test compilation if openssl
is not properly detected along with libcrypto.
libcrypto is detected, but it uses the system one,
which leads to the error:
your binary is not an allowed client of /usr/lib/libcrypto.dylib for
architecture x86_64
See more details from Apple:
https://developer.apple.com/forums/thread/124782
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4481>
Similar to the cairooverlay element, but this element emits a "draw"
signal with a Direct3D11 render target view, so that an application
can render/overlay/blend onto the given render target view
without any copy operation (see the sketch below).
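A sketch of a handler, assuming the signal hands back the render target
view as a raw pointer (the exact parameter list is defined by the
element, so check its docs):

```
#include <gst/gst.h>
#include <d3d11.h>

/* Parameter list assumed for illustration only. */
static gboolean
on_draw (GstElement * overlay, gpointer device, gpointer rtv,
    guint64 pts, guint64 duration, gpointer user_data)
{
  ID3D11RenderTargetView *view = (ID3D11RenderTargetView *) rtv;

  /* Render/overlay/blend directly into 'view'; no copy involved. */
  return TRUE;
}

/* At setup time: */
/* g_signal_connect (overlay, "draw", G_CALLBACK (on_draw), NULL); */
```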
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4415>