Add qml6d3d11sink, a Direct3D11-backed Qt6 QML videosink element.
The implementation is similar to the qt6 plugin in -good,
but there are a few notable differences:
* qml6d3d11sink accepts all GstD3D11-supported video formats (e.g., NV12).
* The scene graph (owned by qml6d3d11sink) will hold a dedicated, sharable
RGBA texture belonging to Qt6's Direct3D11 device, instead of sharing
GStreamer's own texture with Qt6.
* All rendering operations will be done using GStreamer's Direct3D11 device.
Specifically, the upstream texture will be copied (in the RGBA case)
or converted into the above-mentioned sharable Qt6 texture.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3707>
Allow a project to use gstreamer-full as a static library
and link against it to create a binary without dependencies.
Introduce the option 'gst-full-target-type' to
select the build type, dynamic (default) or static.
In the gstreamer-full static build configuration, gstreamer (gst.c)
needs the symbol gst_init_static_plugins, which is defined
in gstreamer-full.
All the tests and examples link against gstreamer, but the
symbol gst_init_static_plugins is only defined in the gstreamer-full
library. gstreamer-full cannot be built first, as it needs to know which
plugins will be built.
One option is to build all the examples and tests after
gstreamer-full, as is done for the tools.
Disable the tools build in subprojects too, as they will be built at the
end of the build process.
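As an illustration of the intended use case, here is a minimal application
sketch assuming a static gstreamer-full build (the pipeline string is only a
placeholder):

  /* When linked against a static gstreamer-full, gst_init() is expected to
   * call gst_init_static_plugins() and register the built-in plugins, so
   * this binary needs no GStreamer shared libraries at runtime. */
  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;

    gst_init (&argc, &argv);
    pipeline = gst_parse_launch ("videotestsrc num-buffers=100 ! fakesink", NULL);
    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    bus = gst_element_get_bus (pipeline);
    msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
        GST_MESSAGE_EOS | GST_MESSAGE_ERROR);

    gst_message_unref (msg);
    gst_object_unref (bus);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    return 0;
  }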
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4128>
Depending on the exact output format, 0x00 may be a better default for
padding than 0x80. 0x00 is the recommended padding value when used in
CDP (and cc_data) but is not when used in s334-1a. See CTA-708-E 4.3.5
and SMPTE 334-1-2007 5.3.2.
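In code terms the choice boils down to something like this (the variable
names are placeholders, not the element's actual fields):

  /* 0x00 is the recommended filler for CDP / cc_data (CTA-708-E 4.3.5),
   * but not for s334-1a (SMPTE 334-1-2007 5.3.2), where the previous
   * 0x80 default is kept. */
  guint8 padding_byte = is_s334_1a_output ? 0x80 : 0x00;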
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4578>
On macOS with Homebrew, the openssl library is not
properly detected with pkg-config,
so disable the test compilation if openssl
is not properly detected along with libcrypto.
Otherwise libcrypto is detected, but it resolves to the system one,
which leads to the error:
your binary is not an allowed client of /usr/lib/libcrypto.dylib for
architecture x86_64
See more details from Apple:
https://developer.apple.com/forums/thread/124782
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4481>
Similar to the cairooverlay element, but this element emits the "draw"
signal with a Direct3D11 render target view, so that an application
can render/overlay/blend on the given render target view
without any copy operation.
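As a rough illustration only (the callback parameter list below is an
assumption, not the element's actual API; check the element documentation
for the real signature), an application would hook up the signal roughly
like this:

  /* Hypothetical "draw" callback: the element hands over a Direct3D11
   * render target view so the application can draw into it directly.
   * Parameter types are kept as gpointer here as an assumption. */
  static void
  on_draw (GstElement * overlay, gpointer d3d11_device,
      gpointer render_target_view, guint64 timestamp, guint64 duration,
      gpointer user_data)
  {
    /* Issue Direct3D11 draw calls against render_target_view here;
     * no copy of the video frame is involved. */
  }

  /* ... after creating the overlay element: */
  g_signal_connect (overlay, "draw", G_CALLBACK (on_draw), NULL);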
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4415>
Running tests with Vulkan Validation enabled shows an error on vkimage tests:
Code 0 : Validation Error: [ VUID-VkImageViewCreateInfo-image-04441 ]
Object 0: VK_NULL_HANDLE, type = VK_OBJECT_TYPE_COMMAND_BUFFER; Object 1: handle
= 0x50000000005, type = VK_OBJECT_TYPE_IMAGE;
| MessageID = 0xb75da543
| Invalid usage flag for VkImage 0x50000000005[] used by vkCreateImageView(). In
this case, VkImage should have VK_IMAGE_USAGE_SAMPLED_BIT |
VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT |
VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT | VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT |
VK_IMAGE_USAGE_FRAGMENT_SHADING_RATE_ATTACHMENT_BIT_KHR |
VK_IMAGE_USAGE_FRAGMENT_DENSITY_MAP_BIT_EXT |
VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR |
VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR |
VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR |
VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR | VK_IMAGE_USAGE_SAMPLE_WEIGHT_BIT_QCOM
| VK_IMAGE_USAGE_SAMPLE_BLOCK_MATCH_BIT_QCOM set during creation.
The Vulkan spec states: image must have been created with a usage value
containing at least one of the usages defined in the valid image usage list for
image views
This patch adds VK_IMAGE_USAGE_SAMPLED_BIT to the usage bits in the test.
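A sketch of what the fix amounts to in the test's image creation parameters
(the other fields shown here are illustrative defaults, not the test's
exact values):

  VkImageCreateInfo image_info = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
    .imageType = VK_IMAGE_TYPE_2D,
    .format = VK_FORMAT_R8G8B8A8_UNORM,
    .extent = { 320, 240, 1 },
    .mipLevels = 1,
    .arrayLayers = 1,
    .samples = VK_SAMPLE_COUNT_1_BIT,
    .tiling = VK_IMAGE_TILING_OPTIMAL,
    /* Adding VK_IMAGE_USAGE_SAMPLED_BIT satisfies
     * VUID-VkImageViewCreateInfo-image-04441 when a view is created. */
    .usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT,
    .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
    .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
  };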
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4296>
H.265 NALs always have 2 bytes of header. Unlike the H.264 parser, this parser
will simply return NO_NAL if some of these bytes are missing.
This is then properly special cased by parsers and decoders. Add a test to
ensure we don't break this in the future.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3234>
The appropriate return value for an incomplete NAL header should be
GST_H264_PARSER_NO_NAL_END. This tells the parser element to
gather more data. Previously, it would assume the NAL is corrupted
and would drop the data, potentially causing stream corruption.
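A hedged caller-side sketch of the intended behaviour (this is not the
actual h264parse code):

  #include <gst/codecparsers/gsth264parser.h>

  static gboolean
  try_parse_nal (GstH264NalParser * parser, const guint8 * data, gsize size,
      GstH264NalUnit * nalu)
  {
    GstH264ParserResult res;

    res = gst_h264_parser_identify_nalu (parser, data, 0, size, nalu);
    if (res == GST_H264_PARSER_NO_NAL_END) {
      /* Incomplete NAL (or NAL header): keep the bytes and wait for more
       * input instead of dropping them as corrupt. */
      return FALSE;
    }

    return res == GST_H264_PARSER_OK;
  }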
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3234>
As specified in EIA/CEA-608-B section 8.4:
When closed captioning is used on line 21, field 2, it shall conform
to all of the applicable specifications and recommended practices as
defined for field 1 services with the following differences:
a) The non-printing character of the miscellaneous control-character pairs
that fall in the range of 14h, 20h to 14h, 2Fh in field 1, shall be replaced
with 15h, 20h to 15h, 2Fh when used in field 2.
b) The non-printing character of the miscellaneous control-character pairs
that fall in the range of 1Ch, 20h to 1Ch, 2Fh in field 1, shall be replaced
with 1Dh, 20h to 1Dh, 2Fh when used in field 2.
This means simply switching the "field" field in the caps isn't enough for
converting raw 608 from one field to another; some control codes also
need to be amended.
+ Adds simple test
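A hedged sketch of the remap described above (parity bits are not
recomputed here for brevity; the real converter has to take care of that):

  /* EIA/CEA-608-B 8.4: when moving raw 608 from field 1 to field 2, the
   * non-printing byte of miscellaneous control-code pairs changes
   * 0x14 -> 0x15 and 0x1c -> 0x1d when the second byte is 0x20..0x2f. */
  static void
  remap_field1_to_field2 (guint8 * pair)
  {
    guint8 b0 = pair[0] & 0x7f;
    guint8 b1 = pair[1] & 0x7f;

    if (b1 >= 0x20 && b1 <= 0x2f) {
      if (b0 == 0x14)
        pair[0] = 0x15;
      else if (b0 == 0x1c)
        pair[0] = 0x1d;
    }
  }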
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4126>
The previous implementation was a bit primitive, assuming the subclass
had registered a template name starting with sink_. Instead, make
the effort of parsing the actual template name, and use that to generate
the final pad name.
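A hedged sketch of the idea (the helper and counter names are hypothetical):

  /* Derive the pad name from the actual template name (e.g. "sink_%u")
   * instead of assuming a hard-coded "sink_" prefix. */
  static gchar *
  make_pad_name (GstPadTemplate * templ, guint serial)
  {
    const gchar *name_template = GST_PAD_TEMPLATE_NAME_TEMPLATE (templ);

    if (g_strrstr (name_template, "%u") != NULL)
      return g_strdup_printf (name_template, serial);

    return g_strdup (name_template);
  }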
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4032>
Raw 608 caps can now contain a "field" field. On the input side it
signifies that the input raw 608 is attached to either field 0 or 1,
on the output side it allows selecting whether to extract the raw 608
data for field 0 or 1 for field-aware formats.
In addition, it is also allowed to use ccconverter to "convert" 608
field 0 to 608 field 1 (and conversely); this is passthrough, as the
change only needs to happen in the caps.
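For illustration, such a field "conversion" then only amounts to caps
negotiation, roughly like this (the surrounding elements, framerate and
exact caps field types are placeholder assumptions; only ccconverter and
the "field" field come from this change):

  GstElement *pipeline = gst_parse_launch (
      "appsrc caps=\"closedcaption/x-cea-608,format=(string)raw,"
      "field=(int)0,framerate=(fraction)30/1\" ! ccconverter ! "
      "closedcaption/x-cea-608,format=(string)raw,field=(int)1 ! fakesink",
      NULL);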
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4031>
Systems like musl libc don't support ISO 6937 in iconv. This ensures
that the MPEG-TS plugin can cope with that. There is existing support
in the plugin for other methods, so it seems to have been the original
intent anyway.
Fixes: #1314
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3245>
If an input is malformed (only produces cea608 field 1 cc_data), then
when in passthrough we would effectively drop every second cea608 pair
on output, as we would not store any unused cea608 data.
Fix by having all code paths go through the framerate conversion code
which will store and retrieve any relevant data across buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3211>
All the H.264 bit writer functions have byte-aligned output except for
the slice header, so change the API unit from bit size to byte size,
which is easier to use. For the slice header, add an extra
"trail_bits_num" parameter to return the number of unaligned trailing bits.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3193>
According to the W3C
specification (https://w3c.github.io/webrtc-pc/#datachannel-send) we
should return an InvalidStateError exception when trying to send while the
channel is not open. In the world of C/glib/gstreamer we don't have
exceptions but have to rely on gboolean/GError instead. Introducing
these requires a change in the function signature of the action signals
used to send data on the datachannel. Changing the signature of the
existing "send-string" and "send-data" signals would be an immediate
breaking change, so instead we deprecate them. Furthermore, there is no
way to express GError** as an argument to an action signal in a way
that fits language bindings (pointer-to-pointer simply does not work),
so we have to use regular functions instead.
Therefore we introduce gst_webrtc_data_channel_send_data_full() and
gst_webrtc_data_channel_send_string_full() while deprecating the old
functions and corresponding signals.
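A hedged usage sketch of the new API, given a GstWebRTCDataChannel *channel
(this assumes the _full variants return a gboolean and take a GError **;
check the gstwebrtc documentation for the exact signature):

  GError *error = NULL;

  if (!gst_webrtc_data_channel_send_string_full (channel, "hello", &error)) {
    /* e.g. the channel is not open: the InvalidStateError-style failure
     * is reported through the GError instead of an exception. */
    g_warning ("send failed: %s", error->message);
    g_clear_error (&error);
  }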
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1958>
voamrwbenc: pad the final frame with silence when the input
doesn't align on the 20 millisecond frame size.
The AMR-WB codec imposes a fixed 20 millisecond frame size. In its current
form, the `voamrwbenc` plugin deals with this limitation by discarding any
audio at the end of the stream that falls short of 20 milliseconds. This patch
keeps the audio data, and appends silence to the end to preserve frame size
alignment.
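A rough sketch of the arithmetic (AMR-WB is 16 kHz mono, so a 20 ms frame
is 320 samples; the helper name is a placeholder):

  #define AMRWB_FRAME_SAMPLES 320   /* 20 ms at 16 kHz */

  /* Pad a trailing short frame with silence so it still spans 20 ms. */
  static void
  pad_final_frame (gint16 * frame, guint valid_samples)
  {
    if (valid_samples < AMRWB_FRAME_SAMPLES)
      memset (frame + valid_samples, 0,
          (AMRWB_FRAME_SAMPLES - valid_samples) * sizeof (gint16));
  }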
The patch also adds tests to check for the updated behavior. I noticed that
tests weren't being built, so I changed the build to allow for building the
tests when the `tests` and `voamrwbenc` options are set.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3027>
This avoids getting into a bunch of corner cases. We'd have to insert
a "rejected" line from the start as a place-holder to get around this,
but the rest of the code just becomes more complicated, so just
disallow it for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2439>
Add an example to show the usage of the "present" signal.
In this example, a text overlay with an alpha-blended background
will be rendered onto the swapchain's backbuffer by using the
Direct3D11, Direct2D, and DirectWrite APIs.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2923>
This is based on gtksink, but similar to waylandsink uses Wayland APIs
directly instead of rendering with Gtk/Cairo primitives.
Note that the long term plan is to move this into the existing extension
in `-good`, which requires the Wayland library to move there as well.
For this reason several files like `gstgtkutils.*` and `gtkgstbasewidget.*`
are straight copies and should be kept in sync.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1515>
There might be a sequence of event and buffer flow:
- Got stream-start/caps/segment events
- Got flush events
- And then buffers with a new segment event
In the above case, the stream-start and caps events might not reach the
peer proxysrc if it is not ready to receive them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1552>
This example code demonstrates D3D11 device sharing between an
application and GStreamer. The application can access the texture
using appsink, and it can be rendered on the application's window without
any copy operation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2646>
This new signal allows data-channel consumers to configure signal handlers on a
newly created data-channel, before any data or state change has been notified.
The webrtcbin unit tests were refactored to make use of this new signal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2427>
In preparation for the new element `GstGtkWaylandSink`, move reusable
parts out of `GstWaylandSink` into the already existing but very
bare-bones library.
Notable changes include:
- the `GstWaylandVideo` interface was dropped
- support for `wl-shell` was dropped
- lots of renaming in order to match established naming patterns
- lots of code modernisations, reducing boilerplate
- members were made private wherever possible
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2479>
When bundling, if multiple medias are used with the same media payload, then
each of the fec/rtx/red additions would add a distinct payload. This could
very easily overflow the available payload space.
Instead, track the relationship between the media payload value and
the relevant fec/rtx/red payload values and reuse them whenever
necessary, even when bundling.
e.g.
...
a=group:BUNDLE video0 video1
m=video 9 UDP/SAVPF 96 97
a=mid:video0
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
m=video 9 UDP/SAVPF 96 97
a=mid:video1
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2474>
Now it uses the JPEG parser in libgstcodecparsers, while the whole
code is simplified by relying more on the baseparse class for tag
handling.
The element now signals chroma-format, and the default framerate is 0/1,
which is used for still images.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1473>
Add a gst_h265_parser_identify_and_split_nalu_hevc() method to
handle the case where a packetized stream contains start-code prefixes.
This new method behaves similarly to the existing gst_h265_parser_identify_nalu_hevc(),
but it will scan for start-code prefixes to split the given data into
NAL units.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
The GstD3D11ScreenCapture object is a pipeline-independent global object
that can be shared by multiple src elements,
in order to overcome a limitation of the DXGI Desktop Duplication API.
Note that the API allows only a single capture session per monitor in
a process.
Therefore the GstD3D11ScreenCapture object must be able to handle the case
where a src element holds a different GstD3D11Device object, which can
happen when the GstD3D11Device context is not shared between pipelines.
What's changed:
* Allocates the capture texture with D3D11_RESOURCE_MISC_SHARED so that
the texture can be copied into another device's texture
* Holds additional shader objects per src element and uses them when drawing
the mouse cursor
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1197
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2366>
They are part of gst_dep already and we have to make sure to always have
gst_dep. The order in dependencies matters, because it is also the order
in which Meson will set -I args. We want gstreamer's config.h to take
precedence over glib's private config.h when it's a subproject.
While at it, remove useless fallback args for the gmodule/gio dependencies;
only gstreamer core needs them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2031>
If things progress fast enough, some state changes may not be seen by
the waiting code.
Fix by:
1. keeping a list of all the state changes
2. waiting checks each entry, and if the relevant state is found, all
states up to and including it are removed.
This ensures that any waits will see all the state sets.
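A hedged sketch of that bookkeeping using GLib primitives (the names are
placeholders, not the test harness' actual API):

  typedef struct
  {
    GMutex lock;
    GCond cond;
    GList *states;              /* each entry is GINT_TO_POINTER (state) */
  } StateLog;

  static void
  state_log_push (StateLog * log, gint state)
  {
    g_mutex_lock (&log->lock);
    log->states = g_list_append (log->states, GINT_TO_POINTER (state));
    g_cond_broadcast (&log->cond);
    g_mutex_unlock (&log->lock);
  }

  /* Wait until 'wanted' shows up anywhere in the list, then drop all
   * entries up to and including it, so later waits still see newer states. */
  static void
  state_log_wait (StateLog * log, gint wanted)
  {
    g_mutex_lock (&log->lock);
    for (;;) {
      GList *hit = g_list_find (log->states, GINT_TO_POINTER (wanted));
      if (hit != NULL) {
        while (log->states != hit)
          log->states = g_list_delete_link (log->states, log->states);
        log->states = g_list_delete_link (log->states, hit);
        break;
      }
      g_cond_wait (&log->cond, &log->lock);
    }
    g_mutex_unlock (&log->lock);
  }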
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>
Input (sink pads) is the already-ssrc-muxed stream with the relevant rtp
sdes header extensions already applied:
- mid
- stream-id
- repaired-stream-id
Output (src pads) has the streams separated into individual ssrcs, as
that's what rtpbin gives us.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>