Given the supported RT formats in a profile/entrypoint config, it's
possible to know the supported JPEG colorspace and subsampling. This
patch adds this information to the coded caps to allow safer
autoplugging after jpegparser.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
This base class is intended for hardware accelerated decoders, but since
only VA uses it, it will be kept internal to the va plugin.
It follows the same logic as the other video decoders in the library but,
as JPEG images are independent, there's no need to handle a DPB, so no need
for a picture object. Instead a scan object is added with all the structures
required to decode the image (Huffman and quantization tables, MCUs, etc.).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
Gallium drivers historically have reported strange dmabuf sizes, from always
zero to the whole frame (multiple fds). The simplest solution is to use lseek
SEEK_END to get the prime descriptor size.
The allocator also raises a warning if both values differ, in order to
report it to the driver.
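As a rough illustration of that approach (not the actual allocator code),
probing the size of a dmabuf with lseek could look like this:

  #include <sys/types.h>
  #include <unistd.h>

  /* Sketch: seek to the end of the prime fd to learn its size, then
   * restore the offset so later users of the fd are unaffected. */
  static off_t
  probe_dmabuf_size (int fd)
  {
    off_t size = lseek (fd, 0, SEEK_END);

    if (size < 0)
      return -1;
    lseek (fd, 0, SEEK_SET);
    return size;
  }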
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2574>
Commit b90d0274 introduced uninitialized width and height when
considering whether to change the "pixel-aspect-ratio" for some
interlaced streams. We need to check the resolution in the src caps;
if no resolution info is found, there is no need to consider the aspect ratio.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2630>
Remove the glvideomixer-like design (it was initially used as a reference)
and rewrite the element, since the previous design was not optimal at all
from a performance point of view.
* Remove the wrapper bin (and internal conversion/upload/download elements),
which wastes CPU/GPU resources. Conversion/blending can be done by the
d3d11compositor element at once.
* Add support for YUV blending without RGB conversion.
The RGB <-> YUV conversion is completely unnecessary since YUV textures
support blending as well.
* Remove the complicated blending operation properties since they are hard
to use from an application point of view. Instead, add an "operator" property
like the one the compositor element provides.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2631>
Similar to and inspired by glimagesink and gtkglsink.
Using the Wayland buffer transform API allows to offload
rotate operations to the Wayland compositor. This can have
several advantages:
- The Wayland compositor may be able to use hardware plane
capabilities to do the rotation.
- In case of pre-rotated content on rotated outputs the
rotations may cancel out, potentially allowing the
compositor to use hardware planes even if they don't
support rotate operations.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2543>
AVC and HEVC define a crop rectangle, and its x/y coordinates might
not be zero. This commit handles non-zero x/y offset coordinates
via GstVideoCropMeta if downstream supports the meta and D3D11 memory.
Otherwise the decoder will copy the decoded texture into the output frame.
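A minimal sketch of how such a crop rectangle can be attached (the offsets
here are placeholders, not values taken from the decoder):

  #include <gst/video/video.h>

  /* Sketch: advertise the codec crop rectangle to downstream via
   * GstVideoCropMeta instead of copying the texture. */
  static void
  attach_crop_meta (GstBuffer * buffer, guint x, guint y,
      guint width, guint height)
  {
    GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (buffer);

    crop->x = x;
    crop->y = y;
    crop->width = width;
    crop->height = height;
  }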
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2624>
This allows the reception of streams that don't exactly match
the codec preferences. In particular, the ssrc in the codec preferences
is the local sender SSRC; the other side is expected to send a different SSRC.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2615>
Some encoders (e.g. Makito) produce H.265 field-based interlacing, but then
also specify a 1:2 pixel aspect ratio. That makes it kind of work with
decoders that don't properly support field-based decoding, but makes us
end up with the wrong aspect ratio if we implement everything properly.
As a workaround, detect a 1:2 pixel aspect ratio for field-based
interlacing, and check whether making it 1:1 would result in a common
display aspect ratio. In that case, we override it with 1:1.
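A rough sketch of this heuristic; the helper and the list of "common"
ratios are illustrative, not the actual parser code:

  #include <glib.h>

  /* Sketch: only override a 1:2 PAR if a 1:1 PAR would yield a common
   * display aspect ratio for the given resolution. */
  static gboolean
  should_override_par (gint width, gint height, gint par_n, gint par_d)
  {
    static const struct { gint n, d; } common_dar[] = {
      {16, 9}, {4, 3}, {21, 9}, {3, 2},
    };
    guint i;

    if (par_n != 1 || par_d != 2)
      return FALSE;

    for (i = 0; i < G_N_ELEMENTS (common_dar); i++) {
      if (width * common_dar[i].d == height * common_dar[i].n)
        return TRUE;
    }
    return FALSE;
  }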
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2577>
Currently, the video format is decided by the downstream caps intersection,
but that's not correct since the chroma is not considered. The video
decoders have to decide the output format given the used chroma, not
by the downstream caps negotiation.
This patch changes that. Still, the caps feature is selected by caps
negotiation; then, within the preferred caps feature, the output format
is searched for.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2569>
Rewrite GstD3D11Converter (equivalent to GstVideoConverter)
to optimize some conversion paths and clean up the code.
* Extract the YUV <-> RGB conversion matrix building method to
utils. It will be used by other implementations
* Use calculated offset values for YCbCr <-> YPbPr conversion
instead of hardcoded values
* Handle color range adjustment
* Move transform matrix building helper function to utils.
The method will be used by other elements
* Use a single constant buffer. Multiple constant buffers for the
conversion pipeline are almost pointless
* Remove lots of duplicated HLSL code and split pixel shader
code path into sampling -> colorspace conversion ->
shader output packing
* Avoid floating point precision error around UV coordinates
* Optimize RGB -> YUV conversion path
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2581>
Copy the V4L2_PIX_FMT_P010 define from the Linux header.
V4L2_PIX_FMT_P010 is the little-endian definition of P010, so map
it to GST_VIDEO_FORMAT_P010_10LE.
Add it to the v4l2 default video formats to allow v4l2 decoders to
enumerate and use it.
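For reference, the define as found in recent Linux uAPI headers looks
roughly like this:

  #include <linux/videodev2.h>

  /* Copied from linux/videodev2.h on recent kernels; P010 is 4:2:0 with
   * 10 bits per component stored in little-endian 16-bit words, hence
   * the mapping to GST_VIDEO_FORMAT_P010_10LE. */
  #ifndef V4L2_PIX_FMT_P010
  #define V4L2_PIX_FMT_P010 v4l2_fourcc ('P', '0', '1', '0')
  #endif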
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2590>
When the input is a D3D11 texture, QSV seems to work regardless
of the alignment. The alignment requirement only seems to make
sense for system memory.
Other Intel GPU dependent implementations (the new VA encoder and
MediaFoundation) do not require such alignment, nor do other vendor
specific ones (NVENC and AMF).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2540>
A few header files in -bad contain comments that start with the
/** gtk-doc pattern, but should not actually be parsed (and warned
about as such).
Previously, we were using far-reaching wildcard patterns to avoid
parsing those, but this had the unintended side effect of also
excluding legitimate files, and creating confusion when comments
were not parsed from those.
Switch to excluding specific files instead.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2576>
Msdkdec should use its own pool when the allocator from the downstream
query is not an msdk allocator (i.e. msdk_video_allocator,
msdk_dmabuf_allocator or msdk_system_allocator). Otherwise, when using a
pipeline like "msdkh264dec ! vah264enc !" to transcode a stream that is
not 16-bit-aligned (e.g. 1920x1080), the transcoding will fail due to a
size mismatch between the decoder pool and the encoder pool.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2451>
This new signal allows data-channel consumers to configure signal handlers on a
newly created data-channel, before any data or state change has been notified.
The webrtcbin unit tests were refactored to make use of this new signal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2427>
In preparation for the new element `GstGtkWaylandSink`, move reusable
parts out of `GstWaylandSink` into the already existing but very
bare-bones library.
Notable changes include:
- the `GstWaylandVideo` interface was dropped
- support for `wl-shell` was dropped
- lots of renaming in order to match established naming patterns
- lots of code modernisations, reducing boilerplate
- members were made private wherever possible
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2479>
Adding a uri interface enables plugging in RFB/VNC sources to anything
that makes use of uridecodebin:
gst-play-1.0 rfb://:password@10.40.216.180:5903?shared=1
Use the userinfo part to pass the user (ignored) and password; other
key/value pairs can be encoded in the query part of the URI (see shared).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1963>
This is a workaround for the PTS, because oneVPL cannot handle the PTS
correctly when there are B-frames. We first cache the input frame PTS in
a queue, then retrieve the smallest one for the output encoded frame, as
we always output the coded frame when the frame is displayable.
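A simplified sketch of that bookkeeping (the helpers are illustrative,
not the element's actual functions):

  #include <gst/gst.h>

  /* Sketch: remember every input PTS, and when a coded frame comes out,
   * pop the smallest cached PTS and use it as the output timestamp. */
  static void
  cache_input_pts (GArray * pts_list, GstClockTime pts)
  {
    g_array_append_val (pts_list, pts);
  }

  static GstClockTime
  pop_smallest_pts (GArray * pts_list)
  {
    guint i, min_idx = 0;
    GstClockTime min_pts;

    if (pts_list->len == 0)
      return GST_CLOCK_TIME_NONE;

    for (i = 1; i < pts_list->len; i++) {
      if (g_array_index (pts_list, GstClockTime, i) <
          g_array_index (pts_list, GstClockTime, min_idx))
        min_idx = i;
    }

    min_pts = g_array_index (pts_list, GstClockTime, min_idx);
    g_array_remove_index (pts_list, min_idx);
    return min_pts;
  }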
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2089>
gst_amf_encoder_try_output() pushes at most one output buffer downstream
although more may be ready. As a consequence, output samples will keep
queueing up in AMFComponent whenever QueryOutput() returns AMF_REPEAT
(and do_wait is FALSE). This has a negative impact on latency when the
video being encoded is a live stream.
In order to avoid it, always retrieve and push all samples available in
AMFComponent's output queue at once.
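In outline, the change amounts to draining the queue in a loop; the helper
names below are hypothetical stand-ins for the AMF calls:

  /* Sketch: keep pulling encoded samples until the component reports
   * that nothing more is ready (AMF_REPEAT), pushing each downstream.
   * amf_encoder_query_output() and amf_encoder_push_sample() are
   * hypothetical wrappers, not the plugin's real functions. */
  static GstFlowReturn
  drain_all_outputs (GstAmfEncoder * self)
  {
    GstFlowReturn ret = GST_FLOW_OK;
    gpointer sample;

    while (ret == GST_FLOW_OK &&
        (sample = amf_encoder_query_output (self)) != NULL)
      ret = amf_encoder_push_sample (self, sample);

    return ret;
  }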
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2536>
The Intel DXVA driver sometimes crashes (from the GPU thread) if
ID3D11VideoDecoder is released while there are outstanding view objects.
To keep the object life cycle safe, hold an ID3D11VideoDecoder refcount
in the GstD3D11Memory object.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2504>
We prefer black as the initial texture color, and the Direct3D11
runtime will initialize textures with zeros (except for alpha),
which is fine for RGB formats. But the UV components of a YUV texture
require a manual clear to represent black.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2502>
1) Check for the right macro name when checking for NICE_VERSION_CHECK.
2) A libnice version of 0.1.18.1 should not satisfy
NICE_VERSION_CHECK(0,1,19).
Fixes build with libnice 0.1.18.1 subproject checkout.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2499>
We should not reset the input/output_frame_count when some configuration
changes. For example, if the resolution changes, the current way just
resets the frame count and makes the PTS of the output buffers restart
from the original PTS of the first frame. That causes a lot of QoS
events and drops all the new frames.
We should only reset them in the encoder's start().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2489>
d3d11screencapture can miss a cursor shape to draw or draw an outdated cursor shape.
- AcquireNextFrame only provides the cursor shape when there is an update
- the current d3d11screencapture skips the cursor shape when the mouse is not drawn
So, if a GStreamer application uses d3d11screencapture with the cursor initially not
drawn ("show-cursor"=false) and then switches this property to true, the cursor will
not actually be drawn until AcquireNextFrame provides a new cursor shape.
This commit makes d3d11screencapture always update the cursor shape information, even
if the mouse is not drawn. d3d11screencapture will always have the latest cursor shape
when requested to draw it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2485>
When bundling, if multiple medias use the same media payload, then
each of the fec/rtx/red additions would add a distinct payload. This could
very easily overflow the available payload space.
Instead, track the relationship between the media payload value and
the relevant fec/rtx/red payload values and reuse them whenever
necessary, even when bundling.
e.g.
...
a=group:BUNDLE video0 video1
m=video 9 UDP/SAVPF 96 97
a=mid:video0
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
m=video 9 UDP/SAVPF 96 97
a=mid:video1
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2474>
* Enhance the debug log to print human readable D3D11_FORMAT_SUPPORT flags
values, instead of the packed numeric flagset value.
* Only formats supported by the device will be added to the format table.
Depending on the device feature level (i.e., D3D9 feature devices),
16-bit formats may not be supported. Although there might be formats
we defined but that are not supported, it will not be a major issue in
practice, since our D3D11 implementation does not support legacy devices
anyway (known limitation) and the old d3dvideosink will be promoted in that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2441>
Now it uses the JPEG parser in libgstcodecparsers, while the whole
code is simplified by relying more on the baseparse class for tag
handling.
The element now signals chroma-format, and the default framerate is 0/1,
which is for still images.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1473>
NVIDIA GPUs have an undocumented limitation regarding the minimum
resolution, and it can be queried via an NVDEC API. However, since we
don't want to bring the CUDA/NVDEC API into D3D11, use hardcoded values
for now until we find a nice way to do the capability check.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2406>
Until March 2022, the FFmpeg MXF muxer would write the various index table
segments with the same instance ID, which should only be used if it is a
duplicate/repeated table.
In order to cope with those, we first compare the other index table segment
properties (body/index SID, start position) before comparing the instance
ID. This will ensure that we don't consider them as duplicate, but can still
detect "real" duplicates (which would have the same other properties).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2407>
If the stream chroma doesn't match any video format in the source
caps template (generated from the VA config surface formats), instead of
returning unknown, return the first available format in the template,
assuming that the driver is capable of doing the color conversion.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2404>
Use the newly added gst_h265_parser_identify_and_split_nalu_hevc()
method to handle broken streams where a packetized NAL unit
contains a start code prefix in it.
Such streams are obviously wrong, but we know how to work around
them and even need to support them, since stateless decoder
implementations are becoming primary decoder elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
Add a gst_h265_parser_identify_and_split_nalu_hevc() method to
handle the case where a packetized stream contains a start-code prefix.
This new method behaves similarly to the existing gst_h265_parser_identify_nalu_hevc(),
but it will scan for start-code prefixes to split the given data into
NAL units.
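A hypothetical usage sketch; the parameter list is an approximation based
on the description above and may not match the exact signature:

  /* Sketch: split a packetized HEVC NAL unit that may contain start-code
   * prefixes into individual GstH265NalUnit entries. 'parser', 'data',
   * 'size' and 'nal_length_size' are assumed to exist in the caller. */
  GArray *nalus = g_array_new (FALSE, FALSE, sizeof (GstH265NalUnit));
  gsize consumed = 0;
  GstH265ParserResult res;

  res = gst_h265_parser_identify_and_split_nalu_hevc (parser, data, 0, size,
      nal_length_size, nalus, &consumed);
  if (res == GST_H265_PARSER_OK) {
    /* each entry in nalus is a NAL unit without the start-code prefix */
  }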
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
Instead of using a hard-coded list of preferred formats according to the
chroma type, now, if no caps are pre-negotiated, the first format with
the same chroma type is chosen from the template caps. If caps are
pre-negotiated, then the first format with the same chroma type is
chosen from the first caps structure.
Also, all the decoders will check whether GST_VIDEO_FORMAT_UNKNOWN is
returned, failing the negotiation in that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2351>
The V4L2 spec now requires the decode_params flags to be set in accordance
with the frame's type. In particular, this is required by the H.264 decoder
of the NVIDIA Tegra SoC to operate properly. Set the flags based on the type
of the parsed slices.
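A sketch of that mapping, assuming the PFRAME/BFRAME decode-param flags
from recent linux/v4l2-controls.h (the exact flag names should be checked
against the kernel header):

  #include <linux/v4l2-controls.h>
  #include <gst/codecparsers/gsth264parser.h>

  /* Sketch: derive v4l2_ctrl_h264_decode_params.flags from the parsed
   * slice type; slice types >= 5 mean "all slices are of this type". */
  static guint32
  slice_type_to_flags (const GstH264SliceHdr * slice, gboolean is_idr)
  {
    guint32 flags = 0;

    if (is_idr)
      flags |= V4L2_H264_DECODE_PARAM_FLAG_IDR_PIC;

    switch (slice->type % 5) {
      case GST_H264_P_SLICE:
        flags |= V4L2_H264_DECODE_PARAM_FLAG_PFRAME;
        break;
      case GST_H264_B_SLICE:
        flags |= V4L2_H264_DECODE_PARAM_FLAG_BFRAME;
        break;
      default:
        break;
    }

    return flags;
  }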
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1757>