The current way only selects the best video format from the first
structure of the caps. Caps like:
video/x-raw(memory:VAMemory),drm-format=(string)NV12; \
video/x-raw(memory:VAMemory),format=(string){ NV12, Y210 }
will just choose NV12 as the result, even if the bitstream is 10 bits.
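A minimal sketch of scanning every caps structure instead of only the
first one; the helper name and the selection logic are illustrative,
not the element's actual code:

  static GstVideoFormat
  pick_format_from_all_structures (GstCaps * caps)
  {
    guint i;

    for (i = 0; i < gst_caps_get_size (caps); i++) {
      GstStructure *s = gst_caps_get_structure (caps, i);
      const GValue *format = gst_structure_get_value (s, "format");

      /* a real implementation would rank every candidate (fixed string
       * or list) by bit depth/chroma against the stream instead of
       * returning the first fixed entry */
      if (format && G_VALUE_HOLDS_STRING (format))
        return gst_video_format_from_string (g_value_get_string (format));
    }
    return GST_VIDEO_FORMAT_UNKNOWN;
  }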
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4928>
The current way of using the parent's copy_metadata() virtual function
selectively filters out some metas, such as the crop meta. That virtual
function should be used when copying the input buffer's metadata into
the output buffer; it is not suitable when importing the input buffer.
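To illustrate the distinction, importing the input buffer could keep
all metas through the generic copy API instead of the filtering vfunc;
a hedged sketch, not the actual element code:

  /* keep flags, timestamps and every meta when importing/wrapping the
   * input buffer; no per-meta filtering */
  gst_buffer_copy_into (outbuf, inbuf,
      GST_BUFFER_COPY_FLAGS | GST_BUFFER_COPY_TIMESTAMPS |
      GST_BUFFER_COPY_META, 0, -1);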
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4887>
When the input buffer has a crop meta and we need to do a copy, we
should consider the uncropped video size and copy the full size of the
video memory.
The video meta in this case contains the full, uncropped resolution
info. We can use it to create full-size VA buffers.
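A minimal sketch, assuming the uncropped size is taken from the input
buffer's GstVideoMeta; the helper name is illustrative:

  static gboolean
  full_video_info_from_buffer (GstBuffer * inbuf, GstVideoInfo * info)
  {
    GstVideoMeta *vmeta = gst_buffer_get_video_meta (inbuf);

    if (!vmeta)
      return FALSE;

    /* the video meta carries the full (uncropped) width and height,
     * while the crop meta only describes the visible rectangle */
    return gst_video_info_set_format (info, vmeta->format, vmeta->width,
        vmeta->height);
  }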
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4887>
VA has its own internal video format mapping (because different drivers
may interpret the same format differently), so we should convert the
info in GstVideoInfoDmaDrm into the corresponding video info based on
that mapping.
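For reference, a generic (non-driver-specific) conversion looks roughly
like the sketch below; the plugin uses its VA-specific mapping instead
of the public gst_video_dma_drm_fourcc_to_format() helper shown here:

  static gboolean
  drm_info_to_video_info (const GstVideoInfoDmaDrm * drm_info,
      GstVideoInfo * out_info)
  {
    GstVideoFormat format =
        gst_video_dma_drm_fourcc_to_format (drm_info->drm_fourcc);

    if (format == GST_VIDEO_FORMAT_UNKNOWN)
      return FALSE;

    /* keep the dimensions from the DMA/DRM info, only translate the
     * format according to the mapping */
    return gst_video_info_set_format (out_info, format,
        GST_VIDEO_INFO_WIDTH (&drm_info->vinfo),
        GST_VIDEO_INFO_HEIGHT (&drm_info->vinfo));
  }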
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4821>
The device list returned by g_udev_client_query_by_subsystem() may
contain udev devices with out-of-order path names. For example, on some
platforms it may contain renderD129 before the renderD128 device. This
causes us to register the wrong va plugin names: renderD129 would be
registered as the default plugins such as vah265dec, while renderD128
would be registered as varenderD128h265dec.
This conflicts with the non-udev version of gst_va_device_find_devices().
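A hedged sketch of sorting the returned list by render node path so the
registration order is deterministic; the comparison helper is
illustrative:

  static gint
  compare_device_path (gconstpointer a, gconstpointer b)
  {
    return g_strcmp0 (g_udev_device_get_device_file ((GUdevDevice *) a),
        g_udev_device_get_device_file ((GUdevDevice *) b));
  }

  /* ... */
  devices = g_udev_client_query_by_subsystem (client, "drm");
  devices = g_list_sort (devices, compare_device_path);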
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4643>
In the same spirit as the libva-win32 elements, this patch shows the
driver of each element in gst-inspect, giving more information to the
user. The driver description is parsed from vaQueryVendorString() for
the mesa and intel drivers, and copied as is for others. It also
appends the render node for multi-GPU systems.
Fixes #2349
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4204>
Otherwise the application would not be able to know the matching
element for the wanted device. The typical use case of the read-only
device path property (DXGI adapter LUID, CUDA device index, etc.) is
that the application enumerates physical devices and then selects the
matching GStreamer element (in NULL state) via the device path property.
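An illustrative usage from the application side; the element and
property names used here ("vah264dec", "device-path") are assumptions
for this sketch:

  GstElement *dec = gst_element_factory_make ("vah264dec", NULL);
  gchar *path = NULL;

  if (dec) {
    /* readable even while the element is still in NULL state */
    g_object_get (dec, "device-path", &path, NULL);
    /* compare 'path' against the enumerated physical device ... */
    g_free (path);
    gst_object_unref (dec);
  }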
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4220>
The VA API has not defined the scaling list entries for the U/V planes
of 4:4:4 streams. In fact, we have not met 4:4:4 format output for H264
so far, and the scaling list is not used frequently, so we just print a
warning and ignore these scaling list values.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3749>
The VA-API vaQueryVideoProcPipelineCaps() requires the context as a
parameter. So far we have always passed VA_INVALID_ID and it succeeded,
but the API does not say that is allowed and, in theory, a valid
context is required.
Now a new platform really needs a valid context, so we have to delay
that query until the context is created.
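A minimal sketch of the delayed query with a valid context; the
surrounding variables are illustrative:

  VAProcPipelineCaps pipeline_caps = { 0, };
  VAStatus status;

  /* only after vaCreateContext() has produced 'context' */
  status = vaQueryVideoProcPipelineCaps (display, context, filters,
      num_filters, &pipeline_caps);
  if (status != VA_STATUS_SUCCESS)
    GST_WARNING ("vaQueryVideoProcPipelineCaps: %s", vaErrorStr (status));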
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3613>
Even though vaav1dec doesn't use vabasedec's negotiate vmethod, it
should align with the new scheme of using the base class' width &
height for the surface size and the output_info structure for the
downstream display size negotiation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3480>
This vmethod can be used by decoders with the same VA decoder reopen
logic: same profile, chroma, width and height.
Also adds a new public method called gst_va_base_dec_set_output_state()
with the common GStreamer code for setting the output state, which is
always called by the negotiate vmethod.
In order to do this refactoring, new variables in vabasedec have to be
populated by the decoders (see the sketch after this list):
* width and height define the resolution set in the VA decoder. In the
case of H264 they would be coded_width and coded_height, or max_width
and max_height in AV1.
* output_info is the downstream video info used for negotiation in
gst_va_base_dec_set_output_state().
* input_state, from the codec parent class, shall also be held by
vabasedec.
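A hedged sketch of how a codec subclass could fill these fields before
negotiating; the field names follow this message, everything else is
illustrative:

  /* resolution configured in the VA decoder (coded size for H264,
   * max size for AV1) */
  base->width = coded_width;
  base->height = coded_height;

  /* downstream display info used by gst_va_base_dec_set_output_state() */
  gst_video_info_set_format (&base->output_info, format, display_width,
      display_height);

  /* input state from the codec parent class */
  g_clear_pointer (&base->input_state, gst_video_codec_state_unref);
  base->input_state = gst_video_codec_state_ref (state);

  if (!gst_video_decoder_negotiate (GST_VIDEO_DECODER (base)))
    return FALSE;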
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3480>
There could be multi-GPU setups where the non-first device has more
entrypoints than the first one, so the element names are not
homogeneous, leading to pipeline building errors.
This patch adds the render node to the element names when they belong
to a non-first device.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3491>
To fix the warning on Alderlake:
vafilter gstvafilter.c:534:gst_va_filter_ensure_filters:<vafilter0>
vaQueryVideoProcFiltersCaps: list argument exceeds maximum number
Increase the number of caps to 16 as vadumpcaps does.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3473>
1. Removes the verification of whether the internal encoder is already
open, to allow the property setting.
2. Toggles on the base class' reconf flag for each property variable
that can be modified at run time.
3. Marks those modifiable properties as mutable while playing (see the
sketch below).
Currently the run-time modifiable properties are:
qpi, qpp, qpb, bitrate, target percentage, target usage and rate control.
Other properties could be enabled too, but they need testing.
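An illustrative sketch of item 3, installing a property with the
mutable-while-playing flag; the concrete property shown here is an
assumption:

  g_object_class_install_property (gobject_class, PROP_BITRATE,
      g_param_spec_uint ("bitrate", "Bitrate",
          "Target bitrate in kbit/s", 0, G_MAXUINT, 0,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS
          | GST_PARAM_MUTABLE_PLAYING));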
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2466>
Adds an internal function reset() which drains the internal queues and
calls the reconfig() vmethod.
This reset() method is called unconditionally in set_format() and in
handle_frame() if the instance's reconf flag is enabled.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2466>
If the parameters remain similar enough that neither reopening the
encoder nor renegotiating downstream is required, avoid doing either.
This is going to be useful for dynamic parameter setting.
Checking whether the stream parameters changed, so the internal encoder
has to be closed and opened again, requires two steps:
1. Check whether the input caps, profile, chroma or rate control mode
have changed.
2. Check whether any of the calculated variables and element properties
have changed.
Later on, only if the output caps also changed is the pipeline
renegotiated.
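A minimal sketch of the two-step check, assuming a hypothetical
snapshot struct; none of these names are the actual element code:

  typedef struct {
    GstCaps *input_caps;
    guint32 profile, rt_format, rc_mode;
    guint bitrate, target_percentage, target_usage;   /* properties */
  } EncConfig;

  static gboolean
  needs_reopen (const EncConfig * prev, const EncConfig * cur)
  {
    /* step 1: input caps, profile, chroma or rate control changed */
    if (!gst_caps_is_equal (prev->input_caps, cur->input_caps) ||
        prev->profile != cur->profile ||
        prev->rt_format != cur->rt_format || prev->rc_mode != cur->rc_mode)
      return TRUE;

    /* step 2: calculated variables and element properties changed */
    return prev->bitrate != cur->bitrate ||
        prev->target_percentage != cur->target_percentage ||
        prev->target_usage != cur->target_usage;
  }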
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2466>
This method will return the caps configured in the reconstruct buffer
pool, and its maximum number of buffers to allocate.
The caps are needed later to know whether the internal encoder has to
be reopened when the stream properties change.
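For reference, the public GstBufferPool API already exposes both
values; a hedged sketch of retrieving them (the wrapper name is an
assumption):

  static gboolean
  pool_get_caps_and_max (GstBufferPool * pool, GstCaps ** caps, guint * max)
  {
    GstStructure *config = gst_buffer_pool_get_config (pool);
    guint size, min;
    gboolean ret;

    ret = gst_buffer_pool_config_get_params (config, caps, &size, &min, max);
    if (ret && *caps)
      gst_caps_ref (*caps);       /* keep the caps beyond the config */
    gst_structure_free (config);
    return ret;
  }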
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2466>
In fact, all the h264 bit writer outputs are byte aligned except the
slice header. So we change the API from size in bits to size in bytes,
which is easier to use. For the slice header, we add an extra
"trail_bits_num" parameter to return the number of unaligned bits.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3193>
Handle the case where the encoder doesn't support rate control, which
is reported as VA_RC_NONE; and if the requested rate control mode is
not supported by the GStreamer element, the element configuration
fails.
It also logs the max and target bitrate.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3063>
The entrypoint is set when the encoder helper is constructed;
nonetheless, it was also passed as a parameter when opening. That's
buggy.
In order to simplify the code, the entrypoint set at construction is
honored.
But gst_va_encoder_has_profile_and_entrypoint() now doesn't rely on the
internal list of profiles, since that list only contains those that
belong to the codec and entrypoint; thus it queries the VA driver
directly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3063>
AV1 supports multiple spatial layers within one TU with different
resolutions, and only the highest spatial layer needs to be output.
For example, with two spatial layers where the base layer is 800x600
and the higher layer is 1920x1080, we need to decode both because the
higher layer needs the base layer as reference, but we only need to
output the 1920x1080 frames.
The current manner always renegotiates the caps once we detect that the
current picture resolution changes, so we renegotiate again and again
between the different layers. That's a big waste and performs very
poorly. We now only do the renegotiation for the highest output layer.
For the other, non-output layers, we just keep an internal buffer pool
which is big enough to handle the surface allocation.
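A hedged sketch of that decision; the function and its parameters are
hypothetical, not the decoder's actual code:

  static GstFlowReturn
  handle_layer_resolution (GstVideoDecoder * dec, gboolean is_output_layer,
      gboolean resolution_changed)
  {
    /* non-output layers only need big-enough surfaces from an internal
     * pool; no caps renegotiation */
    if (!is_output_layer)
      return GST_FLOW_OK;

    /* only the highest (output) spatial layer drives renegotiation */
    if (resolution_changed && !gst_video_decoder_negotiate (dec))
      return GST_FLOW_NOT_NEGOTIATED;

    return GST_FLOW_OK;
  }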
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2382>
The current handle_frame() does not return the real error that happens
in decode_scan() and decode_frame(), which makes the pipeline continue
after the error and may trigger an assertion later.
We also return the error when decode_quant_table() or
decode_huffman_table() fails.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2938>
The Radeon mesa gallium driver has a bug which adds the P010_10LE
format to the sink caps. This patch removes the formats which aren't
4:2:0 chroma.
gst_caps_set_format_array() wasn't used because the fix traverses
several structures with potentially different formats.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2844>
This patch adds a general mechanism for handling driver-specific hacks.
In this case, for the JPEG decoder in the i965 driver, which cannot
create surfaces with the fourcc specified.
From the JPEG decoder to the allocator, which creates the surfaces,
there's a non-trivial path: the basedec pseudo-class adds a hacks
guint32 which will be set by the actual elements (vajpegdec, in this
case), and basedec will always pass the hack to the allocator when the
allocator is instantiated.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
Given the supported rt formats in a profile/entrypoint config, it's
possible to know the supported JPEG colorspace and subsampling. This
patch adds this information to the coded caps for safer autoplugging
after the JPEG parser.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
This base class is intended for hardware accelerated decoders, but
since only VA uses it, it will be kept internal to the va plugin.
It follows the same logic as the other video decoders in the library,
but as JPEG images are independent, there's no need to handle a DPB, so
no need for a picture object. Instead, a scan object is added with all
the structures required to decode the image (Huffman and quant tables,
MCUs, etc.).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
Currently, the video format is decided by the downstream caps
intersection, but that's not correct since the chroma is not
considered. The video decoders have to decide the output format given
the used chroma, not by the downstream caps negotiation.
This patch changes that. Still, the caps feature is selected by caps
negotiation; then, with the preferred caps feature, the output format
is searched within that caps feature.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2569>
We should not reset the input/output_frame_count when some
configuration changes. For example, if the resolution changes, the
current way just resets the frame count and makes the PTS of the output
buffers restart from the original PTS of the first frame. That causes a
lot of QoS events and drops all the new frames.
We should only reset them in the encoder's start().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2489>