The code is simplified by using GQuarks to look up caps features, and by
removing inner loops.
Also, the pad template caps are used for the comparison with the incoming
caps, because that is cheaper at the beginning of negotiation, where the pad
template caps are used anyway.
And, since the ANY caps were removed, there's no need to check for an initial
intersection.
Finally, the completion of caps features is done through a loop.
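A minimal sketch of the quark-based lookup, using only GStreamer core API;
the helper name is ours and the actual code in gstvavpp.c may be organized
differently:

  #include <gst/gst.h>

  /* Hypothetical helper: returns TRUE if any structure in @caps carries the
   * caps feature identified by @feature, comparing interned GQuark ids
   * instead of strings inside the loop. */
  static gboolean
  caps_has_feature (const GstCaps * caps, GQuark feature)
  {
    guint i, n = gst_caps_get_size (caps);

    for (i = 0; i < n; i++) {
      GstCapsFeatures *features = gst_caps_get_features (caps, i);

      if (gst_caps_features_contains_id (features, feature))
        return TRUE;
    }

    return FALSE;
  }

The quark would be interned once, e.g. with
g_quark_from_static_string ("memory:VAMemory"), so each per-structure check
becomes an integer comparison instead of a string comparison.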
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698>
The ANY caps in the pad template caps seem to mess up the DMA negotiation.
The command
GST_GL_API=opengl gst-launch-1.0 -vf videotestsrc ! video/x-raw,format=NV12 !
vapostproc ! "video/x-raw(memory:DMABuf)" ! glimagesink
fails to negotiate, even though vapostproc can in fact convert the input
NV12 format into the RGBA format for rendering.
The ANY caps may help the passthrough mode, but we should make the
negotiation correct first.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698>
When the conversion is only a caps feature change, from memory:VAMemory to
system memory, it's possible to optimize by doing a pseudo pass-through,
since the VA-backed buffers are the same as the system memory buffers.
This change also mitigates #2940
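A rough sketch of the kind of check that enables such a pseudo pass-through,
using only GStreamer core API; the helper name and the exact condition used
by vapostproc are assumptions:

  #include <gst/gst.h>

  /* The conversion is a pure caps-feature change when both caps hold a
   * single, equal structure and only their caps features differ
   * (e.g. memory:VAMemory vs. system memory). */
  static gboolean
  only_caps_feature_changes (GstCaps * incaps, GstCaps * outcaps)
  {
    GstStructure *ins, *outs;

    if (gst_caps_get_size (incaps) != 1 || gst_caps_get_size (outcaps) != 1)
      return FALSE;

    ins = gst_caps_get_structure (incaps, 0);
    outs = gst_caps_get_structure (outcaps, 0);

    /* same format, size, framerate, etc. */
    if (!gst_structure_is_equal (ins, outs))
      return FALSE;

    /* ... but different caps features */
    return !gst_caps_features_is_equal (gst_caps_get_features (incaps, 0),
        gst_caps_get_features (outcaps, 0));
  }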
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6174>
When we fix up the src caps, the current way of handling the HDR fields is
not correct.
1. We trim the HDR fields only when the input caps are not a subset of the
fixed-up src caps. But in fact, input caps with HDR fields such as
"mastering-display-info" can still be a subset of the fixed-up src caps, if
all their other fields are the same.
2. We always copy the colorimetry from the input caps to the src caps if it
is absent. But when hdr-tone-mapping is enabled, the HDR->SDR conversion
changes the colorimetry. We should use downstream's setting, or just use the
default SDR colorimetry.
The changes are:
1. If hdr-tone-mapping is enabled, trim all HDR fields and set a correct
colorimetry (sketched below).
2. Copy the colorimetry from the input if it is still absent.
3. Consider the subset replacement.
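A minimal sketch of point 1, using the standard raw-video caps fields;
choosing bt709 here is an illustrative default and not necessarily what the
element ends up setting:

  #include <gst/video/video.h>

  /* Drop the HDR metadata that no longer applies after HDR->SDR tone
   * mapping, and advertise an SDR colorimetry instead of copying the
   * input's one. */
  static void
  trim_hdr_fields (GstStructure * s)
  {
    gst_structure_remove_fields (s, "mastering-display-info",
        "content-light-level", NULL);
    gst_structure_set (s, "colorimetry", G_TYPE_STRING,
        GST_VIDEO_COLORIMETRY_BT709, NULL);
  }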
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2244>
By default, the classification is
"Converter/Filter/Colorspace/Scaler/Video/Hardware", but if the VA
post-processor driver supports color balance, skin tone enhancement,
sharpening or noise reduction, "Effect" is added.
Thus, if vapostproc's rank is raised, it can be chosen by autovideosink.
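A hedged sketch of how such a conditional classification could be set in
class_init; the variable names and the longname/description/author strings
are placeholders, not the plugin's actual ones:

  /* driver_has_effects would come from querying the VA filters supported by
   * the driver (color balance, skin tone enhancement, sharpening, noise
   * reduction). */
  const gchar *classification = driver_has_effects
      ? "Converter/Filter/Colorspace/Scaler/Video/Hardware/Effect"
      : "Converter/Filter/Colorspace/Scaler/Video/Hardware";

  gst_element_class_set_metadata (element_class,
      "VA-API Video Postprocessor",         /* longname (placeholder) */
      classification,
      "VA-API based video postprocessor",   /* description (placeholder) */
      "GStreamer developers");              /* author (placeholder) */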
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2066>
In order for other plugins to use the GstVa objects, such as allocators and
buffer pools, this merge request moves them from the va plugin to the gstva
library.
These objects are not exposed in <gst/va/gstva.h> since they are not expected
to be used by users, only by plugin implementors.
Because the surface copy design, which is used to implement the allocator's
mem_copy() virtual method, depends on the vafilter, which is kept inside the
plugin, memory copy through VAPostproc is disabled and removed temporarily.
Also, some missing parameter validation was added.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2048>
This patch adds a new property: hdr-tone-mapping (same as in vaapipostproc),
available if the HDR capabilities are exposed by the driver, and disabled by
default.
If hdr-tone-mapping is enabled, the HDR fields in the sink caps are
processed, converting frames from HDR to SDR, and those HDR fields are also
removed from the source pad caps.
hdr-tone-mapping is not enabled if a color conversion is also requested,
since that fails in the iHD driver, so far.
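A sketch of the property installation, guarded by the driver capability;
PROP_HDR_TONE_MAPPING, has_hdr_caps and gobject_class are assumed names, and
the nick/blurb strings are illustrative:

  if (has_hdr_caps) {
    g_object_class_install_property (gobject_class, PROP_HDR_TONE_MAPPING,
        g_param_spec_boolean ("hdr-tone-mapping", "HDR tone mapping",
            "Enable HDR to SDR tone mapping",
            FALSE /* disabled by default */,
            G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
  }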
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1258>
Intel drivers expose some color balance maximum values that are much bigger
than their minimum values, relative to their middle (default) value. In
practice, this means that the real middle point between the maximum and
minimum values implies a major change in the color balance, which is not
expected by the GStreamer color balance logic.
This patch makes the reported maximum value symmetrical to the minimum
value around the middle (default) one.
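The adjustment boils down to something like this sketch (the function and
parameter names are assumptions):

  #include <glib.h>

  /* Clamp the driver's maximum so that (max - default) == (default - min),
   * keeping the reported default as the true middle point of the range. */
  static gdouble
  symmetric_max (gdouble min, gdouble def, gdouble max)
  {
    return MIN (max, def + (def - min));
  }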
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1580>
When fixating the color, there might be "other caps" with color spaces not
supported by the caps features exposed in vapostproc's source pad caps
template (perhaps it's a bug somewhere else in GStreamer).
This solution checks whether the proposed format is supported by the filter
for the caps feature associated with that format.
The check is done with the new filter function
gst_va_filter_has_video_format().
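For illustration, the check could be used like this while walking candidate
formats; the exact prototype of gst_va_filter_has_video_format() lives in
gstvafilter.h and may take more parameters (e.g. the caps feature), so treat
the call below as an assumption:

  GstVideoFormat format = gst_video_format_from_string (format_str);

  /* reject formats the VA filter cannot handle for this caps feature */
  if (format == GST_VIDEO_FORMAT_UNKNOWN
      || !gst_va_filter_has_video_format (self->filter, format))
    return FALSE;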
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1559>
gst_va_fixate_format() iterates over all of othercaps' structures to find
the one with the least information loss at color conversion. If a structure
with the same color format is found, the iteration stops. It's like a smart
truncation. This function also chooses the caps feature.
Later, this structure is used to fixate its size, so no further truncation
is needed.
Don't intersect at fixate, since that kills possible resizing.
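Schematically, the iteration looks like the following outline; it assumes
each structure carries a fixed format string, in_format is the input caps'
format, and score_format_loss() is a hypothetical helper standing in for the
actual loss scoring:

  guint i, n = gst_caps_get_size (othercaps);
  gint best_loss = G_MAXINT;
  GstStructure *best = NULL;

  for (i = 0; i < n; i++) {
    GstStructure *s = gst_caps_get_structure (othercaps, i);
    const gchar *format = gst_structure_get_string (s, "format");

    if (g_strcmp0 (format, in_format) == 0) {
      best = s;                 /* same format: stop, smart truncation */
      break;
    }

    if (format != NULL) {
      gint loss = score_format_loss (in_format, format);  /* hypothetical */

      if (loss < best_loss) {
        best_loss = loss;
        best = s;
      }
    }
  }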
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1261>
gst_va_vpp_complete_caps_features() now receives the @feature_name to add,
and returns whether @caps doesn't already provide it.
So, instead of two nested loops, the function is now a single loop,
traversing @caps to find out whether each structure already contains the
requested @feature_name.
It's important to add the missing caps features along with @caps, in order
not to lose information.
The caller does the external loop, calling the function once per available
caps feature (see the sketch below).
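A sketch of the single-loop shape described above; the actual function body
in gstvavpp.c may differ in details:

  #include <gst/gst.h>

  /* Returns TRUE when no structure of @caps already carries @feature_name,
   * so the caller knows the feature still has to be added. */
  static gboolean
  complete_caps_features (const GstCaps * caps, const gchar * feature_name)
  {
    guint i, n = gst_caps_get_size (caps);

    for (i = 0; i < n; i++) {
      GstCapsFeatures *features = gst_caps_get_features (caps, i);

      if (gst_caps_features_contains (features, feature_name))
        return FALSE;
    }

    return TRUE;
  }

The caller can then append to @caps a copy with the missing feature set, for
instance via gst_caps_copy() plus gst_caps_set_features_simple(), once per
supported caps feature.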
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1024>
In order to make the caps transformation more readable, the operation was
split into two phases:
1. Rangify the supported caps structures.
2. Add the missing (and supported) caps features.
Step 1 changed its logic to copy any unrecognized structure as-is (see the
sketch below).
This is a preliminary step required for allowing the ANY caps feature as
passthrough.
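As an illustration of step 1, the "rangified" copy of a supported raw-video
structure could be built like this; caps, i and ret are assumed surrounding
variables, and the real helper may also drop format-related fields before
merging:

  GstStructure *s = gst_structure_copy (gst_caps_get_structure (caps, i));

  /* turn fixed sizes into full ranges so any scaling can be negotiated */
  gst_structure_set (s, "width", GST_TYPE_INT_RANGE, 1, G_MAXINT,
      "height", GST_TYPE_INT_RANGE, 1, G_MAXINT, NULL);

  gst_caps_append_structure (ret, s);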
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1024>
The first approach to fixate was simply a copy & paste from both
videoconvert and videoscale, trying to keep their logic as isolated as
possible. But that brought duplicated and scattered logic.
This patch merges both approaches, simplifying the fixate operation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1109>