AS-IS:
The D3D11Convert class is the base class of D3D11ColorConvert and D3D11Scale
* GstD3D11Convert
|_ GstD3D11ColorConvert
|_ GstD3D11Scale
TO-BE:
Introducing a new base class for color conversion and/or rescaling elements
* GstD3D11BaseConvert
|_ GstD3D11Convert
|_ GstD3D11ColorConvert
|_ GstD3D11Scale
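In GObject terms the split could look roughly like this (a sketch only; the
parent type GstD3D11BaseFilter and the macro layout are assumptions, not
copied from the patch):

  /* Shared base class for the conversion/scaling elements. */
  G_DECLARE_DERIVABLE_TYPE (GstD3D11BaseConvert, gst_d3d11_base_convert,
      GST, D3D11_BASE_CONVERT, GstD3D11BaseFilter);

  /* The three public elements then derive from that common base. */
  G_DECLARE_FINAL_TYPE (GstD3D11Convert, gst_d3d11_convert,
      GST, D3D11_CONVERT, GstD3D11BaseConvert);
  G_DECLARE_FINAL_TYPE (GstD3D11ColorConvert, gst_d3d11_color_convert,
      GST, D3D11_COLOR_CONVERT, GstD3D11BaseConvert);
  G_DECLARE_FINAL_TYPE (GstD3D11Scale, gst_d3d11_scale,
      GST, D3D11_SCALE, GstD3D11BaseConvert);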
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2029>
In 6ae24948 the pipeline buffer destruction was removed, assuming it
wasn't required. Nonetheless, debugging the code it looks like there is
a buffer leak in the iHD driver, since the ID of the buffer kept
increasing. The difference now is that the filter buffers are destroyed
first and the pipeline buffer afterwards.
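A minimal sketch of the teardown order this restores (variable names are
illustrative):

  /* Destroy the filter parameter buffers first ... */
  for (i = 0; i < num_filters; i++)
    vaDestroyBuffer (dpy, filters[i]);

  /* ... and only afterwards the pipeline parameter buffer, so the iHD
   * driver stops handing out ever-increasing buffer IDs. */
  vaDestroyBuffer (dpy, pipeline_buffer);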
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2023>
Just like the decoder, vapostproc also needs to copy the output
buffer to a raw buffer if the downstream elements only support raw caps
and do not support the video meta.
A pipeline like:
gst-launch-1.0 filesrc location=xxxx ! h264parse ! vah264dec ! \
vapostproc ! capsfilter caps=video/x-raw,width=55,height=128 ! \
filesink location=xxx
needs this logic to dump the data correctly.
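A minimal sketch of that copy path, assuming src_info/dst_info describe
the VA-backed and downstream buffers respectively (names are
illustrative):

  GstVideoFrame src_frame, dst_frame;

  if (!gst_video_frame_map (&src_frame, &src_info, va_buffer, GST_MAP_READ))
    return GST_FLOW_ERROR;
  if (!gst_video_frame_map (&dst_frame, &dst_info, raw_buffer, GST_MAP_WRITE)) {
    gst_video_frame_unmap (&src_frame);
    return GST_FLOW_ERROR;
  }

  /* Copy plane by plane into the strides downstream expects, since it
   * cannot interpret the VA surface layout without a video meta. */
  gst_video_frame_copy (&dst_frame, &src_frame);

  gst_video_frame_unmap (&dst_frame);
  gst_video_frame_unmap (&src_frame);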
Fixes: #1523
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2026>
Both the wasapi2 and wasapi plugins use the WASAPI API, so "device.api=wasapi"
would make sense for the wasapi2 plugin as well. But people would be
confused by an identical "device.api=wasapi" value when the intended
plugin is wasapi, not wasapi2. This change makes the two plugins
distinguishable through the "device.api" device property.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2024>
The first time update_src_caps is called, no frame has been parsed yet,
so we don't know whether the file uses the alternate-field interlacing
mode. If we run it again after we have a frame, the SEI pic_struct may
now be parsed, telling us that the stream is field-based interlaced and
that the height must therefore be multiplied by two. Earlier on this was
not detected as a caps change.
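A sketch of the caps update this enables once the field-based
interlacing is known (the coded height of such a stream is the field
height, so the advertised height doubles; variable names are
illustrative):

  /* Each coded picture is a single field, so the frame height in the
   * caps is twice the coded (field) height. */
  if (interlace_mode == GST_VIDEO_INTERLACE_MODE_ALTERNATE)
    height *= 2;

  gst_caps_set_simple (caps,
      "height", G_TYPE_INT, height,
      "interlace-mode", G_TYPE_STRING, "alternate", NULL);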
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2022>
One problem the VA dmabuf allocator had was preparing a buffer from the
dmabuf memories in the allocator pool, especially when a buffer is composed
of several memories. These memories have to come in a certain number and in
a certain order.
This patch stores the number of memories and their addresses, in order, when
a dmabuf-based buffer is created, and uses this info to reconstruct the
buffer when preparing one.
Finally, instead of pushing the memories back as soon as they are unreffed,
they are held until GstVaBufferSurface's ref_mems_count reaches zero (all the
memories related to that buffer/surface have been unreffed). Only then are
all the memories pushed back into the queue, under lock, ensuring that the
memories related to a single buffer (with the same surface) remain
contiguous, so the buffer reconstruction is guaranteed.
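A rough sketch of the bookkeeping described above (struct, field and
queue names are illustrative, not the allocator's actual ones):

  typedef struct {
    GstMemory *mems[GST_VIDEO_MAX_PLANES];  /* stored in creation order */
    guint num_mems;                         /* memories composing one buffer */
    gint ref_mems_count;                    /* memories still in use */
  } BufferSurfaceInfo;

  static void
  memory_released (BufferSurfaceInfo * info, GstAtomicQueue * queue)
  {
    /* Do nothing until every memory of this buffer/surface is unreffed. */
    if (!g_atomic_int_dec_and_test (&info->ref_mems_count))
      return;

    /* Push them all back together (the real code does this under the
     * allocator lock) so they stay contiguous in the queue and the
     * buffer can be reconstructed later in the same order. */
    for (guint i = 0; i < info->num_mems; i++)
      gst_atomic_queue_push (queue, info->mems[i]);
  }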
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2013>
Instead of removing memories from buffers at reset_buffer()/release_buffer(),
the bufferpool operation is kept as originally designed, while the allocator
pool is still used too. Thus, this patch restores the buffer size
configuration and removes the release_buffer(), reset_buffer() and
acquire_buffer() vmethod overrides.
Then, when the bufferpool base class decides to discard a buffer, the VA
surface-based memory is returned to the allocator pool when its last
reference is dropped, and later reused if a new buffer is allocated again.
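In GstBufferPool terms this roughly means only the config/allocation
vmethods remain overridden (a sketch; the function and type names are
illustrative):

  static void
  gst_va_pool_class_init (GstVaPoolClass * klass)
  {
    GstBufferPoolClass *pool_class = GST_BUFFER_POOL_CLASS (klass);

    pool_class->set_config = gst_va_pool_set_config;
    pool_class->alloc_buffer = gst_va_pool_alloc_buffer;
    /* release_buffer, reset_buffer and acquire_buffer are no longer
     * overridden: the GstBufferPool defaults keep buffers intact, and
     * the VA memory only goes back to the allocator pool when the base
     * class discards a buffer and the memory's last ref is dropped. */
  }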
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2013>
Add a new element, d3d11deinterlace, to support deinterlacing.
Similar to d3d11videosink and d3d11compositor, this element is
a wrapper bin around a set of child elements, including helpful
conversion elements (upload/download and color convert),
so that it can be linked between non-d3d11 elements.
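Conceptually the bin is wired like this (a sketch; the internal core
element name "d3d11deinterlaceelement" is an assumption):

  GstElement *bin = gst_bin_new ("d3d11deinterlace");
  GstElement *upload = gst_element_factory_make ("d3d11upload", NULL);
  GstElement *convert = gst_element_factory_make ("d3d11convert", NULL);
  GstElement *deint = gst_element_factory_make ("d3d11deinterlaceelement", NULL);
  GstElement *download = gst_element_factory_make ("d3d11download", NULL);
  GstPad *pad;

  gst_bin_add_many (GST_BIN (bin), upload, convert, deint, download, NULL);
  gst_element_link_many (upload, convert, deint, download, NULL);

  /* Ghost pads let the bin link against non-d3d11 elements, which then
   * go through upload/convert on the way in and download on the way out. */
  pad = gst_element_get_static_pad (upload, "sink");
  gst_element_add_pad (bin, gst_ghost_pad_new ("sink", pad));
  gst_object_unref (pad);

  pad = gst_element_get_static_pad (download, "src");
  gst_element_add_pad (bin, gst_ghost_pad_new ("src", pad));
  gst_object_unref (pad);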
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2016>
Instead of relying on GstBaseParse default behaviour of computing
the duration of a parsed buffer based on the framerate passed
to gst_base_parse_set_framerate(), we instead compute the duration
ourselves, as we have more information available.
In particular, this means we now output buffers with a duration
that matches that of raw interlaced buffers when each field is
output in a separate buffer.
This fixes DTS interpolation performed by GstBaseParse, as the
previous behaviour of outputting each field with the duration of
a full frame was messing up the base class calculations.
When not enough information is available, h264parse simply falls
back to calculating the duration based on the framerate and hopes
for the best, as was the case previously.
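A rough sketch of the duration computed when each field is pushed in its
own buffer, assuming fps_n/fps_d is the frame rate (names are
illustrative):

  /* A full frame lasts fps_d/fps_n seconds; one field lasts half of that. */
  GstClockTime frame_duration = gst_util_uint64_scale (GST_SECOND, fps_d, fps_n);
  GST_BUFFER_DURATION (buffer) = frame_duration / 2;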
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1973>
Since our decoder DPB texture pool cannot be grown once it's
configured, we should pre-allocate a sufficient number of textures
for zero-copy playback (but not too many).
The "min buffers" allocation query parameter can be a hint for
the number of required textures in addition to DPB size.
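A minimal sketch of reading that hint and sizing the texture pool (the
dpb_size variable is assumed):

  GstBufferPool *pool = NULL;
  guint size = 0, min_buffers = 0, max_buffers = 0;
  guint num_textures;

  if (gst_query_get_n_allocation_pools (query) > 0)
    gst_query_parse_nth_allocation_pool (query, 0, &pool, &size,
        &min_buffers, &max_buffers);

  /* The pool cannot grow once configured, so reserve the DPB plus
   * whatever downstream said it needs to hold. */
  num_textures = dpb_size + min_buffers;

  if (pool)
    gst_object_unref (pool);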
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2017>
In the current code, we call frame_unref only when the frame is
output. This is OK for normal playback, but when a seek happens,
the frames stored in the DPB are not output and cause a memory
leak.
The correct way is to call frame_unref every time we finish
handle_frame(), which is also the behaviour of the H264/H265
decoders.
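A sketch of the corrected pattern (a hypothetical subclass;
decode_picture() stands for whatever keeps its own DPB reference):

  static GstFlowReturn
  gst_foo_decoder_handle_frame (GstVideoDecoder * decoder,
      GstVideoCodecFrame * frame)
  {
    GstFlowReturn ret = decode_picture (decoder, frame);

    /* The DPB holds its own reference, so always drop ours when
     * handle_frame() finishes; otherwise frames still in the DPB at
     * seek time are leaked. */
    gst_video_codec_frame_unref (frame);

    return ret;
  }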
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2014>
* Invoke the GstH265DecoderClass::new_sequence() method on each interlaced
stream status update so that the subclass can update caps.
* Parse the picture timing SEI and set buffer flags on the GstH265Picture
object. Subclasses can refer to it as in our h264decoder
implementation (see the sketch below).
* Remove the pointless GstH265PictureField enum
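A sketch of the subclass side, assuming the flags are exposed as
picture->buffer_flags like in GstH264Picture:

  /* In the subclass' output_picture() implementation, forward the
   * interlacing flags parsed from the picture timing SEI. */
  GST_BUFFER_FLAG_SET (frame->output_buffer, picture->buffer_flags);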
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2008>
There were two problems with frame copy:
1. The input video info comes from the color format, not from the allocated VA
surface; the sink video info needs to be updated according to the
allocator's data.
2. The parameters of `gst_video_frame_copy()` were backwards.
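For the record, the destination comes first in the fixed call (variable
names are illustrative):

  /* gst_video_frame_copy() copies src into dest: destination first. */
  gst_video_frame_copy (&dest_frame, &src_frame);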
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2007>
The transform_size() basetransform vmethod is used when there's no output
buffer pool and a system memory buffer is allocated. With VA this cannot be
allowed, since it needs VASurfaces to process.
Thus transform_size() is not required, but to play it safe let's return FALSE.
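A sketch of the resulting vmethod (the function name is illustrative):

  static gboolean
  gst_va_vpp_transform_size (GstBaseTransform * trans,
      GstPadDirection direction, GstCaps * caps, gsize size,
      GstCaps * othercaps, gsize * othersize)
  {
    /* vapostproc only processes VASurfaces; never let basetransform
     * allocate a system memory output buffer from a guessed size. */
    return FALSE;
  }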
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2007>