For interlaced video (see the sketch below):
* set the interlace mode in the src caps
* double the height from the SPS in the caps
* set field latency instead of frame latency
Fixes #778
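
A minimal sketch of the caps adjustment, where sps_field_height is assumed
to hold the per-field height derived from the SPS (variable names are
illustrative):

  /* Interlaced stream: advertise interleaved fields and the full frame
   * height; latency is then accounted per field, not per frame. */
  if (interlaced) {
    gst_caps_set_simple (caps,
        "interlace-mode", G_TYPE_STRING, "interleaved",
        "height", G_TYPE_INT, sps_field_height * 2, NULL);
  }
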
This code adds the required mechanism to try an allocation (not implemented
yet) and otherwise wait for more buffers. It also comes with a mechanism to
terminate the wait on flush or on the PAUSED_TO_READY transition, as sketched
below.
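
A minimal sketch of the cancellable wait, assuming the pool holds a
GMutex/GCond pair, a flushing flag, and a hypothetical try_allocate()
helper:

  static GstBuffer *
  gst_v4l2_codec_pool_acquire_or_wait (GstV4L2CodecPool * self)
  {
    GstBuffer *buf = NULL;

    g_mutex_lock (&self->lock);
    /* Try to allocate; if the pool is exhausted, wait for a buffer to
     * come back, unless the wait gets terminated. */
    while (!self->flushing && (buf = try_allocate (self)) == NULL)
      g_cond_wait (&self->cond, &self->lock);
    g_mutex_unlock (&self->lock);

    return buf;
  }

  /* Called on flush and on the PAUSED_TO_READY transition. */
  static void
  gst_v4l2_codec_pool_terminate_wait (GstV4L2CodecPool * self)
  {
    g_mutex_lock (&self->lock);
    self->flushing = TRUE;
    g_cond_broadcast (&self->cond);
    g_mutex_unlock (&self->lock);
  }
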
Use a goto to ensure that in all cases we clean up the current picture state.
Also move the src buffer allocation higher, so that we don't queue a bitstream
buffer if we don't have a picture buffer to decode into.
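
A sketch of the resulting control flow (the helpers are illustrative):

  static GstFlowReturn
  decode_one_slice (GstV4L2CodecH264Dec * self)
  {
    GstFlowReturn ret = GST_FLOW_OK;

    /* Allocate the src (picture) buffer first, so we never queue a
     * bitstream buffer without a picture buffer to decode into. */
    if (!allocate_src_buffer (self)) {
      ret = GST_FLOW_ERROR;
      goto done;
    }

    if (!queue_bitstream_buffer (self)) {
      ret = GST_FLOW_ERROR;
      goto done;
    }

  done:
    /* Single exit point: the current picture state is cleaned up in
     * every case. */
    reset_picture_state (self);
    return ret;
  }
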
If decoding failed because end_picture() failed, mark the picture as
nonexisting; this way output_picture() will be skipped. This avoids confusing
special cases in the output_picture() implementation.
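
A minimal sketch, assuming the picture object carries a nonexisting flag
that output_picture() already checks:

  if (!klass->end_picture (decoder, picture)) {
    GST_WARNING_OBJECT (decoder, "end picture failed");
    /* output_picture() skips nonexisting pictures, so no special error
     * handling is needed there. */
    picture->nonexisting = TRUE;
  }
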
This allows negotiating the output format through caps. Some drivers can
pipeline the decoder buffers through an image processor; for now only
colorspace conversion is supported.
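
A rough sketch of the negotiation, where gst_v4l2_decoder_enum_src_formats()
is a hypothetical helper returning the caps the decoder (or its image
processor) can produce:

  GstCaps *filter, *allowed;

  filter = gst_v4l2_decoder_enum_src_formats (self->decoder);
  allowed = gst_pad_peer_query_caps (GST_VIDEO_DECODER_SRC_PAD (self),
      filter);
  gst_caps_unref (filter);

  if (!gst_caps_is_empty (allowed)) {
    allowed = gst_caps_fixate (allowed);
    /* If the fixated format differs from the decoder's native output,
     * program the image processor to do the colorspace conversion. */
  }
  gst_caps_unref (allowed);
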
This implements driver stride support, but only for single-allocation
buffers.
This code is imported from the original v4l2 plugin and adapted to the new
helper context.
In some cases, when downstream does not support GstVideoMeta, we need to
normalize the stride and offset of the buffer so that downstream can render
it properly without a GstVideoMeta. This code is not called when GstVideoMeta
is supported downstream.
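
A minimal sketch of such a normalization using a frame copy (the two
GstVideoInfo fields describing the driver layout and the standard layout
are assumptions about the surrounding code):

  GstVideoFrame src_frame, dst_frame;

  if (gst_video_frame_map (&src_frame, &self->driver_info, src_buffer,
          GST_MAP_READ)) {
    if (gst_video_frame_map (&dst_frame, &self->output_info, dst_buffer,
            GST_MAP_WRITE)) {
      /* Copies plane by plane, honouring each side's stride/offset. */
      gst_video_frame_copy (&dst_frame, &src_frame);
      gst_video_frame_unmap (&dst_frame);
    }
    gst_video_frame_unmap (&src_frame);
  }
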
In this patch we strictly set the GstVideoMeta width/height to the coded
width and height. Further patches will add stride support and frame copying
for when downstream does not support GstVideoMeta.
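
As a sketch, attaching the meta with the coded size could look like this
(coded_width/coded_height and the video info come from the surrounding
code; offsets and strides are taken straight from the video info until
driver stride support lands):

  GstVideoInfo *info = &self->vinfo;

  gst_buffer_add_video_meta_full (buffer, GST_VIDEO_FRAME_FLAG_NONE,
      GST_VIDEO_INFO_FORMAT (info), coded_width, coded_height,
      GST_VIDEO_INFO_N_PLANES (info), info->offset, info->stride);
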
Don't let downstream cause a renegotiation at a random point in time. This
would lead to spurious renegotiations, and the decoder state may not be
recoverable.
We now pass the controls associated with a request, queue the bitstream,
queue a picture buffer to decode into and finally queue the request. This
now runs until the buffer pool is exhausted. The next step will be to
dequeue.
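
In terms of the media request uAPI, the per-frame flow is roughly this
(error handling omitted; the fds and v4l2_buffer variables are
illustrative, and request_fd is assumed to have been allocated with
MEDIA_IOC_REQUEST_ALLOC on the media device):

  /* 1) Associate the codec controls with the request. */
  struct v4l2_ext_controls controls = {
    .which = V4L2_CTRL_WHICH_REQUEST_VAL,
    .request_fd = request_fd,
    .count = n_controls,
    .controls = control_array,
  };
  ioctl (video_fd, VIDIOC_S_EXT_CTRLS, &controls);

  /* 2) Queue the bitstream buffer (OUTPUT queue) into the request. */
  bitstream.flags |= V4L2_BUF_FLAG_REQUEST_FD;
  bitstream.request_fd = request_fd;
  ioctl (video_fd, VIDIOC_QBUF, &bitstream);

  /* 3) Queue a picture buffer to decode into (CAPTURE queue). */
  ioctl (video_fd, VIDIOC_QBUF, &picture);

  /* 4) Queue the request itself to kick off the decode. */
  ioctl (request_fd, MEDIA_REQUEST_IOC_QUEUE);
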
In this patch we fill the control structures with the bitstream parameters
and copy the bitstream data into V4L2 memory. The slice parameters are only
the subset that Hantro needs, without any support for interlaced content.
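
As an illustration, part of the SPS control fill could look like the
following (field names follow the kernel's stateless H.264 structures;
gst_sps, bitstream_map, slice_data and slice_size are illustrative):

  struct v4l2_ctrl_h264_sps sps = {
    .profile_idc = gst_sps->profile_idc,
    .level_idc = gst_sps->level_idc,
    .chroma_format_idc = gst_sps->chroma_format_idc,
    .log2_max_frame_num_minus4 = gst_sps->log2_max_frame_num_minus4,
    .pic_width_in_mbs_minus1 = gst_sps->pic_width_in_mbs_minus1,
    .pic_height_in_map_units_minus1 =
        gst_sps->pic_height_in_map_units_minus1,
  };

  /* The bitstream itself is copied into mmap()ed V4L2 OUTPUT memory. */
  memcpy (bitstream_map, slice_data, slice_size);
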
This is a pooling allocator, and the buffer pool does nothing other than
reuse the GstBuffer structure. Note that the pool is an internal pool, so
the start/stop/set_config virtual functions are not implemented.
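
A minimal sketch of such an internal pool: only alloc_buffer is
overridden, and since the pool is never exposed to other elements the
start/stop/set_config defaults are left alone (the type layout here is an
assumption):

  #include <gst/gst.h>

  typedef struct
  {
    GstBufferPool parent;
  } GstV4L2CodecPool;

  typedef struct
  {
    GstBufferPoolClass parent_class;
  } GstV4L2CodecPoolClass;

  G_DEFINE_TYPE (GstV4L2CodecPool, gst_v4l2_codec_pool,
      GST_TYPE_BUFFER_POOL);

  static GstFlowReturn
  gst_v4l2_codec_pool_alloc_buffer (GstBufferPool * pool,
      GstBuffer ** buffer, GstBufferPoolAcquireParams * params)
  {
    /* The GstBuffer wrapper is created once here and then recycled by
     * the default acquire/release implementation. */
    *buffer = gst_buffer_new ();
    return GST_FLOW_OK;
  }

  static void
  gst_v4l2_codec_pool_class_init (GstV4L2CodecPoolClass * klass)
  {
    GST_BUFFER_POOL_CLASS (klass)->alloc_buffer =
        gst_v4l2_codec_pool_alloc_buffer;
  }

  static void
  gst_v4l2_codec_pool_init (GstV4L2CodecPool * self)
  {
  }
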
This introduces the skeleton of the H264 decoder. The plugin will list the
devices and register a subclass of the GstV4L2CodecH264Dec base class for
each of them. The subclass will pick up the device-specific information it
needs from the GstV4L2Device stored in the subclass structure.
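
A hypothetical sketch of that per-device registration, passing the device
as class data to a subclass_init (hypothetical) that stores it in the
subclass structure:

  #include <gst/gst.h>

  static void
  register_h264_decoder (GstPlugin * plugin, GstV4L2CodecDevice * device)
  {
    GTypeQuery query;
    GTypeInfo info = { 0, };
    GType type;
    gchar *name;

    g_type_query (GST_TYPE_V4L2_CODEC_H264_DEC, &query);
    info.class_size = query.class_size;
    info.instance_size = query.instance_size;
    info.class_init = (GClassInitFunc) subclass_init;  /* hypothetical */
    info.class_data = device;  /* picked up in subclass_init */

    name = g_strdup_printf ("v4l2%sh264dec", device->name);
    type = g_type_register_static (GST_TYPE_V4L2_CODEC_H264_DEC,
        name, &info, 0);
    gst_element_register (plugin, name, GST_RANK_PRIMARY + 1, type);
    g_free (name);
  }
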
This is a GstObject which will be used to hold the media and video device
file descriptors and to provide abstracted ioctl calls around these
descriptors. At the moment this helper contains just enough to enumerate the
supported formats.
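
As a sketch, one of those abstracted calls could look like this (the
object layout and the helper name are assumptions):

  #include <sys/ioctl.h>
  #include <linux/videodev2.h>
  #include <gst/gst.h>

  typedef struct
  {
    GstObject parent;
    gint media_fd;  /* media controller device */
    gint video_fd;  /* V4L2 video device */
  } GstV4L2Decoder;

  static gboolean
  gst_v4l2_decoder_enum_src_fmt (GstV4L2Decoder * self, guint index,
      guint32 * out_pixelformat)
  {
    struct v4l2_fmtdesc fmtdesc = {
      .index = index,
      .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
    };

    if (ioctl (self->video_fd, VIDIOC_ENUM_FMT, &fmtdesc) < 0)
      return FALSE;

    *out_pixelformat = fmtdesc.pixelformat;
    return TRUE;
  }
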
This part will be used by the plugin to register the CODEC-specific elements.
Most of the features we need are very recent or not yet exposed in the uAPI.
Using an internal copy ensures that everything we need is defined, avoiding
the need for loads of checks and conditional code.
This introduces a GstV4L2CodecDevice structure and a helper to retrieve a
list of CODEC device drivers. In order to find the device drivers we
enumerate all media devices with UDEV. We then get the media controller
topology and locate an entity with the encoder or decoder function, making
sure it is linked to two V4L2 IO entities pointing to the same device node.
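
A sketch of the enumeration step, assuming libudev; the topology check is
summarized in a comment:

  #include <libudev.h>

  static void
  enumerate_media_devices (void)
  {
    struct udev *udev = udev_new ();
    struct udev_enumerate *enumerate = udev_enumerate_new (udev);
    struct udev_list_entry *entry;

    udev_enumerate_add_match_subsystem (enumerate, "media");
    udev_enumerate_scan_devices (enumerate);

    udev_list_entry_foreach (entry,
        udev_enumerate_get_list_entry (enumerate)) {
      struct udev_device *device = udev_device_new_from_syspath (udev,
          udev_list_entry_get_name (entry));

      /* Open udev_device_get_devnode (device) and use
       * MEDIA_IOC_G_TOPOLOGY to look for an entity with the encoder or
       * decoder function that is linked to two V4L2 interfaces pointing
       * to the same video device node. */

      udev_device_unref (device);
    }

    udev_enumerate_unref (enumerate);
    udev_unref (udev);
  }
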
The media driver has supported HEVC 8-bit 422 encoding in non-lowpower
mode since ICL[1], so VPP is not needed for this case.
Sample pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,format=YUY2 ! msdkh265enc ! \
filesink location=output.h265
[1] https://github.com/intel/media-driver#decodingencoding-features
gsth265parser does it already. Although the corresponding h265parser API is
gst_h265_sei_free, the _clear suffix is more consistent naming for
h264parser, since there are already gst_h264_{sps,pps}_clear().
DXVA supports two kinds of texture structure for the DPB: one is
"1) texture array" and the other is "2) array of textures".
1) is a single ID3D11Texture2D object having an ArraySize greater than one,
so the ID3D11Texture2D itself is a set of textures. Each sub-texture of this
type must have identical resolution, format and so on, and the number of
sub-textures in a texture array is fixed.
2) is an array of usual ID3D11Texture2D objects. That means each
ID3D11Texture2D is independent of the others and might have a different
resolution as well. Moreover, we can modify the number of frames in the
array dynamically. This type is more flexible than "1) texture array" in
terms of dynamic behavior, and this type of texture can also be used as a
shader resource view, which "1) texture array" cannot.
If "2) array of textures" is supported by the driver, the DXVA spec says it
is preferred over "1) texture array" in terms of performance.
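
To illustrate the difference, creating the two layouts through the D3D11 C
API (COBJMACROS) could look like this; the sizes, DPB depth and variables
are illustrative:

  D3D11_TEXTURE2D_DESC desc = { 0, };

  desc.Width = 1920;
  desc.Height = 1088;
  desc.MipLevels = 1;
  desc.Format = DXGI_FORMAT_NV12;
  desc.SampleDesc.Count = 1;
  desc.BindFlags = D3D11_BIND_DECODER;

  /* 1) texture array: the whole DPB is a single allocation with a
   * fixed number of identical sub-textures. */
  desc.ArraySize = 16;
  ID3D11Device_CreateTexture2D (device, &desc, NULL, &dpb_texture);

  /* 2) array of textures: one allocation per frame; the count can
   * change at runtime and each texture can also be bound as a shader
   * resource view. */
  desc.ArraySize = 1;
  desc.BindFlags |= D3D11_BIND_SHADER_RESOURCE;
  for (UINT i = 0; i < num_textures; i++)
    ID3D11Device_CreateTexture2D (device, &desc, NULL, &textures[i]);
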