This commit adds support for the following formats to the d3d11 elements:
* Y444_{8, 12, 16}LE formats:
Handled similarly to the other planar formats. These Y444 variants are
not natively supported by Direct3D11, but we can simply map each plane
to an R8 and/or R16 texture (see the sketch after this list).
* P012_LE:
Identical in memory layout to P016_LE, but P012 and P016 are defined
separately for more explicit signalling. Note that DXVA uses P016
textures for 12-bit encoded bitstreams.
* GRAY:
Required by some codecs (e.g., AV1) when monochrome is supported.
* 4:2:0 planar 12-bit (I420_12LE) and 4:2:2 planar 8-, 10-, and 12-bit
formats (Y42B, I422_10LE, and I422_12LE).
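A minimal sketch of the per-plane mapping idea, assuming a hypothetical
helper that picks a single-channel DXGI format per plane based on the
component depth (the helper name is illustrative, not the actual d3d11
code):

    #include <gst/video/video.h>
    #include <dxgiformat.h>

    /* Illustrative: choose one single-channel texture format per
     * plane of a Y444 variant, since Direct3D11 has no native
     * Y444 format */
    static DXGI_FORMAT
    pick_plane_format (GstVideoFormat format)
    {
      switch (format) {
        case GST_VIDEO_FORMAT_Y444:
          /* 8-bit planes map to R8 textures */
          return DXGI_FORMAT_R8_UNORM;
        case GST_VIDEO_FORMAT_Y444_12LE:
        case GST_VIDEO_FORMAT_Y444_16LE:
          /* 12/16-bit planes map to R16 textures */
          return DXGI_FORMAT_R16_UNORM;
        default:
          return DXGI_FORMAT_UNKNOWN;
      }
    }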
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2346>
Some elements, for example splitmuxsink, use the automatically
generated pad names to create new pads on future muxer instances.
Previously we would have created a pad with a random PID whose
generated name became "sink_0"; on a new muxer instance a pad named
"sink_0" would then be requested, and tsmux would fail because 0 is
not a valid PID.
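A small sketch of the naming contract this relies on, assuming
mpegtsmux request pads follow the "sink_%d" template where the numeric
suffix is used as the PID (PID 0 is reserved for the PAT in MPEG-TS);
this is an illustration, not code from this commit:

    #include <gst/gst.h>

    static void
    request_tsmux_pads (GstElement * tsmux)
    {
      /* Explicit name: the numeric suffix becomes the PID, so it
       * must be a valid one (0 would be rejected) */
      GstPad *video = gst_element_request_pad_simple (tsmux, "sink_300");

      /* No explicit PID: the muxer picks a free PID and the
       * generated pad name reflects it, so elements that replay
       * the name on a new muxer instance keep working */
      GstPad *audio = gst_element_request_pad_simple (tsmux, "sink_%d");

      gst_element_release_request_pad (tsmux, video);
      gst_element_release_request_pad (tsmux, audio);
      gst_object_unref (video);
      gst_object_unref (audio);
    }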
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2318>
When B-frames are enabled, the encoder appears to shift the PTS of
each encoded sample by one frame duration.
For instance, one timestamp pattern observed with B-frames enabled on
a 30fps stream is:
* Frame-1: MF pts 0:00.033333300 MF dts 0:00.000000000
* Frame-2: MF pts 0:00.133333300 MF dts 0:00.033333300
* Frame-3: MF pts 0:00.066666600 MF dts 0:00.066666600
* Frame-4: MF pts 0:00.099999900 MF dts 0:00.100000000
Note that the amount of the PTS shift equals one frame duration, and
Frame-4 exhibits PTS < DTS.
To compensate for the shifted timestamps, we should calculate the
timestamp offset and re-calculate the DTS accordingly, as sketched
below. Otherwise the whole timeline of the output stream is shifted,
which can cause time-synchronization issues.
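One possible compensation, as a sketch (the exact clamping policy is
an assumption, not necessarily what the encoder element does):

    #include <gst/gst.h>

    /* Illustrative: undo the encoder's PTS shift of one frame
     * duration and keep DTS <= PTS */
    static void
    compensate_timestamps (GstBuffer * buf, GstClockTime frame_duration)
    {
      if (GST_BUFFER_PTS_IS_VALID (buf)) {
        GstClockTime pts = GST_BUFFER_PTS (buf);

        GST_BUFFER_PTS (buf) =
            pts >= frame_duration ? pts - frame_duration : 0;
      }

      /* Re-calculate DTS so it never exceeds the compensated PTS
       * (Frame-4 above exhibited PTS < DTS) */
      if (GST_BUFFER_DTS_IS_VALID (buf) &&
          GST_BUFFER_DTS (buf) > GST_BUFFER_PTS (buf))
        GST_BUFFER_DTS (buf) = GST_BUFFER_PTS (buf);
    }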
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2354>
Previously pads might have been requested already (e.g. in NULL
state), then reset was called (e.g. when changing state) and then a
new pad was requested. Resetting re-creates the internal muxer object
and as such resets the PID counter, so the next requested pad would
get the same PID as the first requested pad, which leads to
collisions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2317>
As per the SDK documentation, the
IDeckLinkInputCallback::VideoInputFrameArrived method might not
provide a video frame; it can be null. In that case the given
stream_time can be invalid, so we should not try to convert
GST_CLOCK_TIME_NONE via gst_clock_adjust_with_calibration(), as
sketched below.
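A minimal sketch of the guard, assuming stream_time has already been
set to GST_CLOCK_TIME_NONE when no frame was provided (the wrapper
function is illustrative):

    #include <gst/gst.h>

    static GstClockTime
    adjust_stream_time (GstClock * clock, GstClockTime stream_time,
        GstClockTime internal, GstClockTime external,
        GstClockTime rate_num, GstClockTime rate_denom)
    {
      /* Don't feed an invalid time into the calibration math */
      if (!GST_CLOCK_TIME_IS_VALID (stream_time))
        return GST_CLOCK_TIME_NONE;

      return gst_clock_adjust_with_calibration (clock, stream_time,
          internal, external, rate_num, rate_denom);
    }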
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2337>
Track kernel and VA driver dependencies so gstreamer will re-inspect
the plugin if any of them change.
Also, do not blacklist the plugin if !msdk_is_available, since that
could be a transient issue caused by one or more external dependency
problems (e.g. a wrong/missing driver specified, but corrected by the
user later on).
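A sketch of how such registry dependencies can be expressed with
GStreamer's plugin API; the environment variables and paths below are
illustrative, not necessarily what the msdk plugin registers, and
msdk_is_available (named in this commit) is shown with a simplified
signature:

    #include <gst/gst.h>

    static gboolean
    plugin_init (GstPlugin * plugin)
    {
      /* Re-inspect the plugin whenever the driver environment or
       * the render nodes change (paths/env vars illustrative) */
      gst_plugin_add_dependency_simple (plugin,
          "LIBVA_DRIVER_NAME:LIBVA_DRIVERS_PATH",
          "/dev/dri", NULL, GST_PLUGIN_DEPENDENCY_FLAG_NONE);

      /* If the device is unusable right now, register no elements
       * but still return TRUE so the plugin isn't blacklisted and
       * can be retried once the registry is rebuilt */
      if (!msdk_is_available ())
        return TRUE;

      /* ... register elements ... */
      return TRUE;
    }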
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2335>
Some decoder implementations might drain their internal buffers and
reset their state on a segment-done event. So, in the case where the
upstream stream-format is packetized but downstream supports only
byte-stream format, the required codec data might not be forwarded
downstream if the parameter set NAL units are not present in the
bitstream in-band. Therefore, the parse elements should re-send
parameter set NAL units, as they already do for a flush event.
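A sketch of the idea in a parser's sink-event handler, assuming the
usual G_DEFINE_TYPE boilerplate; the element type and the push_codec
flag placement are illustrative of how h264parse/h265parse-style
parsers track the need to re-send headers:

    /* Illustrative: on segment-done, force parameter set NAL units
     * to be pushed again before the next output frame, as is
     * already done for flush */
    static gboolean
    my_parse_sink_event (GstBaseParse * parse, GstEvent * event)
    {
      MyParse *self = MY_PARSE (parse);

      switch (GST_EVENT_TYPE (event)) {
        case GST_EVENT_SEGMENT_DONE:
        case GST_EVENT_FLUSH_STOP:
          self->push_codec = TRUE;  /* hypothetical flag */
          break;
        default:
          break;
      }

      return GST_BASE_PARSE_CLASS (parent_class)->sink_event (parse,
          event);
    }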
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2334>
Add 12-bit entries to this default mapping. The old mapping was also
imprecise: for example, NV12 should not be used as the default mapping
for VA_RT_FORMAT_YUV422 and VA_RT_FORMAT_YUV444, since it is not even
a 4:2:2 or 4:4:4 format.
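A sketch of a chroma-correct default table; the format choices below
are illustrative examples pairing each VA RT format with a GStreamer
format of the same subsampling and depth, not necessarily this
commit's exact entries:

    #include <gst/video/video.h>
    #include <va/va.h>

    static const struct
    {
      guint rt_format;
      GstVideoFormat format;
    } default_map[] = {
      {VA_RT_FORMAT_YUV420, GST_VIDEO_FORMAT_NV12},
      {VA_RT_FORMAT_YUV420_10, GST_VIDEO_FORMAT_P010_10LE},
      {VA_RT_FORMAT_YUV420_12, GST_VIDEO_FORMAT_P012_LE},
      {VA_RT_FORMAT_YUV422, GST_VIDEO_FORMAT_YUY2},
      {VA_RT_FORMAT_YUV422_10, GST_VIDEO_FORMAT_Y210},
      {VA_RT_FORMAT_YUV422_12, GST_VIDEO_FORMAT_Y212_LE},
      {VA_RT_FORMAT_YUV444, GST_VIDEO_FORMAT_VUYA},
      {VA_RT_FORMAT_YUV444_10, GST_VIDEO_FORMAT_Y410},
      {VA_RT_FORMAT_YUV444_12, GST_VIDEO_FORMAT_Y412_LE},
    };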
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2332>
When the display context comes from the application, which is the
embedded video case, a frame callback may arrive while the window is
being finalized. The wayland proxy object would then be destroyed
twice, dropping its refcount below zero on the second destroy, which
can trigger an abort() in wayland.
In the top-level window case, where we have a direct connection to
the compositor, we can stop the message queue so that the frame
callback cannot run while the window is finalizing, so this problem
does not affect that path.
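A sketch of one way to guard against the double destroy, clearing the
proxy pointer under a lock; the names are illustrative, not the actual
waylandsink fields:

    #include <wayland-client.h>
    #include <glib.h>

    /* Illustrative: destroy the frame callback proxy exactly once,
     * whether the callback or the window finalizer gets there first */
    static void
    clear_frame_callback (GMutex * lock, struct wl_callback **cb)
    {
      g_mutex_lock (lock);
      if (*cb != NULL) {
        wl_callback_destroy (*cb);
        *cb = NULL;
      }
      g_mutex_unlock (lock);
    }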
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1883>
The alphacombine element is a simple element that assumes buffers
always come in pairs, or at least that missing buffers are signalled
with a GAP. The QoS implementation in the GstVideoDecoder base class
allows the decoders to drop frames independently, and that could lead
to a stall in alphacombine.
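One way to keep the two branches in lockstep is to turn off QoS-driven
frame dropping on each decoder via GstVideoDecoder's "qos" property;
this is a sketch of the idea, not necessarily this commit's exact fix:

    #include <gst/gst.h>

    /* Illustrative: prevent independent frame dropping in the two
     * decoders feeding alphacombine so buffers stay paired */
    static void
    disable_decoder_qos (GstElement * video_dec, GstElement * alpha_dec)
    {
      g_object_set (video_dec, "qos", FALSE, NULL);
      g_object_set (alpha_dec, "qos", FALSE, NULL);
    }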
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2326>
We always get a warning such as:
h265decoder gsth265decoder.c:1432:gst_h265_decoder_do_output_picture: \
<vah265dec0> Outputting out of order 255 -> 0, likely a broken stream
in the H265 decoder.
The problem is that we fail to reset last_output_poc when we get an
IDR or BLA picture. The incoming IDR or BLA frame already bumps all
the frames in the DPB, but we forget to reset last_output_poc, which
makes the POC appear out of order and generates this warning all the
time.
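A sketch of the reset, assuming G_MININT32 is the sentinel for "no
picture output yet" (the field placement is illustrative):

    /* Illustrative fragment: when an IDR or BLA picture bumps the
     * whole DPB, restart the output-POC tracking as well */
    switch (nal_type) {
      case GST_H265_NAL_SLICE_IDR_W_RADL:
      case GST_H265_NAL_SLICE_IDR_N_LP:
      case GST_H265_NAL_SLICE_BLA_W_LP:
      case GST_H265_NAL_SLICE_BLA_W_RADL:
      case GST_H265_NAL_SLICE_BLA_N_LP:
        priv->last_output_poc = G_MININT32;
        break;
      default:
        break;
    }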
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2294>
There are a few places which require a deep copy (basesink on drain,
for example). This implementation can also be useful for future use
cases.
One probable future use case is copying a DPB texture to another
texture for in-place transforms, since our DPB textures are never
writable and therefore copying is unavoidable.
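A sketch of the underlying Direct3D11 texture copy; the wrapper
function is illustrative, while the commit implements this inside the
d3d11 memory code:

    #define COBJMACROS
    #include <d3d11.h>

    /* Illustrative: copy a read-only decoder DPB texture into a
     * writable texture with the same description */
    static void
    copy_texture (ID3D11DeviceContext * context,
        ID3D11Texture2D * dst, ID3D11Texture2D * src,
        UINT src_subresource)
    {
      /* A NULL box copies the whole subresource */
      ID3D11DeviceContext_CopySubresourceRegion (context,
          (ID3D11Resource *) dst, 0, 0, 0, 0,
          (ID3D11Resource *) src, src_subresource, NULL);
    }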
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2308>
... based on the MediaFoundation work queue API.
With this commit, the wasapi2 plugin will use pull-mode scheduling
with an audioringbuffer subclass.
There are several drawbacks to subclassing audiosrc/audiosink
(rather than audiobasesrc/audiobasesink) for the WASAPI API:
* The audiosrc/audiosink classes try to set a high priority on the
read/write thread via MMCSS (Multimedia Class Scheduler Service),
but that is not allowed for UWP applications.
To use MMCSS in a UWP application, the MediaFoundation work queue
must be used indirectly.
Since the audiosrc/audiosink scheduling model is not compatible with
MediaFoundation's work queue model, audioringbuffer subclassing is
required.
* A WASAPI capture device might report a larger packet size than
expected (i.e., more frames available to read than the expected
number of frames per period), and in any case the application should
drain all packets at that moment.
To handle this, the wasapi/wasapi2 plugins were making use of a
GstAdapter, which is obviously sub-optimal because it requires
additional memory allocation and copying.
By subclassing audioringbuffer we can avoid such inefficiency.
With this commit, all the device read/write operations move to the
newly implemented wasapi2ringbuffer class, and the existing
wasapi2client class takes care of device enumeration and activation
only (see the skeleton below).
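A skeleton of such a GstAudioRingBuffer subclass; the type name is
illustrative and only a couple of the relevant vfuncs are shown:

    #include <gst/audio/audio.h>

    typedef struct
    {
      GstAudioRingBuffer parent;
      /* device handles and MF work queue state would live here */
    } MyWasapi2RingBuffer;

    typedef struct
    {
      GstAudioRingBufferClass parent_class;
    } MyWasapi2RingBufferClass;

    G_DEFINE_TYPE (MyWasapi2RingBuffer, my_wasapi2_ring_buffer,
        GST_TYPE_AUDIO_RING_BUFFER);

    static gboolean
    my_wasapi2_ring_buffer_acquire (GstAudioRingBuffer * buf,
        GstAudioRingBufferSpec * spec)
    {
      /* open and initialize the audio client for the negotiated
       * format */
      return TRUE;
    }

    static gboolean
    my_wasapi2_ring_buffer_start (GstAudioRingBuffer * buf)
    {
      /* start the device and schedule I/O on the MediaFoundation
       * work queue; each callback reads/writes directly into the
       * ring buffer segments, so no GstAdapter copy is needed */
      return TRUE;
    }

    static void
    my_wasapi2_ring_buffer_class_init (MyWasapi2RingBufferClass * klass)
    {
      GstAudioRingBufferClass *rb_class =
          GST_AUDIO_RING_BUFFER_CLASS (klass);

      rb_class->acquire = my_wasapi2_ring_buffer_acquire;
      rb_class->start = my_wasapi2_ring_buffer_start;
      /* release, stop, pause, delay, etc. would be set likewise */
    }

    static void
    my_wasapi2_ring_buffer_init (MyWasapi2RingBuffer * self)
    {
    }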
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2306>