Adds a new plugin for ASIO devices.
Although Windows has a standard low-level audio API, WASAPI, ASIO is
still widely used by audio devices aimed at professional use cases.
For such devices, the ASIO API may offer better quality and latency,
depending on the manufacturer's driver implementation.
To build this plugin, provide the path to the ASIO SDK via the
"asio-sdk-path" build option.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2309>
When the driver returns an error on a plane update request, kmssink
disables scaling and retries the plane update.
While doing so, kmssink was matching the source rectangle dimensions
to the target rectangle dimensions that had been calculated for
scaling, which is incorrect. What we want instead is for the target
rectangle dimensions to match the source rectangle dimensions, since
scaling is now disabled; so match the result rectangle dimensions
with the source rectangle dimensions.
While at it, also match the result rectangle's horizontal and
vertical offsets with the source rectangle coordinates, since no
scaling is being done and therefore no recentering is required.
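A minimal sketch of the retry path (set_plane() is a hypothetical
stand-in for the drmModeSetPlane() call; the actual kmssink code
differs):

  #include <gst/video/video.h>

  /* Hypothetical wrapper around the DRM plane update, not a kmssink symbol. */
  extern gboolean set_plane (const GstVideoRectangle * src,
      const GstVideoRectangle * result);

  static gboolean
  show_frame_with_fallback (GstVideoRectangle src, GstVideoRectangle result)
  {
    if (set_plane (&src, &result))
      return TRUE;

    /* The driver rejected the scaled update: disable scaling by matching the
     * result rectangle dimensions with the source dimensions, and drop the
     * recentering by reusing the source offsets. */
    result.w = src.w;
    result.h = src.h;
    result.x = src.x;
    result.y = src.y;

    return set_plane (&src, &result);
  }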
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2415>
We may need to drop slices such as RASL pictures with
NoRaslOutputFlag, so the current picture of h265decoder may be freed.
We should not assign frame->output_buffer too early, before we really
output it; otherwise the slices that come later will allocate another
picture and trigger the assertion:
gst_video_decoder_allocate_output_frame_with_params:
assertion 'frame->output_buffer == NULL' failed
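A sketch of the ordering this change aims for (the callback names only
loosely mirror the decoder's structure; the real gsth265decoder code
differs):

  #include <gst/video/gstvideodecoder.h>

  static GstFlowReturn
  new_picture (GstVideoDecoder * decoder, GstVideoCodecFrame * frame)
  {
    /* Do NOT allocate frame->output_buffer here: if this picture is later
     * dropped (e.g. a RASL picture with NoRaslOutputFlag), the slices that
     * come later would allocate another picture and hit the
     * "frame->output_buffer == NULL" assertion. */
    return GST_FLOW_OK;
  }

  static GstFlowReturn
  output_picture (GstVideoDecoder * decoder, GstVideoCodecFrame * frame)
  {
    /* Allocate only when the picture is really going to be output. */
    GstFlowReturn ret = gst_video_decoder_allocate_output_frame (decoder, frame);

    if (ret != GST_FLOW_OK)
      return ret;

    return gst_video_decoder_finish_frame (decoder, frame);
  }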
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2421>
In H265, a stream may have an odd bit depth such as 9 or 11, and the
bit depths of luma and chroma may differ. For example, a stream with
a luma depth of 8 and a chroma depth of 9 should use the 10 bit
rtformat as the decoded picture format.
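A standalone illustration of the rule (not the gstva code itself):
take the larger of the two depths and round it up to the next depth
for which an rtformat exists.

  #include <stdio.h>

  static int
  rtformat_depth (int luma_depth, int chroma_depth)
  {
    int depth = luma_depth > chroma_depth ? luma_depth : chroma_depth;

    if (depth <= 8)
      return 8;
    if (depth <= 10)
      return 10;                /* e.g. luma 8 + chroma 9 -> 10 bit rtformat */
    if (depth <= 12)
      return 12;
    return -1;                  /* not covered in this sketch */
  }

  int
  main (void)
  {
    printf ("8/9  -> %d bit\n", rtformat_depth (8, 9));   /* 10 */
    printf ("11/9 -> %d bit\n", rtformat_depth (11, 9));  /* 12 */
    return 0;
  }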
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2420>
Renamed gst_va_decoder_set_format() to
gst_va_decoder_set_frame_size_with_surfaces(), which better reflects
the passed parameters. Internally it creates the vaContext.
Added gst_va_decoder_set_frame_size(), which is an alias of
gst_va_decoder_set_frame_size_with_surfaces() without surfaces. This
is the function that replaces gst_va_decoder_set_format() where it
was used.
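The alias can be thought of along these lines (prototypes assumed for
illustration; the real ones live in the plugin's internal
gstvadecoder.h and may differ):

  gboolean
  gst_va_decoder_set_frame_size (GstVaDecoder * self, gint coded_width,
      gint coded_height)
  {
    /* Same operation, just without passing any pre-created surfaces. */
    return gst_va_decoder_set_frame_size_with_surfaces (self, coded_width,
        coded_height, NULL);
  }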
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2417>
We should use the NumPocTotalCurr value stored in the decoder, which
is a calculated, valid value, rather than the invalid value in the
slice header. Most of the time that NumPocTotalCurr is 0, which makes
tmp_refs very short and leads to wrong decoding results.
By the way, NumPocTotalCurr is not the name used in the H265 spec; it
should be NumPicTotalCurr. Change it to the correct name.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2414>
Some GPUs support the BGRA format and convert it internally to a
subsampled YUV format. Disable this implicit conversion, since the
conversion parameters such as input/output colorimetry are neither
exposed nor written into the bitstream (e.g., VUI).
We prefer explicit conversion via our conversion elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2410>
VP9 streams can change resolution dynamically at any point in time.
No KEY frame is sent before the resolution changes; even an INTER
frame can change the resolution immediately. So we need to check the
resolution of each frame and re-negotiate if needed.
Some unusual streams may play at resolution A first, then dynamically
change to B, and after one or two frames use a show_existing_frame to
repeat an old frame of resolution A. So not only new_picture() but
also duplicate_picture() needs to check this.
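A rough sketch of the per-frame check (struct and field names are
assumptions, not the exact gstvp9decoder code); the same check has to
run from both new_picture() and duplicate_picture():

  #include <gst/video/gstvideodecoder.h>

  typedef struct {
    GstVideoDecoder parent;
    guint width, height;        /* last negotiated resolution (assumed fields) */
  } Vp9DecoderSketch;

  static gboolean
  ensure_resolution (Vp9DecoderSketch * self, guint frame_width,
      guint frame_height)
  {
    if (self->width == frame_width && self->height == frame_height)
      return TRUE;

    /* Even an INTER frame, or a show_existing_frame duplicate, may carry a
     * different resolution, so renegotiate whenever it changes. */
    self->width = frame_width;
    self->height = frame_height;

    return gst_video_decoder_negotiate (GST_VIDEO_DECODER (self));
  }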
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2407>
For some codecs, such as VP9, the resolution can change dynamically
while the config and context stay valid. When only the width and
height change, there is no need to re-create the config and context;
the helper function can just change the resolution without
re-creating them.
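The helper boils down to something like this sketch (names and fields
are assumptions):

  #include <glib.h>

  typedef struct {
    gint coded_width;
    gint coded_height;
    /* VA config/context IDs live here too and are left untouched. */
  } VaDecoderSketch;

  static gboolean
  update_frame_size (VaDecoderSketch * self, gint width, gint height)
  {
    /* No vaDestroyContext()/vaCreateContext() round trip: only the size
     * bookkeeping changes when the stream switches resolution. */
    self->coded_width = width;
    self->coded_height = height;
    return TRUE;
  }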
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2407>
When the sink goes from PLAYING to READY and then back to PLAYING,
the initialization of the audio client in prepare() fails with the
error AUDCLNT_E_ALREADY_INITIALIZED. As a result, playback stops.
To fix this, we need to drop the AudioClient in unprepare() and
grab a new one in prepare() to be able to initialize it again
with the new buffer spec.
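The shape of the fix, roughly (acquire_audio_client(),
initialize_audio_client() and release_audio_client() are hypothetical
helpers, not real wasapi symbols; the actual element code differs):

  #include <gst/audio/gstaudiosink.h>

  extern gboolean acquire_audio_client (GstAudioSink * sink);
  extern gboolean initialize_audio_client (GstAudioSink * sink,
      GstAudioRingBufferSpec * spec);
  extern void release_audio_client (GstAudioSink * sink);

  static gboolean
  sink_prepare (GstAudioSink * sink, GstAudioRingBufferSpec * spec)
  {
    /* Grab a fresh IAudioClient every time: an already-initialized client
     * cannot be initialized again and fails with
     * AUDCLNT_E_ALREADY_INITIALIZED. */
    if (!acquire_audio_client (sink))
      return FALSE;

    return initialize_audio_client (sink, spec);
  }

  static gboolean
  sink_unprepare (GstAudioSink * sink)
  {
    /* Drop the client so the next prepare() can initialize a new one with
     * the new buffer spec. */
    release_audio_client (sink);
    return TRUE;
  }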
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2096>
The functionality now resides in
gst_wasapi_util_get_device() and
gst_wasapi_util_get_audio_client().
This is a preparatory patch. It will be used in the following
patch to init/deinit the AudioClient separately from the device.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2096>
The current setting of the color properties is not quite correct, and
drivers print warnings like "unknown Color Standard for YUV format".
video-color already provides standard APIs for this, and we can use
them directly.
We also change the logic to: find the exact match or explicit
standard first; if none is found, continue looking for the most
similar one.
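A sketch of the two-pass matching (the table of driver standards and
the scoring are simplified assumptions, not the real code):

  #include <gst/video/video.h>

  typedef struct {
    int standard;                   /* driver color standard id (illustrative) */
    GstVideoColorimetry colorimetry;
  } StdEntry;

  static int
  pick_standard (const GstVideoColorimetry * cinfo, const StdEntry * table,
      guint len)
  {
    guint i, best = 0, best_score = 0;

    /* First pass: exact match. */
    for (i = 0; i < len; i++) {
      if (gst_video_colorimetry_is_equal (&table[i].colorimetry, cinfo))
        return table[i].standard;
    }

    /* Second pass: the most similar entry, by a simple field-count score. */
    for (i = 0; i < len; i++) {
      guint score = (table[i].colorimetry.matrix == cinfo->matrix)
          + (table[i].colorimetry.transfer == cinfo->transfer)
          + (table[i].colorimetry.primaries == cinfo->primaries);

      if (score >= best_score) {
        best_score = score;
        best = i;
      }
    }
    return table[best].standard;
  }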
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2385>
Qualcomm GPUs work fine with the current implementation now. The
noticeable difference from when this was disabled is that the current
d3d11 implementation supports the GstD3D11Memory pool, so there is no
more frequent re-binding of decoder surfaces.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2377>
Add the compatible profiles when deciding the final profile used for
decoding. The final profile candidates (sketched below) include:
1. The profile directly specified by the SPS, which is the exact one.
2. The compatible profiles decided by the upstream element, such as
   h265parse.
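Roughly, the candidate list could be assembled like this (helper name
and shape are assumed for illustration):

  #include <gst/codecparsers/gsth265parser.h>

  static GArray *
  get_profile_candidates (GstH265Profile sps_profile,
      const GstH265Profile * compatible, guint n_compatible)
  {
    GArray *candidates = g_array_new (FALSE, FALSE, sizeof (GstH265Profile));
    guint i;

    /* 1. The exact profile signalled by the SPS. */
    g_array_append_val (candidates, sps_profile);

    /* 2. The compatible profiles, e.g. as decided upstream by h265parse. */
    for (i = 0; i < n_compatible; i++)
      g_array_append_val (candidates, compatible[i]);

    return candidates;
  }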
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2322>
Segmentation is stateful; its information may depend on the previous
segmentation setting. For example, if loop_filter_delta_enabled is
TRUE, filter_level[GST_VP9_REF_FRAME_INTRA][1] should inherit the
previous frame's value and cannot be calculated from the current
frame's segmentation data alone. So we need to maintain the
segmentation state inside the vp9 decoder and update it when a new
frame header arrives.
We also fix the CLAMP issue of lvl_seg and intra_lvl, which were
wrongly declared with an unsigned type.
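A standalone illustration of the CLAMP problem (lvl_seg and intra_lvl
in the decoder are analogous): with an unsigned type the intermediate
value wraps around instead of going negative, so CLAMP() cannot pull
it back into range.

  #include <glib.h>

  int
  main (void)
  {
    guint lvl_wrong = 3;          /* wrongly declared unsigned */
    gint lvl_right = 3;
    gint delta = -10;

    /* 3 + (-10) wraps to a huge unsigned value, which clamps to 63. */
    g_print ("wrong: %u\n", CLAMP (lvl_wrong + delta, 0, 63));
    /* With a signed type the result is -7 and clamps to 0 as intended. */
    g_print ("right: %d\n", CLAMP (lvl_right + delta, 0, 63));

    return 0;
  }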
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2369>
The current DMA buffer mapping simply forwards the job to the
parent's map function, which is an mmap(). That cannot handle
non-linear buffers such as tiled or compressed ones, and the
incorrect mapping of such buffers produces broken images that get
reported as bugs. Directly refuse this kind of mapping to avoid the
confusion.
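A sketch of the map function (buffer_is_linear() and parent_map() are
hypothetical placeholders for the modifier/tiling check and the
parent's mmap()-based map):

  #include <gst/gst.h>

  extern gboolean buffer_is_linear (GstMemory * mem);
  extern gpointer parent_map (GstMemory * mem, gsize maxsize, GstMapFlags flags);

  static gpointer
  mem_map (GstMemory * mem, gsize maxsize, GstMapFlags flags)
  {
    /* Tiled or compressed layouts cannot be mapped with a plain mmap(). */
    if (!buffer_is_linear (mem)) {
      GST_ERROR ("refusing to map a non-linear DMA buffer");
      return NULL;
    }

    return parent_map (mem, maxsize, flags);
  }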
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2353>
If the decoder has a crop_top/left value > 0 (e.g. the conformance
window in H265), the real output picture is located in the middle of
the decoded buffer. If downstream supports VideoCropMeta, a
VideoCropMeta is added to convey the real picture's coordinates and
size. If not, we need to copy it manually, and the other_pool is
needed. We always assume that the decoded picture starts at the
top-left corner, so there is no need to do this when only
crop_bottom/right values are > 0.
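The decision reduces to something like this sketch
(copy_visible_region() stands in for the copy through the other_pool
and is hypothetical):

  #include <gst/video/gstvideometa.h>

  extern GstBuffer *copy_visible_region (GstBuffer * decoded, guint x, guint y,
      guint width, guint height);

  static GstBuffer *
  handle_cropped_output (GstBuffer * decoded, gboolean downstream_has_crop_meta,
      guint crop_left, guint crop_top, guint width, guint height)
  {
    if (downstream_has_crop_meta) {
      /* Cheap path: just tell downstream where the real picture sits. */
      GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (decoded);

      crop->x = crop_left;
      crop->y = crop_top;
      crop->width = width;
      crop->height = height;
      return decoded;
    }

    /* Downstream cannot handle VideoCropMeta: copy the visible region into a
     * buffer taken from the other_pool. */
    return copy_visible_region (decoded, crop_left, crop_top, width, height);
  }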
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>