Users can get the required buffer size from the buffer pool config.
Since the d3d11 implementation is a candidate for a public library in the
future, we need to hide as much as possible from the headers.
Note that the total size of the D3D11 texture memory allocated by the GPU
is not a controllable factor; it depends on hardware-specific
alignment/padding requirements. So the GstD3D11 implementation updates the
actual buffer size by allocating a D3D11 texture, since there is no way to
know the CPU-accessible memory size without allocating a real D3D11 texture.
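A minimal sketch of reading that size back through the standard
GstBufferPool config API; the pool here is assumed to be the negotiated
d3d11 buffer pool:

  #include <gst/gst.h>

  /* Sketch: read the size the pool advertises after the d3d11 allocator
   * updated it with the real, CPU-accessible texture size. */
  static guint
  get_negotiated_buffer_size (GstBufferPool * pool)
  {
    GstStructure *config = gst_buffer_pool_get_config (pool);
    GstCaps *caps = NULL;
    guint size = 0, min_buffers = 0, max_buffers = 0;

    gst_buffer_pool_config_get_params (config, &caps, &size, &min_buffers,
        &max_buffers);
    gst_structure_free (config);

    return size;
  }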
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2482>
Split fields end up in multiple pictures and require accessing the
other_field to complete the information (POC).
This also cleans non-reference pictures out of the DPB (keeping them was
not useful) and properly merges fields instead of keeping them duplicated.
This fixes most of the interlaced decoding failures seen in fluster.
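Purely for illustration, a sketch of the POC completion mentioned above,
applying the spec rule that a complementary field pair's POC is the minimum
of the two field POCs; the other_field and pic_order_cnt members are
assumed from the GstH264Picture layout this change relies on:

  #include <gst/codecs/gsth264picture.h>

  /* Sketch, not the actual base-class code: a field picture alone doesn't
   * carry the full frame POC, the complementary field has to be consulted. */
  static gint
  frame_poc (GstH264Picture * picture)
  {
    if (!picture->other_field)
      return picture->pic_order_cnt;

    /* a complementary field pair's POC is the minimum of the two field POCs */
    return MIN (picture->pic_order_cnt, picture->other_field->pic_order_cnt);
  }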
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2474>
When a frame is composed of two fields, the base class now splits the
picture in two. In order to support this, we need to ensure that the
picture buffer is held in the VB2 queue so that the second field gets
decoded into it. This also implements the new_field_picture() virtual
method and sets the previous request on the new picture.
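A rough sketch of what the new vfunc does, assuming the GstH264DecoderClass
signature of that time and using a plain GstBuffer as a stand-in for the
per-picture request/capture state:

  #include <gst/codecs/gsth264decoder.h>

  /* Sketch only: carry the first field's per-picture state over to the
   * second field so both fields are decoded into the same capture buffer. */
  static gboolean
  new_field_picture (GstH264Decoder * decoder,
      const GstH264Picture * first_field, GstH264Picture * second_field)
  {
    GstBuffer *capture_buf =
        gst_h264_picture_get_user_data ((GstH264Picture *) first_field);

    if (!capture_buf)
      return FALSE;

    /* keep the same buffer (and, in the real decoder, the previous request)
     * attached to the second field */
    gst_h264_picture_set_user_data (second_field,
        gst_buffer_ref (capture_buf), (GDestroyNotify) gst_buffer_unref);

    return TRUE;
  }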
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2474>
Originally, if a buffer arrived with crop meta but downstream didn't
handle the crop allocation meta, vapostproc tried to reconfigure itself
into non-pass-through mode automatically. Sadly, this behavior was based
on the wrong assumption that the propose_allocation() vmethod would bring
the downstream allocation query, which it does not.
Now, if vapostproc is in pass-through mode, the cropping is passed
downstream. Pass-through mode can be disabled via a parameter.
Finally, if pass-through mode isn't enabled, it's assumed the buffer is
going to be processed; and if cropping is requested, downstream has
already negotiated the cropped frame size, so the cropping has to be done
inside vapostproc to avoid artifacts caused by the size of the
downstream-allocated buffers.
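A simplified sketch of that decision using the stock GstVideoCropMeta and
GstBaseTransform API (illustrative only; the actual VA processing setup is
elided):

  #include <gst/base/gstbasetransform.h>
  #include <gst/video/gstvideometa.h>

  /* Sketch, not the actual vapostproc code: in pass-through mode the crop
   * meta simply travels downstream; otherwise downstream negotiated the
   * cropped size already, so the cropping must happen here. */
  static gboolean
  must_crop_here (GstBaseTransform * trans, GstBuffer * inbuf)
  {
    GstVideoCropMeta *crop = gst_buffer_get_video_crop_meta (inbuf);

    if (!crop || gst_base_transform_is_passthrough (trans))
      return FALSE;

    GST_DEBUG_OBJECT (trans, "cropping %ux%u at %u,%u inside vapostproc",
        crop->width, crop->height, crop->x, crop->y);
    return TRUE;
  }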
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2443>
There are three framerate conversion algorithms described in
<https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md>.
Interpolation is not implemented so far, so the distributed timestamp
algorithm is considered more practical: it evenly distributes output
timestamps according to the output framerate. In this case, newly generated
frames are inserted between the current frame and the previous one, and
their timestamps are calculated by the msdk API.
This implementation first pushes the newly generated buffers (outbuf_new)
forward, and the current buffer (outbuf) is handled in the last round by the
base transform automatically. A flag, "create_new_surface", is used to
indicate whether new surfaces have been generated, so that the new outbuf is
pushed forward accordingly.
Since the upstream element may not be an msdk element, it is necessary to
always set the input surface timestamp to the input buffer's timestamp and
convert it to an msdk timestamp.
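A small sketch of the distributed-timestamp idea in plain GStreamer
arithmetic (msdkvpp itself obtains the value through the msdk API, as noted
above; names are illustrative):

  #include <gst/gst.h>

  /* Sketch: the i-th newly generated frame between the previous and the
   * current input frame lands i output-frame durations after the previous
   * frame's timestamp. */
  static GstClockTime
  distributed_timestamp (GstClockTime prev_pts, guint i, gint out_fps_n,
      gint out_fps_d)
  {
    GstClockTime duration =
        gst_util_uint64_scale (GST_SECOND, out_fps_d, out_fps_n);

    return prev_pts + i * duration;
  }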
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2418>
The wasapi2 plugin should be preferred over the old wasapi plugin when
available because:
* wasapi2 supports automatic stream routing, a feature highly recommended
by MS for applications. See also
https://docs.microsoft.com/en-us/windows/win32/coreaudio/automatic-stream-routing
* This implementation should be free of the various COM threading issues by
design, since the wasapi2 plugin spawns a dedicated COM thread and all COM
objects' life-cycles are managed correctly.
There are unsolved COM issues around the old wasapi plugin. Such issues are
very tricky to solve unless the old wasapi plugin's threading model is
re-designed.
Note that, in the case of UWP, the wasapi2 plugin's rank is already
primary + 1.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2314>
* Remove unnecessary upcasting. We are now dealing with C++ class objects
and don't need explicit C-style casts in the C++ world.
* Use the helper macro IID_PPV_ARGS() everywhere. It makes the code
a little shorter.
* Use the ComPtr smart pointer instead of calling IUnknown::Release()
manually.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2461>
The parent context shares some resources with the child context, so the
child context should be destroyed first; otherwise the command below will
trigger a segmentation fault:
$> gst-launch-1.0 videotestsrc num-buffers=100 ! msdkh264enc ! \
msdkh264dec ! fakesink videotestsrc num-buffers=50 ! \
msdkh264enc ! msdkh264dec ! fakesink
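One common way to enforce that ordering regardless of how things are torn
down (not necessarily how the msdk plugin implements the fix) is for the
child context to hold a reference on its parent, as in this sketch:

  #include <glib-object.h>

  /* Sketch: while the child keeps this reference, the parent cannot be
   * finalized first, so the shared resources stay valid until every child
   * context is gone. The key name is illustrative. */
  static void
  child_context_set_parent (GObject * child, GObject * parent)
  {
    g_object_set_data_full (child, "parent-context", g_object_ref (parent),
        (GDestroyNotify) g_object_unref);
  }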
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2435>
The current implementation for translating between native coordinates and
video coordinates is quite wrong, because d3d11videosink doesn't understand
the native HWND's coordinates. That should be handled by the GstD3D11Window
implementation as an enhancement.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2450>
Inspired by MR https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2382
The idea is that we can safely make use of MoveWindow() in the WIN32
d3d11window implementation, because it creates an internal HWND even when
an external HWND is set, and subclassing is then used to draw on the
internal HWND in any case. So the coordinates passed to MoveWindow() will
be relative to the parent HWND, which fits the concept of
set_render_rectangle() well.
On a MoveWindow() call, a WM_SIZE event will be generated by the OS, and
the GstD3D11WindowWin32 implementation will then update the render area,
including the swapchain, accordingly, as in the normal window move/resize
case.
But in the case of UWP (CoreWindow or SwapChainPanel), more research is
needed to meet the expected behavior of set_render_rectangle().
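A bare sketch of the Win32 side of this; MoveWindow() and WM_SIZE are
standard Win32, while the function name is illustrative:

  #include <windows.h>

  /* Sketch: apply the render rectangle to the internal child HWND. The
   * coordinates are relative to the parent (external) HWND, and the WM_SIZE
   * generated by this call drives the usual swapchain resize path. */
  static void
  apply_render_rectangle (HWND internal_hwnd, int x, int y, int width,
      int height)
  {
    MoveWindow (internal_hwnd, x, y, width, height, TRUE);
  }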
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1416
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2450>
Adds a new plugin for ASIO devices.
Although there is a standard low-level audio API, WASAPI, on Windows, ASIO
is still broadly used for audio devices aimed at professional use cases.
With such devices, the ASIO API may show better quality and latency
performance, depending on the manufacturer's driver implementation.
In order to build this plugin, the user should provide the path to the
ASIO SDK via the "asio-sdk-path" build option.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2309>
When the driver returns an error on a plane update request, kmssink
disables scaling and retries the plane update.
While doing so, kmssink was matching the source rectangle dimensions to
the target rectangle dimensions that had been calculated for scaling, but
this is incorrect. What we want here is the opposite: the target rectangle
dimensions should match the source rectangle dimensions, since scaling is
now disabled, so we match the result rectangle dimensions with the source
rectangle dimensions.
While at it, also match the result rectangle's horizontal and vertical
offsets with the source rectangle's coordinates; since no scaling is being
done, no re-centering is required.
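In rectangle terms the fix amounts to the following sketch; GstVideoRectangle
is the standard helper struct and the function name is illustrative:

  #include <gst/video/gstvideosink.h>

  /* Sketch: with scaling disabled, the result rectangle must mirror the
   * source rectangle - same dimensions and same offsets, no re-centering. */
  static void
  match_result_to_source (const GstVideoRectangle * src,
      GstVideoRectangle * result)
  {
    result->w = src->w;
    result->h = src->h;
    result->x = src->x;
    result->y = src->y;
  }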
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2415>
We may need to drop slices such as RASL pictures with NoRaslOutputFlag set,
so the current picture of h265decoder may be freed. We should not assign
frame->output_buffer too early, before we really output it. Otherwise, the
slices that come later will allocate another picture and trigger the
assertion:
gst_video_decoder_allocate_output_frame_with_params:
assertion 'frame->output_buffer == NULL' failed
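A sketch of the intended ordering using the stock GstVideoDecoder API
(illustrative, not the actual decoder code): the output buffer is only
allocated once the picture is really being output.

  #include <gst/video/gstvideodecoder.h>

  /* Sketch: frame->output_buffer stays NULL while slices are still being
   * collected (and possibly dropped); it is allocated only at output time. */
  static GstFlowReturn
  output_picture (GstVideoDecoder * decoder, GstVideoCodecFrame * frame)
  {
    GstFlowReturn ret;

    ret = gst_video_decoder_allocate_output_frame (decoder, frame);
    if (ret != GST_FLOW_OK)
      return ret;

    /* the decoded picture would be copied/mapped into frame->output_buffer
     * here before pushing it downstream */
    return gst_video_decoder_finish_frame (decoder, frame);
  }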
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2421>
In H265, the stream may have an odd bit depth such as 9 or 11, and the bit
depths of luma and chroma may differ. For example, a stream with a luma
depth of 8 and a chroma depth of 9 should use the 10-bit rtformat as the
decoded picture format.
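A sketch of that rule, assuming 4:2:0 chroma sampling for brevity; the
VA_RT_FORMAT_* constants are the standard libva rtformats (availability
depends on the libva version):

  #include <va/va.h>
  #include <glib.h>

  /* Sketch: the rtformat follows the larger of the two depths, rounded up
   * to the next supported depth (8, 10 or 12 bits). */
  static guint
  rt_format_for_depths (guint luma_depth, guint chroma_depth)
  {
    guint depth = MAX (luma_depth, chroma_depth);

    if (depth <= 8)
      return VA_RT_FORMAT_YUV420;
    else if (depth <= 10)
      return VA_RT_FORMAT_YUV420_10;
    else
      return VA_RT_FORMAT_YUV420_12;
  }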
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2420>
Renamed gst_va_decoder_set_format() to
gst_va_decoder_set_frame_size_with_surfaces(), which better reflects the
passed parameters. Internally it creates the vaContext.
Added gst_va_decoder_set_frame_size(), which is an alias of
gst_va_decoder_set_frame_size_with_surfaces() without surfaces. This is
the function that replaces gst_va_decoder_set_format() where it was used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2417>
We should use the NumPocTotalCurr value stored in the decoder, which is a
calculated, valid value, rather than the invalid value in the slice header.
Most of the time that NumPocTotalCurr is 0, which makes tmp_refs very short
and leads to wrong decoder results.
By the way, NumPocTotalCurr is not the correct name specified in the H265
spec; it should be NumPicTotalCurr. We change it to the correct name.
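For reference, a simplified sketch of how the spec derives NumPicTotalCurr
from the current reference picture set (the screen-content coding term is
omitted; parameter names are illustrative, not the parser's struct fields):

  #include <glib.h>

  /* Sketch: count the short-term and long-term reference pictures marked as
   * used by the current picture. A value of 0 here makes tmp_refs far too
   * short, which is the bug described above. */
  static guint
  num_pic_total_curr (guint num_negative_pics, const guint8 * used_by_curr_s0,
      guint num_positive_pics, const guint8 * used_by_curr_s1,
      guint num_long_term, const guint8 * used_by_curr_lt)
  {
    guint i, n = 0;

    for (i = 0; i < num_negative_pics; i++)
      n += used_by_curr_s0[i] ? 1 : 0;
    for (i = 0; i < num_positive_pics; i++)
      n += used_by_curr_s1[i] ? 1 : 0;
    for (i = 0; i < num_long_term; i++)
      n += used_by_curr_lt[i] ? 1 : 0;

    return n;
  }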
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2414>
Some GPUs support the BGRA format, and it is converted to a subsampled YUV
format internally by the GPU. Disable this implicit conversion, since the
conversion parameters such as input/output colorimetry are neither exposed
nor written in the bitstream (e.g., VUI).
We prefer explicit conversion via our conversion elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2410>
VP9 streams can change resolution dynamically at any point. A KEY frame is
not required before a resolution change; even an INTER frame can change the
resolution immediately. So we need to check for a resolution change on
every frame and re-negotiate if needed.
Some insane streams may play at resolution A first, then dynamically change
to B, and after one or two frames use a show_existing_frame to repeat an
old frame with the earlier resolution A. So not only new_picture(), but
also duplicate_picture() needs to check this.
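A trivial sketch of that check; negotiated_width/height would come from the
decoder's current output state and frame_width/height from the parsed frame
header (names illustrative):

  #include <glib.h>

  /* Sketch: run this for every frame, from both new_picture() and
   * duplicate_picture(), since show_existing_frame can bring back a frame
   * with the older resolution. */
  static gboolean
  needs_renegotiation (guint negotiated_width, guint negotiated_height,
      guint frame_width, guint frame_height)
  {
    return frame_width != negotiated_width ||
        frame_height != negotiated_height;
  }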
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2407>
For some codecs such as VP9, the config and context can be updated
dynamically. When only the width and height change, there is no need to
re-create the config and context; the helper function can just change the
resolution without re-creating them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2407>