There are three framerate conversion algorithms described in
<https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md>.
Interpolation is not implemented so far, so the distributed timestamp
algorithm, which evenly distributes output timestamps according to the
output framerate, is considered the more practical one. In this case,
newly generated frames are inserted between the current frame and the
previous one, and their timestamps are calculated by the msdk API.
This implementation first pushes the newly generated buffers (outbuf_new)
forward; the current buffer (outbuf) is handled in the last round by the
base transform automatically. A "create_new_surface" flag indicates
whether new surfaces have been generated, so that the new output buffers
are pushed forward accordingly.
Since the upstream element may not be an msdk element, the input surface
timestamp must always be set to the input buffer's timestamp and
converted to an msdk timestamp.
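A minimal sketch of the even-distribution idea, using a hypothetical
helper name (in the actual element the timestamps come from the msdk
API as described above):

#include <gst/gst.h>

/* Hypothetical helper: PTS of the index-th of n_new frames inserted
 * between the previous and the current input buffer. */
static GstClockTime
distribute_output_pts (GstClockTime prev_pts, GstClockTime cur_pts,
    guint index, guint n_new)
{
  GstClockTime gap = cur_pts - prev_pts;

  /* index runs from 1 .. n_new; the current buffer keeps cur_pts */
  return prev_pts + gst_util_uint64_scale (gap, index, n_new + 1);
}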
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2418>
The mapping between an RTP session and the SDP m= line is not always
one-to-one, especially when BUNDLE is used.
This causes a failure in a specific case: when bundling, if m-line 0 is
a data channel and m-line 1 an audio/video section, retrieving the
transceiver at m-line 0 (the RTP session in use) fails and causes an
assertion.
This fix is potentially a regression for cases where the remote party
does not provide the media-level a=ssrc: SDP attributes, which is now
becoming common, especially when simulcast is involved.
The correct fix actually requires reading out the header extensions
used with BUNDLE to signal, in the actual data, which media and
therefore which transceiver is being used.
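An illustrative SDP fragment of the failing layout (hypothetical and
heavily trimmed), kept as a plain string constant:

/* Both m= sections are bundled onto a single transport/RTP session, so
 * "RTP session id == m-line index" does not hold: m-line 0 is the data
 * channel, the bundled audio lives at m-line 1. */
static const char *bundled_sdp_example =
    "a=group:BUNDLE data0 audio1\r\n"
    "m=application 9 UDP/DTLS/SCTP webrtc-datachannel\r\n"
    "a=mid:data0\r\n"
    "m=audio 9 UDP/TLS/RTP/SAVPF 111\r\n"
    "a=mid:audio1\r\n";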
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2467>
The wasapi2 plugin should be preferred over the old wasapi plugin when
available, because:
* wasapi2 supports automatic stream routing, a feature that Microsoft
  highly recommends for applications. See also
  https://docs.microsoft.com/en-us/windows/win32/coreaudio/automatic-stream-routing
* This implementation should be free of the various COM threading
  issues by design, since the wasapi2 plugin spawns a dedicated COM
  thread and manages the life cycle of all COM objects correctly.
  There are unsolved COM issues around the old wasapi plugin, and they
  are very tricky to solve unless the old plugin's threading model is
  redesigned.
Note that in the UWP case the wasapi2 plugin's rank is already
primary + 1.
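A sketch of what that rank means at registration time; the element
name and get_type() symbol below are placeholders, not the actual
wasapi2 code:

#include <gst/gst.h>

/* Placeholder for the element's real get_type() function */
extern GType my_wasapi2_sink_get_type (void);

static gboolean
plugin_init (GstPlugin * plugin)
{
  /* PRIMARY + 1 outranks elements registered at plain GST_RANK_PRIMARY,
   * so auto-plugging prefers this sink whenever both are available. */
  return gst_element_register (plugin, "mywasapi2sink",
      GST_RANK_PRIMARY + 1, my_wasapi2_sink_get_type ());
}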
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2314>
When converting from one framerate to another, the counters are reset
periodically; however, when not converting, they never are, and
can_generate_output ends up making overflow-prone calculations with
large values for input_frames and output_frames.
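A sketch of the kind of periodic reset that keeps the counters bounded;
the structure and field names are illustrative, not the actual element
code:

#include <glib.h>

typedef struct {
  guint64 input_frames;
  guint64 output_frames;
} FrameCounters;

static void
maybe_reset_counters (FrameCounters * c)
{
  /* Once both counters describe the same position there is no pending
   * conversion work, so rewinding them to zero is safe and keeps any
   * later scaling math far away from 64-bit overflow. */
  if (c->input_frames == c->output_frames)
    c->input_frames = c->output_frames = 0;
}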
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2465>
We forgot one case: frame and field pictures may be mixed together.
In that case the DPB is interlaced while the last picture may be a
complete frame. We do not need to cache that complete picture and
should output it directly.
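A sketch of the added condition, with a hypothetical helper standing in
for the real decoder code:

#include <glib.h>

/* A lone field has to wait for its pair, but a picture that is already
 * a complete frame can be output directly even if the DPB itself is
 * interlaced. */
static gboolean
needs_to_wait_for_second_field (gboolean dpb_is_interlaced,
    gboolean picture_is_frame)
{
  if (picture_is_frame)
    return FALSE;

  return dpb_is_interlaced;
}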
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2448>
* Remove unnecessary upcasting. We are now dealing with C++ class
  objects and don't need explicit C-style casts in the C++ world.
* Use the IID_PPV_ARGS() helper macro everywhere. It makes the code a
  little shorter.
* Use the ComPtr smart pointer instead of calling IUnknown::Release()
  manually.
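An illustration of the pattern (not the exact GStreamer code):

#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

static HRESULT
query_dxgi_device (ID3D11Device * device)
{
  /* Before: explicit IID, a C-style (void **) cast and a manual
   * Release() were needed:
   *   IDXGIDevice *dxgi = NULL;
   *   hr = device->QueryInterface (__uuidof (IDXGIDevice), (void **) &dxgi);
   *   ...
   *   dxgi->Release ();
   */

  /* After: IID_PPV_ARGS() derives both arguments from the pointer type
   * and ComPtr releases the interface automatically when it goes out
   * of scope. */
  ComPtr<IDXGIDevice> dxgi;
  return device->QueryInterface (IID_PPV_ARGS (&dxgi));
}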
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2461>
GST_ELEMENT_ERROR calls gst_object_get_path_string, which uses
gst_object_get_parent to build the full object path name and needs to
take the object lock. But we are already in a locked context, so this
causes a deadlock and the pipeline cannot exit normally.
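A sketch of the general rule (hypothetical element code, not the actual
plugin):

#include <gst/gst.h>

static void
handle_failure (GstElement * element)
{
  GST_OBJECT_LOCK (element);
  /* ... work that really needs the lock ... */
  GST_OBJECT_UNLOCK (element);

  /* Post the error only after releasing the lock: GST_ELEMENT_ERROR
   * builds the object path via gst_object_get_path_string() /
   * gst_object_get_parent(), which lock the object themselves. */
  GST_ELEMENT_ERROR (element, STREAM, FAILED,
      ("Internal data stream error"), (NULL));
}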
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2451>
The parent context shares some resources with the child context, so
the child context should be destroyed first; otherwise the command
below triggers a segmentation fault:
$> gst-launch-1.0 videotestsrc num-buffers=100 ! msdkh264enc ! \
msdkh264dec ! fakesink videotestsrc num-buffers=50 ! \
msdkh264enc ! msdkh264dec ! fakesink
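A sketch of one way to express the required ordering (hypothetical
types, not the actual msdk context code): the child keeps a reference
on its parent, dropped only when the child itself is torn down, so the
parent cannot go away first.

#include <gst/gst.h>

typedef struct {
  GstObject parent_instance;
  GstObject *parent_context;    /* strong ref: shared resources live here */
} ChildContext;

static void
child_context_dispose (ChildContext * self)
{
  /* Release child-owned resources first ... */

  /* ... and only then the reference that keeps the parent alive. */
  gst_clear_object (&self->parent_context);
}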
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2435>
The current implementation for translating between native coordinates
and video coordinates is very wrong, because d3d11videosink doesn't
understand the native HWND's coordinates. That should be handled by the
GstD3D11Window implementation as an enhancement.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2450>
Inspired by merge request
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2382
The idea is that the WIN32 d3d11window implementation can safely make
use of MoveWindow(), because it creates an internal HWND even when an
external HWND is set, and subclassing is used to draw on the internal
HWND in any case. The coordinates passed to MoveWindow() are therefore
relative to the parent HWND, which matches the concept of
set_render_rectangle() well.
On MoveWindow(), the OS generates a WM_SIZE event and the
GstD3D11WindowWin32 implementation updates the render area, including
the swapchain, accordingly, just as for a normal window move/resize.
In the UWP case (CoreWindow or SwapChainPanel), however, more research
is needed to meet the expected behavior of set_render_rectangle().
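A sketch of the mechanism (hypothetical helper, real Win32 API):

#include <windows.h>

/* x/y/width/height are the parent-relative values handed to
 * set_render_rectangle(); TRUE requests a repaint.  The WM_SIZE that
 * follows drives the swapchain update. */
static void
apply_render_rectangle (HWND internal_hwnd, int x, int y, int width,
    int height)
{
  MoveWindow (internal_hwnd, x, y, width, height, TRUE);
}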
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1416
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2450>
We need to consider the first field of the last picture when the last
picture cannot enter the DPB.
Another change is that when the previous field's frame_num is not equal
to the current field's frame_num, we should also return FALSE, because
that is also a case of losing some field.
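A sketch of the added check, with a hypothetical helper standing in for
the real decoder code:

#include <glib.h>

/* Two fields can only be paired into one frame when they carry the
 * same frame_num; a mismatch means some field was lost in between. */
static gboolean
fields_can_be_paired (guint prev_frame_num, guint cur_frame_num)
{
  if (prev_frame_num != cur_frame_num)
    return FALSE;

  return TRUE;
}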
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2430>
For interlaced streams, it is also possible that the last frame cannot
be inserted into the DPB, when the DPB is full and the last frame is a
non-reference frame. In this case we need to hold an extra ref on the
first field of the last frame and wait for the complete frame with both
top and bottom fields. For progressive streams the behaviour is
unchanged.
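A sketch of the idea with hypothetical types and helpers (not the real
H.264 decoder API):

#include <glib.h>

typedef struct {
  gint refcount;
  gboolean is_first_field;
  gboolean is_reference;
} Picture;

static Picture *
picture_ref (Picture * pic)
{
  g_atomic_int_inc (&pic->refcount);
  return pic;
}

/* If the DPB is full and the last picture is a non-reference first
 * field, keep an extra ref so it survives until its second field
 * arrives and the complete frame can be output. */
static Picture *
maybe_hold_first_field (Picture * last, gboolean dpb_is_full)
{
  if (dpb_is_full && last->is_first_field && !last->is_reference)
    return picture_ref (last);

  return NULL;
}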
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2430>
When reaching the end of a non-frame-wrapped track in pull mode, we
want to force the switch to the next non-EOS pad. This is similar to
what happens when we exceed the maximum drift.
Fixes issues on EOS where not everything would be drained out and stray
errors would pop up.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2434>
Adds a new plugin for ASIO devices.
Although Windows has a standard low-level audio API, WASAPI, ASIO is
still broadly used by audio devices aimed at professional use cases.
With such devices, the ASIO API may offer better quality and latency,
depending on the manufacturer's driver implementation.
To build this plugin, the user should provide the path to the ASIO SDK
via the "asio-sdk-path" build option, as in the example below.
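For example, assuming a standalone gst-plugins-bad build directory (the
SDK path itself is just a placeholder):

meson setup builddir -Dasio-sdk-path=C:/path/to/asiosdk
meson compile -C builddir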
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2309>
... for the user to be able to set the number of required samples.
For instance, our default value is 240 samples (about 5 ms of latency
at a 48000 Hz sample rate), which might be larger than the actual
buffer size of the audio capture device.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2307>
print_ref_pic_list_b now needs to trace not only ref_pic_list_b0/1 but
also ref_frame_list_0_short_term. We need to pass the name directly to
it, rather than an index referring to ref_pic_list_b0/1.
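A sketch of the reworked trace helper with hypothetical types (the real
code traces H.264 picture structures):

#include <glib.h>

typedef struct {
  gint pic_num;
} PictureStub;

/* Taking the list and its printable name directly lets one helper dump
 * ref_pic_list_b0, ref_pic_list_b1 and ref_frame_list_0_short_term,
 * instead of deriving the name from a b0/b1 index. */
static void
print_ref_pic_list (GArray * list, const gchar * name)
{
  guint i;

  for (i = 0; i < list->len; i++) {
    PictureStub *pic = &g_array_index (list, PictureStub, i);

    g_print ("%s[%u]: pic_num %d\n", name, i, pic->pic_num);
  }
}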
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2425>