We can only convert from non-cdp to cdp within the confines of valid cdp
framerates. The existing caps negotiation code was allowing any
framerate to convert to a cdp output, which is incorrect and would
hit an assertion later.
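For illustration, a minimal sketch of such a check, assuming the usual
CDP framerates from SMPTE 334 (this helper is hypothetical, not the
actual ccconverter code):

  /* Hypothetical helper: accept only framerates that CDP can carry. */
  static gboolean
  is_valid_cdp_framerate (gint fps_n, gint fps_d)
  {
    static const struct { gint n, d; } valid[] = {
      {24000, 1001}, {24, 1}, {25, 1}, {30000, 1001},
      {30, 1}, {50, 1}, {60000, 1001}, {60, 1},
    };
    guint i;

    for (i = 0; i < G_N_ELEMENTS (valid); i++) {
      if (valid[i].n == fps_n && valid[i].d == fps_d)
        return TRUE;
    }
    return FALSE;
  }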
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2372>
We drop RASL (Random Access Skipped Leading) pictures associated with
an IRAP (Intra Random Access Point) picture that has NoRaslOutputFlag
equal to 1, since such a RASL picture will not be output and must not
be used as a reference picture. The corresponding GstVideoCodecFrame
should therefore be released immediately; otherwise the GstVideoDecoder
base class will keep holding the unused frame.
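A sketch of the intended handling (the RASL check and the
NoRaslOutputFlag tracking shown here are illustrative placeholders,
not the exact decoder code):

  if (nal_is_rasl_picture (nal_type) && priv->no_rasl_output_flag) {
    /* This RASL picture will neither be output nor used as a reference,
     * so hand the frame back to the base class right away instead of
     * letting GstVideoDecoder keep it around. */
    gst_video_decoder_release_frame (GST_VIDEO_DECODER (self), frame);
    return GST_FLOW_OK;
  }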
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2330>
The build fails on macOS with the following error:
/usr/local/Cellar/opencv/4.5.0_5/include/opencv4/opencv2/core/mat.hpp:2226:15: error: no template named 'initializer_list' in namespace 'std'
Mat_(std::initializer_list<_Tp> values);
fatal error: too many errors emitted, stopping now [-ferror-limit=]
35 warnings and 20 errors generated.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2368>
If they are not dropped, they can get stuck in the queue, even a leaky
one, while a buffer is being pushed downstream. Since in WebRTC it is
unlikely that there will be a special allocator for receiving RTP
packets, there is almost no downside to just ignoring these queries.
Also drop queries if they get caught in the pad probe after the queue.
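For illustration, dropping such queries from a pad probe could look
roughly like this (probe setup and callback name are illustrative, not
the exact webrtcbin code):

  static GstPadProbeReturn
  drop_allocation_query (GstPad * pad, GstPadProbeInfo * info,
      gpointer user_data)
  {
    GstQuery *query = GST_PAD_PROBE_INFO_QUERY (info);

    /* Returning GST_PAD_PROBE_DROP makes the query fail immediately
     * instead of blocking behind buffers in the queue. */
    if (GST_QUERY_TYPE (query) == GST_QUERY_ALLOCATION)
      return GST_PAD_PROBE_DROP;

    return GST_PAD_PROBE_OK;
  }

  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_QUERY_DOWNSTREAM,
      drop_allocation_query, NULL, NULL);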
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2363>
By using the clocksync inside the dtlssrtpenc, all streams inside a
bundle are synchronized together. This will cause problems if their
buffers are not already arriving synchronized: clocksync would wait for
a buffer on one stream and then buffers from the other stream(s) with
lower timestamps would all be sent out too late.
Placing the clocksync before the rtpbin and rtpfunnel synchronizes each
stream individually and they will be sent out more smoothly as a result.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2355>
If the decoder has a crop_top/left value > 0 (e.g. because of the
conformance window in H265), the real output picture is located in the
middle of the decoded buffer. If the downstream can support
VideoCropMeta, a VideoCropMeta is added to convey the real picture's
coordinates and size. But if it can not, we need to copy the picture
manually, which is what the other_pool is for. We always assume that
the decoded picture starts at the top-left corner, so there is no need
to do this when only crop_bottom/right values are > 0.
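For reference, attaching the crop meta is a one-liner with the core
API; a minimal sketch, assuming crop_left/crop_top and the visible size
are already known from the conformance window (variable names
illustrative):

  GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (buffer);

  crop->x = crop_left;
  crop->y = crop_top;
  crop->width = display_width;
  crop->height = display_height;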
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>
The base va decoder's video_align is just used to calculate the real
decoded buffer's width and height. It has no meaning for the VideoMeta,
because it does not align to the real picture in the output buffer. We
will use VideoCropMeta to replace it later.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>
The base va decoder's video_align is just used to calculate the real
decoded buffer's width and height, and gst_video_info_align just
calculates the offsets and strides based on that video_align. But all
the offsets and strides are overwritten in gst_va_dmabuf_allocator_try
or gst_va_allocator_try, which makes that calculation useless.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>
With this commit, the following formats are newly supported by the
d3d11 elements:
* Y444_{8, 12, 16}LE formats:
Similar to other planar formats. Such Y444 variants are not supported
by Direct3D11 natively, but we can simply map each plane by using
R8 and/or R16 textures (see the sketch after this list).
* P012_LE:
It is no different from P016_LE, but we define P012 and P016 separately
for more explicit signalling. Note that DXVA uses P016 textures for
12-bit encoded bitstreams.
* GRAY:
This format is required for some codecs (e.g., AV1) if monochrome
is supported.
* 4:2:0 planar 12-bit (I420_12LE) and 4:2:2 planar 8/10/12-bit
formats (Y42B, I422_10LE, and I422_12LE)
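A rough sketch of the per-plane mapping idea for the Y444 variants
(illustrative only; the real d3d11 code drives this from its own
format table):

  static DXGI_FORMAT
  y444_plane_format (GstVideoFormat format)
  {
    switch (format) {
      case GST_VIDEO_FORMAT_Y444:
        /* 8-bit planes map onto single-channel 8-bit textures */
        return DXGI_FORMAT_R8_UNORM;
      case GST_VIDEO_FORMAT_Y444_12LE:
      case GST_VIDEO_FORMAT_Y444_16LE:
        /* 12/16-bit planes map onto single-channel 16-bit textures */
        return DXGI_FORMAT_R16_UNORM;
      default:
        return DXGI_FORMAT_UNKNOWN;
    }
  }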
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2346>
Some elements, for example splitmuxsink, will make use of the
automatically generated names to create new pads in future muxer
instances. Previously we would have created a pad with a random pid
that would become "sink_0", and then on a new muxer instance a pad
"sink_0" would be requested and tsmux would fail because 0 is not a
valid PID.
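For illustration (element variable and PID purely exemplary),
requesting the pad with an explicit, valid PID in the name avoids
relying on the auto-generated one:

  /* mpegtsmux request pads are named "sink_%d", where %d is the PID */
  GstPad *pad = gst_element_request_pad_simple (mux, "sink_64");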
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2318>
When B-frames are enabled, the encoder seems to adjust the PTS of each
encoded sample by the frame duration.
For instance, one observed timestamp pattern with B-frames enabled on
a 30fps stream is:
* Frame-1: MF pts 0:00.033333300 MF dts 0:00.000000000
* Frame-2: MF pts 0:00.133333300 MF dts 0:00.033333300
* Frame-3: MF pts 0:00.066666600 MF dts 0:00.066666600
* Frame-4: MF pts 0:00.099999900 MF dts 0:00.100000000
We can see that the amount of PTS shift is one frame duration, and
that Frame-4 exhibits PTS < DTS.
To compensate for the shifted timestamps, we should calculate the
timestamp offset and re-calculate the DTS correspondingly.
Otherwise the total timeline of the output stream will be shifted,
which can cause time sync issues.
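A sketch of the compensation, under the assumption that the shift
equals one frame duration as in the pattern above (variable names
illustrative, not the actual mfvideoenc code):

  GstClockTime pts = mf_pts;            /* PTS reported by Media Foundation */
  GstClockTime dts = mf_dts;            /* DTS reported by Media Foundation */
  GstClockTime offset = frame_duration; /* 33.3 ms at 30fps */

  /* Undo the encoder's PTS shift and move the DTS by the same amount
   * so the output timeline lines up with the input timeline again. */
  pts = pts >= offset ? pts - offset : 0;
  dts = dts >= offset ? dts - offset : 0;

  /* Guard against the Frame-4 case above, where PTS fell below DTS. */
  if (dts > pts)
    dts = pts;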
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2354>
Previously pads might have been requested already (e.g. in the NULL
state), then reset was called (e.g. because of a state change) and then
a new pad was requested. Resetting re-creates the internal muxer object
and as such resets the pid counter, so the next requested pad would get
the same pid as the first requested pad, which then leads to collisions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2317>