GstRTPHeaderExtension::write can map the RTP buffer for reading. If that
happens on a buffer that is already mapped WRITE-only by the payloader,
the payloader's mapping gets invalidated (GstRTPBuffer::map will point
to a different instance of GstMemory).
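A minimal sketch of the hazard (flow and variable names are
illustrative, not the actual payloader code):

  GstRTPBuffer rtp = GST_RTP_BUFFER_INIT;

  /* the payloader maps the output buffer write-only */
  gst_rtp_buffer_map (outbuf, GST_MAP_WRITE, &rtp);

  /* the extension then maps the same buffer for reading; that read
   * map can end up on a different GstMemory, so rtp now points at
   * memory that no longer backs outbuf */
  gst_rtp_header_extension_write (ext, input_meta, flags, outbuf,
      data, size);

One way around this is to not hold a write-only map across the
write() call, or to map GST_MAP_READWRITE so the read map can reuse
the same memory.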
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1173>
Usually the latency message is only posted whenever the latency of an
element changes, but that might be too early as the sinks might not be
able to query the latency at that point yet.
Similarly adding a new sink should cause latency reconfiguration once
that new sink is able to report its latency.
This fixes latency configuration in pipelines where webrtcbin is the
only "sink", i.e. it is used in a sendonly session. Before, the latency
would always be configured to 0.
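For context, the latency message is what applications (and GstPipeline
itself) react to by redistributing latency; a minimal bus-watch sketch:

  static gboolean
  bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
  {
    GstBin *pipeline = user_data;

    if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_LATENCY) {
      /* query all sinks again and redistribute the result */
      gst_bin_recalculate_latency (pipeline);
    }

    return TRUE;
  }

Posting the message before the sinks can answer the latency query means
this recalculation still ends up with 0, hence the deferred posting.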
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/843>
By using the clocksync inside the dtlssrtpenc, all streams inside a
bundle are synchronized together. This will cause problems if their
buffers are not already arriving synchronized: clocksync would wait for
a buffer on one stream, and then buffers from the other stream(s) with
lower timestamps would all be sent out too late.
Placing the clocksync before the rtpbin and rtpfunnel synchronizes each
stream individually, and they will be sent out more smoothly as a result.
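A rough sketch of the resulting per-stream arrangement (standalone
elements used for illustration; this is not the actual webrtcbin code):

  GstElement *pipeline = gst_parse_launch (
      "audiotestsrc is-live=true ! opusenc ! rtpopuspay "
      "  ! clocksync ! rtpfunnel name=f ! fakesink "
      "videotestsrc is-live=true ! vp8enc ! rtpvp8pay "
      "  ! clocksync ! f.", NULL);

Each clocksync only waits on its own stream's timestamps, so one
stream can no longer hold back already-due buffers of the other.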
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2355>
If the decoder has a crop_top/left value > 0 (e.g. because of the
conformance window in H265), the real output picture is located in
the middle of the decoded buffer. If the downstream can support
VideoCropMeta, a VideoCropMeta is added to notify it of the real
picture's coordinates and size. But if not, we need to copy it out
manually, and the other_pool is needed for that. We always assume that
the decoded picture starts from the top-left corner, and so there is
no need to do this if only the crop_bottom/right value is > 0.
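A minimal sketch of the VideoCropMeta path (variable names are
illustrative):

  GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (outbuf);

  crop->x = crop_left;           /* e.g. conformance window offsets */
  crop->y = crop_top;
  crop->width = display_width;   /* visible picture size */
  crop->height = display_height;

Downstream elements that announce GST_VIDEO_CROP_META_API_TYPE in the
allocation query then render only that rectangle; otherwise the copy
through other_pool is used.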
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>
The base va decoder's video_align is just used for calculating the
real decoded buffer's width and height. It has no meaning for the
VideoMeta, because it does not align to the real picture in the output
buffer. We will use VideoCropMeta to replace it later.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>
The base va decoder's video_align is just used for calculating the
real decoded buffer's width and height, and gst_video_info_align just
calculates the offsets and strides based on that video_align. But all
the offsets and strides are overwritten in gst_va_dmabuf_allocator_try
or gst_va_allocator_try, which makes that calculation useless.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>
The finish() virtual function documentation states that "Sub-classes
can refuse to decode new data after." However, it is very common to
issue a non-flushing seek after that event in gapless playback use
cases. This fixes potential stalls in code using segment seeks, by
using the drain() virtual function instead.
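For reference, the distinction between the two (sketched against a
GstVideoDecoder-style subclass API for illustration):

  /* at EOS: the subclass may refuse to decode new data afterwards */
  ret = decoder_class->finish (decoder);

  /* at a segment boundary (e.g. a segment seek during gapless
   * playback): more data may still follow, so only drain pending data */
  ret = decoder_class->drain (decoder);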
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1206>
With this commit, the following formats are newly supported by the
d3d11 elements:
* Y444_{8, 12, 16}LE formats:
Similar to other planar formats. Such Y444 variants are not supported
by Direct3D11 natively, but we can simply map each plane by using an
R8 and/or R16 texture (see the sketch after this list).
* P012_LE:
It is no different from P016_LE, but P012 and P016 are defined
separately for more explicit signalling. Note that DXVA uses a P016
texture for 12-bit encoded bitstreams.
* GRAY:
This format is required for some codecs (e.g., AV1) if monochrome
is supported.
* 4:2:0 planar 12-bit (I420_12LE) and 4:2:2 planar 8-, 10- and 12-bit
formats (Y42B, I422_10LE, and I422_12LE)
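A per-plane mapping sketch of the planar cases (the function is
hypothetical; the actual code is more involved):

  static DXGI_FORMAT
  plane_texture_format (GstVideoFormat format)
  {
    switch (format) {
      case GST_VIDEO_FORMAT_Y444:          /* 8-bit planes */
      case GST_VIDEO_FORMAT_Y42B:
        return DXGI_FORMAT_R8_UNORM;
      case GST_VIDEO_FORMAT_Y444_12LE:     /* 12/16-bit planes */
      case GST_VIDEO_FORMAT_Y444_16LE:
      case GST_VIDEO_FORMAT_I420_12LE:
      case GST_VIDEO_FORMAT_I422_10LE:     /* 10-bit also uses R16 */
      case GST_VIDEO_FORMAT_I422_12LE:
        return DXGI_FORMAT_R16_UNORM;
      default:
        return DXGI_FORMAT_UNKNOWN;
    }
  }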
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2346>
Some elements will make use of the automatically generated names to
create new pads in future muxer instances, for example splitmuxsink.
Previously we would've created a pad with a random pid that would
become "sink_0"; on a new muxer instance splitmuxsink would then
request a pad called "sink_0", and tsmux would've failed because 0 is
not a valid PID.
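For reference, tsmux encodes the PID in the pad name, so a caller can
also pin it explicitly (the pad name here is illustrative):

  /* "sink_%d" pads carry the PID in the name; PID 0 is reserved */
  GstPad *pad = gst_element_request_pad_simple (mux, "sink_300");

Generated names therefore have to fall into the valid PID range as
well, since e.g. splitmuxsink re-requests pads by exactly these names
on the next muxer instance.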
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2318>
When B-frames are enabled, the encoder seems to adjust the PTS of each
encoded sample by one frame duration.
For instance, one observed timestamp pattern with B-frames enabled on
a 30fps stream is:
* Frame-1: MF pts 0:00.033333300 MF dts 0:00.000000000
* Frame-2: MF pts 0:00.133333300 MF dts 0:00.033333300
* Frame-3: MF pts 0:00.066666600 MF dts 0:00.066666600
* Frame-4: MF pts 0:00.099999900 MF dts 0:00.100000000
We can notice that the amount of PTS shift is one frame duration and
that Frame-4 exhibits PTS < DTS.
To compensate for the shifted timestamps, we should calculate the
timestamp offset and re-calculate the DTS correspondingly. Otherwise
the whole timeline of the output stream will be shifted, and that can
cause sync issues.
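A compensation sketch (variable names are illustrative, not the
actual code):

  GstClockTime offset = frame_duration;   /* observed PTS shift */

  /* shift both timestamps back so Frame-1 starts at 0 again */
  pts -= offset;
  dts = dts >= offset ? dts - offset : 0;

  /* guard against rounding artifacts such as Frame-4 above */
  if (dts > pts)
    dts = pts;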
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2354>
Previously pads might have been requested already (e.g. in NULL
state), then reset was called (e.g. because of a state change) and
then a new pad was requested. Resetting re-creates the internal muxer
object and as such resets the pid counter, so the next requested pad
would get the same pid as the first requested pad, which then leads
to collisions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2317>
There may be two or more threads involved here, but the important
interaction is the use of the ogg->seek_event_drop_till value, which
was only set in the push-mode seek-event thread and could race with
upstream sending e.g. an EOS (or data).
The scenario is this:
1. oggdemux performs a seek to near the end of the file to try and find
the duration. ogg->push_state is set to PUSH_DURATION.
2. Seek is picked up by the dedicated seek event thread and sets
ogg->seek_event_drop_till to the seek event's seqnum.
3. Most operations are blocked or dropped waiting on the duration to
be determined and processing continues until a duration is found.
4. There are two branching options for how this ultimately plays out:
4a. The source is too fast and we receive an EOS event which is dropped
because ogg->push_state == PUSH_DURATION. In this case everything
works.
4b. We hit our 'almost at the end' check in
gst_ogg_pad_handle_push_mode_state() and attempt to seek back to the
beginning (or to a user-provided seek position). This seek is
marshalled to the seek event thread without setting
ogg->seek_event_drop_till, but with ogg->push_state changed to
PUSH_PLAYING. If an EOS event or e.g. buffers arrive from upstream
before the seek event thread has picked up the seek event, then the
EOS/data is processed as if it came as a result of the seek event.
This is the case that fails.
The fix is two-fold (see the sketch after this list):
1. Preemptively set ogg->seek_event_drop_till when setting the seek
event so that data and other events can be dropped correctly.
2. In addition to dropping EOS events while ogg->push_state ==
PUSH_DURATION, also drop any EOS events that are received before the
seek event has been processed, by also tracking the seqnum of the
seek.
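Sketched (simplified; locking omitted and the flow condensed, names
follow the description above):

  /* 1. when queueing the seek for the seek event thread */
  ogg->seek_event_drop_till = gst_event_get_seqnum (seek_event);
  ogg->seek_event = seek_event;

  /* 2. when EOS (or data) arrives while that seek is still pending */
  if (ogg->seek_event_drop_till != 0) {
    GST_DEBUG_OBJECT (ogg, "dropping event received before our seek");
    gst_event_unref (event);
    return TRUE;                /* drop, don't treat as seek result */
  }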
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1196>
If downstream tries to seek in BYTES format, don't pass that through
to upstream. The byte positions downstream requests won't make any
sense in the muxed stream. There might be other formats we want to
pass through to upstream, but BYTES is not one of them. If we get a
seeking query about BYTES format, refuse that too.
This fixes a situation where we're playing a fragmented mp4 over http
and qtdemux refuses the initial seek (in TIME format), but then
h264parse/baseparse send a seek in BYTES format and everything falls
apart.
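A minimal sketch of the refusal (not the exact qtdemux code):

  static gboolean
  src_event (GstPad * pad, GstObject * parent, GstEvent * event)
  {
    if (GST_EVENT_TYPE (event) == GST_EVENT_SEEK) {
      GstFormat format;

      gst_event_parse_seek (event, NULL, &format, NULL, NULL, NULL,
          NULL, NULL);
      if (format == GST_FORMAT_BYTES) {
        /* byte offsets are meaningless in the muxed stream; refuse
         * instead of forwarding the seek upstream */
        gst_event_unref (event);
        return FALSE;
      }
    }
    return gst_pad_event_default (pad, parent, event);
  }

The seeking query is answered with seekable=FALSE for GST_FORMAT_BYTES
in the same spirit.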
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1014>