Some decoding APIs support delayed output for performance reasons.
One example would be to request decoding for multiple frames and
then query for the oldest frame in the output queue.
This also increases throughput for transcoding and improves seek
performance when supported by the underlying backend.
Introduce support in the mpeg2 base class, so that backends that
support render delays can actually implement it.
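A backend could then report its delay roughly like this (a minimal
sketch; the vfunc name and signature get_preferred_output_delay() are
patterned on the other codec base classes and are an assumption here,
as is the element name):

    /* Sketch: let the base class hold up to 2 pending frames before it
     * outputs the oldest one; never delay when the pipeline is live. */
    static guint
    gst_foo_mpeg2_dec_get_preferred_output_delay (GstMpeg2Decoder * decoder,
        gboolean is_live)
    {
      return is_live ? 0 : 2;
    }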
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1013>
Downstream might need the start code offset when decoding.
Previously this computation was scattered across multiple call sites.
This is error prone, so move it to the base class. Subclasses can
access slice->sc_offset directly without computing the address
themselves, knowing that the size will also take the start code into
account.
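A subclass can then do something along these lines (a sketch; only
sc_offset and size come from the description above, the rest is
illustrative):

    /* Sketch: locate the slice payload including its start code inside
     * the mapped picture buffer (map is assumed to be a GstMapInfo of
     * the input buffer). */
    const guint8 *slice_data = map.data + slice->sc_offset;
    gsize slice_size = slice->size;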
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1013>
The GstV4l2CodecAllocator dispose function clears `self->decoder` but
the finalize function then tries to use it if the allocator has not
been detached yet.
Fix by detaching in the dispose function before we clear
`self->decoder`.
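The resulting ordering, roughly (a simplified sketch, not the verbatim
code; the detach helper name is an assumption):

    static void
    gst_v4l2_codec_allocator_dispose (GObject * object)
    {
      GstV4l2CodecAllocator *self = GST_V4L2_CODEC_ALLOCATOR (object);

      /* Detach while self->decoder is still valid, then drop it, so
       * finalize no longer needs the decoder. */
      if (self->decoder) {
        gst_v4l2_codec_allocator_detach (self);
        g_clear_object (&self->decoder);
      }

      G_OBJECT_CLASS (parent_class)->dispose (object);
    }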
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1220>
Standard interlace handling:
* If we have interlace-mode=interleaved and a field order, we just
set it when creating the session
* If we have interlace-mode=(interleaved|mixed) and no field order, we
set the field order on the first buffer
The encoder session does not support changing the FieldDetail after it
has started encoding frames, so we cannot support mixed streams
correctly.
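On the GStreamer side the decision is roughly (a sketch against
GstVideoInfo; the VideoToolbox property handling itself is omitted and
self->video_info is an illustrative member name):

    /* Sketch: decide when the field order can be configured. */
    if (GST_VIDEO_INFO_IS_INTERLACED (&self->video_info) &&
        GST_VIDEO_INFO_FIELD_ORDER (&self->video_info) !=
            GST_VIDEO_FIELD_ORDER_UNKNOWN) {
      /* order known up front: configure it when creating the session */
    } else if (GST_VIDEO_INFO_IS_INTERLACED (&self->video_info)) {
      /* interleaved/mixed with unknown order: derive it from the first
       * buffer's field flags and set it once, before encoding starts */
    }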
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1214>
Delay the decoders' downstream negotiation until just before an output
frame needs to be allocated.
This is required, at least for the H.264 and H.265 decoders, since
codec_data might trigger a new sequence before upstream negotiation
has finished, and the sink pad caps need to be set before setting the
source pad caps, particularly to forward HDR fields. The other
decoders are changed too in order to keep the same structure among
them.
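The pattern, roughly (a sketch; need_negotiation is an illustrative
member name):

    GstFlowReturn ret;

    /* Sketch: negotiate only when an output frame is about to be
     * allocated. */
    if (self->need_negotiation) {
      self->need_negotiation = FALSE;
      if (!gst_video_decoder_negotiate (GST_VIDEO_DECODER (decoder)))
        return GST_FLOW_NOT_NEGOTIATED;
    }

    /* the output frame is then allocated from the negotiated pool */
    ret = gst_video_decoder_allocate_output_frame (GST_VIDEO_DECODER (decoder),
        frame);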
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1257>
Using the GstBaseDec hack to access the parent_object of each element
from within the element itself is a bit fragile. It is better for each
element to keep its own parent object as the usual global variable,
which makes the code more resistant to changes.
The GstBaseDec macro to access the parent object is now internal to
the base decoder.
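The usual pattern, for illustration (a sketch; the element name is
illustrative):

    /* Sketch: each element keeps its own parent class pointer in the
     * usual static variable, set in class_init, instead of reaching
     * through the GstBaseDec accessor. */
    static GstElementClass *parent_class = NULL;

    static void
    gst_va_h264_dec_class_init (GstVaH264DecClass * klass)
    {
      parent_class = g_type_class_peek_parent (klass);
      /* ... */
    }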
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1257>
.. if a current direction has already been set
When `webrtcbin` has created an offer based on codec_preferences,
it might not have received caps on its sinkpads by the time a
remote description is set, in which case we want to connect the
input stream upon actual reception of the caps instead.
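Schematically (a sketch, not the actual webrtcbin code;
connect_input_stream() stands in for the internal helper):

    /* Sketch: only wire up the input stream when the sink pad already
     * has caps; otherwise it is wired up later, from the pad's CAPS
     * event handler. */
    GstCaps *caps = gst_pad_get_current_caps (GST_PAD (pad));

    if (caps) {
      connect_input_stream (webrtc, pad);
      gst_caps_unref (caps);
    }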
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1233>
With mpeg4videoparse drop=false config-interval=N|-1 we might be
trying to insert a config before we have actually received one,
in which case we'll try to map a NULL buffer which will generate
lots of criticals.
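The kind of guard needed, roughly (a sketch; the config member name is
illustrative):

    /* Sketch: never try to map a config buffer that was never
     * received. */
    if (parse->config != NULL) {
      /* prepend/insert the config buffer as usual */
    } else {
      GST_WARNING_OBJECT (parse, "no config received yet, not inserting");
    }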
Fixes #855
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1265>
gst_va_fixate_format() iterates over all of othercaps' structures to
find the one that loses the least information in a color conversion.
If a structure with the same color format is found, the iteration
stops early. It works like a smart truncation. The function also
chooses the caps feature.
Later, this structure is used to fixate its size, and no further
truncation is needed.
Don't intersect at fixate, since it kills possible resizing.
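The strategy, schematically (a sketch; score_format_loss() and
format_from_structure() are hypothetical helpers used for
illustration only):

    /* Sketch: walk othercaps, remember the structure whose format
     * loses the least information, stop early on an exact match. */
    GstStructure *best = NULL;
    gint best_loss = G_MAXINT;
    guint i;

    for (i = 0; i < gst_caps_get_size (othercaps); i++) {
      GstStructure *s = gst_caps_get_structure (othercaps, i);
      GstVideoFormat fmt = format_from_structure (s);
      gint loss = score_format_loss (in_format, fmt);

      if (fmt == in_format) {
        best = s;                 /* same format: nothing lost, stop */
        break;
      }
      if (loss < best_loss) {
        best_loss = loss;
        best = s;
      }
    }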
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1261>
Detected while reading the code: cccombiner must set
self->current_video_buffer to NULL *after* emitting selected-samples,
in order for the application to get a useful return value when peeking
the next video sample.
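The ordering, schematically (a sketch; gst_aggregator_selected_samples()
is the GstAggregator helper that emits the signal, video_buf is
illustrative):

    /* Sketch: emit selected-samples while current_video_buffer is
     * still set, so peeking from the signal handler sees the pending
     * sample, and only clear it afterwards. */
    gst_aggregator_selected_samples (GST_AGGREGATOR (self),
        GST_BUFFER_PTS (video_buf), GST_BUFFER_DTS (video_buf),
        GST_BUFFER_DURATION (video_buf), NULL);
    gst_buffer_replace (&self->current_video_buffer, NULL);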
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1252>
When schedule is true (as is the case by default), we insert padding
when no caption data is present in the schedule queue, and previously
weren't checking whether the caption pad had gone EOS, leading to
infinite scheduling of padding after EOS on the caption pad.
Rectify that by adding a "drain" parameter to dequeue_caption().
In addition, update the captions_and_eos test to push valid cc_data
in: without this, cccombiner was attaching padding buffers it had
generated itself, and with this patch it would now stop attaching
said padding to the second buffer. By pushing valid, non-padding
cc_data we ensure a caption buffer is indeed attached to the first
and second video buffers.
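The gist of the EOS check (a sketch; the dequeue_caption() argument
list shown here is illustrative):

    /* Sketch: treat the caption pad being EOS as draining, so no new
     * padding gets scheduled after EOS. */
    gboolean drain = gst_aggregator_pad_is_eos (caption_pad);

    caption_buf = dequeue_caption (self, caption_pad, drain);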
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1252>
In gst_va_dmabuf_allocator_setup_buffer_full, the static code analysis
tool does not know that the number of objects in the descriptor is
always larger than 0 when export_surface_to_dmabuf succeeds. The tool
therefore assumes that buf is allocated with mem but not released when
desc.num_objects equals 0, and raises a memory leak issue.
For gst_va_dmabuf_memories_setup, we should also inform the tool that
n_planes will be larger than 0 by checking the value at the very
beginning. Then a defect similar to the above will not be raised
during static analysis.
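For example, a guard along these lines at the top of
gst_va_dmabuf_memories_setup would be enough (a sketch, not the
verbatim change):

    /* Sketch: make the invariant explicit at the very beginning so the
     * analyzer (and readers) know n_planes is larger than 0 before the
     * allocation loop. */
    g_return_val_if_fail (n_planes > 0, FALSE);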
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1241>