Just like what we do in the VA plugins, GST_MAP_VAAPI can directly
peek the VA surface of the buffers. The old flag 0 only peeks the
surface proxy, which is not convenient for users who do not want to
include our headers.
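A minimal sketch of how a user might consume this, assuming (as in the
VA plugins) that the mapped data gives access to the VASurfaceID; the
exact representation in info.data is an assumption of this example:

  #include <gst/gst.h>
  #include <va/va.h>

  /* Sketch: read the VA surface backing a buffer via GST_MAP_VAAPI
   * (the flag itself comes from the gstreamer-vaapi headers). */
  static gboolean
  peek_va_surface (GstBuffer * buffer, VASurfaceID * surface)
  {
    GstMapInfo info;

    if (!gst_buffer_map (buffer, &info, GST_MAP_READ | GST_MAP_VAAPI))
      return FALSE;

    /* info.data now refers to the VA surface rather than the surface
     * proxy; reading it as a pointer to a VASurfaceID is an assumption
     * of this sketch. */
    *surface = *(VASurfaceID *) info.data;
    gst_buffer_unmap (buffer, &info);

    return TRUE;
  }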
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/435>
This encoder advertises alignment=au as output format, which means
each output frame should contain a full decodable access unit.
The video encoder base class is not aware of our output alignment
and will output spurious buffers with just the SPS/PPS inside when
we call gst_video_encoder_set_headers(), which is broken because
each buffer is supposed to contain a full decodable access unit
in our case.
Just don't tell the base class about our headers; they will be
sent at the beginning of each IDR frame anyway.
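For context, a sketch of how such an encoder advertises access-unit
alignment (H264 caps are used for illustration; the exact codec and
surrounding code are assumptions of this sketch):

  #include <gst/video/gstvideoencoder.h>

  static void
  advertise_au_output (GstVideoEncoder * encoder,
      GstVideoCodecState * input_state)
  {
    GstCaps *caps;
    GstVideoCodecState *output_state;

    /* alignment=au: every output buffer is a full decodable access
     * unit, so no separate header buffers are handed to the base class
     * via gst_video_encoder_set_headers(). */
    caps = gst_caps_new_simple ("video/x-h264",
        "stream-format", G_TYPE_STRING, "byte-stream",
        "alignment", G_TYPE_STRING, "au", NULL);
    output_state = gst_video_encoder_set_output_state (encoder, caps,
        input_state);
    gst_video_codec_state_unref (output_state);
  }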
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2178>
When the probe returns GST_PAD_PROBE_REMOVE and gets called concurrently
from the streaming thread while we're in the callback here, the hook has
already been destroyed by the time we've reacquired the object lock.
Consequently, cleanup_hook gets passed an invalid pointer.
Keep another reference to the hook alive to avoid this situation.
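For illustration only, this is the kind of user callback that can
trigger the race: a probe removing itself from the streaming thread
while another thread concurrently removes it by id (the names here are
not part of the fix):

  #include <gst/gst.h>

  static GstPadProbeReturn
  self_removing_probe (GstPad * pad, GstPadProbeInfo * info,
      gpointer user_data)
  {
    /* Returning GST_PAD_PROBE_REMOVE destroys the hook once the
     * callback returns. */
    return GST_PAD_PROBE_REMOVE;
  }

  /* elsewhere:
   *   gulong id = gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
   *       self_removing_probe, NULL, NULL);
   *   ...
   *   gst_pad_remove_probe (pad, id);   // may race with the callback
   */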
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/873>
E.g. when exporting an image that is opaque yet carries an alpha
channel.
Apple ProRes certification requires that, when a ProRes-writing
application *knows* that the entire frame is opaque, the application
writes only RGB without alpha even when the clip is RGBA. To allow for
that, this tiny change lets the app override the pixel depth when
writing ProRes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1061>
When a buffer is pushed downstream, we should try not to hold the
buffer mapped with write access. Doing so would often lead to
an unnecessary memcpy later.
For instance, gst_buffer_make_writable() in
gst_video_decoder_finish_frame() will cause a memcpy because of
_memory_get_exclusive_reference().
We know that we can perform a two-step remap when using system
memory, as this will not cause the location of the memory to
change.
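A sketch of the two-step remap, assuming buffers backed by system
memory whose data address does not move between maps (the helper name
is illustrative only):

  #include <gst/gst.h>

  static gboolean
  downgrade_to_read_mapping (GstBuffer * buffer, GstMapInfo * map)
  {
    /* Drop the write mapping used while producing the data ... */
    gst_buffer_unmap (buffer, map);

    /* ... and re-map read-only before the buffer is pushed downstream,
     * so that gst_buffer_make_writable() there does not have to copy
     * the memory to obtain an exclusive reference.  With system memory
     * map->data points to the same location as before. */
    return gst_buffer_map (buffer, map, GST_MAP_READ);
  }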
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/812>
When outputting fragmented mp4 with a seekable downstream, we rewrite
the moov to maybe add a duration to the mvex. If we start by not
writing the initial moov->mvex->mehd duration and then overwrite it
with a moov containing a mehd atom, the moovs will have different
sizes and the larger one could overwrite subsequent data, resulting in
an unplayable file.
E.g. the initial moov would be 842 bytes and the final moov 862 bytes.
Fix by always writing out the mehd duration in the moov when
fragmenting.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/898
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1060>
Users can get the required buffer size from the buffer pool config.
Since the d3d11 implementation is a candidate for a public library in
the future, we need to hide as much as possible from the headers.
Note that the total size of the D3D11 texture memory allocated by the
GPU is not a controllable factor; it depends on hardware-specific
alignment/padding requirements. So the GstD3D11 implementation updates
the actual buffer size when allocating the D3D11 texture, since there
is no way to know the CPU-accessible memory size without allocating a
real D3D11 texture.
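A generic sketch of reading that size back, assuming (as described
above) that the pool reflects the updated value in its config once it
has been configured:

  #include <gst/gst.h>

  static guint
  get_pool_buffer_size (GstBufferPool * pool)
  {
    GstStructure *config;
    GstCaps *caps = NULL;
    guint size = 0, min_buffers = 0, max_buffers = 0;

    /* The returned config is a copy owned by the caller. */
    config = gst_buffer_pool_get_config (pool);
    gst_buffer_pool_config_get_params (config, &caps, &size,
        &min_buffers, &max_buffers);
    gst_structure_free (config);

    return size;
  }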
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2482>
Split fields end up in multiple pictures and require accessing the
other_field to complete the information (POC). This also cleans up
non-reference pictures from the DPB (keeping them was not useful) and
properly merges fields instead of keeping them duplicated. This fixes
most of the interlaced decoding issues seen in Fluster.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2474>
When a frame is composed of two fields, the base class now splits the
picture in two. In order to support this, we need to ensure that the
picture buffer is held in the VB2 queue so that the second field gets
decoded into it. This also implements the new_field_picture() virtual
and sets the previous request on the new picture.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2474>
* Add support for H265
* Don't overwrite the original codec_data / streamheader in the output
  caps, but instead allow them to change and send them to the
  combiner at the right moment: encoder caps, reencoded GOP,
  original caps, original GOP(s), and potentially encoder caps
  and reencoded last GOP.
* For H264 / H265, force usage of a format with in-band SPS / PPS
  (avc3 / hev1); this is cleaner than misadvertising avc1 / hvc1, and
  some muxers like mp4mux will actually advertise the two differently.
  Unfortunately, while mp4 supports updating the codec_data and using
  avc1 with no in-band SPS / PPS updates, it turns out some decoders
  (e.g. Chrome / Firefox) don't handle this particularly well and stop
  decoding after the reencoded GOP. We could expose a switch to
  force usage of avc1 / hvc1 nevertheless, but for now stick to
  requiring that the parser output SPS / PPS in-band with
  config-interval=-1 (that has not changed); see the sketch below.
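As a small illustration of the last point, this is how a parser can be
asked to keep SPS / PPS in-band (the helper name is illustrative;
h264parse and its config-interval property are the existing ones):

  #include <gst/gst.h>

  static GstElement *
  make_inband_h264_parser (void)
  {
    GstElement *parse = gst_element_factory_make ("h264parse", NULL);

    /* -1: insert SPS / PPS before every IDR frame. */
    g_object_set (parse, "config-interval", -1, NULL);
    return parse;
  }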
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1249>
Add a new property so the user can set the expected maximum number of
blend task threads. This can be useful when the user wants to restrict
the number of parallel task runners for system resource management or
for debugging/development purposes.
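A hedged usage sketch; the "compositor" element and the "max-threads"
property name are assumptions of this example:

  #include <gst/gst.h>

  static GstElement *
  make_capped_compositor (void)
  {
    /* "compositor" and "max-threads" are assumptions of this sketch. */
    GstElement *comp = gst_element_factory_make ("compositor", NULL);

    /* Cap blending at two parallel task threads. */
    g_object_set (comp, "max-threads", 2, NULL);
    return comp;
  }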
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1242>
This avoids memory corruption in users of that structure which were
(rightfully) assuming static fields (such as the name) wouldn't
change. Without this, they would be using strings that had been freed
in the meantime.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-devtools/-/merge_requests/252>
The Matroska codec specs are unfortunately vague on the subject,
stating for H264:
AVC/H.264 stored as described in [@!ISO.14496-15]
and for H265:
HEVC/H.265 stored as described in [@!ISO.14496-15]
The referenced spec however defines multiple stream formats; our
implementation has opted for interpreting this as avc1 / hvc1,
both of which disallow in-band SPS / PPS.
Most decoders however will support in-band SPS / PPS, and
this commit gives the option to explicitly mux in avc3 / hev1,
which allows changing stream parameters on the fly; that is
useful for smart encoding, for example.
When either of these stream formats is picked as the input,
changes in codec_data / tier / level / profile do not cause a
renegotiation failure; a warning is logged however, as it isn't
clear how compliant such a stream is.
The stream-format field is correctly ordered in the template
caps to avoid selecting potentially non-compliant options on
automatic negotiation.
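A hedged usage sketch for opting into the in-band variant on the muxer
input; the caps fields are standard, and placing these caps on a
capsfilter between h264parse (config-interval=-1) and matroskamux is
just one way to do it:

  #include <gst/gst.h>

  static GstCaps *
  make_avc3_caps (void)
  {
    /* Fixing stream-format=avc3 keeps SPS / PPS in-band and selects
     * the new mapping added by this commit. */
    return gst_caps_from_string (
        "video/x-h264, stream-format=(string)avc3, alignment=(string)au");
  }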
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1047>
This new tracer will list loaded elements and plugins. This should
make it easier to generate minimal builds of GStreamer.
This also traces other features such as typefind functions, device
providers and dynamic types.
The format of the gst-stats output should match the parameters
expected by the meson-based gst-build system.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/782>
Adds a user-controllable timestamp offset to clusters and blocks. This
should be useful if we want to have timestamps that have significance
outside of the current file (for example, we might set the offset to the
wallclock when the file is being created, or some other common base, if
we want to correlate streams across multiple files).
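A hedged sketch of the wallclock case; the "cluster-timestamp-offset"
property name is an assumption of this example:

  #include <gst/gst.h>

  static GstElement *
  make_wallclock_offset_mux (void)
  {
    GstElement *mux = gst_element_factory_make ("matroskamux", NULL);
    /* g_get_real_time() is in microseconds; convert to nanoseconds. */
    guint64 offset_ns = (guint64) g_get_real_time () * 1000;

    /* The property name is assumed here; it offsets cluster and block
     * timestamps by the given amount. */
    g_object_set (mux, "cluster-timestamp-offset", offset_ns, NULL);
    return mux;
  }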
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1051>