The Content-Length header used to be included unconditionally when the
proxy property was set. This could result in requests carrying both a
Content-Length and a Transfer-Encoding header. Now the
use-content-length property is honoured in the proxy case as well. This
also makes sure that Content-Type is set correctly, which was
previously skipped when a proxy was used.
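The intent is roughly the following sketch, with hypothetical helper and
field names (not the element's actual code):

    /* Sketch: gate Content-Length on the use-content-length property on
     * both the proxy and non-proxy paths, and always set Content-Type. */
    if (sink->use_content_length)
      append_header (&headers, "Content-Length", length_str);   /* hypothetical helper */
    else
      append_header (&headers, "Transfer-Encoding", "chunked");
    append_header (&headers, "Content-Type", content_type);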
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7935>
If a new fragment is added with a valid duration but no offset, the start
offset is set later based on the end offset of the previous fragment. At that
point the end offset of this fragment can also be calculated; not doing so
would give the next fragment the same start offset.
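Roughly, the added step looks like this sketch (field and variable names
are hypothetical):

    /* Sketch: once the previous fragment's end offset is known, fill in
     * both offsets of the new fragment. */
    if (!new_fragment->has_offset && GST_CLOCK_TIME_IS_VALID (new_fragment->duration)) {
      new_fragment->start_offset = prev_fragment->end_offset;
      /* Previously missing: without this the next fragment would
       * inherit the same start offset. */
      new_fragment->end_offset = new_fragment->start_offset + new_fragment->duration;
    }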
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8069>
Allow the sample rate, number of channels and bits per sample to change, and
in that case update the caps accordingly.
Also move the (non-fatal) validity checks and the storing of the header values
out of the actual parsing, to the point where we know that a valid frame is
available.
Also don't warn about a changed block size on the last frame when the fixed
block size blocking strategy is used: the last frame is allowed to be smaller.
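As a rough sketch of the caps update, with hypothetical names (not the
actual flacparse code):

    /* Sketch: once a frame is known to be valid, compare the parsed
     * header values with the stored ones and update the caps if needed. */
    if (frame.sample_rate != parse->sample_rate ||
        frame.channels != parse->channels ||
        frame.bps != parse->bps) {
      parse->sample_rate = frame.sample_rate;
      parse->channels = frame.channels;
      parse->bps = frame.bps;
      update_caps (parse);      /* hypothetical helper that renegotiates caps */
    }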
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3281
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8075>
We get loads of warnings when parsing videos from users:
gsth264parser.c:1115:gst_h264_parser_parse_user_data_unregistered: No more remaining payload data to store
gsth264parse.c:646:gst_h264_parse_process_sei:<h264parse0> failed to parse one or more SEI message
Those warnings are raised by unregistered SEI messages that carry no user data.
The spec does not explicitly state that unregistered SEI needs to have data,
and I suppose the UUID by itself can carry valuable information.
FFmpeg also parses and exposes such SEI, so there is no reason for us not to
do so as well.
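Conceptually the parser now accepts a payload that contains only the
16-byte UUID, roughly like this sketch (helper and field names are
hypothetical, not the exact parser code):

    /* Sketch: read the 16-byte UUID, then accept zero remaining bytes
     * instead of returning an error. */
    for (i = 0; i < 16; i++)
      uuid[i] = read_u8 (reader);        /* hypothetical bit-reader helper */
    remaining = payload_size - 16;
    if (remaining == 0) {
      urud->data = NULL;
      urud->size = 0;
      return GST_H264_PARSER_OK;         /* previously treated as an error */
    }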
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7931>
This creates a new structure for passing the codec quality structure at
_start(), where it gets filled. The quality level can then be set or changed
according to the encoder limits.
Later the quality level is applied at _update_session_parameters() and at each
frame encoding, which is why it has to be set at _start().
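For illustration only (hypothetical type and function names; the real
structures differ):

    /* Sketch: a quality struct is created and filled at _start(), clamped
     * to the limits the driver reports; the same value is then reused at
     * _update_session_parameters() and for each encoded frame. */
    typedef struct {
      guint32 quality_level;
      /* codec-specific quality structure chained here */
    } QualityInfo;

    static void
    fill_quality (QualityInfo * q, guint32 requested, guint32 max_levels)
    {
      q->quality_level = MIN (requested, max_levels - 1);
    }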
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
The algorithm for generating the current slot index was a simple round robin;
nonetheless it didn't guarantee that the next slot index wasn't still in use by
a live encode picture.
The new approach keeps an array of the still-living encode pictures, and the
next slot index is chosen by looking for a released index in that array.
The downside is that a picture being deallocated has to be removed from the
array, so the helper has to be passed to the uninit() function.
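A simplified sketch of the slot lookup (the Picture type and function are
hypothetical; the real code lives in the Vulkan encoder helper):

    /* Sketch: scan the array of live pictures and return the first slot
     * index that none of them holds, or -1 if all slots are taken. */
    static gint
    find_free_slot (GPtrArray * live_pictures, guint num_slots)
    {
      guint slot, i;

      for (slot = 0; slot < num_slots; slot++) {
        gboolean used = FALSE;

        for (i = 0; i < live_pictures->len; i++) {
          Picture *pic = g_ptr_array_index (live_pictures, i);
          if (pic->slot_index == (gint) slot) {
            used = TRUE;
            break;
          }
        }
        if (!used)
          return slot;
      }
      return -1;
    }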
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
In GStreamer that buffer information is decoupled and kept in other structures
that describe the stream, such as GstCaps. So, to keep the GStreamer design,
this patch removes this information from GstVulkanEncoderPicture and instead
passes a pointer to GstVideoInfo to gst_vulkan_encoder_encode().
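The call then takes roughly this shape (a sketch; the exact prototype and
remaining arguments may differ):

    /* Sketch: the stream description comes from the caps-derived
     * GstVideoInfo instead of being duplicated in the picture. */
    GstVideoInfo info;
    gst_video_info_from_caps (&info, caps);
    gst_vulkan_encoder_encode (encoder, &info, picture, ref_pics);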
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
Instead of holding all the headers in an external array and adding them into
the bitstream buffer before the encoding operation, which costs extra memory
and extra copy operations, the encoder picture now specifies the offset at
which Vulkan will start to write the bitstream slices/frame, since the element
has already written the headers up to that offset.
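For illustration (field names are hypothetical):

    /* Sketch: the element writes its headers at the start of the output
     * buffer and tells the encoder where the driver should begin writing
     * the coded slices/frame. */
    gst_buffer_fill (picture->out_buffer, 0, header_data, header_size);
    picture->offset = header_size;   /* Vulkan writes the bitstream from here on */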
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
It's the number of references that gst_vulkan_encoder_encode() receives to
process, so it has to be passed as a parameter: it belongs to the reference
list, not to the picture.
This commit also modifies the unit tests accordingly.
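So the call ends up along these lines (a sketch, not the exact prototype):

    /* Sketch: the number of reference pictures travels with the reference
     * list rather than inside the picture. */
    gst_vulkan_encoder_encode (encoder, &info, picture, nb_refs, ref_pics);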
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
The structure already stored the generic video capabilities and the specific
codec capabilities for both encoding and decoding. The generic decoder
capabilities weren't stored because they were only used internally in the
decoder helper object. The encoder elements, however, will need the generic
encoder capabilities to configure the encoding, which is why they need to be
exposed as part of GstVulkanVideoCapabilities. The generic decoder
capabilities are included too, for the sake of symmetry.
While updating the API, the vkvideoencodeh265 test got some code-style fixes.
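Schematically (a sketch; the actual layout in the header may differ):

    /* Sketch: both generic Vulkan Video capability structs exposed next
     * to the codec-specific ones. */
    struct GstVulkanVideoCapabilities {
      VkVideoCapabilitiesKHR caps;                 /* generic video caps */
      VkVideoEncodeCapabilitiesKHR encoder_caps;   /* generic encoder caps (new) */
      VkVideoDecodeCapabilitiesKHR decoder_caps;   /* generic decoder caps, for symmetry */
      /* plus the codec-specific encode/decode capabilities */
    };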
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>