Holding previously decoded but not yet outputted pictures after
new_sequence() is not a safe approach in various respects.
However, we cannot drain out the DPB on new_sequence() unconditionally,
because there is a case where the decoder should drop decoded pictures,
namely when NoOutputOfPriorPicsFlag is set.
To detect NoOutputOfPriorPicsFlag before the new_sequence() call,
this patch splits the decoding process into two passes: first the NAL
units are parsed in order to detect NoOutputOfPriorPicsFlag, and then
each NAL unit is decoded.
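A rough sketch of the two-pass flow (the helper names here are
illustrative stand-ins, not the actual decoder internals):

```c
#include <gst/codecparsers/gsth265parser.h>

/* Sketch only: parse_nal_unit()/decode_nal_unit() are hypothetical helpers. */
static GstFlowReturn
handle_frame (GstH265Decoder * self, GArray * nalus)
{
  guint i;

  /* Pass 1: parse only, so NoOutputOfPriorPicsFlag is already known when
   * new_sequence() is called and the DPB can be drained or dropped. */
  for (i = 0; i < nalus->len; i++)
    parse_nal_unit (self, &g_array_index (nalus, GstH265NalUnit, i));

  /* Pass 2: decode each NAL unit. */
  for (i = 0; i < nalus->len; i++)
    decode_nal_unit (self, &g_array_index (nalus, GstH265NalUnit, i));

  return GST_FLOW_OK;
}
```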
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1937>
Gapless playback is handled by adjusting buffer timestamps & durations
and by adding GstAudioClippingMeta.
Support for "Frankenstein" streams (= poorly stitched together streams)
is also added, so that gapless playback support doesn't prevent those
from being properly played.
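For reference, attaching the clipping information to a buffer comes down
to something like the following sketch (the padding amounts are made up):

```c
#include <gst/audio/audio.h>

/* Sketch: mark 'start_padding'/'end_padding' samples of a decoded buffer
 * as padding that downstream should clip away. */
static void
attach_clipping (GstBuffer * buf, gsize start_padding, gsize end_padding)
{
  gst_buffer_add_audio_clipping_meta (buf, GST_FORMAT_DEFAULT,
      start_padding, end_padding);
}
```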
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1028>
1. Always set the corresponding GstVaH264EncFrame pointer when the
GstVideoCodecFrame pointer is assigned, which makes the logic safer.
2. Fix the forgotten change in _sort_by_frame_num: its input pointer is
now of GstVideoCodecFrame type.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1935>
Add properties to control input cropping in the V4L2 device.
The input cropping is applied before the result is composed into the
capture buffer. By default the capture size will be set to the same
size as the crop region, but it can be scaled to a different output
frame size if supported by the V4L2 device.
If scaling is not supported, the cropped image will
be composed as-is into the top-left corner of the capture buffer.
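Underneath, setting the crop rectangle on the device comes down to the
V4L2 selection API, roughly as in this sketch (error handling trimmed):

```c
#include <linux/videodev2.h>
#include <sys/ioctl.h>

/* Sketch: apply a crop rectangle on a capture device; the driver may
 * adjust the rectangle to match hardware constraints. */
static int
set_input_crop (int fd, int left, int top, int width, int height)
{
  struct v4l2_selection sel = { 0 };

  sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  sel.target = V4L2_SEL_TGT_CROP;
  sel.r.left = left;
  sel.r.top = top;
  sel.r.width = width;
  sel.r.height = height;

  return ioctl (fd, VIDIOC_S_SELECTION, &sel);
}
```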
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1089>
Get the current crop bounding region from the V4L2 device so
that it can be provided to applications and used to validate
crop settings. Also make the default crop region available so
that it can be used to reset the crop when appropriate.
Use the selection API when available, falling back to the crop
API for older kernels.
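A sketch of that fallback (not the actual gstv4l2object code):

```c
#include <linux/videodev2.h>
#include <sys/ioctl.h>

/* Sketch: query the crop bounds via the selection API, falling back to
 * VIDIOC_CROPCAP on kernels without VIDIOC_G_SELECTION. */
static int
get_crop_bounds (int fd, struct v4l2_rect * bounds)
{
  struct v4l2_selection sel = { 0 };
  struct v4l2_cropcap cropcap = { 0 };

  sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  sel.target = V4L2_SEL_TGT_CROP_BOUNDS;
  if (ioctl (fd, VIDIOC_G_SELECTION, &sel) == 0) {
    *bounds = sel.r;
    return 0;
  }

  cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  if (ioctl (fd, VIDIOC_CROPCAP, &cropcap) == 0) {
    *bounds = cropcap.bounds;   /* cropcap.defrect holds the default crop */
    return 0;
  }
  return -1;
}
```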
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1089>
gst_v4l2_object_set_crop() is used for removing buffer
alignment padding. Give it a name that better reflects
that usage. This helps to distinguish it from cropping of the
input image (e.g. cropping at the image sensor on a capture
device), which can be unrelated to the memory buffer padding,
especially if scaling is involved.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1089>
The default query handler would go through typefind, which by default accepts
any caps. But once configured, parsebin can't reconfigure itself; it should
therefore pass the ACCEPT_CAPS query through to the first element after
typefind (if any).
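A hedged sketch of the idea (the helper for finding the element after
typefind is hypothetical, not the real parsebin internals):

```c
/* Sketch: forward ACCEPT_CAPS past typefind instead of answering it there. */
static gboolean
sink_query (GstPad * sinkpad, GstObject * parent, GstQuery * query)
{
  if (GST_QUERY_TYPE (query) == GST_QUERY_ACCEPT_CAPS) {
    GstPad *after_typefind = get_pad_after_typefind (parent); /* hypothetical */

    if (after_typefind != NULL)
      return gst_pad_peer_query (after_typefind, query);
  }
  return gst_pad_query_default (sinkpad, parent, query);
}
```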
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1900>
Don't reconfigure outputs when the select-streams
event is sent from the app, as the selection may
not take effect for some time. Instead, wait
for the pipeline to confirm the new set of
selected streams when it sends the message.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1900>
If we previously had subtitles coming in, the video
may be chained through a text overlay block. Before,
the code would end up trying to link pads that were
already linked and video would not get reconnected
properly.
To fix that, make sure that the candidate
pads are actually unlinked first. If a textoverlay
is present and no longer needed, it will be cleaned
up later in the reconfiguration sequence.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1900>
Requesting a new pad can start a reconfiguration cycle, where
playsink will block all input pads and wait for data on them
before doing internal reconfiguration. If a pad is released,
that reconfiguration might never trigger because it's now waiting
for a pad that doesn't exist any more.
In that case, complete the reconfiguration on pad release.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1180>
Make it possible to configure the element to obtain the timestamps from
reference timestamp metadata instead of using the ntp-offset property,
or estimating its own offset. Currently the only time format supported
is "timestamp/x-unix", i.e. UTC time expressed in the Unix epoch.
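Looking up such a meta on an incoming buffer looks roughly like this
sketch:

```c
#include <gst/gst.h>

/* Sketch: fetch a UTC timestamp attached as a reference timestamp meta. */
static GstClockTime
get_unix_timestamp (GstBuffer * buf)
{
  GstCaps *ref = gst_caps_new_empty_simple ("timestamp/x-unix");
  GstReferenceTimestampMeta *meta =
      gst_buffer_get_reference_timestamp_meta (buf, ref);
  GstClockTime ts = meta ? meta->timestamp : GST_CLOCK_TIME_NONE;

  gst_caps_unref (ref);
  return ts;
}
```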
In addition the custom event GstNtpOffset has been renamed to
GstOnvifTimestamp, to reflect that it is not necessarily used to convey
the ntp-offset. As a consequence we had to modify a couple of files in
the rtsp-server as well.
Fixes #984
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1683>
This patch fixes a segfault in gst_structure_new() with warnings as below.
GLib-GObject-WARNING **:
../gobject/gtype.c:4330: type id '0' is invalid
GLib-GObject-WARNING **:
can't peek value table for type '<invalid>' which is not currently referenced
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1918>
On GstVideoDecoder::{drain,flush}, we send a null packet with the
CUVID_PKT_ENDOFSTREAM flag to drain out the decoder, which will
reset the CUVID parser as well.
To continue decoding after the drain, the next input buffer
should include sequence headers, otherwise the CUVID parser will
not report any decodable frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1911>
There could be a case where the new program has the same program number as the
previous one ... but is actually located on a PID previously used for an
elementary stream. In that case the program is guaranteed not to be an update
of the previous program but a completely new one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
We need to be able to look up programs by their PID as well. Using a hash table
was a bit sub-par (and overkill) for storing a range of programs.
This is needed because there could potentially be two programs with the same
program id but different PMT PIDs (while one is being deactivated the new one
would "exist").
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
This commit modifies the interleave calculation to allow growing when incoming
data is before the segment start.
The rationale is that there is no requirement whatsoever for data before the
segment start to be "coherent" on all streams.
For example, a demuxer could rightfully send data from the video stream from the
previous keyframe (potentially quite a bit before the segment start) and the
audio from just before the segment start.
This will activate the same logic as growing the interleave when some streams
haven't received buffers yet.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1892>
* When a stream receives EOS, it will no longer change, so we shouldn't take
that stream into account for the interleave calculation.
* When streams (re)appear, we do not want to grow the initial interleave values
to excessive values. Instead of setting it to a default of 5s, progressively
grow it to that maximum.
* When the status of input streams changes (i.e. going to/from "some haven't
received data yet" and "all have received data"), update the interleave
immediately instead of waiting for (potentially) 5s of data before updating
it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1892>
The point here is that rtpsession will create a new rtpsource when
the "rtx-ssrc" field is present; when not doing RTX, that means
a random SSRC will create a new rtpsource that will be included in RTCP
messages for the current session.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1882>
Instead of using a GstMiniObject to hold the H264 frame, it now uses a plain
structure. In addition, instead of holding a reference to the
GstVideoCodecFrame, the H264 frame structure is set as the
GstVideoCodecFrame's user data.
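A minimal sketch of that pattern (the EncFrame type is an illustrative
stand-in, not the actual gstva structure):

```c
#include <gst/video/gstvideoutils.h>

typedef struct
{
  guint frame_num;              /* illustrative per-frame state */
} EncFrame;

/* Sketch: attach per-frame encoder state to the codec frame instead of
 * having the frame structure hold a reference the other way around. */
static void
attach_enc_frame (GstVideoCodecFrame * frame)
{
  EncFrame *enc = g_new0 (EncFrame, 1);

  gst_video_codec_frame_set_user_data (frame, enc, (GDestroyNotify) g_free);
}

static EncFrame *
lookup_enc_frame (GstVideoCodecFrame * frame)
{
  return gst_video_codec_frame_get_user_data (frame);
}
```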
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1856>
According to va_dec_hevc.h, pic_param->st_rps_bits should be set
for the accelerator to skip parsing the *short_term_ref_pic_set
(num_short_term_ref_pic_sets) structure.
Also modify fill_picture to take the parser info as a parameter,
in order to get slice_hdr->short_term_ref_pic_set_size.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1886>
Fix a small race where a group can receive stream-start
and post a pending buffering message just as another
thread posts a different buffering message, causing them
to be received by the application out of order. In the
worst case, this leads to the application receiving a
stale 99% buffering message and going back to buffering
right after the 100% buffering message.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1840>
Apparently GtkSharp expects each object to have only one ToggleRef at any
time. Assigning element.Handle into Raw has the consequence that a second
ToggleRef attempts to get created but fails on a g_object_unref () that
breaks a GObject assertion:
toggle_refs_notify: assertion failed: (tstack.n_toggle_refs == 1)
This is because toggle references should be removed with
g_object_remove_toggle_ref(), not a simple unref().
In order to avoid duplicate toggle references, introduce
ElementFactory.MakeRaw(), which creates a GstElement without its
accompanying C# object. The returned raw pointer can be assigned into
another GLib.Object without trouble.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1885>
And use the output segment position for the outgoing timestamp while it
is still unset. This is needed to delay the calculation of `output_ts_offset` until
we actually have a usable timestamp, as tsmux will output a few initial
packets while `last_ts` is still unset.
Without this, the calculation would use the initial `0` value, which did
not have the intended effect of making VBR mode behave like CBR mode,
but always calculated an offset equal to the selected start time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1884>
Fix suppression to support release and debug builds.
Here is the debug build call stack:
```
==10707== by 0x48B5520: g_malloc (gmem.c:106)
==10707== by 0x48D19DC: g_slice_alloc (gslice.c:1069)
==10707== by 0x48D3947: g_slist_copy_deep (gslist.c:619)
==10707== by 0x48D38B8: g_slist_copy (gslist.c:567)
==10707== by 0x4ADC90B: gst_debug_remove_with_compare_func (gstinfo.c:1504)
```
In the release build, `g_slist_copy (gslist.c:567)` got inlined:
```
==15419== by 0x48963E0: g_malloc (gmem.c:106)
==15419== by 0x48AA382: g_slice_alloc (gslice.c:1069)
==15419== by 0x48AB732: g_slist_copy_deep (gslist.c:619)
==15419== by 0x4A39B8F: gst_debug_remove_with_compare_func (gstinfo.c:1504)
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1814>
When doing only a single stream of audio/video this hardly matters,
but when doing many at the same time, the fact that you have to get
hold of the GLib global type-system lock every time you process a buffer
means that there is a limit to how many streams you can process in
parallel.
Luckily the fix is very simple: do a cast rather than a full
type-check.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1873>
There is a chance that pool->buffers[index] has BUFFER_STATE_QUEUED set but
has not actually been queued yet, which leaves pool->buffers[index] still NULL.
At this point, if pool_streamoff releases all buffers in the BUFFER_STATE_QUEUED
state regardless of whether the buffer is NULL or not, it will cause a segfault.
To fix this, also check the buffer pointer when streamoff releases buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1842>
When tunneling over HTTP, if the connection on the second channel happens
before the control timer is created, we may trigger an assert in
rtsp_ctrl_timeout_remove(). Avoid that by taking the priv->lock before
attaching the client thread to the context.
Fixes #1025
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1867>
When there is vpp scaling downstream, we need to make sure SFC is not
triggered, because vpp may fall into passthrough mode, which causes
the decoder negotiation to create src caps with the vpp-scaled width/height.
This patch includes the bitstream's original size in the first query with
downstream in gst_msdkdec_src_caps, the same as what we do for the
color format in this query. This is to ensure SFC scaling starts to
work only when downstream directly asks for a different size instead of
through vpp.
Note that SFC scaling here follows the same behavior as msdkvpp:
if the user only changes width or height, e.g. dec ! video/x-raw,width=xx !,
the height will be modified to the value which preserves the original DAR.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1838>
* Hide GstCudaMemory member variables
* Make GstCudaAllocator object GstCudaContext independent
* Set offset/stride of memory correctly via video meta
* Drop GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT support.
This implementation does not actually support custom alignment,
because we allocate device memory via cuMemAllocPitch,
whose alignment is essentially not controllable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1834>
* cudaupload/download
- Specify only the formats we can actually handle with the
nvcodec elements, not all video formats
- Support CUDA output for download and CUDA input for upload in order
to make passthrough possible, like other upload/download elements.
* cudabasetransform
- Reset the conversion element if upstream CUDA memory
holds a different CUDA context and the element can accept it.
This is the same behavior as the corresponding d3d11 filter elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1834>
This was to support very old V4L2 kernels. As we have moved to DMABuf and can
now detach buffers on renegotiation, the issue this tries to fix no longer
exists. The risk of blocking the application indefinitely does still exist
though.
Fixes #1070
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1861>
When we negotiate with downstream, we should use the intersection of the
input and output caps to decide the alignment and stream format.
The current code just uses the input caps, which may lack the stream
format.
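Conceptually (a sketch, not the element code):

```c
#include <gst/gst.h>

/* Sketch: decide alignment/stream-format from the intersection of input
 * and output caps instead of the input caps alone, so fields that only
 * downstream provides (such as stream-format) are taken into account. */
static GstCaps *
caps_for_negotiation (GstCaps * input_caps, GstCaps * output_caps)
{
  return gst_caps_intersect (input_caps, output_caps);
}
```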
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1837>
The demuxer now outputs the AV1 stream in "tu" alignment, so we do not need
to detect the input alignment. But the annex-b stream format is not recognized
by the demuxer, so we still need to detect that stream format for the first
input.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1837>
Decoders that required frame alignment and didn't have an associated
alpha decoder were skipped. This is because the parser was constructing
caps based on the software alpha decoder, which specifies super-frame
alignment.
Iterate over the caps to filter the ones that have a matching codec-alpha, with
the semantic that a missing codec-alpha field means codec-alpha=false. Then, if
everything was removed, fall back to the original caps, so that the first
non-alpha decoder will be picked.
Fixes #820
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1855>
Currently, for copying the coded buffer into a GStreamer buffer, the
coded buffer is mapped twice: once for getting the size, and later
for doing the actual copy. We can avoid this by doing the copy directly
in the element rather than in the general encoder object.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1845>
We need to always add the RTX/RED/ULPFEC elements as rtpbin will only
call us once to request aux/fec senders/receivers.
We also need to regenerate the media section of the SDP instead of
blindly copying from the previous offer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1762>
When the size of the V4L2 capture or output is changed with VIDIOC_S_FMT,
the device is only required to update the composition window to fit
inside the new frame size. This can result in captured data only being
updated on a portion of the frame after a resize.
Update the composition window to the default value determined by the
V4L2 device driver whenever the format is changed, to make sure that
all image data is composed to its full size.
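A sketch of that reset via the selection API (error handling trimmed):

```c
#include <linux/videodev2.h>
#include <sys/ioctl.h>

/* Sketch: after VIDIOC_S_FMT, reset the compose rectangle to the driver's
 * default so the full frame is composed again. */
static int
reset_compose_to_default (int fd)
{
  struct v4l2_selection sel = { 0 };

  sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  sel.target = V4L2_SEL_TGT_COMPOSE_DEFAULT;
  if (ioctl (fd, VIDIOC_G_SELECTION, &sel) < 0)
    return -1;

  sel.target = V4L2_SEL_TGT_COMPOSE;
  return ioctl (fd, VIDIOC_S_SELECTION, &sel);
}
```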
Fixes #765
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1806>
As formally specified in RFC 8851, each rid description is placed in its
own caps field in the structure.
This is very similar to the already existing extmap-$id sdp<->caps
transformations.
The mapping is as follows:
a=rid:0 direction ';'-separated params
where direction is either 'send' or 'recv'
gets put into a caps structure like so:
rid-0=(string)<"direction","param1","param2",etc>
If there are no rid parameters then the caps structure is generated to
only contain the direction as a single string like:
rid-0=(string)direction
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1760>
Not having this field is equivalent to it being 1/1, so treat
it like that. The generic caps functions are not aware of these
semantics and would consider the caps different, causing a negotiation
failure when caps change from caps with the field to caps without it, or
the other way around.
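A sketch of the normalization idea (assuming writable caps; not the
actual element code):

```c
#include <gst/gst.h>

/* Sketch: treat a missing pixel-aspect-ratio as 1/1 before comparing
 * old and new caps. The caps must be writable. */
static void
ensure_par (GstCaps * caps)
{
  GstStructure *s = gst_caps_get_structure (caps, 0);

  if (!gst_structure_has_field (s, "pixel-aspect-ratio"))
    gst_structure_set (s, "pixel-aspect-ratio", GST_TYPE_FRACTION, 1, 1, NULL);
}
```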
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1826>
Not having these fields is equivalent to them being mono/0, so treat
them like that. The generic caps functions are not aware of these
semantics and would consider the caps different, causing a negotiation
failure when caps change from caps with the fields to caps without them,
or the other way around.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1826>
Otherwise fetching of the offer will fail with a cryptic error:
```
Traceback (most recent call last):
File "/../gstreamer/subprojects/gst-examples/webrtc/sendrecv/gst/webrtc_sendrecv.py", line 56, in on_offer_created
offer = reply['offer']
TypeError: 'Structure' object is not subscriptable
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1821>
```
ERROR peer '5762' not found
Traceback (most recent call last):
File "/../gstreamer/subprojects/gst-examples/webrtc/sendrecv/gst/webrtc_sendrecv.py", line 190, in <module>
res = loop.run_until_complete(c.loop())
File "/usr/lib64/python3.10/asyncio/base_events.py", line 641, in run_until_complete
return future.result()
File "/../gstreamer/subprojects/gst-examples/webrtc/sendrecv/gst/webrtc_sendrecv.py", line 155, in loop
self.close_pipeline()
File "/../gstreamer/subprojects/gst-examples/webrtc/sendrecv/gst/webrtc_sendrecv.py", line 142, in close_pipeline
self.pipe.set_state(Gst.State.NULL)
AttributeError: 'NoneType' object has no attribute 'set_state'
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1821>
```
File "/../gstreamer/subprojects/gst-examples/webrtc/sendrecv/gst/webrtc_sendrecv.py", line 189, in <module>
loop.run_until_complete(c.connect())
File "/usr/lib64/python3.10/asyncio/base_events.py", line 641, in run_until_complete
return future.result()
File "/../gstreamer/subprojects/gst-examples/webrtc/sendrecv/gst/webrtc_sendrecv.py", line 40, in connect
self.conn = await websockets.connect(self.server, ssl=sslctx)
File "/home/nirbheek/.local/lib/python3.10/site-packages/websockets/legacy/client.py", line 650, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib64/python3.10/asyncio/tasks.py", line 445, in wait_for
return fut.result()
File "/home/nirbheek/.local/lib/python3.10/site-packages/websockets/legacy/client.py", line 654, in __await_impl__
transport, protocol = await self._create_connection()
File "/usr/lib64/python3.10/asyncio/base_events.py", line 1080, in create_connection
transport, protocol = await self._create_connection_transport(
File "/usr/lib64/python3.10/asyncio/base_events.py", line 1110, in _create_connection_transport
await waiter
File "/usr/lib64/python3.10/asyncio/sslproto.py", line 631, in _on_handshake_complete
raise handshake_exc
File "/usr/lib64/python3.10/asyncio/sslproto.py", line 676, in _process_write_backlog
ssldata = self._sslpipe.do_handshake(
File "/usr/lib64/python3.10/asyncio/sslproto.py", line 116, in do_handshake
self._sslobj = self._context.wrap_bio(
File "/usr/lib64/python3.10/ssl.py", line 526, in wrap_bio
return self.sslobject_class._create(
File "/usr/lib64/python3.10/ssl.py", line 865, in _create
sslobj = context._wrap_bio(
ssl.SSLError: Cannot create a client socket with a PROTOCOL_TLS_SERVER context (_ssl.c:801)
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1821>
asyncio.get_event_loop() will not implicitly create a new event loop
in a future version of Python, so we need to do that explicitly.
```
webrtc_sendrecv.py:188: DeprecationWarning: There is no current event loop
loop = asyncio.get_event_loop()
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1821>
If the Tracks element was already parsed from the SeekEntry, don't
parse it a second time and recreate the tracks, as this
loses any tags that were read using the seek table.
If a genuinely new Tracks element is found, do read it,
as it is needed for MSE support.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1798>
Remove the symbolic link `gst-uninstalled`, which points to `gst-env`.
`uninstalled` is the old name, and the project should stick to a
single name for the procedure.
Remove the term from all the files; exceptions are variables from
dependencies, like `uninstalled_variables` from pkgconfig and
`meson-uninstalled`.
Adjust mentions of the script in the documentation and README.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1743>
Do not maintain similar build instructions within each gst-plugins-*
subproject and the subproject/gstreamer subproject. Use the build
instructions from the mono-repository and link to them via hyperlink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1743>
This is a new VA-API implementation of an H264 encoder.
It can control the GOP and parameter settings, while MV searching,
VCL and the rate control algorithm are implemented by the VA drivers and HW.
It supports most of the common usage options of H264, but still lacks
look-ahead, field coding, B-frame weighted prediction, etc.
Co-authored-by: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1051>
The g_queue_clear_full() and g_array_copy() functions in GLib may
not be available with the GLib version we currently check for, so we add
helper functions to wrap them.
These should be deleted once the GLib version requirement is bumped.
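A minimal sketch of such a wrapper for g_queue_clear_full() (assuming it
became available in GLib 2.60; the helper name is illustrative):

```c
#include <glib.h>

#if !GLIB_CHECK_VERSION (2, 60, 0)
/* Fallback for g_queue_clear_full() on older GLib versions. */
static inline void
compat_queue_clear_full (GQueue * queue, GDestroyNotify free_func)
{
  g_list_free_full (queue->head, free_func);
  g_queue_init (queue);
}

#define g_queue_clear_full compat_queue_clear_full
#endif
```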
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1051>
The LOAD macro relies on m7 being zero for interleaving purposes. Using LOAD
on the m7 register makes it interleave with its new contents instead of
with 0.
The effect of this bug was bobbing on some static lines that appeared
over fast-moving content.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1816>
The fd had different meanings on Windows:
POSIX read and write use the fd as a file descriptor, while
gst_poll uses the fd as a WSASocket.
This patch uses a WSASocket by default on Windows. This is a temporary
measure, because IPC has many different implementations; there may be a
better way in the future.
See #1044
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1791>
A dynamic resolution change starts when the decoder detects a coded frame
with one or more of the following parameters different from those previously
established (and reflected by the corresponding queries):
1. coded resolution (OUTPUT width and height),
2. visible resolution (selection rectangles),
3. the minimum number of buffers needed for decoding,
4. the bit-depth of the bitstream.
Although the GStreamer parser has already parsed the stream resolution,
there are still cases where we need to handle the resolution change event:
1. the bit-depth is different from the negotiated format,
2. the capture buffer count no longer meets the demand,
3. there are hardware limitations where the decoded resolution may
be larger than the display size. For example, the stream size is
1920x1080, but some VPUs may decode it to 1920x1088.
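On the V4L2 side, detecting such an event typically looks like this sketch
(error handling trimmed):

```c
#include <linux/videodev2.h>
#include <sys/ioctl.h>

/* Sketch: subscribe to and check for a source-change (resolution) event. */
static int
subscribe_source_change (int fd)
{
  struct v4l2_event_subscription sub = { 0 };

  sub.type = V4L2_EVENT_SOURCE_CHANGE;
  return ioctl (fd, VIDIOC_SUBSCRIBE_EVENT, &sub);
}

static int
resolution_changed (int fd)
{
  struct v4l2_event ev = { 0 };

  if (ioctl (fd, VIDIOC_DQEVENT, &ev) < 0)
    return 0;
  return ev.type == V4L2_EVENT_SOURCE_CHANGE &&
      (ev.u.src_change.changes & V4L2_EVENT_SRC_CH_RESOLUTION);
}
```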
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1381>
Do some refactoring in v4l2videodec so that it can support the
dynamic resolution change event:
1. wrap the capture setup process in a function,
as the decoder needs to set up the capture again when
a dynamic resolution change event is received;
2. move the function "remove_padding".
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1381>
This allows downstream of a payloader to know the RTP header's marker
flag without first having to map the buffer and parse the RTP header.
Especially inside RTP header extension implementations this can be
useful to decide which packet corresponds to e.g. the last packet of a
video frame.
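For context, this is the map-and-parse step that downstream can now skip
(sketch):

```c
#include <gst/rtp/gstrtpbuffer.h>

/* Sketch: the costlier way of learning the RTP marker bit, which requires
 * mapping the buffer and parsing the header. */
static gboolean
buffer_has_marker (GstBuffer * buf)
{
  GstRTPBuffer rtp = GST_RTP_BUFFER_INIT;
  gboolean marker = FALSE;

  if (gst_rtp_buffer_map (buf, GST_MAP_READ, &rtp)) {
    marker = gst_rtp_buffer_get_marker (&rtp);
    gst_rtp_buffer_unmap (&rtp);
  }
  return marker;
}
```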
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1776>
osxaudiodeviceprovider now probes devices more than once to determine
whether the device can function as both an input AND an output device.
Previously, if the device provider detected that a device had any output
capabilities, it was treated solely as an Audio/Sink. This causes issues
for devices that have both input and output capabilities (for example, USB
interfaces for professional audio have both input and output channels). Such
devices are now listed as both an Audio/Sink and an Audio/Source.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1385>
The RTP payload seems to be required, as it carries the frame count
information. Also, gst_rtp_base_payload_allocate_output_buffer had
an incorrect second argument.
Strangely, some devices like the Shanling MP4 and Sony XM3 would still
work without this, while some like the Sony XM4 do not.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1797>
While this is slightly more expensive (~48% slower per random number), it
does not cause any measurable difference when running through a complete
audio conversion pipeline.
On the other hand, its random numbers are of much higher quality, and on
spectrograms of 32 bit to 24 bit conversion the difference is clearly
visible.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1729>
The instant-rate value in the TrickMode enum is a
flag, but the other values are not. Move instant-rate
to the end of the enum and give it a value large enough
for it to be used without modifying the trick-mode
setting.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1788>