The original BUNDLE support commit placed a queue after the
rtpfunnel that combines streams, but I don't see a good reason for
it. It has default settings, so if network output is slow, it might
accidentally store up to 1 second of pending data, increasing
latency.
Remove it in favour of doing any necessary buffering before
webrtcbin. If it turns out that there is a reason for it to
exist, the limits should probably be configurable and small.
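If so, a minimal sketch of what small, explicit limits could look
like (values are illustrative, not from the patch):

```c
GstElement *queue = gst_element_factory_make ("queue", NULL);

/* Keep the limits small and explicit instead of the defaults,
 * which allow up to 1 second of pending data */
g_object_set (queue,
    "max-size-time", 100 * GST_MSECOND,  /* illustrative 100ms cap */
    "max-size-buffers", 0,               /* disable count-based limit */
    "max-size-bytes", 0,                 /* disable byte-based limit */
    NULL);
```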
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3437>
Currently, when the rtspsrc property add-reference-timestamp-metadata
is set to true, a downstream rtph264depay element will attach multiple
copies of the same GstReferenceTimestampMeta to the depayloaded media
buffers. This can have significant performance impacts further
downstream in a pipeline like the following:
rtspsrc add-reference-timestamp-metadata=true ! rtph264depay ! h264parse ! ... ! rtph264pay ! ...
For example, if there are 10 packet buffers for a frame of RTP H.264
video, each of those packet buffers will contain the same reference
timestamp meta. The rtph264depay element will then attach all 10
metas to the depayloaded frame. And later, when we payload the frame
buffer again for proxying, we end up with 10 more buffers, each with
10 instances of the same metadata. Allocating/deallocating 100+
instances of metadata @ 30fps for multiple streams has a pretty large
performance impact.
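A minimal sketch of the dedup idea (illustrative, not necessarily the
exact patch): only attach a reference timestamp meta if an equivalent
one is not already present on the buffer. The `reference` caps are
assumed, e.g. the "timestamp/x-ntp" caps used by rtspsrc.

```c
static void
maybe_add_reference_timestamp (GstBuffer * buffer, GstCaps * reference,
    GstClockTime timestamp, GstClockTime duration)
{
  GstReferenceTimestampMeta *meta =
      gst_buffer_get_reference_timestamp_meta (buffer, reference);

  /* Skip buffers that already carry this timestamp for this reference */
  if (meta != NULL && meta->timestamp == timestamp)
    return;

  gst_buffer_add_reference_timestamp_meta (buffer, reference, timestamp,
      duration);
}
```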
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1578
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3431>
The tile width in pixels is not always available. Notably, for the
8L128 10-bit format, the tile is 8x128 bytes and the pixel format is
fully packed. A tile line is 8 bytes (64 bits), which at 10 bits per
pixel is 6.4 pixels: each line of a tile contains at least 6 whole
pixels, but also holds some bits of a pixel from the same line in the
previous or next tile.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3424>
In the current tile representation, only tiles with power-of-two
width and height in bytes are supported. This limitation prevents
adding more complex tile formats.
In this patch, we deprecate tile_ws and tile_hs in GstVideoFormatInfo
and replace them with an array of GstVideoTileInfo. Each plane's tiles
are then described with their width/height in pixels, line stride and
total size.
The gst_video_format_info_get_tile_sizes() helper, which depends on
the deprecated API, is also removed. This is safe since it was never
part of any stable release.
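As an illustration of the new description (field names mirror the
commit text; this is a hypothetical sketch, not necessarily the final
GstVideoTileInfo layout), tiles no longer need power-of-two sizes:

```c
typedef struct {
  guint width;   /* tile width in pixels */
  guint height;  /* tile height in pixels */
  guint stride;  /* bytes per tile line */
  guint size;    /* total size of one tile in bytes */
} TileInfo;

/* Byte offset of the tile containing pixel (x, y) in a tiled plane */
static gsize
tile_offset (const TileInfo * t, guint x, guint y, guint tiles_per_row)
{
  return (gsize) ((y / t->height) * tiles_per_row + (x / t->width)) * t->size;
}
```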
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3424>
Setting force_live lets an aggregator behave as if at least one of
its sink pads was connected to a live source, which should let us get
rid of the fake live test source hack that is probably present in
dozens of applications by now.
+ Expose API for subclasses to set and get force_live
+ Expose force-live properties on GstVideoAggregator and GstAudioAggregator
+ Add a simple test
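A minimal usage sketch, assuming the property as exposed on a
GstVideoAggregator subclass such as compositor:

```c
GstElement *comp = gst_element_factory_make ("compositor", NULL);

/* Behave as a live aggregator even with no live source connected;
 * typically set before the element starts */
g_object_set (comp, "force-live", TRUE, NULL);
```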
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3008>
The only case where we definitely need to write a new trun is when the
data_offset value does not match the end of the list of entries.
Multiple trun atoms are also required when interleaving multiple
streams together.
All other cases can be covered by adding more entries to the existing
trun atom.
Fixes playback of fragmented mp4 in ffplay and Chrome when using e.g.
  mp4mux fragment-duration=1000 fragment-mode=dash-or-mss
or
  mp4mux fragment-duration=1000 fragment-mode=first-moov-then-finalise
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3426>
The attribute's value should use the value returned by get_attribute
for VAConfigAttribRTFormat, since VAProfileHEVCMain10, on AMD Mesa
Gallium, can process either VA_RT_FORMAT_420 or VA_RT_FORMAT_420_10.
This case isn't considered in the gstreamer-vaapi design, where the
encoder's src pads expose only 4:2:0 color formats and no 4:2:0
10-bit ones. So this is a workaround for the issue until the new
vah265enc is released.
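For reference, a sketch of the underlying libva query this relies on
(standard libva calls; error handling omitted):

```c
#include <va/va.h>

static int
hevc_main10_supports_420_10 (VADisplay dpy)
{
  VAConfigAttrib attrib = { .type = VAConfigAttribRTFormat };

  vaGetConfigAttributes (dpy, VAProfileHEVCMain10, VAEntrypointEncSlice,
      &attrib, 1);

  /* On AMD Mesa Gallium the returned bitmask can include both 4:2:0
   * and 4:2:0 10-bit */
  return (attrib.value & VA_RT_FORMAT_YUV420_10BPP) != 0;
}
```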
Signed-off-by: Boyuan Zhang <boyuan.zhang@amd.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3397>
This was the intention from the start, just took me a few years *cough* to
actually implement it properly.
Gapless playback is handled by re-using the same decoders and sinks
(when present) as much as possible, and only pre-rolling the switch at
the source level (with buffering if/when needed).
In order to enable "gapless" playback, the "next" URI can be set at
any time between the moment the `about-to-finish` signal is emitted
and the moment the current play item is done. Previously this could
only be done during the signal emission itself.
This new implementation also allows "instantaneous URI switching", a
much faster way of switching playback entries while re-using as many
elements as possible. To enable it, set the `instant-uri` property to
TRUE (the default is FALSE).
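A minimal usage sketch with playbin3 (`next_uri` and `new_uri` are
assumed application values):

```c
static void
on_about_to_finish (GstElement * playbin, gpointer user_data)
{
  /* Queue the next item; with this change the "uri" property can also
   * be set at any point before the current item finishes */
  g_object_set (playbin, "uri", (const gchar *) user_data, NULL);
}

g_signal_connect (playbin, "about-to-finish",
    G_CALLBACK (on_about_to_finish), next_uri);

/* For instantaneous switching instead of a gapless transition: */
g_object_set (playbin, "instant-uri", TRUE, NULL);
g_object_set (playbin, "uri", new_uri, NULL);
```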
API: instant-uri property
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
When dealing with gapless input (i.e. streams with a changing
group-id in GST_EVENT_STREAM_START), we need to take the elapsed
running time (if applicable) into account in order to properly
calculate levels and output times. Without doing this, all incoming
data from future groups would be considered "late" and would be
consumed immediately.
This does **NOT** modify the actual segment and buffer times, and is only used
internally.
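Conceptually (hypothetical names, not the actual multiqueue code):

```c
/* "group_offset" is the running time accumulated by groups that
 * already went through; it is only used for level/output-time
 * calculations and never written back to segments or buffers */
static GstClockTime
adjusted_running_time (GstClockTime running_time, GstClockTime group_offset)
{
  if (!GST_CLOCK_TIME_IS_VALID (running_time))
    return GST_CLOCK_TIME_NONE;

  /* Shift new groups forward so their data is not considered "late"
   * and consumed immediately */
  return running_time + group_offset;
}
```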
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
A DecodebinInput (and its backing parsebin or identity element) is no
longer released when the corresponding sink pad is unlinked, but only
when the pad is released.
The parsebin element will be reset:
* if the incoming caps are incompatible (as was the case before)
* or when unlinking, if it was previously pull-based
This opens the way to using decodebin3 with changing inputs
(i.e. gapless)
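A minimal sketch of the distinction (`srcpad` is an assumed upstream
pad):

```c
GstPad *sinkpad = gst_element_request_pad_simple (decodebin3, "sink_%u");
gst_pad_link (srcpad, sinkpad);

/* Unlinking keeps the DecodebinInput (and its parsebin) around for
 * reuse with a new input */
gst_pad_unlink (srcpad, sinkpad);

/* Only releasing the request pad actually frees it */
gst_element_release_request_pad (decodebin3, sinkpad);
gst_object_unref (sinkpad);
```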
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>
Introduce the option of having the streams parsed with `parsebin` for
compatible sources (i.e. those which are eligible for buffering in the
same way as before this commit).
Parsing the inputs directly allows more accurate buffering control:
* instead of relying on potential bitrate information coming from
  somewhere
* and *without* needing to be linked downstream
If `parse-streams` is activated and the stream is eligible for
buffering, a `multiqueue` is used on the output of `parsebin` in order
to handle the buffering.
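A minimal usage sketch, assuming the property as exposed on
urisourcebin (the URI value is illustrative):

```c
GstElement *src = gst_element_factory_make ("urisourcebin", NULL);

g_object_set (src,
    "uri", "file:///path/to/media.mp4",
    "parse-streams", TRUE,   /* parse with parsebin, buffer via multiqueue */
    NULL);
```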
API: `parse-streams`
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2784>