The tags and caps were leaked for unknown streams. I'm not sure they'd be
valid in that case, but better safe than sorry.
The tags' ownership is transferred when calling `gst_adaptive_demux_track_new()`,
so unreffing them afterwards was a mistake.
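A minimal sketch of the corrected pattern (argument list and names are
illustrative; only the ownership handling is the point):
```
/* Sketch only: tags (and caps) are handed over with full ownership,
 * so the caller must not unref them afterwards. */
track = gst_adaptive_demux_track_new (demux, stream_type, flags,
    stream_id, caps, tags);
/* previously the code also unreffed tags here, releasing a reference
 * it no longer owned */
```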
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5714>
If this property is enabled then the jitterbuffer will do the normal PTS
calculations according to the configured mode instead of making use of
the RFC7273 media clock.
The timestamp calculated from the RFC7273 media clock will then only be
stored in the reference timestamp meta, if adding that meta is enabled.
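For reference, a downstream consumer could read that timestamp back
roughly as follows (a sketch; the `timestamp/x-ntp` reference caps string
is an assumption, check the element documentation):
```
/* Sketch: read the media-clock timestamp back from the meta. */
static GstClockTime
get_media_clock_time (GstBuffer * buf)
{
  GstCaps *ref = gst_caps_from_string ("timestamp/x-ntp");
  GstReferenceTimestampMeta *meta =
      gst_buffer_get_reference_timestamp_meta (buf, ref);
  GstClockTime ts = meta ? meta->timestamp : GST_CLOCK_TIME_NONE;

  gst_caps_unref (ref);
  return ts;
}
```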
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5512>
When this property is used, it is assumed that the system clock is
synced closely enough to the media clock used by an RFC7273 stream.
As long as both clocks are at most a few seconds apart, this gives the
correct results and avoids having to create an actual network clock
that has to sync first.
If the system clock is actually synchronized to the media clock then
everything behaves exactly the same; otherwise the reference timestamp
meta will be correct but the buffer timestamps will be off by the
difference between the two clocks.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5512>
Do more checks for clock equality than just comparing pointers. The same
NTP/PTP clock might be used as the pipeline clock but via a new instance,
so instead also check which clock they are synced to.
Also handle setting / resetting of the media clock and pipeline clock
correctly by resetting the media clock's state accordingly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5512>
Because we treat raw audio chunks/samples as keyframes, they were interfering
with seek time adjustment.
This became apparent when the accompanying video stream was I-frame only,
for example ProRes.
Since raw audio streams can be seeked freely, it's fine to just ignore them here,
giving priority to the real keyframes in the video stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4946>
In snapshot mode pngenc should output exactly one frame
and then return FLOW_EOS to upstream. If upstream sends
more input frames before shutting down, it should keep
returning FLOW_EOS but not output any more encoded frames.
After a flushing seek it should output frames again though.
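As a sketch of that contract (illustrative names, not the actual pngenc
code; frame bookkeeping and error handling omitted):
```
/* Sketch: frame handler behaviour in snapshot mode. */
if (self->snapshot) {
  if (self->frame_sent)
    return GST_FLOW_EOS;        /* swallow further input, encode nothing */

  push_encoded_frame (self);    /* exactly one encoded frame */
  self->frame_sent = TRUE;
  return GST_FLOW_EOS;          /* tell upstream we're done */
}
/* frame_sent is cleared again on a flushing seek so that a new
 * snapshot can be produced afterwards */
```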
Fixes #3069.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5546>
The default 2MB ENCODED_BUFFER_SIZE can't support some 4K video playback. We now
detect the driver-reported maximum resolution and choose an appropriate
default bitstream size accordingly. For 4K video this results in a buffer of
around 4MB instead of 2MB.
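An illustrative heuristic of that shape (not necessarily the exact formula
used by the element):
```
/* Illustration only: scale the default bitstream buffer with the
 * driver-reported maximum resolution instead of hardcoding 2MB. */
static gsize
default_bitstream_size (guint max_width, guint max_height)
{
  gsize size = ((gsize) max_width * max_height) / 2;

  /* never go below the previous 2MB default */
  return MAX (size, 2 * 1024 * 1024);
}
/* e.g. 3840x2160 gives ~4MB, matching the figure mentioned above */
```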
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4549>
After rendering a QML scene the qml6glsrc element copies the contents of
the scene to a GStreamer buffer. This happens on the Qt render thread.
Then it attaches a sync point to the destination buffer. This sync point
must be awaited by other threads which use the buffer later on. The
current implementation relies on the downstream elements to wait for the
sync point. However, there are situations where this does not work.
GstBaseTransform, for example, copies the buffer metadata (which overwrites
the sync point without waiting for it) *before* waiting for the sync point.
This commit waits for the sync point inside the qml6glsrc element before
sending it downstream. The wait command is issued on the streaming
thread with the pipeline OpenGL context, i.e. it will synchronize with
the GStreamer OpenGL thread.
This is a port of the original fix for the qmlglsrc element.
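In practice the wait boils down to something like this (a sketch; variable
names are illustrative):
```
/* Sketch: wait for the buffer's GL sync point on the streaming thread,
 * using the pipeline's OpenGL context. */
GstGLSyncMeta *sync_meta = gst_buffer_get_gl_sync_meta (buffer);

if (sync_meta)
  gst_gl_sync_meta_wait (sync_meta, pipeline_gl_context);
```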
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5519>
After rendering a QML scene the qmlglsrc element copies the contents of
the scene to a GStreamer buffer. This happens on the Qt render thread.
Then it attaches a sync point to the destination buffer. This sync point
must be awaited by other threads which use the buffer later on. The
current implementation relies on the downstream elements to wait for the
sync point. However, there are situations where this does not work.
GstBaseTransform, for example, copies the buffer metadata (which overwrites
the sync point without waiting for it) *before* waiting for the sync point.
This commit waits for the sync point inside the qmlglsrc element before
sending it downstream. The wait command is issued on the streaming
thread with the pipeline OpenGL context, i.e. it will synchronize with
the GStreamer OpenGL thread.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5506>
With a single regular image file path provided (without %05d),
the element was stuck in an endless loop counting the frames in
gst_image_sequence_src_count_frames().
This allows the element to output any single image file
for a given number of buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5471>
If the v4l2videoenc receives a QUERY_ALLOCATION, it must not propose a
currently used pool, because it cannot be sure that the allocation query came
from exactly the same upstream element. The QUERY_ALLOCATION will not contain
the internal OUTPUT pool.
The upstream element (the basesrc) detects that the newly proposed pool differs
from the old pool. It deactivates the old pool and switches to the new pool.
If there was a format change, a new OUTPUT buffer pool will be allocated in
gst_v4l2_object_set_format_full() and the CAPTURE task will be stopped to switch
the format. If there hasn't been a format change,
gst_v4l2_object_set_format_full() will not be called. The old pool will be kept
and reused.
Without a format change, the processing task continues running.
This leads to the situation that the processing task is running, but the OUTPUT
buffer pool (the old pool) is deactivated. Therefore, the encoder is not able to
get buffers from the OUTPUT pool and encoding cannot continue.
This situation can be triggered by sending a RECONFIGURE event without a format
change.
Resolve this situation by ensuring that the OUTPUT buffer pool is always
activated when frames arrive at the encoder.
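The core of the fix amounts to something like this (a sketch; the pool
variable is illustrative and error handling is elided):
```
/* Sketch: make sure the OUTPUT pool is active again before queuing
 * frames, since an ALLOCATION query may have deactivated it. */
if (!gst_buffer_pool_is_active (pool)) {
  if (!gst_buffer_pool_set_active (pool, TRUE))
    return GST_FLOW_ERROR;
}
```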
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4235>
There is a CAPTURE pool in the same function. While the CAPTURE pool is called
cpool, using pool for the OUTPUT pool is confusing.
Using opool for the OUTPUT pool makes it more obvious which pool is used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4235>
We were already converting the pad last timestamp to running time but
not the segment position.
This segment position is used by gst_aggregator_simple_get_next_time()
to compute the waiting time when aggregating.
Those waiting times were wrong in my live pipeline using the system
clock, resulting in the aggregator never waiting at all.
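As a sketch of the idea (illustrative, not the exact code):
```
/* Sketch: convert the segment position to running time as well,
 * mirroring what was already done for the pad's last timestamp. */
GstClockTime pos_rt = gst_segment_to_running_time (&segment,
    GST_FORMAT_TIME, segment.position);
```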
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5460>
While adding arbitrary tile support, a round-up operation was badly
converted. This caused the Y component of the stride to be 0, which
eventually led to a crash in glupload preceded by the following
assertion:
gst_gl_buffer_allocation_params_new: assertion 'alloc_size > 0' failed
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5458>
There is a race condition where transfer has not been submitted yet while the
request is cancelled which leads to the transfer state going back to
`DOWNLOAD_REQUEST_STATE_OPEN` and the user of the request to get signalled about
its completion (and the task actually happening after it was cancelled) leading
to assertions and misbehaviours.
To ensure that this race can't happen, we start differentiating between the
UNSENT and CANCELLED states: in the normal case, when entering `submit_request`,
the state is UNSENT, and at that point we need to know that this is not because
the request has been cancelled.
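In pseudo form (a sketch; the signature and the state field handling are
simplified):
```
/* Sketch: a cancelled request must not be mistaken for a fresh
 * UNSENT one. */
static void
submit_request (DownloadRequest * request)
{
  if (request->state == DOWNLOAD_REQUEST_STATE_CANCELLED)
    return;                     /* cancelled before submission: drop it */

  g_assert (request->state == DOWNLOAD_REQUEST_STATE_UNSENT);
  /* ... actually submit the transfer ... */
}
```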
In practice this case led to an assertion in
`gst_adaptive_demux2_stream_begin_download_uri`: in a previous call to
`gst_adaptive_demux2_stream_stop_default` we cancelled the previous request and
set up a new one while it had not been submitted yet, then got an
`on_download_complete` callback from that previous, cancelled request, and then
tried to do `download_request_set_uri` on a request that was still `in_use`,
leading to something like:
```
#0: 0x0000000186655ec8 g_assert (request->in_use == FALSE)assert.c:0
#1: 0x00000001127236b8 libgstadaptivedemux2.dylib`download_request_set_uri(request=0x000060000017cc00, uri="https://XXX/chunk-stream1-00002.webm", range_start=0, range_end=-1) at downloadrequest.c:361
#2: 0x000000011271cee8 libgstadaptivedemux2.dylib`gst_adaptive_demux2_stream_begin_download_uri(stream=0x00000001330f1800, uri="https://XXX/chunk-stream1-00002.webm", start=0, end=-1) at gstadaptivedemux-stream.c:1447
#3: 0x0000000112719898 libgstadaptivedemux2.dylib`gst_adaptive_demux2_stream_load_a_fragment [inlined] gst_adaptive_demux2_stream_download_fragment(stream=0x00000001330f1800) at gstadaptivedemux-stream.c:0
#4: 0x00000001127197f8 libgstadaptivedemux2.dylib`gst_adaptive_demux2_stream_load_a_fragment(stream=0x00000001330f1800) at gstadaptivedemux-stream.c:1969
#5: 0x000000011271c2a4 libgstadaptivedemux2.dylib`gst_adaptive_demux2_stream_next_download(stream=0x00000001330f1800) at gstadaptivedemux-stream.c:2112
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5435>
The counter was using a signed 8 bit integer, which was overflowing
after 127 entries. That was then passed as an unsigned 32 bit integer to
libflac, which caused it to be converted to a huge unsigned number.
That then caused an invalid memory access inside libflac.
As a bonus, signed integer overflow is undefined behaviour.
Instead, use an unsigned 8 bit integer. Once this overflows, the existing
code already catches it and stops adding the cue. While FLAC__metadata_object_cuesheet_insert_track()
takes an unsigned 32 bit integer for the track number, FLAC__StreamMetadata_CueSheet_Track
limits it to an unsigned 8 bit integer.
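To illustrate the failure mode (a standalone example, not the element code):
```
/* A signed 8-bit counter that has overflowed converts to a huge value
 * when widened to an unsigned 32-bit integer. */
gint8 signed_count = -128;                   /* what 127 + 1 typically ends up as */
guint32 track_num = (guint32) signed_count;  /* 4294967168 */

/* An unsigned 8-bit counter simply wraps to 0, which the existing
 * bounds check catches before calling into libflac. */
guint8 unsigned_count = 255;
unsigned_count++;                            /* well-defined wrap to 0 */
```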
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2921
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5420>
Found that osxaudiosink could not be added standalone in a gst-full build
using -Dgst-full-elements=osxaudio:osxaudiosink because element registration
was done at the plugin level. Now the src/sink elements and the device
provider each have their individual registration.
Copied/adapted from the alsa plugin.
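A sketch of the per-element registration pattern (the GType macro and the
element/provider names below are assumptions, shown for illustration):
```
/* Each element/provider registers itself individually ... */
GST_ELEMENT_REGISTER_DEFINE (osxaudiosink, "osxaudiosink",
    GST_RANK_PRIMARY, GST_TYPE_OSX_AUDIO_SINK);

/* ... and plugin_init() only aggregates the individual registrations: */
ret |= GST_ELEMENT_REGISTER (osxaudiosrc, plugin);
ret |= GST_ELEMENT_REGISTER (osxaudiosink, plugin);
ret |= GST_DEVICE_PROVIDER_REGISTER (osxaudiodeviceprovider, plugin);
```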
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5419>
scanlines->m1 = same line of the previous field
scanlines->t0 = line above of the current field
scanlines->b0 = line below of the current field
scanlines->mp = same line of the next field
Deinterlacing a field weaved frame:
When deinterlacing the top field, the next bottom field is available
(part of the same frame). But when deinterlacing the bottom field,
the next top field (part of the next frame) is not available and
scanlines->mp equals NULL.
In this case it's better to use the greedy algorithm with the previous
field (twice) rather than linear interpolation of the current field.
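Roughly speaking (a sketch, not the actual code):
```
/* Sketch: when the next field is missing, fall back to the previous
 * field (used twice by the greedy path) instead of interpolating. */
if (scanlines->mp == NULL)
  scanlines->mp = scanlines->m1;
```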
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5331>
If we end up with GST_CLOCK_TIME_NONE as running time for an RTP packet
then this can't be used for bitrate estimation, and also not for
constructing the next RTCP SR. Both would end up with completely wrong
values, and an RTCP SR with wrong values can easily break
synchronization in receivers.
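A sketch of the guard (illustrative placement and names):
```
/* Sketch: skip packets whose running time could not be computed. */
if (!GST_CLOCK_TIME_IS_VALID (running_time))
  return;   /* neither bitrate estimation nor the RTCP SR can use this */
```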
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5329>
The timestamp offset can be negative, and it can be a bigger negative
number than the latency introduced by the rtpjitterbuffer, so the overall
timeout offset can be negative.
Using the negative offset for calculating how many packets can still
arrive in time when encountering a lost packet in an equidistant stream
would then overflow, and instead of considering fewer packets lost, a lot
more packets would be considered lost.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5296>
gst_video_info_set_interlaced_format() can return an error if the
width/height causes integer overflow. Handle this case, so that we can
fail cleanly. This has been experienced while testing an in-progress
driver.
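The handling amounts to checking the return value (a sketch; variable
names are illustrative):
```
/* Sketch: bail out cleanly instead of continuing with a bogus video info. */
if (!gst_video_info_set_interlaced_format (&info, format, interlace_mode,
        width, height))
  return FALSE;
```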
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5286>
Some drivers will push a buffer flagged LAST but empty. In the decoder
case, this results in a "producing too many buffer" warning, even
though the result is entirely correct. Detect this case in order to
signal EOS earlier and avoid this warning.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5286>
When playing some streams, such as Matroska with the VP9 codec, the
demuxer stores information like video_mastering_display_info
and video_content_light_level in the caps, which the decoder needs.
v4l2videodecoder can make use of it through the V4L2_CTRL_CLASS_COLORIMETRY
controls.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4403>
If the decoder notifies a source change event when the capture format has
changed, but not the resolution,
then gst_v4l2_object_acquire_format() will return FALSE due to the
unsupported format.
We need to clear the format lists in the source change flow
and re-enumerate the format list.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5218>
This element passes RTP packets along unchanged while appearing as an RTP
payloader element.
This is useful, for example, when using the gstreamer-rtsp-server
library in the case where you are receiving RTP packets from a
different source and want to serve them over RTSP, since the
gst-rtsp-server library expects the element marked as payX to be an RTP
payloader element and assumes certain properties are available.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5204>
Imported dmabufs are not duped, so they should never be closed. Instead,
we ensure their lifetime by holding a strong reference on their original
buffer. This should fix potential flickering due to dmabufs being closed
too early.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5101>
Now that we can split GStreamer buffers over multiple v4l2 buffers, we may
end up waiting for these buffers to be processed. Avoid waiting for any of
the parts being processed. As a side effect, the pool will now try to
grow if the number of buffers is not sufficient, and will fail
otherwise.
This fixes a hang if the very first frame did not fit. In this case, the
driver will retain that buffer until the capture is set up, but
GStreamer won't set up the capture until the process() function has
returned.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5100
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5143>
The hack enforcing strictly increasing timestamps was, according to the
code comments, because librtmp was confused with backwards timestamps.
rtmp2sink is not using librtmp as rtmpsink did, so this is no longer
required.
Also, changing the timestamps is causing audio glitches when streaming to
YouTube.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5212>
These 10-bit formats are identical to NV12_16L32S, except that each 64 bytes
of data is prefixed with 16 bytes holding the lower 2 bits of four pixels per
byte. For MT2110T, the lower two bits are laid out so that each byte contains
a column of 4 pixels, also described as tiled lower 2 bits. MT2110T has been
chosen as a name to match the vendor-chosen name; this format is unlikely to
exist for other vendors.
For MT2110R, the 2 low bits are in raster order.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3444>
When advancing the ringbuffer, store the processed CoreAudio sample
time, then interpolate the clock in the _get_delay() calls to smooth
the clock. CoreAudio's "latency" report is always a constant and
otherwise leads to the clock generating a latency-time staircase.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5140>
Set the BufferFrame size in CoreAudio so it will deliver data
in ringbuffer segment units when recording. Otherwise, CoreAudio
will provide data in whatever granularity it wants, with no
relationship to the requested latency-time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5140>
The issue was that Qt was caching multiple different shaders based on a static
variable location. This static variable location was the same no matter
the input video format, and so if one item had an input video format of
RGB and another of RGBA, they would eventually end up using the same
GL shader, leading to incorrect colours.
Fix by providing different static variable locations for each of the
different shaders that will be produced.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5145>
When we fill a bitstream buffer, the buffer might be too small to hold
the entire frame. Only resize to the filled size, preventing the
following assertion from happening:
gst_buffer_resize_range: assertion 'bufmax >= bufoffs + offset + size' failed
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5100>
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2771
This EOS branch exists so that if a seek with a stop is made, qtdemux
stops accepting bytes from the sink after the entire requested playback
range is demuxed, as otherwise we could keep downloading content that is
not being used.
This patch fixes two flaws that were present in that EOS check:
1) A comparison was made between track time and movie time without conversion.
This made the check trigger early in files with edit lists. This patch fixes
this by converting the track PTS to movie PTS (stream time) for the check.
2) To avoid sending an EOS prematurely when the segment stop is within a GOP and
B-frames are present, the check for EOS should only be done for keyframes. I
gather this was already the intention with the existing code, but because it
used `stream->on_keyframe` instead of the local variable `keyframe` the old
code was checking if the *previous* frame was a keyframe.
It's interesting to note that these two flaws in the old code mask each other
in most cases: the track PTS will have reached the movie end PTS, but EOS would
only be sent if the previous frame was a keyframe. A simple case where they
wouldn't mask each other, reproducing the bug, is a sequence of 3 frame GOPs
with structure I-B-P.
The following validateflow tests have been added to future-proof the
fix:
* validate.test.mp4.qtdemux_ibpibp_non_frag_pull.default
* validate.test.mp4.qtdemux_ibpibp_non_frag_push.default
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5021>
We were checking if the tag list is writable, but it may actually be
shared through the same event (tee upstream or multiple consumers).
This fixes a bug where multiple branches each have a videoflip element
checking the taglist: the first one was changing the orientation back to
rotate-0, which was resetting the other instances.
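A sketch of the safer pattern (illustrative; not necessarily the exact
change made here):
```
/* Sketch: work on a private copy instead of mutating a taglist that may
 * be shared with other branches through the same event. */
GstTagList *copy = gst_tag_list_copy (taglist);

gst_tag_list_add (copy, GST_TAG_MERGE_REPLACE,
    GST_TAG_IMAGE_ORIENTATION, "rotate-0", NULL);
/* ... build and push a new tag event from the copy ... */
```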
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5097>