g_sequence_remove_range's end iter is exclusive, so if one
wants to remove the item at that end position as well, the function
must be called with the iter following it.
This could in theory fix an issue where:
* The sequence isn't entirely trimmed, with an old item lingering
* Following FEC packets are immediately discarded because they
arrived later than corresponding media packets, long enough for
seqnums to wrap around
* We then try to reconstruct a media packet with a completely obsolete
FEC packet, and chaos ensues.
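For illustration, a minimal sketch of the inclusive-removal pattern (not
the element's actual code):

#include <glib.h>

/* Sketch only: drop everything from the head of @seq up to and
 * including the item at @last. g_sequence_remove_range() treats its
 * end iter as exclusive, so we pass the *next* iter to also remove
 * the item @last points at. */
static void
remove_head_through (GSequence * seq, GSequenceIter * last)
{
  g_sequence_remove_range (g_sequence_get_begin_iter (seq),
      g_sequence_iter_next (last));
}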
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1341>
We need to hold onto the last buffer until the next buffer arrives.
Before, if a caps change came we would remove the currently rendering
buffer. If Qt then asked us to render something, we would render the
dummy black texture.
Fixes a period of black output when upstream is e.g. changing resolution,
as in HLS adaptive bitrate scenarios.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1338>
The `gst_v4l2_buffer_pool_dqbuf` function contains this ominous comment:
/* get our GstBuffer with that index from the pool, if the buffer was
* outstanding we have a serious problem.
*/
outbuf = pool->buffers[group->buffer.index];
Unfortunately it is common for buffers in _output_ buffer pools to be
both queued and outstanding at the same time. This can happen if the
upstream element keeps a reference to the buffer, or in an encoder
element itself when it keeps a reference to the input buffer for each
frame.
Since the current code doesn't handle this case properly we can end up
with crashes in other elements such as:
(gst-launch-1.0:32559): CRITICAL **: 17:33:35.740: gst_video_frame_map_id: assertion 'GST_IS_BUFFER (buffer)' failed
and:
(gst-launch-1.0:231): GStreamer-CRITICAL **: 00:16:20.882: write map requested on non-writable buffer
Both these crashes are caused by a race condition related to releasing
the same buffer twice from two different threads. If a buffer is queued
and outstanding this situation is possible:
**Thread 1**
- Calls `gst_buffer_unref` decrementing the reference count to zero.
- The core GstBufferPool object marks the buffer non-outstanding.
- Calls the V4L2 release buffer function.
- If the buffer is _not_ queued:
  - Releases it back to the free pool (containing non-queued buffers).
**Thread 2**
- Dequeues the queued output buffer.
- Marks the buffer as not queued.
- If the buffer is _not_ outstanding:
  - Calls the V4L2 release buffer function.
  - Releases it back to the free pool (containing non-queued buffers).
If both of these threads run at exactly the same time there is a small
window where the buffer is marked both not outstanding and not queued
but has not yet been released. In this case the buffer will be freed
twice, causing the above crashes.
Unfortunately the variable recording whether a buffer is outstanding is
part of the core `GstBuffer` object and is managed by `GstBufferPool` so
it's not as straightforward as adding a mutex. Instead we can fix this
by additionally recording the buffer state in `GstV4l2BufferPool`, and
handle "internal" and "external" buffer release separately so we can
detect when a buffer becomes not outstanding.
In the new solution:
- The "external" buffer pool release and the "dqbuf" functions
atomically update the buffer state and determine if a buffer is still
queued or outstanding.
- Subsequent code and a new
`gst_v4l2_buffer_pool_complete_release_buffer` function can proceed to
release (or not) a buffer knowing that it's not racing with another
thread.
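A minimal sketch of the idea, using hypothetical names rather than the
actual GstV4l2BufferPool code: both release paths atomically drop their
claim from a per-buffer state word, and only the thread that drops the
last claim completes the release.

#include <glib.h>

#define BUFFER_STATE_OUTSTANDING (1u << 0)  /* referenced outside the pool */
#define BUFFER_STATE_QUEUED      (1u << 1)  /* queued in the V4L2 driver */

/* Atomically clears @flag from @state. Returns TRUE only for the one
 * caller that cleared the last remaining flag, i.e. the only thread
 * allowed to put the buffer back on the free queue. */
static gboolean
buffer_state_clear (guint * state, guint flag)
{
  guint old = g_atomic_int_and (state, ~flag);

  return (old & flag) != 0 && (old & ~flag) == 0;
}

/* "External" release path: the buffer's refcount dropped to zero. */
static void
on_external_release (guint * state)
{
  if (buffer_state_clear (state, BUFFER_STATE_OUTSTANDING)) {
    /* complete the release: return the buffer to the free pool */
  }
}

/* DQBUF path: the driver handed the buffer back to userspace. */
static void
on_dqbuf (guint * state)
{
  if (buffer_state_clear (state, BUFFER_STATE_QUEUED)) {
    /* complete the release: return the buffer to the free pool */
  }
}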
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1010>
The pipeline flow for receiving looks like this:
rtpsession ! rtpssrcdemux ! session_fec_decoder ! rtpjitterbuffer ! \
rtpptdemux ! stream_fec_decoder ! ...
There are two places where a fec decoder could be placed.
1. As requested from the 'request-fec-decoder' signal: after rtpptdemux,
for each ssrc/pt produced.
2. After rtpssrcdemux but before rtpjitterbuffer: added for the
rtpst2022-1-fecenc/dec elements.
However, there was some cross-contamination of the elements involved:
the request-fec-decoder signal was also being used to request the fec
decoder for the session_fec_decoder, which would then be cached and
re-used for subsequent fec decoder requests. As a result, the same
element could end up being linked to multiple elements in different
places in the pipeline. This would fail and cause all kinds of havoc,
usually resulting in a not-linked error being returned upstream and an
error message being posted by the source.
Fix by not using the request-fec-decoder signal for requesting the
session_fec_decoder and instead relying solely on the added properties
for that case.
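For context, a sketch of how the per-ssrc/pt decoder is still provided
through the signal; the factory and setup below are only an example, not
the exact code in question.

#include <gst/gst.h>

/* Sketch: "request-fec-decoder" keeps serving only the per-ssrc/pt
 * decoder placed after rtpptdemux. rtpbin takes ownership of the
 * (floating) element returned here; real code would also configure
 * the decoder (e.g. its pt and storage) before returning it. */
static GstElement *
on_request_fec_decoder (GstElement * rtpbin, guint session_id,
    gpointer user_data)
{
  return gst_element_factory_make ("rtpulpfecdec", NULL);
}

static void
connect_fec_decoder_signal (GstElement * rtpbin)
{
  g_signal_connect (rtpbin, "request-fec-decoder",
      G_CALLBACK (on_request_fec_decoder), NULL);
}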
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1300>
The average_period should always represent the time between two
events. The specification defines the event time as the time
between audio samples, video frame sync, video line sync, etc.
In the case of one timestamp per PDU, the timestamp_interval identifies
the number of events between the timestamp of one PDU and the
timestamp of the next PDU.
As described in IEEE 1722-2016, chapter
"10.4.12 timestamp_interval field", timestamp_interval shall be
nonzero.
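As an illustration of that relation (a sketch, not the element code):
with one timestamp per PDU, the event period follows from two
consecutive PDU timestamps and the timestamp_interval.

#include <gst/gst.h>

/* Illustration only: average time between events when each PDU carries
 * one timestamp. timestamp_interval is the number of events between
 * the timestamps of consecutive PDUs and shall be nonzero
 * (IEEE 1722-2016, 10.4.12). */
static GstClockTime
average_event_period (GstClockTime pdu_ts, GstClockTime next_pdu_ts,
    guint timestamp_interval)
{
  g_return_val_if_fail (timestamp_interval != 0, GST_CLOCK_TIME_NONE);

  return (next_pdu_ts - pdu_ts) / timestamp_interval;
}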
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1076>
The failure conditions can be overridden by subclasses, and a boolean
return value tells the caller whether adding/removing the child
element actually worked. The caller can then handle this
accordingly; flooding stderr with warnings on top of that is not very useful.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1320>
Upstream caps might for example be
application/x-rtp,media=audio,encoding-name={OPUS, X-GST-OPUS-DRAFT-SPITTKA-00, multiopus}
and while that is not fixed caps, it is enough to match it with a media.
Only caps structures that have the correct structure name and that have
both the media and encoding-name fields are preserved; if such structures
are present, these caps are used as the "codec preferences".
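A minimal sketch of that kind of filtering (an assumed helper, not the
exact webrtcbin code):

#include <gst/gst.h>

/* Sketch: keep only the structures usable for matching against a
 * media, i.e. application/x-rtp structures that carry both the
 * "media" and "encoding-name" fields, even if other fields (or list
 * values such as encoding-name={OPUS, multiopus}) keep the caps
 * non-fixed. */
static GstCaps *
filter_codec_preference_caps (const GstCaps * caps)
{
  GstCaps *res = gst_caps_new_empty ();
  guint i;

  for (i = 0; i < gst_caps_get_size (caps); i++) {
    GstStructure *s = gst_caps_get_structure (caps, i);

    if (gst_structure_has_name (s, "application/x-rtp") &&
        gst_structure_has_field (s, "media") &&
        gst_structure_has_field (s, "encoding-name"))
      gst_caps_append_structure (res, gst_structure_copy (s));
  }

  return res;
}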
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1291>
Currently, reading the extension relies on the fact that everything after
the last "." character is the file extension. While that works fine in
most cases, it breaks when the URI contains a query part.
E.g.: `http://url.com/file.mp4?param=value` returns `mp4?param=value`
instead of `mp4`.
In this commit we use the URI parser to read the path of the URI (in the
example above, that is `/file.mp4`) and read the extension from that path.
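A minimal sketch of the approach using the GstUri API (not the exact
code in the element):

#include <gst/gst.h>
#include <string.h>

/* Sketch: parse the URI, take only its path component and read the
 * extension from that, so a query string such as "?param=value" is
 * never part of the result. Returns NULL when there is no extension;
 * free the result with g_free(). */
static gchar *
uri_get_extension (const gchar * uri_str)
{
  GstUri *uri = gst_uri_from_string (uri_str);
  gchar *path, *ext = NULL;
  const gchar *dot;

  if (uri == NULL)
    return NULL;

  path = gst_uri_get_path (uri);        /* e.g. "/file.mp4" */
  dot = path ? strrchr (path, '.') : NULL;
  if (dot != NULL && dot[1] != '\0')
    ext = g_strdup (dot + 1);           /* "mp4" */

  g_free (path);
  gst_uri_unref (uri);
  return ext;
}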
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1305>
Some decoding APIs support delayed output for performance reasons.
One example would be to request decoding for multiple frames and
then query for the oldest frame in the output queue.
This also increases throughput for transcoding and improves seek
performance when supported by the underlying backend.
Introduce support in the mpeg2 base class, so that backends that
support render delays can actually implement it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1013>
Downstream might need the start code offset when decoding.
Previously this computation was scattered across multiple sites. This
is error prone, so move it to the base class. Subclasses can access
slice->sc_offset directly without computing the address themselves,
knowing that the size will also take the start code into account.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1013>
The GstV4l2CodecAllocator dispose function clears `self->decoder`, but
the finalize function then tries to use it if the allocator has not been
detached yet.
Fix by detaching in the dispose function before we clear
`self->decoder`.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1220>
Standard interlace handling:
* If we have interlace-mode=interleaved and the field order, we just
set it when creating the session
* If we have interlace-mode=(interleaved|mixed) and no field order, we
set the field order on the first buffer
The encoder session does not support changing the FieldDetail after it
has started encoding frames, so we cannot support mixed streams
correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1214>
Delay decoders' downstream negotiation until just before an output frame
needs to be allocated.
This is required, at least for H.264 and H.265 decoders, since
codec_data might trigger a new sequence before upstream negotiation has
finished, and sink pad caps need to be set before setting source pad
caps, particularly to forward HDR fields. The other decoders are
changed too in order to keep the same structure among them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1257>
Using the GstBaseDec hack to access the parent_object of each element
from the element itself is a bit fragile. It is better for each element
to keep its own parent object in the usual global variable; that makes
the code more resistant to changes.
The GstBaseDec macro to access the parent object is now internal to the
base decoder.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1257>
We were checking if the start time of the gap event was
GST_CLOCK_TIME_NONE, which is superfluous because that cannot happen,
and then not checking if it was NONE after gst_segment_to_running_time,
which caused a crash if an identity received a gap event fully or
partially outside the current segment.
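A minimal sketch of the missing check (not the exact identity code):

#include <gst/gst.h>

/* Sketch: convert a gap event's start time to running time. The NONE
 * check must happen *after* gst_segment_to_running_time(), which
 * returns GST_CLOCK_TIME_NONE for positions fully outside the
 * segment. */
static gboolean
gap_start_to_running_time (const GstSegment * segment,
    GstClockTime start, GstClockTime * running_time)
{
  GstClockTime rt;

  rt = gst_segment_to_running_time (segment, GST_FORMAT_TIME, start);
  if (rt == GST_CLOCK_TIME_NONE)
    return FALSE;               /* gap lies outside the segment */

  *running_time = rt;
  return TRUE;
}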
This patch was done in cooperation with:
Jan Alexander Steffens (heftig) <jan.steffens@ltnglobal.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1269>
.. if a current direction has already been set
When `webrtcbin` has created an offer based on codec_preferences,
it might not have received caps on its sinkpads by the time a
remote description is set, in which case we want to connect the
input stream upon actual reception of the caps instead.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1233>
With mpeg4videoparse drop=false config-interval=N|-1 we might be
trying to insert a config before we have actually received one,
in which case we'll try to map a NULL buffer which will generate
lots of criticals.
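A minimal sketch of the kind of guard needed; the buffer name here is
hypothetical:

#include <gst/gst.h>

/* Sketch: only prepend the config if one has actually been received;
 * mapping a NULL GstBuffer raises criticals. "config" is a
 * hypothetical field name. */
static GstBuffer *
maybe_prepend_config (GstBuffer * config, GstBuffer * frame)
{
  if (config == NULL)
    return frame;               /* nothing received yet, pass through */

  return gst_buffer_append (gst_buffer_ref (config), frame);
}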
Fixes #855
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1265>
gst_va_fixate_format() iterates over all of othercaps' structures to
find the one with the least information loss at color conversion. If a
structure with the same color format is found, the iteration stops.
This works like a smart truncation. The function also chooses the caps
feature.
Later this structure is used to fixate its size, and no further
truncation is needed.
Don't intersect at fixate time, since that kills possible resizing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1261>
If the pad does not have current caps, get_pad() returns the query
caps, which can be ANY. In that case the caps do not have any structure,
resulting in a critical warning when calling gst_caps_get_structure().
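A minimal sketch of the guard (not the exact code):

#include <gst/gst.h>

/* Sketch: ANY (or empty) caps carry no structure, so bail out before
 * calling gst_caps_get_structure() on them. */
static const GstStructure *
caps_get_first_structure (const GstCaps * caps)
{
  if (caps == NULL || gst_caps_is_any (caps) || gst_caps_is_empty (caps))
    return NULL;

  return gst_caps_get_structure (caps, 0);
}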
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1254>
Detected while reading the code: cccombiner must set
self->current_video_buffer to NULL *after* emitting selected-samples
in order for the application to get a useful return when peeking
the next video sample.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1252>
When schedule is true (as is the case by default), we insert padding
when no caption data is present in the schedule queue, and previously
weren't checking whether the caption pad had gone EOS, leading to
infinite scheduling of padding after EOS on the caption pad.
Rectify that by adding a "drain" parameter to dequeue_caption()
In addition, update the captions_and_eos test to push valid cc_data
in: without this cccombiner was attaching padding buffers it had
generated itself, and with that patch would now stop attaching
said padding to the second buffer. By pushing valid, non-padding
cc_data we ensure a caption buffer is indeed attached to the first
and second video buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1252>
Also refactor various internals of the monitor code:
- Don't allow starting twice but just return directly when starting a
second time.
- Don't end up in an inconsistent state if start() is called a second
time while the monitor is starting up.
- Remove complicated cookie code: it was not possible to add/remove
filters while the monitor was started anyway so this was only useful
in the very small time-window while starting the monitor or while
getting the devices. Instead disallow adding/removing filters while
the monitor is starting, and when getting devices work on a snapshot
of providers/filters.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/667
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1189>
In gst_va_dmabuf_allocator_setup_buffer_full, the static code analysis
tool does not know that the number of objects in the descriptor is always
larger than 0 if export_surface_to_dmabuf succeeds. Thus, the tool assumes
that buf is allocated with mem but not released when desc.num_objects
equals 0, and raises a memory leak issue.
For gst_va_dmabuf_memories_setup, we should also inform the tool that
n_planes will be larger than 0 by checking the value at the very beginning.
Then a similar defect will not be raised during static analysis.
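A minimal sketch of such an early check (hypothetical function, not the
actual code):

#include <gst/gst.h>

/* Sketch: make the precondition explicit so both readers and static
 * analyzers know the per-plane setup below runs at least once. */
static gboolean
memories_setup_checked (guint n_planes)
{
  g_return_val_if_fail (n_planes > 0, FALSE);

  /* ... set up one GstMemory per plane ... */

  return TRUE;
}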
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1241>
gst_base_parse_reset() does not reset data_bytecount to 0, so
gst_base_parse_update_bitrates() uses a wrong value to calculate
the average bitrate on subsequent pipeline starts. This leads to an
excessive amount of "tag" events being pushed. These events include
very high "bitrate" values that diminish over time, and are produced
until the average bitrate is back to sane values.
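A hedged illustration of why a stale byte count inflates the average
(not the exact baseparse code):

#include <gst/gst.h>

/* Illustration only: if data_bytecount still holds the bytes from a
 * previous run while acc_duration restarts from a small value, this
 * average comes out far too high until enough new data accumulates. */
static guint64
calc_average_bitrate (guint64 data_bytecount, GstClockTime acc_duration)
{
  if (acc_duration == 0)
    return 0;

  return gst_util_uint64_scale (8 * data_bytecount, GST_SECOND, acc_duration);
}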
Fixes #840
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1228>
This will only affect individual/tarball module builds, as the
options yield to the parent project which was set to gpl=disabled
by default already. We kept it as auto in the original commit
to accommodate the need to update cerbero as well, which had to
be done separately after the initial commit.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1217>
Previously gst_structure_has_name() was used to compare the structure
against the supported mimetypes. This is incorrect, as that compares the
user-defined structure name, which is not the structure's mimetype value.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1206>
Trying to reset before the pads have been deactivated races with the
streaming thread. There was also a buggy buffer clear leaving a dangling
`stored_frame` pointer around. Use `gst_interlace_reset` so this happens
properly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1039>
We don't support D3D11 interop for UWP because some APIs
(specifically MFTEnum2) are desktop application only.
However, the code for symbol loading is commonly used by both UWP and WIN32.
Just link GModule unconditionally which is UWP compatible, and simply don't
try to load any library/symbol dynamically when D3D11 interop is unavailable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1216>
... in favour of dep.get_variable('foo', ..) which in some
cases allows for further cleanups in future since we can
extract variables from pkg-config dependencies as well as
internal dependencies using this mechanism.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1183>
When the ass hinting value is set to anything other than NONE,
subtitles cannot use smooth scaling, so all animations will jitter.
The libass author warns about the possibility of breaking some scripts
when hinting is enabled, so let's do what is recommended and disable it
to get smooth scaling working.
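For reference, a minimal sketch of the libass call in question (a plain
renderer setup, not the element's code):

#include <ass/ass.h>

/* Sketch: with hinting enabled libass cannot scale glyphs smoothly,
 * so animated subtitles jitter; ASS_HINTING_NONE avoids that. */
static void
configure_renderer (ASS_Renderer * renderer, int frame_w, int frame_h)
{
  ass_set_frame_size (renderer, frame_w, frame_h);
  ass_set_hinting (renderer, ASS_HINTING_NONE);
}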
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1201>
WebVTT in ISO MP4 is specified in ISO 14496-30
and is needed for DASH support. It's stored in an
mp4-specific format. To handle it compatibly,
the wvtt boxes are converted back into WebVTT text
and pushed as application/x-subtitle-vtt.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1182>
Swap the `need_process` boolean check on qtdemux streams
for a direct function pointer to the splitting function,
so we can stop adding extra cases to the single growing
`gst_qtdemux_process_buffer()` function.
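A minimal sketch of the pattern with hypothetical names (not the actual
qtdemux structures):

#include <gst/gst.h>

/* Sketch: instead of a need_process boolean plus a growing switch in a
 * single process function, each stream carries a pointer to its own
 * splitting/processing function (NULL when no processing is needed).
 * All names here are illustrative only. */
typedef struct _DemoStream DemoStream;

typedef GstBuffer *(*DemoProcessFunc) (DemoStream * stream, GstBuffer * buf);

struct _DemoStream
{
  DemoProcessFunc process_func;
};

static GstBuffer *
demo_push_buffer (DemoStream * stream, GstBuffer * buf)
{
  if (stream->process_func != NULL)
    buf = stream->process_func (stream, buf);

  return buf;
}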
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1182>
Currently the extension data length specified in the RTP header would
say it was shorter than the data serialised into the packet. When
combining the resulting buffer, the underlying memory would still
contain the extra (now 0-filled) padding data.
This would mean that parsing the resulting RTP packet would potentially
start with a number of 0-filled bytes which many RTP formats are not
expecting.
Such usage occurs e.g. with RTP header extensions, when allocating a
maximum-sized buffer (which may be larger than the written size) and
shrinking the data to the required size once all the RTP header
extension data has been written.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1146>
The code in the aes elements assumes OpenSSL >= 1.1.0:
- implicit library initialization;
- version retrieved with OpenSSL_version(OPENSSL_VERSION);
and it fails to build with older versions.
Specify the required OpenSSL version explicitly in meson.build so that
the elements are excluded on older systems (e.g. Ubuntu 16.04) and the
rest of GStreamer can still build.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1067>
An inactive pad is a pad which, in live mode, hasn't yet received
a first buffer, but has been waited on at least once.
Exposing API to support this behaviour allows users of aggregator
subclasses to request pads, and not start pushing data on those
immediately, while avoiding systematic timeouts.
Subclasses must explicitly opt in to this behavior, most likely
by exposing a user-facing property, and must check whether a pad
needs ignoring when aggregating. That is because, by design,
aggregator subclasses don't get a list of "ready" pads, but instead
directly iterate element->sinkpads.
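A sketch of how a subclass could opt in; the function names below are
assumptions about the API added here, and a real subclass would
typically wire the setter to a user-facing property:

#include <gst/base/base.h>

/* Sketch only: enable the behaviour on the aggregator and skip pads
 * that are still inactive while iterating element->sinkpads in
 * aggregate(). Both calls are assumed names for the new API. */
static void
demo_mixer_enable_ignore_inactive (GstAggregator * agg)
{
  gst_aggregator_set_ignore_inactive_pads (agg, TRUE);
}

static gboolean
demo_mixer_pad_should_be_skipped (GstAggregatorPad * pad)
{
  return gst_aggregator_pad_is_inactive (pad);
}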
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/867>
Since commit a55dafe341, stream-scoped tags no
longer appeared as top-level tags, introducing a behaviour regression,
especially for MP3 files.
The `gst_discoverer_info_get_tags()` API now returns all tags detected for the
given media, as documented.
A new API is introduced to get container-specific tags,
`gst_discoverer_container_info_get_tags()`. The discoverer tool was adapted to
use it. `gst_discoverer_info_get_tags()` is now deprecated in favor of
`gst_discoverer_container_info_get_tags()` and
`gst_discoverer_stream_info_get_tags()`.
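A minimal usage sketch of the split APIs (container tags vs. stream
tags):

#include <gst/pbutils/pbutils.h>

/* Sketch: with the new API, container-scoped tags and stream-scoped
 * tags are queried separately instead of relying on the deprecated
 * gst_discoverer_info_get_tags(). */
static void
print_tag_sources (GstDiscovererInfo * info)
{
  GstDiscovererStreamInfo *sinfo = gst_discoverer_info_get_stream_info (info);

  if (GST_IS_DISCOVERER_CONTAINER_INFO (sinfo)) {
    const GstTagList *tags =
        gst_discoverer_container_info_get_tags (GST_DISCOVERER_CONTAINER_INFO
        (sinfo));

    if (tags)
      gst_println ("container tags: %" GST_PTR_FORMAT, tags);
  } else if (sinfo) {
    const GstTagList *tags = gst_discoverer_stream_info_get_tags (sinfo);

    if (tags)
      gst_println ("stream tags: %" GST_PTR_FORMAT, tags);
  }

  if (sinfo)
    gst_discoverer_stream_info_unref (sinfo);
}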
Fixes #759
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1107>