If a seek is done when the stream collection is posted, there are no selected
streams yet, so none would be chosen to adjust the key-unit seek.
If no streams are selected, fall back to a default stream (i.e. one which has
track(s) with GST_STREAM_FLAG_SELECT).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4922>
When seeking is handled by the collection posting thread, there is a possibility
that some leftover data will be pushed by the stream thread.
Properly detect and reject those early segments (and buffers) by comparing
their seqnum to the main segment seqnum.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4922>
... otherwise streams with constant-size samples, defined with a single
`sample_size` for all samples in the `stsz` box, fall into the
`chunks_are_samples` category in `qtdemux_stbl_init`, overriding the actual
sample count.
`FOURCC_soun` would set this automatically for `compression_id == 0xfffe`;
however, `compression_id` is read from the Audio Sample Entry box at an offset
marked as "pre-defined" in some versions of the spec, and it is set to 0 by
both GStreamer and FFmpeg for Opus streams.
Considering the stream `sampled` flag is set explicitly by other fourcc
variants, doing so for Opus seems consistent.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4908>
The "Encapsulation of Opus in ISO Base Media File Format" [1] specifications,
§ 4.3.2 Opus Specific Box, indicates that data must be stored as big-endian.
In `build_opus_extension`, `gst_byte_writer_put*_le ()` variants were used,
causing audio streams conversion to Opus in mp4 to offset samples due to the
PreSkip field incorrect value (29ms early in our test cases).
[1] https://opus-codec.org/docs/opus_in_isobmff.html#4.3.2
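A minimal sketch of the corrected field writing (not the full
`build_opus_extension` code; field layout per [1], writer functions from
GstByteWriter):
```
#include <gst/base/gstbytewriter.h>

/* Sketch only: write the OpusSpecificBox payload fields big-endian,
 * using the _be variants instead of the _le ones. */
static void
write_opus_specific_fields (GstByteWriter * bw, guint8 channels,
    guint16 pre_skip, guint32 input_sample_rate, gint16 output_gain)
{
  gst_byte_writer_put_uint8 (bw, 0);                      /* Version */
  gst_byte_writer_put_uint8 (bw, channels);               /* OutputChannelCount */
  gst_byte_writer_put_uint16_be (bw, pre_skip);           /* PreSkip */
  gst_byte_writer_put_uint32_be (bw, input_sample_rate);  /* InputSampleRate */
  gst_byte_writer_put_int16_be (bw, output_gain);         /* OutputGain */
  gst_byte_writer_put_uint8 (bw, 0);                      /* ChannelMappingFamily */
}
```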
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4891>
The muxer used a fixed value of 2 channels because the TR 102 366 spec
says the field is to be ignored. However, the demuxer still trusted it,
resulting in bad caps.
Make the muxer fill in the correct channel count anyway (FFmpeg already
does) and make the demuxer ignore the value.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4773>
The qt5 and qt6 plugins will now correctly error out if you enable the
option, and you can also now explicitly ensure that wayland, x11, and
eglfs support is actually functional by enabling the corresponding options. It was
too easy to build non-functional support for these.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4776>
Without this, the plugin cannot be loaded in a devenv because the
RPATH is not added to the plugin dylib. This RPATH will be stripped on
install, which is what we want.
When deploying apps, people are supposed to use `macdeployqt` to
create an AppBundle that bundles Qt for you and sets the RPATHs
correctly to point to that bundled Qt.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4776>
Due to the alpha value being inserted with _BEFORE, we were ending up
with ARGB instead of RGBA, thus displaying completely wrong colours.
According to libpng's manual, "to add an opaque alpha channel, use filler=0xff
or 0xffff and PNG_FILLER_AFTER which will generate RGBA pixels".
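A short sketch of the libpng call in question (per the manual quote above;
the `png_ptr` setup is omitted):
```
#include <png.h>

/* Sketch: append an opaque alpha channel after the RGB components so the
 * output is RGBA, not ARGB as PNG_FILLER_BEFORE would produce. */
static void
add_opaque_alpha_after (png_structp png_ptr)
{
  png_set_filler (png_ptr, 0xff, PNG_FILLER_AFTER);
}
```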
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4759>
Since c0bf793c05 ("flvmux: Set PTS based on
running time") the timestamp of the output buffer is already in running
time. So using that for 'srcpad->segment.position' does not work correctly
because gst_aggregator_simple_get_next_time() will convert it again with
gst_segment_to_running_time().
This means that the timestamp returned by
gst_aggregator_simple_get_next_time() may be incorrect. For example, if
flvmux is added to an already running pipeline, then the timestamp is too
small and gst_aggregator_wait_and_check() returns immediately. As a result,
buffers may be muxed in the wrong order.
To fix this, use the PTS of the incoming buffer instead of the outgoing
buffer. Also add the duration as get_next_time() is supposed to return the
timestamp of the next buffer, not the current one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4734>
There are broken(?) mjpeg videos that are incorrectly detected as
interlaced. This happens because 'info.height > height' (e.g. 1088 > 1080).
In the interlaced case, info.height is approximately 'height * 2', but not
exactly, because height is rounded up to a multiple of DCTSIZE. Make the check
more restrictive, but take the rounding effect into account.
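A hypothetical sketch of the kind of tightened check (names and the exact
rounding margin are assumptions, not the literal jpegdec code):
```
#include <glib.h>

/* Hypothetical: treat the stream as interlaced only when the caps height
 * is roughly twice the JPEG frame height, allowing for DCTSIZE rounding
 * of each field; 1088 vs. 1080 no longer qualifies. */
static gboolean
looks_interlaced (gint caps_height, gint jpeg_height)
{
  if (caps_height <= jpeg_height)
    return FALSE;

  return caps_height <= 2 * jpeg_height &&
      2 * jpeg_height - caps_height <= 2 * 16;
}
```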
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4717>
For interlaced jpeg, gst_jpeg_dec_decode_direct() is called twice, once for each
field. In this case, stride[n] is plane_stride[n] * 2 to ensure that only every
other line is written. So the loop must stop at height / num_fields.
If the frame is really interlaced, then continuing beyond this is not harmful,
because jpeg_read_raw_data() will do nothing and return 0, so an info message
is printed.
However, if the frame is not actually interlaced, just misdetected as
interlaced, then there is still data available from the second half of the
frame. Now
line[0][j] is set to the scratch buffer. If the scratch buffer is not allocated
(because the height is a multiple of v_samp[0] * DCTSIZE) then the result is a
segfault due to a null-pointer dereference.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4717>
Drivers may signal end of sequence using an empty buffer with the LAST buffer
flag set, or just an empty buffer on certain legacy implementations. When this
occurred, we'd send GST_V4L2_FLOW_LAST_BUFFER where the code expected
GST_FLOW_EOS. Stop abusing GST_FLOW_EOS and port all the code to the new
GST_V4L2_FLOW_LAST_BUFFER.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4671>
Make splitmuxsrc deal better with stream reordering by
making the largest observed PTS contiguous in the
next fragment. Previously, it selected DTS, but then
aligned that with the segment start of the next fragment,
which holds PTS values - leading to glitches in
streams that don't have PTS = DTS at the start.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4660>
The index is already incremented by 3 every iteration, so additionally
multiplying it by 3 on each array access applies the scaling twice and does
not work.
This caused invalid files to be created if there was more than one CEA608
triplet in a buffer, as well as out-of-bounds memory reads.
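A hypothetical illustration of the bug (names are made up, not the element's
code):
```
#include <glib.h>

/* The index already advances by 3 per CEA608 triplet, so scaling it by 3
 * again on access skips triplets and reads past the end of the buffer. */
static void
walk_cea608_triplets (const guint8 * cc_data, guint num_triplets)
{
  guint i;

  for (i = 0; i < num_triplets * 3; i += 3) {
    guint8 first_byte = cc_data[i];        /* correct */
    guint8 wrong_byte = cc_data[i * 3];    /* out of bounds once i > 0 */

    (void) first_byte;
    (void) wrong_byte;
  }
}
```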
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4645>
Invoking gst_osx_video_sink_osxwindow_destroy() can currently cause a deadlock
because showFrame() keeps trying to get the same lock as well. Moving the lock
closer to where it's actually needed seems to be enough to fix the issue for now.
Reported-by: Alexande B <abobrikovich@gmail.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4627>
This is a fix for a data race leading to:
> GLib-CRITICAL: g_hash_table_foreach:
> assertion 'version == hash_table->version' failed
Identified sequence:
* `rtp_session_on_timeout` acquires the lock on `session` and proceeds with its
processing.
* `rtp_session_process_rtcp` is called (debug log : received RTCP packet) and
attempts to acquire the lock on `session`, which is still held by
`rtp_session_on_timeout`.
* as part of a hash table iterator, `rtp_session_on_timeout` transitively
invokes `source_caps` which releases the lock on `session` so as to call
`session->callbacks.caps`.
* Since `rtp_session_process_rtcp` was waiting for the lock to be released, it
succeeds in acquiring it and proceeds with `rtp_session_process_rr` which
transitively calls `g_hash_table_insert` via `add_source`.
* After `source_caps` re-acquires the lock and gives the control flow back to
`rtp_session_on_timeout`, the hash table iterator is changed, resulting in the
assertion failure.
This commit copies `sess->ssrcs[sess->mask_idx]` and iterates on the copy so
the iterator is not affected by a concurrent change due to the lock being
released in the `source_caps` callback.
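A sketch of the approach (helper name and exact types are illustrative, not
the element code):
```
#include <glib-object.h>

/* Iterate over a copy of the SSRC table so that a callback which
 * temporarily drops the session lock (source_caps) cannot invalidate
 * the iterator of the live table. */
static void
foreach_source_safely (GHashTable * ssrcs, GHFunc func, gpointer user_data)
{
  GHashTable *copy;
  GHashTableIter iter;
  gpointer key, value;

  copy = g_hash_table_new_full (NULL, NULL, NULL, g_object_unref);

  g_hash_table_iter_init (&iter, ssrcs);
  while (g_hash_table_iter_next (&iter, &key, &value))
    g_hash_table_insert (copy, key, g_object_ref (value));

  g_hash_table_foreach (copy, func, user_data);
  g_hash_table_destroy (copy);
}
```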
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4585>
On very quick start/stop, the mainloop may never be run. As a side
effect, our idle stop function is not really being run, so we can't rely
on it to free the main loop. Simply unref the mainloop when the
thread has completely stopped.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4539>
If we don't do that, clients relying on this signal to see the final pad
topology won't see the real one, as some of the pads will disappear after the
signal is emitted. This can happen after injecting a different init segment.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4557>
Since c2f890ab, element properties are gathered from the parse-launch
line and passed at object construction.
This caused the following issue to happen in videoflip:
* videoflip installed a CONSTRUCT property named method, now deprecated
* videoflip now also overrides that property with a video-direction
property
GObject construction causes method to be set first at construct time,
with the user-provided value, then video-direction with the default
value.
The user-provided value was thus overridden, causing a regression.
Fix by not installing the properties as CONSTRUCT, and explicitly
implementing constructed() instead in order to ensure that we do still
call gst_video_flip_set_method() at least once during construction.
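A simplified sketch of the shape of the fix (not the actual element code; the
`gst_video_flip_set_method()` signature and the usual element boilerplate such
as `parent_class` are assumed here):
```
/* Sketch: with "method" no longer a CONSTRUCT property, constructed()
 * guarantees the transform is configured once, using whichever value
 * the user provided during construction. */
static void
gst_video_flip_constructed (GObject * object)
{
  GstVideoFlip *videoflip = GST_VIDEO_FLIP (object);

  G_OBJECT_CLASS (parent_class)->constructed (object);

  gst_video_flip_set_method (videoflip, videoflip->method, FALSE);
}
```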
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2529
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4551>
While testing the [implementation for insertable streams] in `webrtcsink` &
`webrtcsrc`, I encountered critical warnings, which turned out to result from
two race conditions in `rtpsession`. Both race conditions produce:
> GLib-CRITICAL: g_hash_table_foreach:
> assertion 'version == hash_table->version' failed
This commit fixes one of the race conditions observed.
In its simplest form, the test consists of 2 pipelines and a signalling server:
* pipelines_sink: audiotestsrc ! webrtcsink
* pipelines_src: webrtcsrc ! appsink
1. Set `pipelines_sink` to `Playing`.
2. The Signalling server delivers the `producer_id`.
3. Initialize `pipelines_src` to establish a session with `producer_id`.
4. Set `pipelines_src` to `Playing`.
5. Wait for a buffer to be received by the `appsink`.
6. Set `pipelines_src` to `Null`.
7. Set `pipelines_sink` to `Null`.
The race condition happens in the following sequence:
* `webrtcsink` runs a task to periodically retrieve statistics from `webrtcbin`.
This transitively ends up executing `rtp_session_create_stats`.
* `pipelines_sink` is set to `Null`.
* In `Paused` to `Ready`, `gst_rtp_session_change_state()` calls
`rtp_session_reset()`.
* The assertion failure occurs when `rtp_session_reset` is called while
`rtp_session_create_stats` is executing.
This is because `rtp_session_create_stats` acquires the lock on `session` prior
to calling `g_hash_table_foreach`, but `rtp_session_reset` doesn't acquire the
lock before calling `g_hash_table_remove_all`.
Acquiring the lock in `rtp_session_reset` fixes the issue.
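A sketch of the fix, assuming the session's usual locking macros
(`RTP_SESSION_LOCK`/`RTP_SESSION_UNLOCK`) and omitting the rest of the reset
work:
```
/* Guard the table reset with the session lock so it cannot race with
 * rtp_session_create_stats() iterating over the same table. */
void
rtp_session_reset (RTPSession * sess)
{
  RTP_SESSION_LOCK (sess);
  g_hash_table_remove_all (sess->ssrcs[sess->mask_idx]);
  /* ... remaining reset work ... */
  RTP_SESSION_UNLOCK (sess);
}
```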
[implementation for insertable streams]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1176
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4532>
Unfortunately, streamoff does not flush the events, and this can cause all
sorts of issues. Flush events on the capture queue. We also return
GST_V4L2_FLOW_RESOLUTION_CHANGE in case a resolution change was seen.
This allows skipping streamon(capture) on flush, which could otherwise lead to
a configuration mismatch, or a failure if the buffers aren't of the right
size.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4525>
Let the driver detect the change and reconfigure the capture side
transparently from there. This avoids reallocation of the output buffers
and eliminates the need to stop and restart the capture task. This only
happens if the driver has support for it; otherwise the old behaviour is
maintained.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4525>
Stop doing capture buffer allocation based on guesses
and wait for the source change event when available.
Unlike stateless decoders, stateful decoders are not aware of
the coded resolution, and this may lead to the wrong result
even when using TRY_FMT.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4525>
In the previous implementation, that job was split between handle_frame and
the processing loop, and it wasn't clear whether this mechanism was race
free. The capture setup would also be tried for every buffer, which was
not necessary.
This also simplifies the handling of the SRC_CH event, dropping the unneeded
atomic boolean.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4525>
This reverts commit f29c19be58. If this is
called for the reference context then we would run into an infinite
loop, which is not really better than an assertion.
By fixing up the DTS to never be ahead of the PTS in the previous commit,
this situation should now be impossible to hit.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4515>
The decoder needs to force another enumeration of the formats. For
this, it was clearing the v4l2object internal list, leaving a fmtdesc
pointer pointing to freed memory. This patch clears the fmtdesc pointer
that has just been freed. It also makes sure the probe function does not
use the cached format list. The probe function will restore the current
fmtdesc pointer based on the currently configured pixelformat.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4426>
As we don't have anything smart in the fixation process, we may end up with
a format that has a lower bit depth, even if downstream can handle a higher
depth. This is notably the case when negotiating with deinterlace, which
places its non-passthrough caps before its passthrough ones. This makes the
generic fixation prefer the formats natively supported by the deinterlace
element over the HW 10-bit format. As some HW can downscale 10-bit to 8-bit,
this can break 10-bit decoding.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4426>
The previous code would only check if two packets in a row were duplicates. If
not (i.e. a packet is a duplicate of one received slightly earlier), the code
would generate completely bogus FCI because it assumed there were no duplicates
present in the array.
In order to be efficient, just store all received packets and remove the
duplicates just before the FCI is generated, once the array of observations has
been sorted by seqnum.
Fixes TWCC usage with moderate to high packet duplication.
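A sketch of the idea with an illustrative observation array reduced to bare
seqnums (not the actual data structures):
```
#include <glib.h>

/* Once the observations are sorted by seqnum, duplicates are adjacent
 * and can be dropped in a single pass before building the FCI. */
static void
remove_duplicate_seqnums (GArray * sorted_seqnums)
{
  guint i = 1;

  while (i < sorted_seqnums->len) {
    guint16 prev = g_array_index (sorted_seqnums, guint16, i - 1);
    guint16 cur = g_array_index (sorted_seqnums, guint16, i);

    if (cur == prev)
      g_array_remove_index (sorted_seqnums, i);
    else
      i++;
  }
}
```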
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4378>
With GST_SEEK_FLAG_SNAP_AFTER present, the previous version would
adjust the seek time based on the keyframe farthest away from desired_time.
This was incorrect, because we always want the *earliest* suitable keyframe
to seek to, not the last one.
With this fix, in the case of SNAP_AFTER, we now look for the closest keyframe
that can be found after desired_time. Behaviour for SNAP_BEFORE should remain
unchanged.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4251>
Assuming that V4L2 CAPTURE devices always use one buffer per JPEG image, we can
always mark JPEGs provided by a V4L2 element as parsed.
The V4L2 elements require that JPEG images sent to V4L2 OUTPUT devices must
always be parsed.
This is necessary to link a V4L2 CAPTURE device with a V4L2 OUTPUT device
without explicitly marking the stream as parsed or adding a jpegparse into the
pipeline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4247>
The av1C box is optional, so dropping the parsing does not break anything
fundamentally, and there seems to be no historical record of what version 0
even looks like, while the comments and the parsing disagreed with each
other.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4027>
When using qtdemux in a pipeline that should only work as a pure demuxer (not
for actual playback), qtdemux shouldn't emit new GstSegments that correct
the start time (jump to the future) to ensure that the user experiences no
playback delay. Doing so generates the wrong segments when an append
of data from the past happens. When that happens, downstream elements such as
parsers (eg: aacparse) may clip those buffers lying before the GstSegment and
create problems in the GStreamer client app (eg: WebKit).
Getting buffers clipped out because of the wrong GstSegments started becoming
a problem when this commit was introduced:
ab6e49e9cc audioparsers: add back segment clipping to parsers that have lost it
This clipping makes test DASH shaka 35 from MVT tests[1] fail in
WebKitGTK/WPE (at least) and can potentially cause a number of other problems
in the WebKit Media Source Extensions (MSE) code.
Note that this new behaviour of not emitting new GstSegments only makes sense
when qtdemux is being used as a pure demuxer and not as part of a regular
pipeline. This is why the variant field has been added. When equal to
VARIANT_MSE_BYTESTREAM, it will make qtdemux behave differently in push mode,
taking decisions that meet the expectations for an MSE-like processing mode.
This kind of tweak has been done in the past for MSS streams, for instance.
That code has been refactored to use VARIANT_MSS_FRAGMENTED now, instead of
its own dedicated boolean flag.
Co-authored-by: Alicia Boya García <ntrrgc@gmail.com>
...who suggested using "variant: mse-bytestream" in the caps to identify that
mode, as proposed in her unmerged patch:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/467
[1] https://github.com/rdkcentral/mvt
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3990>
All the RTP src pads were sharing the same stream-id while each actually
carries a different stream.
This was causing problems, for example when funneling the streams together
and then trying to split them using 'streamiddemux'.
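A sketch of the idea using core GStreamer API (element and variable names are
illustrative):
```
#include <gst/gst.h>

/* Give each RTP src pad its own stream-id, e.g. derived from the SSRC,
 * instead of reusing a single id for all pads. */
static void
push_stream_start (GstPad * srcpad, GstElement * element, guint ssrc)
{
  gchar *stream_id;

  stream_id = gst_pad_create_stream_id_printf (srcpad, element,
      "rtp-%08x", ssrc);
  gst_pad_push_event (srcpad, gst_event_new_stream_start (stream_id));
  g_free (stream_id);
}
```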
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3866>
Deserialize socket control messages as GstSocketTimestampMessage only
if (level, type) is (SOL_SOCKET, SCM_TIMESTAMPNS).
Without this patch, messages with types SCM_RIGHTS or SCM_CREDENTIALS
could be deserialized as GstSocketTimestampMessage instead of
GUnixFDMessage or GUnixCredentialsMessage from gio.
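A minimal sketch of the guard in the deserialize hook (GIO's
`GSocketControlMessage` class vfunc signature; the rest of the parsing is
omitted):
```
#include <gio/gio.h>
#include <sys/socket.h>

/* Only claim (SOL_SOCKET, SCM_TIMESTAMPNS); anything else is left to
 * gio's own classes such as GUnixFDMessage or GUnixCredentialsMessage. */
static GSocketControlMessage *
timestamp_message_deserialize (gint level, gint type, gsize size,
    gpointer data)
{
  if (level != SOL_SOCKET || type != SCM_TIMESTAMPNS)
    return NULL;

  /* ... parse the struct timespec payload and build the message ... */
  return NULL;
}
```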
Fixes #1736
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3777>
AVC-Intra is a range of H.264-compliant intra-only codecs from
Panasonic. The codes and descriptions have been taken from VLC.
The (encumbered) sample I have here produces byte-stream H.264,
including SPS and PPS and no `avcC` box.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3739>
When calculating the presentation offset for CMAF input in live
playback, subtract the stream_time of the fragment from the
calculated presentation offset, so that the first fragment
is played at running time zero.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3680>
This is recommended by various specifications for such framerates, while
for integer framerates we continue using centiframes to allow for some
more accuracy.
Using N as the timescale means that no rounding error accumulates which would
eventually lead to outputting a packet with a different duration.
Some tools such as MediaInfo determine that a stream is variable
framerate if any packet has a different duration than the others, and
there is no reason I can see for not using the full 4 bytes of
resolution that the mp4 timescale offers.
Example problematic pipeline:
```
videotestsrc num-buffers=5001 ! video/x-raw,framerate=60000/1001,width=320,height=240 ! \
videoconvert ! x264enc bitrate=80000 speed-preset=1 tune=zerolatency ! h264parse ! \
video/x-h264,profile=high-10 ! mp4mux ! filesink location="result2.mp4"
```
This results in a media file that MediaInfo detects as variable
framerate because the 5000th packet has duration 99 instead of 100.
With this patch, the timescale is 60000 and all packets have duration
1001.
Related issue for context: https://bugzilla.gnome.org/show_bug.cgi?id=769041
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3049>
This reverts the decision from
https://bugzilla.gnome.org/show_bug.cgi?id=754230
where it was decided that we'd rather play it safe and only use the `tfdt` if
it is "significantly different" from the sum of sample durations.
As the specification says
    If the time expressed in the track fragment decode time (‘tfdt’) box
    exceeds the sum of the durations of the samples in the preceding
    movie and movie fragments, then the duration of the last sample
    preceding this track fragment is extended such that the sum now
    equals the time given in this box.
we have to use the `tfdt` in general to allow for it to signal gaps in
the stream.
A muxer producing fragments might not yet know the full duration of the
last sample of a previous fragment if the next fragment starts with a
gap, and knowing the actual start of the next fragment would potentially
require violating latency requirements.
Additionally, the existence of `tfdt` makes it possible to avoid accumulating
rounding errors from summing up the durations.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3586>
The rtpjitterbuffer test drop_messages_interval uses a GstClockTime for
the message drop interval. This property is defined as a guint. On
systems with 64-bit time_t but 32-bit uint, this can cause the
g_object_set function to fail to read the arguments properly.
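A sketch of the pitfall, assuming the property is the guint
"drop-messages-interval" used by the test:
```
#include <gst/gst.h>

/* The value passed through g_object_set()'s varargs must match the
 * property's declared type (guint). Passing a 64-bit GstClockTime makes
 * the varargs be read incorrectly where guint is 32-bit. */
static void
set_drop_interval (GstElement * jitterbuffer)
{
  guint interval = 10;

  g_object_set (jitterbuffer, "drop-messages-interval", interval, NULL);
}
```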
Fixes: #1656
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3580>
If we keep the old events, they can end up being passed to the app, which could
discard the protection information because it has been seen before.
Drive-by improvement: use g_queue_clear_full instead of foreach+clear for
protection events.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3547>
jpegdec is capable of parsing input frames, but if jpegparse is upstream,
there's no need to reparse them. This patch configures jpegdec as
packetized, skipping parsing, if the negotiated sink caps have the boolean
field 'parsed' set to true.
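A sketch of the mechanism using the GstVideoDecoder API (simplified, not the
exact jpegdec code):
```
#include <gst/video/gstvideodecoder.h>

/* Treat the input as already parsed when upstream caps say so, skipping
 * jpegdec's own frame parsing. */
static void
maybe_set_packetized (GstVideoDecoder * decoder, GstCaps * sinkcaps)
{
  GstStructure *s = gst_caps_get_structure (sinkcaps, 0);
  gboolean parsed = FALSE;

  if (gst_structure_get_boolean (s, "parsed", &parsed) && parsed)
    gst_video_decoder_set_packetized (decoder, TRUE);
}
```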
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2464>
According to a comment in gst_pulsering_stream_latency_cb, latency updates
happen every 100 ms. The code in gst_pulsering_stream_latency_cb does
not care about the actual state of the ringbuffer, pbuf->acquired or
not.
Thus, every 100 ms the ringbuf->segdone may be set, even though the
object itself might be in the 'destroyed' state. On the next
gst_pulseringbuffer_acquire, the segdone is not touched, so playback may
resume with an invalid/wrong segdone value. This finally leads to a period
of silence playing after resuming the pipeline.
The problem was found on 'not-yet-released' hardware and so far was not
reproducible on a desktop computer.
Removing the callback while the ringbuffer is not in the 'acquired'
state solves the problem reliably on the hardware device that the issue
was detected on.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3082>
The stream selection is done on the currently outputting tracks, but
(de)activating the backing streams can only be done if the input and output
periods are identical.
Fixes a crash when doing stream selection during period migration.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3525>
This regression was introduced by the fix for making the buffer pool thread
safe. When we renegotiate, the pool will be set up after we set the format. But
the code had been simplified to only get the pool once beforehand, which caused
a null pointer deref.
Fixes 94ba019 ("v4l2: Fix SIGSEGV on 'change state' during 'format change'")
Related to !3481
Fixes #1626
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3513>
Currently in rtp_session_send_rtp(), the existing ntp-64 RTP header
extension timestamp is updated with the actual NTP time before sending
the packet. However, there are some circumstances where we would like
to preserve the original timestamp obtained from reference timestamp
buffer metadata.
This commit provides the ability to configure whether or not to update
the ntp-64 header extension timestamp with the actual NTP time via the
update-ntp64-header-ext boolean property. The property is also exposed
via rtpbin. The default property value of TRUE preserves the existing
behavior (update ntp-64 header ext with actual NTP time).
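A short usage sketch:
```
#include <gst/gst.h>

/* Keep the NTP time taken from the reference timestamp buffer meta
 * instead of letting it be rewritten with the actual NTP time at send
 * time. */
static void
preserve_reference_ntp64 (GstElement * rtpbin)
{
  g_object_set (rtpbin, "update-ntp64-header-ext", FALSE, NULL);
}
```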
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1580
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3451>
- Based heavily on the existing Qt5 integration, however:
  - The sharing of OpenGL resources is slightly different
  - The integration with the scenegraph is a bit different
- Wayland, XCB and KMS have been smoke tested. Android, macOS/iOS, and
  Windows may or may not work.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3281>