Timestamps are untouched by default, but the new mode can now be enabled to replace RTP timestamps
with ones generated from the buffer PTS. It is implemented as an enum in case different modes are needed in the future.
That allows an rtpjitterbuffer to do proper drift compensation, so that the stream coming out of gst-rtsp-server
does not drift relative to the pipeline clock, nor relative to the RTCP NTP times.
Most of the code is borrowed from rtpbasepayload, as it's exactly that behaviour which I wanted to bring here.
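As a rough sketch of the kind of mapping this implies (illustrative only; the real rtpbasepayload code handles the timestamp base and clock-rate in more detail):
```
#include <gst/gst.h>

/* Sketch: derive an RTP timestamp from a buffer PTS by scaling to the
 * payload clock-rate and adding a (random) timestamp base. */
static guint32
rtp_time_from_pts (GstClockTime pts, guint clock_rate, guint32 ts_base)
{
  return ts_base +
      (guint32) gst_util_uint64_scale (pts, clock_rate, GST_SECOND);
}
```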
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7526>
Valgrind complains about uninitialized memory used in an ioctl
Syscall param ioctl(VKI_V4L2_G_TUNER).reserved points to uninitialised byte(s)
at 0x719294F: ioctl (ioctl.c:36)
by 0x3126A817: gst_v4l2_fill_lists (v4l2_calls.c:185)
by 0x3126A817: gst_v4l2_open (v4l2_calls.c:589)
by 0x3123F1C2: gst_v4l2_device_provider_probe_device (gstv4l2deviceprovider.c:122)
by 0x3123F648: gst_v4l2_device_provider_device_from_udev (gstv4l2deviceprovider.c:301)
by 0x3123F998: provider_thread (gstv4l2deviceprovider.c:395)
by 0x796FA50: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.7200.4)
by 0x710CAC2: start_thread (pthread_create.c:442)
by 0x719DA03: clone (clone.S:100)
Address 0x44008a34 is on thread 11's stack
in frame #1, created by gst_v4l2_open (v4l2_calls.c:524)
Uninitialised value was created by a stack allocation
at 0x3126A024: gst_v4l2_open (v4l2_calls.c:524)
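A minimal sketch of the kind of fix this calls for (not the actual patch): zero-initialise the struct, including its reserved[] array, before handing it to the ioctl.
```
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch only: clear the whole v4l2_tuner struct so no uninitialised
 * stack bytes (e.g. the reserved[] array) are passed to the kernel. */
static int
query_tuner (int fd, unsigned int index, struct v4l2_tuner *out)
{
  struct v4l2_tuner tuner;

  memset (&tuner, 0, sizeof (tuner));
  tuner.index = index;
  if (ioctl (fd, VIDIOC_G_TUNER, &tuner) < 0)
    return -1;
  *out = tuner;
  return 0;
}
```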
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6144>
This is not truly supported in V4L2, but we can emulate it similarly to
what other elements do. In this patch we ensure that 0/1 is supported by
encoders (caps query), and use a default of 30fps whenever we need to
set a framerate in the driver.
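A rough sketch of that fallback (assumed shape, not the exact v4l2object code):
```
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch: timeperframe is the inverse of the framerate, so a 30fps
 * default is 1/30. A 0/1 caps framerate means "variable/unknown". */
static int
set_default_framerate (int fd)
{
  struct v4l2_streamparm parm;

  memset (&parm, 0, sizeof (parm));
  parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
  parm.parm.output.timeperframe.numerator = 1;
  parm.parm.output.timeperframe.denominator = 30;
  return ioctl (fd, VIDIOC_S_PARM, &parm);
}
```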
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7352>
It has to be included in the block duration but in GStreamer we're not
including it in the buffer duration, so it has to be added again here.
Not including it in the block duration can lead to fatal errors when playing
back with Firefox if there are more padding samples than actual samples, e.g.
> D/MediaDemuxer WebMDemuxer[7f6a0808b900] ::GetNextPacket: Padding frames larger
> than packet size, flagging the packet for error (padding: {13500000,1000000000},
> duration: {6000,1000000}, already processed: false)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7502>
By setting the earliest time to timestamp + 2 * diff, there would be a difference
of 1 * diff between the current clock time and the earliest time the element
would let through in the future. If e.g. a frame arrives 30s late at the
sink, then not only would all frames up to that point be dropped, but also 30s of
frames after the current clock time.
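A worked sketch of the difference (not the element's actual code), assuming `now` is the current clock time:
```
#include <gst/gst.h>

/* Sketch: with a frame arriving 30s late, diff = now - timestamp = 30s.
 *   earliest = timestamp + 2 * diff  ->  now + 30s (also drops 30s of
 *                                        frames after the clock time)
 *   earliest = timestamp + diff      ->  now (drops only what is late) */
static GstClockTime
compute_earliest_time (GstClockTime timestamp, GstClockTime now)
{
  GstClockTimeDiff diff = GST_CLOCK_DIFF (timestamp, now);

  if (diff <= 0)
    return GST_CLOCK_TIME_NONE;   /* not late, nothing to drop */

  return timestamp + diff;
}
```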
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7459>
splitmuxsink can't possibly know how much latency it will introduce as it always
keeps one GOP around before outputting something. This breaks the latency
configuration of the pipeline and we're better off just pretending that
everything downstream of the sinkpads is not live.
Especially muxers that are based on aggregator and time out on the latency
deadline can easily misbehave otherwise, as the deadline will usually be exceeded.
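As a sketch of what treating downstream as not-live amounts to (assumed handling, not necessarily the exact splitmuxsink code), the latency query would be answered along these lines:
```
#include <gst/gst.h>

/* Sketch: report non-live with no minimum latency so downstream
 * deadlines don't depend on splitmuxsink's per-GOP buffering. */
static gboolean
answer_latency_query (GstQuery * query)
{
  gst_query_set_latency (query, FALSE, 0, GST_CLOCK_TIME_NONE);
  return TRUE;
}
```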
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7499>
When no ports are given, gst_jack_get_ports() is called to get all the
(physical) output ports but then the result is ignored, triggering the
"No physical output ports found..." error.
Instead, move the queried ports to the variable we're going to use
later.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7474>
If two (or more) rtpfunnel elements are cascaded, then only one will
realistically have information on the particular ssrc that is in use for a
particular input stream. As such, any key unit requests may never reach the
corresponding encoder.
This has been discovered by combining simulcast and BUNDLE with webrtcbin.
Simulcast uses one rtpfunnel, and BUNDLE uses another rtpfunnel.
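A sketch of the kind of fan-out this implies (hypothetical helper, not the actual rtpfunnel code): if the SSRC of a key unit request is unknown, forward the upstream event to every sink pad instead of dropping it.
```
#include <gst/gst.h>

static gboolean
push_event_on_sinkpad (GstElement * element, GstPad * pad, gpointer user_data)
{
  gst_pad_push_event (pad, gst_event_ref (GST_EVENT (user_data)));
  return TRUE;                  /* keep iterating over all sink pads */
}

/* Sketch: fan an upstream event (e.g. a force-key-unit request with an
 * unknown SSRC) out to every sink pad so it can traverse cascaded
 * rtpfunnel elements and reach the encoder. */
static void
forward_to_all_sinkpads (GstElement * funnel, GstEvent * event)
{
  gst_element_foreach_sink_pad (funnel, push_event_on_sinkpad, event);
  gst_event_unref (event);
}
```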
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7405>
```
In file included from ../subprojects/gst-plugins-good/ext/qt6/gstqsg6material.cc:31:
../subprojects/gst-plugins-good/ext/qt6/gstqsg6material.h:69:17: error: private
field 'mem_' is not used [-Werror,-Wunused-private-field]
69 | GstMemory * mem_;
| ^
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7414>
This patch addresses the issue where GStreamer would throw an error when
attempting to use bt2100-hlg colorimetry with V4L2, which is not
supported by the current V4L2 kernel. When bt2100-hlg colorimetry is set
from caps, the check for transfer (GST_VIDEO_TRANSFER_ARIB_STD_B67) is
bypassed.
The main improvement is to avoid checking the transfer value in
gst_v4l2_video_colorimetry_matches when it is
GST_VIDEO_TRANSFER_ARIB_STD_B67. This is because the transfer value in
the cinfo parameter comes from gst_v4l2_object_get_colorspace, which
converts the transfer to another value, causing a mismatch.
Since the kernel does not support GST_VIDEO_TRANSFER_ARIB_STD_B67,
gst_v4l2_object_get_colorspace cannot map it correctly from V4L2 to
GStreamer. Therefore, we ignore this check to prevent errors.
Changes:
- Added a condition in gst_v4l2_video_colorimetry_matches to bypass the
transfer check when the transfer is GST_VIDEO_TRANSFER_ARIB_STD_B67.
- Ensured that the pipeline does not throw errors due to unsupported
bt2100-hlg colorimetry in V4L2.
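A minimal sketch of the bypass described above (assumed shape; the real gst_v4l2_video_colorimetry_matches handles more cases):
```
#include <gst/video/video.h>

/* Sketch: ignore the transfer function when caps request ARIB STD-B67
 * (HLG), since the V4L2 <-> GStreamer mapping cannot round-trip it. */
static gboolean
colorimetry_matches (const GstVideoColorimetry * cinfo,
    const GstVideoColorimetry * from_caps)
{
  if (from_caps->transfer == GST_VIDEO_TRANSFER_ARIB_STD_B67)
    return cinfo->range == from_caps->range &&
        cinfo->matrix == from_caps->matrix &&
        cinfo->primaries == from_caps->primaries;

  return gst_video_colorimetry_is_equal (cinfo, from_caps);
}
```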
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7212>
With GLES 2.0 we are forced to use CopyTexImage2D, which requires
passing an internal format. With Qt6 eglfs, we need to pass GL_RGB
instead, probably because of how the texture has been created. As it's
hard to guess, simply fall back to GL_RGB on failure. This fixes usage
of qml6glsrc with the eglfs backend, without losing support for
semi-transparent windows on other platforms.
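A sketch of that fallback (parameters are illustrative; the real code goes through the GstGL function table):
```
#include <GLES2/gl2.h>

/* Sketch: try the RGBA internal format first and, if the GLES
 * implementation rejects it (as seen with Qt6 eglfs), retry with GL_RGB
 * at the cost of the alpha channel. */
static void
copy_to_texture (GLsizei width, GLsizei height)
{
  glCopyTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
  if (glGetError () != GL_NO_ERROR)
    glCopyTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, 0, 0, width, height, 0);
}
```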
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7321>
In order to use oes-external, the qml6glsink needs a fragment shader that uses
the samplerExternalOES.
The qsb tool is not able to handle shaders that contain samplerExternalOES, since
this feature is not supported by all target shading languages. However, the qsb
tool can replace a shader in the qsb file, which handles this use case. Use it to
generate a shader variant that uses samplerExternalOES for OpenGL ES, and select
that variant if qml6glsink negotiated the oes-external texture target.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7319>
Parts may emit bus messages that want to take the splitmuxsrc
lock and prevent the downward state change. Avoid a deadlock
after a part sends an error message by taking a ref and
dropping the lock around the unprepare call.
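The pattern, sketched generically (the lock and helper names are hypothetical stand-ins for splitmuxsrc's internals):
```
#include <gst/gst.h>

/* Sketch: keep the part alive with a ref, drop the lock so a bus message
 * posted from inside unprepare can take it, then re-take the lock. */
static void
unprepare_part_unlocked (GMutex * splitmux_lock, GstElement * part,
    void (*unprepare) (GstElement *))
{
  gst_object_ref (part);
  g_mutex_unlock (splitmux_lock);
  unprepare (part);
  g_mutex_lock (splitmux_lock);
  gst_object_unref (part);
}
```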
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Publish fragment-id in the messages that splitmuxsink and splitmuxsrc
send, so when they are received out of order (due to async finalization,
for example), they can still be identified / ordered correctly.
Fix a race in the splitmuxsink unit test where messages might be
received out of order.
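For example, an application or test could read the new field to match messages back up (the field's exact type is an assumption here, for illustration only):
```
#include <gst/gst.h>

/* Sketch: read the fragment-id from a splitmux bus message so messages
 * received out of order can still be identified and ordered. */
static gboolean
get_fragment_id (GstMessage * msg, guint * fragment_id)
{
  const GstStructure *s = gst_message_get_structure (msg);

  if (s == NULL || !gst_structure_has_field (s, "fragment-id"))
    return FALSE;
  return gst_structure_get_uint (s, "fragment-id", fragment_id);
}
```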
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Add a `num-lookahead` property that will 'prepare' a number of
fragments in advance of the playhead if they have been deactivated
or closed because of the `num-open-fragments` limit. This can help
avoid playback stalls from reading the indexes or headers of the next
file on high-latency media or on resource-limited machines.
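Hypothetical usage sketch (the values are only examples):
```
#include <gst/gst.h>

static void
configure_splitmuxsrc (GstElement * splitmuxsrc)
{
  /* Cap the number of simultaneously open fragments and prepare a
   * couple of fragments ahead of the playhead. Values are illustrative;
   * 0 open fragments means "open everything", as before. */
  g_object_set (splitmuxsrc,
      "num-open-fragments", 8,
      "num-lookahead", 2,
      NULL);
}
```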
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Publish the playback offset and duration in the
splitmuxsink-fragment-closed bus message as each fragment
finishes.
These can be passed to splitmuxsrc via the 'add-fragment'
signal to avoid splitmuxsrc measuring all files on startup.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Add a reasonably large default for the number of simultaneously
open files. That won't affect users that split recordings into
a few large files, but will help prevent fd exhaustion for users
that make recordings with lots of small fragments.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
When calculating the timestamp offset to apply to
media streams in a fragment, ensure that all fragments
are offset "together" to preserve alignment in cases
where there might be gaps in a recording at a fragment boundary.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Add a signal that allows adding fragments with a specific offset
and duration directly to splitmuxsrc's list. By providing the
fragment's offset on the playback timeline and duration directly,
splitmuxsrc doesn't need to measure the fragment, making for faster
startup times.
Add a bus message that's published when fragments are measured,
reporting the offset and duration, so they can be cached by an
application and used on future invocations.
Add examples for handling the bus message and using the 'add-fragment'
signal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
Add a property to limit the number of parts splitmux will open
simultaneously. Modify the part handling to support deactivating
and reactivating the demuxing for each part.
The default is '0', to preserve the existing behaviour of opening
all parts at the beginning.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7053>
If the stream has a special colorimetry that is not in the colorimetry
list, it will cause negotiation to fail. We should allow passing any
colorimetry, so add an extra structure without the colorimetry field.
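A sketch of the described approach (not the exact element code):
```
#include <gst/gst.h>

/* Sketch: for every structure in the caps, append a copy with the
 * colorimetry field removed so exotic colorimetries still negotiate. */
static GstCaps *
append_structures_without_colorimetry (GstCaps * caps)
{
  guint i, n = gst_caps_get_size (caps);

  caps = gst_caps_make_writable (caps);
  for (i = 0; i < n; i++) {
    GstStructure *s = gst_structure_copy (gst_caps_get_structure (caps, i));

    gst_structure_remove_field (s, "colorimetry");
    gst_caps_append_structure (caps, s);
  }
  return caps;
}
```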
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7029>
video-info supports encoded formats having an RGB color-matrix, while
v4l2object just leaves the V4L2 matrix at its default when mapping
GST_VIDEO_COLOR_MATRIX_RGB. This causes the GStreamer matrix to end up as
GST_VIDEO_COLOR_MATRIX_BT601 when mapping the V4L2 colorimetry back.
So add support for encoded formats with an RGB color-matrix in v4l2object.
Note that for M2M encoders we should in theory be able to transfer
this value from the OUTPUT to the CAPTURE queue, though that's only true
if the driver does not do CSC. For now we don't support any RGB
codecs, but leave a note for the future.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3952>
The V4L2_MAP_QUANTIZATION macro has been fixed to something a lot saner;
fix our replica accordingly. The new macro simply sets the quantization
to full range if the pixel format is RGB-based, or if the JPEG
colorspace is used.
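For reference, the fixed kernel macro is roughly the following (check videodev2.h for the authoritative definition):
```
/* Full range for RGB/HSV pixel formats or the JPEG colorspace,
 * limited range otherwise. */
#define V4L2_MAP_QUANTIZATION_DEFAULT(is_rgb_or_hsv, colsp, ycbcr_enc) \
  (((is_rgb_or_hsv) || (colsp) == V4L2_COLORSPACE_JPEG) ? \
   V4L2_QUANTIZATION_FULL_RANGE : V4L2_QUANTIZATION_LIM_RANGE)
```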
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3952>
Not doing so would mean that tags would be overridden by any tag events sent by
upstream. Also, only send a tag event directly if upstream never sent one.
By default use GST_TAG_MERGE_REPLACE to override tags that exist in both the
upstream event and this element with the ones from this element, but provide a
new "merge-mode" property to adjust the behaviour.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7145>
- Align `glib_debug`, `glib_assert` and `glib_checks` options with GLib,
otherwise the glib subproject won't inherit their values. Previous names
and values are preserved using Meson's deprecation mechanism.
- Add `extra-checks` and `benchmarks` options in the main project so they
can be inherited by GStreamer subprojects.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1165>
Specify "layout" field in src template to make sure it's
set and gets fixated properly if the downstream element
supports both interleaved and non-interleaved caps.
Fixes
  gst_pad_set_caps: assertion 'caps != NULL && gst_caps_is_fixed (caps)' failed
critical with e.g.
  gst-launch-1.0 rtpdtmfsrc ! rtpdtmfdepay ! audioconvert ! fakesink
Not that the layout really matters in our case since we always
output mono anyway, but non-interleaved requires adding AudioMeta,
so this is the easiest fix.
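A sketch of the kind of src template this implies (format and rate values are illustrative, not the element's exact caps):
```
#include <gst/gst.h>

/* Sketch: pin the layout (and keep mono) so fixation against a
 * downstream element that supports both layouts yields fixed caps. */
static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src",
    GST_PAD_SRC, GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("audio/x-raw, format = (string) S16LE, "
        "rate = (int) [ 1, MAX ], channels = (int) 1, "
        "layout = (string) interleaved"));
```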
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7036>
Upon fatal errors the loop function will first post an error message
and then push out an EOS event.
An application may react immediately to the error message by setting the
state of the pipeline to NULL, meaning by the time we push out the EOS
event PAUSED_TO_READY may have reset the seek seqnum to -1.
While this is harmless, the assertion when setting an invalid seqnum
isn't tidy; fix this by simply not resetting to INVALID, as it serves no
practical purpose and the next READY_TO_PAUSED will select a new seqnum
anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7032>
Certain V4L2 drivers can report that a video receiver is seeing
some signal, but that it is unable to synchronize to it. IOW: the driver
can sometimes report V4L2_IN_ST_NO_SYNC and not report V4L2_IN_ST_NO_SIGNAL.
In particular, I've seen the tc358743 (HDMI-to-CSI2 converter) driver
sometimes report this when deployed to a fleet of embedded Raspberry Pis.
The relevant kernel code is in [1]. The video output is not practically
usable when V4L2_IN_ST_NO_SYNC is reported (only visually corrupted frames,
sometimes with random "snow", are received). I assume that this happens when
either the HDMI cable is poorly plugged in or damaged or when a CSI2 FFC
cable is used and is damaged.
The change in this commit is useful for detecting this working-but-not-really
condition in application code. Applications already listening for the "Signal lost"
message will gain the ability to handle this condition.
There seem to be more V4L2 error flags like this, see [2]. However, I do not
have practical experience with them and adding only V4L2_IN_ST_NO_SYNC seems
like a safer option.
[1]: https://github.com/raspberrypi/linux/blob/be8498ee21aa/drivers/media/i2c/tc358743.c#L1534
[2]: https://www.kernel.org/doc/html/v6.6/userspace-api/media/v4l/vidioc-enuminput.html
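For reference, a sketch of how such a status check can look against the V4L2 API (assumed shape, not the exact element code):
```
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch: treat "no sync" like "no signal", since frames captured while
 * the receiver cannot synchronize are unusable anyway. */
static int
input_has_usable_signal (int fd, unsigned int index)
{
  struct v4l2_input input;

  memset (&input, 0, sizeof (input));
  input.index = index;
  if (ioctl (fd, VIDIOC_ENUMINPUT, &input) < 0)
    return 0;
  return (input.status & (V4L2_IN_ST_NO_SIGNAL | V4L2_IN_ST_NO_SYNC)) == 0;
}
```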
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7021>
The initial goal was to support the case where we are paused watching a live
stream, and when we resume we can no longer resume from the previously
downloaded position. In that case we internally do a flushing seek back to the
"current live head position". This was also extended since to be able to
handle (utterly broken) servers when we can't really figure out where we are
anymore and therefore trigger that lost sync so we can try to get back on our
feet.
This does fix the issue... but results in spurious FLUSH_{START|STOP} events
being sent downstream. While that's fine for regular playback, it's a bit of a
wild situation since a lot of pipelines/applications don't expect such events
when they weren't triggered by downstream or by the application.
Fixes #3605
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7005>
This attempts to implement the gtkglsink element on Windows using WGL.
There were some more gotchas along the way, since we need to juggle with
libepoxy, meaning that we need a recent GTK+ 3.24.x, i.e. the upcoming
GTK+ 3.24.43, for this to work properly.
Since we are essentially using an overlay compositor only during
rendering, move its initialization and destruction into the
gtk_gst_gl_widget_render() function, so that things are safer as we are
doing things across threads between GStreamer (gst-gl) and GTK, since GL
operations, as noted above, have more gotchas on Windows.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4289>
Some servers might not provide 100% matching PDT when doing updates, or across
variants. This would cause the code matching segments using PDT to fail if the
segment PDT was 1 microsecond (or whatever small value) before the candidate
segment, and would pick the (wrong) following segment as the matching one.
In order to be more tolerant when matching, we instead check whether the
candidate segment is within the first segment of the segment we are trying to
match.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
If we end up with a segment with an internal time that varies from the supposed
one, this could be for two reasons:
* We guess-timated the wrong segment to go to when advancing or switching
variants. In that case we try to find the actual segment to go to (just before
this change).
* There was a complete playlist change (for whatever reason) and we can't find a
replacement. In that case we want to carry on playback from this position but
need to remember that we moved (by setting the stream to DISCONT, and
resetting the new mapping).
Fixes playback on several broken streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
Since the default value of `m3u8->discont_sequence` (before parsing of the
playlist data) was 0, we would never properly detect the presence of that
field if it was present with a value of 0.
This would later on cause havoc in playlist synchronization where we would
assume it didn't have a discontinuity sequence specified (whereas it did, and it
was 0).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
A lot of streams will do a poor job of estimating the proper duration of fragments
in the playlist, but have it correct over several fragments.
Instead of constantly trying to realign the estimated stream time, allow for a
more realistic tolerance of 3-4 video frames.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>
When updating playlists, we want to know whether the updated playlist is
continuous with the previous one. That is: if we advance, will the next
fragment need to have the DISCONT flag set on its buffer or not.
If that happens (because we switched variants, or the playlist all of a sudden
changed), we remember that there is a pending discont for the next fragment. That
will be used and reset the next time we get the fragment information.
Previously this was only partially done, and it was racy because it was set
directly on `GstAdaptiveDemux2Stream->discont` when a playlist was updated,
instead of when the next fragment was prepared.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6610>