It is more appropriate to start closer to the live edge in
live streams. Some live streams maintain a large DVR window
(over a few hours in some cases), so starting from the first
fragment would be too far from the live edge.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1346>
The opencv plugin pulls in a header which makes clang++ 10
complain a lot and blocks -Werror.
```
/usr/include/opencv4/opencv2/flann/logger.h:83:36: error: format string is not a string literal [-Werror,-Wformat-nonliteral]
int ret = vfprintf(stream, fmt, arglist);
^~~
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1494>
Call gst_aggregator_selected_samples() after identifying the
caption buffers that will be added as a meta on the next video
buffer.
Implement GstAggregator.peek_next_sample.
Add an example that demonstrates usage of the new API in
combination with the existing buffer-consumed signal.
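A rough sketch of how an application could combine the two (the callback
wiring and variable names here are illustrative, not taken from the example
added in this commit):
```
#include <gst/gst.h>
#include <gst/base/gstaggregator.h>

/* "samples-selected" callback: at this point the element has decided which
 * caption buffer will be attached to the next video buffer, so peeking the
 * caption pad returns the sample about to be consumed. */
static void
samples_selected_cb (GstAggregator * agg, GstSegment * segment,
    GstClockTime pts, GstClockTime dts, GstClockTime duration,
    GstStructure * info, gpointer user_data)
{
  GstAggregatorPad *caption_pad = GST_AGGREGATOR_PAD (user_data);
  GstSample *sample = gst_aggregator_peek_next_sample (agg, caption_pad);

  if (sample) {
    /* ... inspect the pending caption buffer ... */
    gst_sample_unref (sample);
  }
}

/* Existing "buffer-consumed" pad signal, emitted once the caption buffer
 * has actually been taken from the pad. */
static void
buffer_consumed_cb (GstAggregatorPad * pad, GstBuffer * buffer,
    gpointer user_data)
{
  /* ... */
}
```
The callbacks would be connected with g_signal_connect() on the combiner
element and its caption sink pad respectively.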
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1390>
Receiving the WEBKIT_LOAD_COMMITTED event doesn't actually
mean we have committed an SHM buffer / image yet.
As this is the condition we are interested in, check it
instead.
Also wrap g_cond_wait in a loop for extra correctness points.
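The usual GLib pattern, with illustrative field names:
```
/* Wait until an SHM buffer has actually been committed; re-checking the
 * predicate in a loop guards against spurious wakeups. */
g_mutex_lock (&self->frame_lock);
while (!self->shm_buffer_committed)
  g_cond_wait (&self->frame_cond, &self->frame_lock);
g_mutex_unlock (&self->frame_lock);
```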
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1476>
"waylandsink: use GstMemory instead of GstBuffer for cache lookup"
changes the cache key to GstMemory, but the cached data still needs
a pointer to the GstBuffer to control the buffer lifecycle.
If the GstMemory used as the cache key moves from one GstBuffer to
another, the pointer in the cached data becomes stale.
Update the current GstBuffer pointer for each frame so that it always
points to the GstBuffer currently in use (from attach to release)
for each wl_buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1473>
The GstMemory objects contained in a GstBuffer could be replaced
by an upstream element, which would break the association between
the GstBuffer and the wayland wl_buffer, making the cache lookup
results incorrect.
This patch changes the cache lookup to use the first GstMemory
in a buffer instead. For multi-plane buffers, this assumes that
all of the GstMemory(s) will always be moved together as a set,
and that the same (first) GstMemory isn't used with different
combinations of other GstMemory(s).
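The lookup then boils down to something like this (the cache structure and
helper names are illustrative, not the actual waylandsink code):
```
/* Key the per-display cache on the first GstMemory of the buffer. */
GstMemory *mem = gst_buffer_peek_memory (buffer, 0);
GstWlBuffer *wlbuffer = g_hash_table_lookup (display->buffer_cache, mem);

if (wlbuffer == NULL) {
  wlbuffer = create_wl_buffer (display, buffer);   /* hypothetical helper */
  g_hash_table_insert (display->buffer_cache, mem, wlbuffer);
}
```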
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1401>
Instead of attaching a single wayland wl_buffer to each GstBuffer as qdata,
keep a separate cache for each display.
A unique wl_buffer and its associated metadata are created for each display.
This allows for sharing of GstBuffer objects between multiple
displays, such as when using tee elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1401>
A pipeline like this:
closedcaption/x-cea-708,format=cdp,framerate=30000/1001 ! ccconverter ! closedcaption/x-cea-708,format=cc_data
would produce a critical/assert:
GStreamer-CRITICAL **: 14:21:11.509: gst_util_fraction_multiply: assertion 'a_d != 0' failed
because there would be no framerate field on ccconverter's output.
Fixed by always fixating a framerate if the input has a framerate.
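Sketch of the idea (not the literal patch):
```
/* If the input caps carry a framerate, make sure the output caps end up
 * with a concrete framerate too, so downstream never sees a 0 denominator. */
gint fps_n, fps_d;
GstStructure *in_s = gst_caps_get_structure (incaps, 0);
GstStructure *out_s = gst_caps_get_structure (outcaps, 0);

if (gst_structure_get_fraction (in_s, "framerate", &fps_n, &fps_d)) {
  if (gst_structure_has_field (out_s, "framerate"))
    gst_structure_fixate_field_nearest_fraction (out_s, "framerate",
        fps_n, fps_d);
  else
    gst_structure_set (out_s, "framerate", GST_TYPE_FRACTION, fps_n, fps_d,
        NULL);
}
```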
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1393>
Unclear why hotdoc wants 'gstavtp' as the plugin name here;
that's just wrong.
Add since marker and mark private subclasses as plugin API
so hotdoc knows they belong to the plugin and aren't external.
Fix GstAvtpAafTstampMode get_type() function.
The keys_exported flag should be set only if keys are actually exported.
For that, the following conditions must all hold:
1 - SSL_export_keying_material() succeeds
2 - SSL_get_selected_srtp_profile() returns a valid profile
3 - The profile ID is SRTP_AES128_CM_SHA1_80 or SRTP_AES128_CM_SHA1_32
Also don't crash if NULL is returned as the profile.
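Roughly, the checks line up as follows (OpenSSL calls as documented; the
surrounding function and variable names are illustrative):
```
#include <glib.h>
#include <openssl/ssl.h>
#include <openssl/srtp.h>

/* Only report the keys as exported when all three conditions hold. */
static gboolean
export_srtp_keys (SSL * ssl, guint8 * material, gsize material_len)
{
  SRTP_PROTECTION_PROFILE *profile;

  if (SSL_export_keying_material (ssl, material, material_len,
          "EXTRACTOR-dtls_srtp", 19, NULL, 0, 0) != 1)
    return FALSE;               /* 1: the export must succeed */

  profile = SSL_get_selected_srtp_profile (ssl);
  if (profile == NULL)
    return FALSE;               /* 2: a profile must be negotiated (no crash on NULL) */

  switch (profile->id) {
    case SRTP_AES128_CM_SHA1_80:
    case SRTP_AES128_CM_SHA1_32:
      return TRUE;              /* 3: only these profile IDs are supported */
    default:
      return FALSE;
  }
}
```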
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1156>
SURROUND is closer to spec according to the FIXME comments, so add it.
Also add SIDE for 5.0 and 5.1 for ffmpeg compatibility, because the
following pipeline downmixes to mono otherwise:
gst-launch-1.0 audiotestsrc num-buffers=1 ! audio/x-raw, channels=6 !
avenc_ac3 ! avdec_ac3 ! audioconvert ! fdkaacenc ! fakesink -v
Fixes #1327
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1352>
Unfortunately this means those tune enums don't show up in
the docs then, but if that's how it's gotta be...
(The problem at hand is that on Tim's machine x265enc gets a
tune=animation and on the CI machine this doesn't show up.)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1354>
The unit tests only checked for vulkan_dep.found(), which can
be true if the libs are there but glslc was not found, in which
case the plugin wouldn't be built and the unit tests would fail
because of missing vulkan plugins.
Doesn't really make much sense to build the vulkan integration lib
either if we're not going to build the vulkan plugin, so just disable
both for now if glslc is not available.
Fixes #1301
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1307>
This is needed for cross-compiling without a build machine compiler
available. The option was added in Meson 0.54, but we only need this in
Cerbero, and it doesn't affect older versions, so it should be OK;
it will only cause a spurious warning there.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1266>
Instead of storing the raw cc_data, store the 2 cea608 fields individually
as well as the ccp data.
Simply copying the input cc_data to the output cc_data violates a number of
requirements in the cea708 specification, the most prominent being that
cea608 triples must be placed at the beginning of each cdp.
We also need to comply with the framerate-dependent limits for both the
cea608 and the ccp data, which may involve splitting or merging some
cea608 data but not ccp data, or vice versa.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1116>
Add some properties to allow TCP and UDP candidates to be toggled. This
is useful when the element is used in an environment where it is known
in advance whether a given transport will work or not, and prevents
wasting time generating and checking candidate pairs that will not
succeed.
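If the toggles end up as plain boolean element properties, application usage
would look roughly like this (the property names below are an assumption,
not confirmed by this commit):
```
/* Hypothetical usage: disable TCP candidates when the deployment is known
 * to be UDP-only. Check gst-inspect-1.0 for the actual property names. */
g_object_set (webrtcbin,
    "ice-tcp", FALSE,   /* skip generating/checking TCP candidate pairs */
    "ice-udp", TRUE,    /* keep UDP candidates enabled */
    NULL);
```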
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1223>
When negotiating the SDP we should only connect the streams that are
actually mentioned in the SDP. All other streams are not relevant at
this time and would likely be part of a future SDP update. Fixes a
couple of the renegotiation webrtc unit tests.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1240>
Properly calculate the running time for buffers that fall outside the
current segment and try to honor them.
A typical case is AVTP packets coming from the avtpcvfpay element, as
those may have a DTS that falls outside the segment (which is based on PTS).
By using gst_segment_to_running_time_full(), avtpsink can properly
calculate when to transmit those buffers.
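Unlike the plain variant, the _full() version also handles positions before
the segment start by returning a sign instead of failing. Sketch:
```
/* Compute a running time even for a DTS that lies before the segment start.
 * gst_segment_to_running_time_full() returns the sign of the result:
 * 1 = on/after segment start, -1 = before it, 0 = not convertible. */
guint64 rt;
GstClockTimeDiff running_time;
gint sign = gst_segment_to_running_time_full (segment, GST_FORMAT_TIME,
    GST_BUFFER_DTS (buffer), &rt);

if (sign == 0)
  return GST_FLOW_ERROR;    /* nothing sensible to schedule */

running_time = (sign > 0) ? (GstClockTimeDiff) rt : -(GstClockTimeDiff) rt;
/* ... transmit the AVTPDU at base_time + running_time ... */
```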
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1004>
Seek events will cause new segments to be sent to avtpcvfpay, and for
flushing seeks, a pipeline running time reset. This running time
reset, which effectively changes pipeline base time, will cause
avtpcvfpay element to generate incorrect DTS for the initial set of
buffers sent after FLUSH_STOP.
This happens due to the fact that the base time change only happens when the
sink gets the first buffer after the FLUSH_STOP - so avtpcvfpay used
the wrong base time for its calculations.
However, if the pipeline is paused before the seek, the sink will update
the base time when the pipeline state goes to PLAYING again, before avtpcvfpay
gets the first buffers after the flush. The avtpcvfpay element will then be
able to calculate DTS normally for the outgoing packets.
This patch simply adds a warning message in case a flushing seek is
performed on a playing pipeline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1004>
TSN streams are expected to send packets to the network in a well
defined "pace", which is arbitrarily defined for each stream. This pace
is defined by the "measurement interval" property of a stream.
When the AVTP CVF payloader element - avtpcvfpay - fragments a video
frame that is too big to be sent to the network, it currently defines
that all fragments should be transmitted at the same time (via the DTS
of the generated GstBuffers, as the sink uses those to time the
transmission of the AVTPDUs). This doesn't comply with the stream definition,
which also limits how many packets can be sent in a given
measurement interval.
This patch solves that by spreading the DTS of the GstBuffers containing
the AVTPDUs in time. Two new properties, "measurement-interval" and
"max-interval-frames", are added to the avtpcvfpay element so that it knows
the stream measurement interval and how many AVTPDUs it can send in any of
them. More details on the method used to properly spread DTS/PTS according
to the measurement interval can be found in a code comment inside this patch.
Tests are also added for the new properties and behaviour.
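The spreading is essentially the following arithmetic (simplified sketch;
the variable names are not the actual ones from the patch):
```
/* Spread the fragment DTS values so that at most max_interval_frames
 * AVTPDUs land in each measurement interval. base_dts is the DTS the
 * whole frame would otherwise have had. */
guint i;
for (i = 0; i < n_fragments; i++) {
  guint interval = i / max_interval_frames;
  GST_BUFFER_DTS (fragments[i]) = base_dts + interval * measurement_interval;
}
```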
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1004>
If the remote is bundling but we are not, and the remote is offering,
we cannot put the remote media sections into a bundled transport as that
is not how we are going to respond.
The specific failure case here was that the remote ICE credentials were
never set on the ICE stream, so ICE connectivity would fail.
Technically, this whole bundle-policy=none handling should be removed
eventually when we implement bundle-policy=balanced. Until such time,
we have this workaround.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1231>
This commit introduces the AVTP Clock Reference Format (CRF) Checker
element. This element re-uses the GstAvtpCrfBase class introduced along
with the CRF Synchronizer element.
This element will typically be used along with the avtpsrc element to
ensure that the AVTP timestamp (and H264 timestamp in case of CVF-H264
packets) is "aligned" with the incoming CRF stream. Here, "aligned" means
that the timestamp value should be within 25% of the period of the media
clock recovered from the CRF stream.
The user can also set an option (drop-invalid) in order to drop any packet
whose timestamp is not within the thresholds of the incoming CRF stream.
This commit introduces the AVTP Clock Reference Format (CRF) Synchronizer
element. This element implements the AVTP CRF Listener as described in IEEE
1722-2016 Section 10.
CRF is useful for synchronizing events across different systems by
distributing a common clock. A typical scenario is one where
multiple talkers send data to a single listener which
processes that data, e.g. CCTV cameras on a network sending AVTP video
streams to a base station to display on the same screen.
It is assumed that all the systems are already time-synchronized with each
other. So, the AVTP Talker essentially adjusts the AVTP Presentation Time
so it's phase-locked with the reference clock provided by the CRF stream.
There are 2 different roles for systems which participate in CRF data
exchange. A system can either be a CRF Talker, which samples its own
clock and generates a stream of timestamps to transmit over the network, or
a CRF Listener, which receives the generated timestamps and
recovers the media clock from them. It then adjusts its own
clock to align with the recovered media clock. The timestamps generated by the
talker may not be continuous and the listener might have to interpolate
some timestamps to recover the media clock. The number of timestamps to
interpolate is specified in the CRF stream AVTPDU (refer to IEEE 1722-2016
Section 10.4 for the AVTPDU structure). Only the CRF Listener has been
implemented in this commit.
The CRF Sync element will create a separate thread to listen for the CRF
stream. This thread will calculate and store the average period of the
recovered media clock. The pipeline thread will use this stored period
along with the first timestamp of the latest CRF AVTPDU received to
calculate the adjustment for the timestamps in the audio/video streams. In
the case of CRF AVTPDUs with a single timestamp, two consecutive CRF AVTPDUs will be used
to figure out the average period of the recovered media clock.
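Conceptually, the recovered period and the resulting adjustment look like
this (illustrative only; the real element also accounts for the
timestamp_interval field and averages over multiple AVTPDUs):
```
/* Average media clock period from one CRF AVTPDU carrying several
 * timestamps, then snap an AVTP timestamp to the nearest tick of that
 * clock. Variable names are not from the patch. */
GstClockTime avg_period =
    (crf_tstamps[num_tstamps - 1] - crf_tstamps[0]) / (num_tstamps - 1);

GstClockTime offset = avtp_tstamp - crf_tstamps[0];
GstClockTime ticks = (offset + avg_period / 2) / avg_period;   /* round */
GstClockTime adjusted_tstamp = crf_tstamps[0] + ticks * avg_period;
```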
In the case of H264 streams, both the AVTP timestamp and the H264 timestamp
will be adjusted.
In future commits, another "CRF Checker" element will be introduced
to validate the timestamps on the AVTP Listener side, which is why
a lot of code has been implemented as part of the gstcrfbase class.
If we are in a state where we are answering, we would start gathering
when the offer is set, which is incorrect for at least two reasons:
1. Sending ICE candidates before sending an answer is a hard error in
all of the major browsers and will fail the negotiation.
2. If libnice ever adds the username fragment to the candidate for
ice-restart hardening, the ice username and fragment would be
incorrect.
JSEP also hints that the right call flow is to only start gathering when
a local description is set, in 4.1.9 setLocalDescription:
"This API indirectly controls the candidate gathering process."
as well as in hints throughout other sections.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1226>
If we receive video buffers with non-perfect timestamps, the
caption buffers' timestamps might fall in the interval between
the end of one video buffer and the start of the next one.
Make our criterion for dropping be that the caption buffer has
a timestamp older than the end of the previous video buffer,
not older than the start of the new one, unless of course
this is the first video buffer.
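In pseudo-C, the dropping decision becomes (variable names illustrative):
```
/* Captions that fall into a timestamp gap between two video buffers are
 * kept; only genuinely late captions are dropped. */
GstClockTime cutoff = is_first_video_buffer ?
    GST_BUFFER_PTS (video_buf) :              /* first buffer: its own start */
    prev_video_pts + prev_video_duration;     /* otherwise: end of previous  */

if (GST_BUFFER_PTS (caption_buf) < cutoff) {
  gst_buffer_unref (caption_buf);             /* too old, drop it */
} else {
  /* keep it; it will be attached as a meta on the current video buffer */
}
```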
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1207>
Use GST_OBJECT_LOCK (srtobject->element) to protect only the fields
involved in property access.
Introduce a new mutex srtobject->sock_lock to go with
srtobject->sock_cond and protect the list of callers from concurrent
access.
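The resulting locking discipline, sketched (the particular fields touched
here are illustrative):
```
/* Property fields: guarded by the element's object lock. */
GST_OBJECT_LOCK (srtobject->element);
srtobject->poll_timeout = new_timeout;
GST_OBJECT_UNLOCK (srtobject->element);

/* Caller list: guarded by the dedicated sock_lock, paired with sock_cond. */
g_mutex_lock (&srtobject->sock_lock);
srtobject->callers = g_list_append (srtobject->callers, caller);
g_cond_signal (&srtobject->sock_cond);
g_mutex_unlock (&srtobject->sock_lock);
```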
The previous version of the SHM export support still required a valid
EGLDisplay. The upcoming WPEBackend-FDO 1.8.x aims to remove this requirement,
hence allowing wpesrc to be used without a GPU.
We should directly check the values of the `debug` and `optimization`
options instead.
`get_option('buildtype')` will return `'custom'` for most combinations
of `-Doptimization` and `-Ddebug`, but those two options will always be set
correctly if only `-Dbuildtype` is set, so we should look at them directly.
For the two-way mapping between `buildtype` and `optimization`
+ `debug`, see this table:
https://mesonbuild.com/Builtin-options.html#build-type-options
Each srtp_stream_t is tied to a specific SSRC, so a
roc_changed flag should be kept per SSRC in order to
properly reset the RTP sequence number on ROC changes.
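A per-SSRC flag can live in the per-stream context, e.g. (sketch; struct and
field names are illustrative):
```
typedef struct {
  guint32 ssrc;
  gboolean roc_changed;     /* set when this SSRC's rollover counter changed */
} SsrcStream;

static void
mark_roc_changed (GHashTable * streams_by_ssrc, guint32 ssrc)
{
  SsrcStream *stream = g_hash_table_lookup (streams_by_ssrc,
      GUINT_TO_POINTER (ssrc));

  if (stream != NULL)
    stream->roc_changed = TRUE;   /* reset the RTP seqnum for this SSRC only */
}
```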