Call gst_aggregator_selected_samples() after identifying the
caption buffers that will be added as a meta on the next video
buffer.
Implement GstAggregator.peek_next_sample.
Add an example that demonstrates usage of the new API in
combination with the existing buffer-consumed signal.
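For reference, a rough sketch of how an application might combine the two:
connect to the samples-selected signal and peek the queued sample from the
callback. The callback signature and pad handling here are assumptions based
on the GstAggregator documentation, not the example shipped with this change.
```
#include <gst/gst.h>
#include <gst/base/gstaggregator.h>

/* Hypothetical callback: runs each time the aggregator has selected the
 * samples that will make up its next output buffer. */
static void
samples_selected_cb (GstAggregator * agg, GstSegment * segment,
    GstClockTime pts, GstClockTime dts, GstClockTime duration,
    GstStructure * info, gpointer user_data)
{
  GstAggregatorPad *caption_pad = GST_AGGREGATOR_PAD (user_data);
  GstSample *sample = gst_aggregator_peek_next_sample (agg, caption_pad);

  if (sample != NULL) {
    GstBuffer *buf = gst_sample_get_buffer (sample);
    GST_DEBUG ("caption buffer queued for the next video buffer: %"
        GST_PTR_FORMAT, buf);
    gst_sample_unref (sample);
  }
}

static void
watch_selected_samples (GstElement * aggregator, GstAggregatorPad * caption_pad)
{
  /* "samples-selected" is emitted from gst_aggregator_selected_samples();
   * "buffer-consumed" (on the pad) fires once a queued buffer was used. */
  g_signal_connect (aggregator, "samples-selected",
      G_CALLBACK (samples_selected_cb), caption_pad);
}
```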
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1390>
Receiving the WEBKIT_LOAD_COMMITTED event doesn't actually
mean we have committed an SHM buffer / image yet.
As this is the condition we are interested in, check it
instead.
Also wrap g_cond_wait in a loop for extra correctness points.
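For reference, the usual pattern (field names here are illustrative): re-check
the condition in a loop so spurious wakeups are handled.
```
/* Sketch of the g_cond_wait pattern; the struct fields are illustrative. */
g_mutex_lock (&self->lock);
while (!self->image_committed)  /* the condition we actually care about */
  g_cond_wait (&self->cond, &self->lock);
g_mutex_unlock (&self->lock);
```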
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1476>
"waylandsink: use GstMemory instead of GstBuffer for cache lookup"
changes the cache key to GstMemory, but the cached data still needs
a pointer to the GstBuffer to control the buffer lifecycle.
If the GstMemory used as the cache key moves from one GstBuffer to
another, the pointer in the cached data will be out-of-date.
Update the current GstBuffer pointer for each frame so that it always
represents the GstBuffer currently in use (from attach to release)
for each wl_buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1473>
The GstMemory objects contained in a GstBuffer could be replaced
by an upstream element, which would break the association between
the GstBuffer and the wayland wl_buffer, making the cache lookup
results incorrect.
This patch changes the cache lookup to use the first GstMemory
in a buffer instead. For multi-plane buffers, this assumes that
all of the GstMemory(s) will always be moved together as a set,
and that the same (first) GstMemory isn't used with different
combinations of other GstMemory(s).
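A minimal sketch of the lookup, assuming a GHashTable keyed on the GstMemory
pointer; the entry type and helper below are hypothetical and the real
waylandsink cache differs in detail.
```
/* Illustrative only: key the per-display cache on the first GstMemory. */
GstMemory *mem = gst_buffer_peek_memory (buffer, 0);
wl_buffer_cache_entry *entry = g_hash_table_lookup (cache, mem);

if (entry == NULL) {
  entry = create_wl_buffer_for (display, buffer);   /* hypothetical helper */
  g_hash_table_insert (cache, mem, entry);
}
```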
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1401>
Instead of attaching a single wayland wl_buffer to each GstBuffer as qdata,
keep a separate cache for each display.
A unique wl_buffer and associated metadata are created for each display.
This allows for sharing of GstBuffer objects between multiple
displays, such as when using tee elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1401>
A pipeline like this:
closedcaption/x-cea-708,format=cdp,framerate=30000/1001 ! ccconverter ! closedcaption/x-cea-708,format=cc_data
would produce a critical/assert:
GStreamer-CRITICAL **: 14:21:11.509: gst_util_fraction_multiply: assertion 'a_d != 0' failed
because there would be no framerate field on ccconverter's output.
Fixed by always fixating a framerate if the input has a framerate.
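Roughly, the fixation step looks like the following sketch (using the generic
caps helpers; not necessarily the exact code in this patch):
```
/* In fixate_caps(): if the input caps carry a framerate, make sure the
 * output caps end up with a fixed framerate too. */
GstStructure *in_s = gst_caps_get_structure (incaps, 0);
GstStructure *out_s = gst_caps_get_structure (othercaps, 0);
gint fps_n, fps_d;

if (gst_structure_get_fraction (in_s, "framerate", &fps_n, &fps_d)) {
  if (gst_structure_has_field (out_s, "framerate"))
    gst_structure_fixate_field_nearest_fraction (out_s, "framerate",
        fps_n, fps_d);
  else
    gst_structure_set (out_s, "framerate", GST_TYPE_FRACTION, fps_n, fps_d,
        NULL);
}
```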
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1393>
Unclear why hotdoc wants 'gstavtp' as the plugin name here;
that's just wrong.
Add since marker and mark private subclasses as plugin API
so hotdoc knows they belong to the plugin and aren't external.
Fix GstAvtpAafTstampMode get_type() function.
keys_exported flag should be set only if keys are actually exported.
For that, the following conditions must all be met:
1 - SSL_export_keying_material succeeds
2 - SSL_get_selected_srtp_profile returns a valid profile
3 - The profile ID is SRTP_AES128_CM_SHA1_80 or SRTP_AES128_CM_SHA1_32
Also don't crash if NULL is returned as profile.
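A sketch of the combined checks, assuming the standard OpenSSL DTLS-SRTP API;
the surrounding element code and where the keys_exported flag actually lives
are not shown here.
```
#include <string.h>
#include <glib.h>
#include <openssl/ssl.h>
#include <openssl/srtp.h>

/* Returns TRUE only when the keys were really exported; the result would
 * then be stored in the keys_exported flag. */
static gboolean
export_srtp_keys (SSL * ssl, guchar * material, gsize material_len)
{
  const char *label = "EXTRACTOR-dtls_srtp";
  SRTP_PROTECTION_PROFILE *profile = SSL_get_selected_srtp_profile (ssl);

  /* profile can be NULL, e.g. when use_srtp was not negotiated */
  if (profile == NULL)
    return FALSE;

  if (profile->id != SRTP_AES128_CM_SHA1_80 &&
      profile->id != SRTP_AES128_CM_SHA1_32)
    return FALSE;

  return SSL_export_keying_material (ssl, material, material_len,
      label, strlen (label), NULL, 0, 0) == 1;
}
```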
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1156>
SURROUND is closer to the spec according to the FIXME comments, so add it.
Also add SIDE for 5 and 5.1 channels for ffmpeg compatibility, because the
following pipeline otherwise downmixes to mono:
gst-launch-1.0 audiotestsrc num-buffers=1 ! audio/x-raw, channels=6 !
avenc_ac3 ! avdec_ac3 ! audioconvert ! fdkaacenc ! fakesink -v
Fixes #1327
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1352>
Unfortunately it means those tune enums don't show up in
the docs then, but if that's how it's gotta be...
(The problem at hand is that on Tim's machine x265enc gets a
tune=animation and on the CI machine this doesn't show up.)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1354>
The unit tests only checked for vulkan_dep.found(), which can
be true if the libs are there but glslc was not found, in which
case the plugin wouldn't be built and the unit tests would fail
because of missing vulkan plugins.
Doesn't really make much sense to build the vulkan integration lib
either if we're not going to build the vulkan plugin, so just disable
both for now if glslc is not available.
Fixes #1301
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1307>
This is needed for cross-compiling without a build machine compiler
available. The option was added in 0.54, but we only need this in
Cerbero and it doesn't affect older versions, so it should be ok.
It will only cause a spurious warning.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1266>
Instead of storing the raw cc_data, store the 2 cea608 fields individually
as well as the ccp data.
Simply copying the input cc_data to the output cc_data violates a number of
requirements in the cea708 specification, the most prominent being that
cea608 triples must be placed at the beginning of each cdp.
We also need to comply with the framerate-dependent limits for both the
cea608 and the ccp data which may involve splitting or merging some
cea608 data but not ccp data or vice versa.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1116>
Add some properties to allow TCP and UDP candidates to be toggled. This
is useful in cases where someone is using this element in an environment
where it is known in advance whether a given transport will work or not
and will prevent wasting time generating and checking candidate pairs
that will not succeed.
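Illustrative usage only; the element and property names below ("ice-tcp" /
"ice-udp" on webrtcbin) are assumptions about this API, so check the element's
actual property list.
```
/* Sketch: disable TCP candidate gathering when it is known to be useless,
 * e.g. on a closed LAN deployment. Property names are assumptions. */
GstElement *webrtc = gst_element_factory_make ("webrtcbin", NULL);

g_object_set (webrtc, "ice-tcp", FALSE, "ice-udp", TRUE, NULL);
```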
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1223>
When negotiating the SDP we should only connect the streams that are
actually mentioned in the SDP. All other streams are not relevant at
this time and would likely be part of a future SDP update. Fixes a
couple of the renegotiation webrtc unit tests.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1240>
Properly calculate the running time for buffers that fall outside the current
segment and try to honor them.
A typical case is for AVTP packets coming from avtpcvfpay element, as
those may have DTS that falls out of segment (which is about PTS).
By using gst_segment_to_running_time_full(), avtpsink can properly
calculate when to transmit those buffers.
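A minimal sketch of the difference: gst_segment_to_running_time() returns
GST_CLOCK_TIME_NONE for positions outside the segment, while the _full()
variant also reports where they fall. The names below are illustrative, not
avtpsink's actual code.
```
#include <gst/gst.h>

static GstClockTime
calculate_transmit_time (const GstSegment * segment, GstBuffer * buffer,
    GstClockTime base_time)
{
  guint64 running_time;
  gint res;

  /* The _full() variant also handles positions outside the segment: it
   * returns 1 (after) or -1 (before) and stores the absolute distance in
   * running_time; 0 means the position could not be mapped. */
  res = gst_segment_to_running_time_full (segment, GST_FORMAT_TIME,
      GST_BUFFER_DTS (buffer), &running_time);

  if (res > 0)
    return base_time + running_time;
  if (res < 0)
    return base_time - running_time;    /* DTS before the segment start */

  return GST_CLOCK_TIME_NONE;
}
```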
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1004>
Seek events will cause new segments to be sent to avtpcvfpay, and for
flushing seeks, a pipeline running time reset. This running time
reset, which effectively changes pipeline base time, will cause
avtpcvfpay element to generate incorrect DTS for the initial set of
buffers sent after FLUSH_STOP.
This happens due to the fact that the base time change happens only when the
sink gets the first buffer after the FLUSH_STOP - so avtpcvfpay used
the wrong base time for its calculations.
However, if the pipeline is paused before the seek, the sink will update
the base time when the pipeline state goes to PLAYING again, before avtpcvfpay
gets the first buffers after the flush. Then avtpcvfpay element will be
able to normally calculate DTS for the outgoing packets.
This patch simply adds a warning message in case a flushing seek is
performed on a playing pipeline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1004>
TSN streams are expected to send packets to the network in a well
defined "pace", which is arbitrarily defined for each stream. This pace
is defined by the "measurement interval" property of a stream.
When the AVTP CVF payloader element - avtpcvfpay - fragments a video
frame that is too big to be sent to the network, it currently defines
that all fragments should be transmitted at the same time (via DTS
property of GstBuffers generated, as sink will use those to time the
transmission of the AVTPDU). This doesn't comply with stream definition,
which also has a limit on how many packets can be sent on a given
measurement interval.
This patch solves that by spreading in time the DTS of the GstBuffers
containing the AVTPDUs. Two new properties, "measurement-interval" and
"max-interval-frames", added to avptcvfpay element so that it knows
stream measurement interval and how many AVTPDUs it can send on any of
them. More details on the method used to proper spread DTS/PTS according
to measurement interval can be found in a code commentary inside this patch.
Tests also added for the new property and behaviour.
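A simplified sketch of the spreading idea; the real code in this patch handles
clock rates and corner cases, and the names below are illustrative.
```
#include <gst/gst.h>

static void
spread_fragment_dts (GstBuffer * frame, GstBuffer ** fragments,
    guint n_fragments, guint max_interval_frames,
    GstClockTime measurement_interval)
{
  GstClockTime base_dts = GST_BUFFER_DTS (frame);
  guint i;

  for (i = 0; i < n_fragments; i++) {
    /* fragments 0..max_interval_frames-1 keep the original DTS, the next
     * batch is delayed by one measurement interval, and so on */
    GstClockTime offset = (i / max_interval_frames) * measurement_interval;

    GST_BUFFER_DTS (fragments[i]) = base_dts + offset;
  }
}
```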
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1004>
If the remote is bundling but we are not, and the remote is offering,
we cannot put the remote media sections into a bundled transport as that
is not how we are going to respond.
This specific failure case was that the remote ICE credentials were
never set on the ice stream and so ice connectivity would fail.
Technically, this whole bundle-policy=none handling should be removed
eventually when we implement bundle-policy=balanced. Until such time,
we have this workaround.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1231>
This commit introduces the AVTP Clock Reference Format (CRF) Checker
element. This element re-uses the GstAvtpCrfBase class introduced along
with the CRF Synchronizer element.
This element will typically be used along with the avtpsrc element to
ensure that the AVTP timestamp (and H264 timestamp in case of CVF-H264
packets) is "aligned" with the incoming CRF stream. Here, "aligned" means
that the timestamp value should be within 25% of the period of the media
clock recovered from the CRF stream.
The user can also set an option (drop-invalid) in order to drop any packet
whose timestamp is not within the thresholds of the incoming CRF stream.
This commit introduces the AVTP Clock Reference Format (CRF) Synchronizer
element. This element implements the AVTP CRF Listener as described in IEEE
1722-2016 Section 10.
CRF is useful in synchronizing events within different systems by
distributing a common clock. This is useful in a scenario where there are
multiple talkers who are sending data to a single listener which is
processing that data. E.g. CCTV cameras on a network sending AVTP video
streams to a base station to display on the same screen.
It is assumed that all the systems are already time-synchronized with each
other. So, the AVTP Talker essentially adjusts the AVTP Presentation Time
so it's phase-locked with the reference clock provided by the CRF stream.
There are 2 different roles of systems which participate in CRF data
exchange. A system can either be a CRF Talker, which samples its own
clock and generates a stream of timestamps to transmit over the network, or
a CRF Listener, the system which receives the generated timestamps and
recovers the media clock from the timestamps. It then adjusts its own
clock to align with the recovered media clock. The timestamps generated by the
talker may not be continuous and the listener might have to interpolate
some timestamps to recover the media clock. The number of timestamps to
interpolate is mentioned in the CRF stream AVTPDU (Refer IEEE 1722-2016
Section 10.4 for AVTPDU structure). Only CRF Listener has been implemented
in this commit.
The CRF Sync element will create a separate thread to listen for the CRF
stream. This thread will calculate and store the average period of the
recovered media clock. The pipeline thread will use this stored period
along with the first timestamp of the latest CRF AVTPDU received to
calculate adjustment for timestamps in the audio/video streams. In case of
CRF AVTPDUs with a single timestamp, two consecutive CRF AVTPDUs will be used
to figure out the average period of the recovered media clock.
In case of H264 streams, both AVTP timestamp and H264 timestamp will be
adjusted.
In future commits, another "CRF Checker" element will be introduced
which will validate the timestamps on the AVTP Listener side. This is why
a lot of code has been implemented as part of the gstcrfbase class.
If we are in a state where we are answering, we would start gathering
when the offer is set which is incorrect for at least two reasons.
1. Sending ICE candidates before sending an answer is a hard error in
all of the major browsers and will fail the negotiation.
2. If libnice ever adds the username fragment to the candidate for
ice-restart hardening, the ice username and fragment would be
incorrect.
JSEP also hints that the right call flow is to only start gathering when
a local description is set in 4.1.9 setLocalDescription
"This API indirectly controls the candidate gathering process."
as well as hints throughout other sections.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1226>
If we receive video buffers with non-perfect timestamps, the
caption buffers' timestamps might fall in the interval between
the end of one video buffer and the start of the next one.
Make the criterion for dropping a caption buffer be that its
timestamp is older than the end of the previous video buffer,
rather than older than the start of the new one, unless of course
this is the first video buffer.
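In pseudocode terms (field names illustrative), the check becomes something
like:
```
/* Drop only captions older than the end of the previous video buffer,
 * not ones older than the start of the current video buffer. */
if (have_previous_video_buffer &&
    caption_pts < previous_video_pts + previous_video_duration) {
  gst_buffer_unref (caption_buffer);    /* too old, drop it */
}
```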
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1207>
Use GST_OBJECT_LOCK (srtobject->element) to protect only the fields
involved in property access.
Introduce a new mutex srtobject->sock_lock to go with
srtobject->sock_cond and protect the list of callers from concurrent
access.
The previous version of the SHM export support still required a valid
EGLDisplay. The upcoming WPEBackend-FDO 1.8.x aims to remove this requirement,
hence allowing wpesrc to be used without GPU.
We should directly check the values of the `debug` and `optimization`
options instead.
`get_option('buildtype')` will return `'custom'` for most combinations
of `-Doptimization` and `-Ddebug`, but those two will always be set
correctly if only `-Dbuildtype` is set. So we should look at those
options directly.
For the two-way mapping between `buildtype` and `optimization`
+ `debug`, see this table:
https://mesonbuild.com/Builtin-options.html#build-type-options
Each srtp_stream_t is tied to a specific SSRC, so a
roc_changed flag should be kept for each SSRC in order to
properly reset the RTP sequence number on ROC changes.
openssl 1.1.1e does some stricter EOF handling and will throw an error
if the EOF is unexpected (like in the middle of a record). As we are
streaming data into openssl here, it is entirely possible that we push
data from multiple buffers/packets into openssl separately.
From the openssl changelog:
Changes between 1.1.1d and 1.1.1e [17 Mar 2020]
*) Properly detect EOF while reading in libssl. Previously if we hit an EOF
while reading in libssl then we would report an error back to the
application (SSL_ERROR_SYSCALL) but errno would be 0. We now add
an error to the stack (which means we instead return SSL_ERROR_SSL) and
therefore give a hint as to what went wrong.
[Matt Caswell]
We can relax the EOF signalling to only return TRUE when we have stopped
for any reason (EOS, error).
This will also remove a spurious EOF error seen with previous openssl versions.
Otherwise, when bundling, only the changed streams would be considered when
deciding whether the bundled transport needs to be blocked because all
streams are inactive.
The scenario is that one transceiver changes direction to inactive and, as that
is the only change in transceiver direction, the entire bundled transport would
be blocked even if there are other active transceivers inside the same bundled
transport.
Fix by always checking the activeness of a stream regardless of whether the
transceiver has changed direction.
The ICE gathering state can transition to complete prematurely if the
underlying ICE components complete their gathering while the initial
ICE gathering state task is queued and still pending.
In that situation, the ice gathering state task will report complete
while there are still ICE candidates queued for emission.
Prevent that by storing ICE candidates in an array and checking if
there are any pending before reporting a completed ICE gathering
state.
ICE candidates can be added to the array directly from the application
or from the webrtc main loop. Rename it to make it clear that it's
holding remote ICE candidates from the peer, and protect it with a
new mutex.
As per discussion in the bug, remove the drop state from transportreceivebin.
Dropping data is necessary, but for bundled config, needs to happen
further downstream after mixed flows have been separated.
Also support switching back to BLOCK from PASS state.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1206
Fixes dependency issues:
FAILED: subprojects/gst-plugins-bad/ext/dash/8bd0b95@@gstdash@sha/gstdashsink.c.obj
cl @subprojects/gst-plugins-bad/ext/dash/8bd0b95@@gstdash@sha/gstdashsink.c.obj.rsp
C:\builds\ystreet\gst-plugins-base\gst-build\subprojects\gst-plugins-base\gst-libs\gst/pbutils/pbutils.h(30): fatal error C1083: Cannot open include file: 'gst/pbutils/pbutils-enumtypes.h': No such file or directory
Instead of synchronising at the ICE transport, do clock sync for the
RTP stream at the DTLS transport via the dtlssrtpenc rtp-sync
property. This avoids delaying RTCP while waiting until it is time
to output an RTP packet when rtcp-mux is enabled.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1212
We do not have a way to know the format modifiers to use with string
functions provided by the system. `G_GUINT64_FORMAT` and other string
modifiers only work for glib string formatting functions. We cannot
use them for string functions provided by the stdlib. See:
https://developer.gnome.org/glib/stable/glib-Basic-Types.html#glib-Basic-Types.description
F.ex.
```
../ext/dash/gstxmlhelper.c: In function 'gst_xml_helper_get_prop_unsigned_integer_64':
../ext/dash/gstxmlhelper.c:473:40: error: unknown conversion type character 'l' in format [-Werror=format=]
if (sscanf ((gchar *) prop_string, "%" G_GUINT64_FORMAT,
^~~
In file included from /builds/nirbheek/cerbero/cerbero-build/dist/windows_x86/include/glib-2.0/glib/gtypes.h:32,
from /builds/nirbheek/cerbero/cerbero-build/dist/windows_x86/include/glib-2.0/glib/galloca.h:32,
from /builds/nirbheek/cerbero/cerbero-build/dist/windows_x86/include/glib-2.0/glib.h:30,
from /builds/nirbheek/cerbero/cerbero-build/dist/windows_x86/include/gstreamer-1.0/gst/gst.h:27,
from ../ext/dash/gstxmlhelper.h:26,
from ../ext/dash/gstxmlhelper.c:22:
/builds/nirbheek/cerbero/cerbero-build/dist/windows_x86/lib/glib-2.0/include/glibconfig.h:69:28: note: format string is defined here
#define G_GUINT64_FORMAT "llu"
^
../ext/dash/gstxmlhelper.c:473:40: error: too many arguments for format [-Werror=format-extra-args]
if (sscanf ((gchar *) prop_string, "%" G_GUINT64_FORMAT,
^~~
```
In the process, we're also following the DASH MPD spec more closely
now, which specifies that ranges must follow RFC 2616 section 14.35.1:
https://tools.ietf.org/html/rfc2616#page-138
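For illustration, one portable way to parse such a value without relying on
the system's sscanf modifiers is a GLib conversion function; whether this
commit uses exactly this approach is not shown here.
```
/* Parse a 64-bit unsigned value from an XML property string without
 * depending on the system sscanf understanding "%llu" / "%I64u". */
const gchar *str = (const gchar *) prop_string;
gchar *end = NULL;
guint64 value = g_ascii_strtoull (str, &end, 10);

if (end == str)
  GST_WARNING ("could not parse '%s' as an unsigned 64-bit integer", str);
```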
Add latency configuration logic to transportsendbin to
isolate it from the overall pipeline latency. That means that
it configures minimum latency internally based on the
latency query, and sends a latency event upstream that
matches.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1209
When emitting ICE candidates, also merge them to the local and
pending description so they show up in the SDP if those are
retrieved from the current-local-description and
pending-local-description properties.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/676
- Do not send ABORTs for unexpected packets as a response to INIT
- Enable interleaving of messages of different streams
- Configure 1MB send and receive buffer for the socket
- Enable SCTP_SEND_FAILED_EVENT and SCTP_PARTIAL_DELIVERY_EVENT events
- Set SCTP_REUSE_PORT configuration
- Set SCTP_EXPLICIT_EOR and the corresponding send flag. We probably
want to split packets to a maximum size later and only set the flag
on the last packet. Firefox uses 0x4000 as maximum size here.
- Enable SCTP_ENABLE_CHANGE_ASSOC_REQ
- Disable PMTUD and set a maximum initial MTU of 1200
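A sketch of a few of the options listed above, using the usrsctp socket API;
values and error handling are illustrative only.
```
#include <sys/socket.h>
#include <usrsctp.h>

/* Illustrative: apply some of the socket options mentioned above. */
static void
configure_sctp_socket (struct socket *sock)
{
  int buf_size = 1024 * 1024;   /* 1MB send and receive buffers */
  uint32_t on = 1;

  usrsctp_setsockopt (sock, SOL_SOCKET, SO_SNDBUF, &buf_size,
      sizeof (buf_size));
  usrsctp_setsockopt (sock, SOL_SOCKET, SO_RCVBUF, &buf_size,
      sizeof (buf_size));
  /* every send then marks the end of a record (EOR) via the send flag */
  usrsctp_setsockopt (sock, IPPROTO_SCTP, SCTP_EXPLICIT_EOR, &on,
      sizeof (on));
  usrsctp_setsockopt (sock, IPPROTO_SCTP, SCTP_REUSE_PORT, &on,
      sizeof (on));
}
```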
Calling bind() only sets up some data structures and calling connect()
only produces one packet before it returns. That packet is stored in a
queue that is asynchronously forwarded by the encoder's source pad loop,
so not much is happening there either. In particular, no waiting
happens here and no data is forwarded to other elements.
This fixes a race condition during connection setup: the connection
would immediately fail if we pass a packet from the peer to the socket
before bind() and connect() have returned.
This can't happen anymore as bind() and connect() have returned already
before both elements reach the PAUSED state, and in webrtcbin there is
an additional blocking pad probe before the decoder that does not let
any data pass through before that anyway.
The library is thread-safe by itself and potentially calls back into our
code, not only from the same thread but also from other threads. This
can easily lead to deadlocks if we try to hold our mutex on both sides.
Starting from WPEBackend-FDO 1.6.x, software rendering support is available.
This feature allows wpesrc to be used on machines without GPU, and/or for
testing purpose. To enable it, set the `LIBGL_ALWAYS_SOFTWARE=true` environment
variable and make sure `video/x-raw, format=BGRA` caps are negotiated by the
wpesrc element.
Otherwise it can happen that e.g. sending the stream-start event is
attempted as part of pushing the first buffer. Downstream might not be in
PAUSED/PLAYING yet, so the event is rejected with GST_FLOW_FLUSHING and,
because it's an event, would not cause the blocking pad probe to trigger
first. This would then return GST_FLOW_FLUSHING for the buffer and shut
down all of upstream.
To solve this we return GST_PAD_PROBE_DROP for all events. In case of
sticky events they would be resent again later once we unblocked after
blocking on the buffer and everything works fine.
Don't handle events specifically in sink pad blocking pad probes as here
downstream is not linked yet and we are actually waiting for the
following CAPS event before unblocking can happen.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1172
Without this it might happen that received data from the DTLS transport
is already passed to sctpdec before its state was set to PLAYING. This
would cause the data to be dropped, GST_FLOW_FLUSHING to be returned and
the whole DTLS transport to shut down.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1172
among other things.
Using a GCond can easily lead to deadlocks and only duplicates the
waiting code from gstpad.c in the best case.
In this case it actually could lead to a deadlock if both RTP and RTCP
were waiting. Only one of them would be woken up because g_cond_signal()
was used instead of g_cond_broadcast().
Change how content-length is set for HTTP POST headers, letting curl set
the header (given the content-length) instead of manually writing it.
This enables curl to know the content-length of the data.
In curl 7.66, if curl does not know the content-length (e.g. when
manually writing the header) curl will use Transfer-Encoding: chunked,
which might not be desired.
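As an illustration with libcurl's easy interface (whether the element uses
this exact option is an assumption):
```
/* Instead of appending "Content-Length: ..." to the header list manually,
 * tell curl the body size and let it generate the header itself. With the
 * size known, curl 7.66+ will not fall back to Transfer-Encoding: chunked. */
curl_easy_setopt (curl, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t) content_len);
```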
Use a double instead of a plain float for intermediary
property values, so we have enough bits to store INT_MAX
and it doesn't get rounded and wrapped to -1 when cast
back to a 32-bit integer.
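A quick illustration of the rounding problem:
```
#include <glib.h>

/* G_MAXINT (2147483647) has no exact float representation; it rounds up to
 * 2147483648.0f, which is out of range for a 32-bit int, so casting back
 * wraps to a negative value. A double stores it exactly. */
float  f = (float) G_MAXINT;    /* 2147483648.0f */
gint   i = (gint) f;            /* overflows: ends up negative */
double d = (double) G_MAXINT;   /* exactly 2147483647.0 */
gint   j = (gint) d;            /* 2147483647 as expected */
```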
Fixes criticals like
g_param_spec_int: assertion 'default_value >= minimum && default_value <= maximum' failed
when loading LADSPA plugins from the Linux Studio Plugins
Project (http://lsp-plug.in) in GStreamer.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1194
The avtpsink element is expected to transmit AVTPDUs at specific times,
according to GstBuffer timestamps. Currently, the transmission time is
controlled in software via the rendering synchronization mechanism
provided by GstBaseSink class. However, that mechanism may not cope with
some AVB use-cases such as Class A streams, where AVTPDUs are expected
to be transmitted every 125 us. Thus, this patch introduces avtpsink's
own mechanism, which leverages the socket transmission scheduling
infrastructure introduced in Linux kernel 4.19. When supported by the
NIC, the transmission scheduling is offloaded to the hardware, improving
transmission time accuracy considerably.
To illustrate that, a before-after experiment was carried out. The
experimental setup consisted in 2 PCs with Intel i210 card connected
back-to-back running an up-to-date Archlinux with kernel 5.3.1. In one
host gst-launch-1.0 was used to generate a 2-minute Class A stream while
the other host captured the packets. The metric under evaluation is the
transmission interval and it is measured by checking the 'time_delta'
information from ethernet frames captured at the receiving side.
The table below shows the outcome for a 48 kHz, 16-bit sample, stereo
audio stream. The unit is nanoseconds.
       |  Mean  |  Stdev  |   Min   |   Max   |  Range  |
-------+--------+---------+---------+---------+---------+
Before | 125000 |    2401 |  110056 |  288432 |  178376 |
After  | 125000 |      18 |  124943 |  125055 |     112 |
Before this patch, the transmission interval mean is equal to the
optimal value (Class A stream -> 125 us interval), and it is kept the
same after the patch. The dispersion measurements, however, have
improved considerably, meaning the system is now consistently
transmitting AVTPDUs at the correct time.
Finally, the socket transmission scheduling infrastructure requires the
system clock to be synchronized with the PTP clock, so this patch modifies
the AVTP plugin documentation to cover how to achieve that.
This patch refactors gst_avtp_sink_start() by moving all socket
initialization code to its own function. This change prepares the code
to the next patch which will introduce avtpsink's own rendering
synchronization mechanism.
Current avtpsink code opens the AF_PACKET socket with SOCK_NONBLOCK
option. However, we actually want sendto() to block in case there isn't
available space in socket buffer.
This patch refactors both avtpsink and avtpsrc code so we use the
if_nametoindex() helper instead of building a request and issuing an
ioctl to get the if_index.
The receive bin should block buffers from reaching dtlsdec before
the dtls connection has started.
While there was code to block its sinkpads until receive_state
was different from BLOCK, nothing was ever setting it to BLOCK
in the first place. This commit corrects this by setting the
initial state to BLOCK, directly in the constructor.
In addition, now that blocking is effective, we want to only
block buffers and buffer lists, as that's what might trigger
errors; we still want to let events and queries go through.
Not doing so causes immediate deadlocks when linking the
bin.
And free data with the correct free() function in the receive callback
by passing it to gst_buffer_new_wrapped_full() instead of
gst_buffer_new_wrapped().
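The pattern looks roughly like this (the free function name is a stand-in for
whatever allocator the library actually uses):
```
/* Wrap received data so GStreamer releases it with the matching free()
 * function instead of g_free(); names here are illustrative. */
GstBuffer *buf = gst_buffer_new_wrapped_full (0, data, size, 0, size,
    data, (GDestroyNotify) library_free_func);
```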
When a pipeline is stopped (actually when the waylandsink element
state changes from PAUSED to READY) the video surface is cleared, but
the opaque black surface behind is not. Fix this by actually clearing
both surfaces.
We need the streams' pt maps updated before requesting pads
on rtpbin, because this is what will trigger the requesting
of FEC encoders, and our handler for this request looks for
the payload types in the relevant stream's pt map.
Fixes #1187
Otherwise we would start sending data to the DTLS connection before it is
established, and the DTLS elements consider this an error.
Also RFC 8261 mentions:
o A DTLS connection MUST be established before an SCTP association can
be set up.
For us it can happen that the DTLS transports are still in the process
of connecting while the ICE transport is already completed. This
situation is not specified in the spec but conceptually that means it is
still in the process of connecting.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/758
Previously we simply logged errors but never reported them to elements
or even to the user. Fatal errors are now properly reported.
Additionally proper connection closing is implemented based on EOS:
- dtlsenc: EOS will cause close_notify to be sent to the peer and only
if the peer also sent back close_notify we will forward the
EOS event.
- dtlsdec: EOS will be forwarded normally, this only means that the
underlying transport was closed. On receiving a DTLS packet
containing close_notify, return EOS and send EOS downstream.