Internal sources seem to be the RTP streams we are sending, whereas
non-internal sources are the RTP streams we are receiving. Redo the
statistics with that in mind.
This reverts commit 68fa80e831.
Some Wayland servers, especially Weston, only treat an empty input
region as a request to disable input.
Signed-off-by: Jeffy Chen <jeffy.chen@rock-chips.com>
The function _get_stats_from_ice_transport returns a string which must be
freed by the caller. However, _get_stats_from_dtls_transport was ignoring
the return value from this function, resulting in a leak.
Ran this with valgrind: before this fix there was a leak of 40 bytes each
time the function was called; after, there was no leak.
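A minimal sketch of the resulting pattern, assuming _get_stats_from_ice_transport()
returns a newly allocated string (the exact signature here is an assumption):

  /* The returned string is owned by the caller and must be freed even if
   * it is only consumed while building the DTLS transport stats. */
  gchar *ice_id = _get_stats_from_ice_transport (webrtc, ice_transport, s);
  /* ... use ice_id ... */
  g_free (ice_id);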
/usr/include/mjpegtools/mpeg2enc/ontheflyratectlpass1.hh:1:9: error: '_ONTHEFLYRATECTLPASS1_HH' is used as a header guard here, followed by #define of a different macro [-Werror,-Wheader-guard]
#ifndef _ONTHEFLYRATECTLPASS1_HH
^~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/mjpegtools/mpeg2enc/ontheflyratectlpass1.hh:2:9: note: '_ONTHELFYRATECTLPASS1_HH' is defined here; did you mean '_ONTHEFLYRATECTLPASS1_HH'?
#define _ONTHELFYRATECTLPASS1_HH
^~~~~~~~~~~~~~~~~~~~~~~~
_ONTHEFLYRATECTLPASS1_HH
In file included from ../subprojects/gst-plugins-bad/ext/mpeg2enc/gstmpeg2encoder.cc:31:
/usr/include/mjpegtools/mpeg2enc/ontheflyratectlpass2.hh:1:9: error: '_ONTHEFLYRATECTLPASS2_HH' is used as a header guard here, followed by #define of a different macro [-Werror,-Wheader-guard]
#ifndef _ONTHEFLYRATECTLPASS2_HH
^~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/mjpegtools/mpeg2enc/ontheflyratectlpass2.hh:2:9: note: '_ONTHELFYRATECTLPASS2_HH' is defined here; did you mean '_ONTHEFLYRATECTLPASS2_HH'?
#define _ONTHELFYRATECTLPASS2_HH
^~~~~~~~~~~~~~~~~~~~~~~~
_ONTHEFLYRATECTLPASS2_HH
/usr/include/mjpegtools/mpeg2enc/encoderparams.hh:82:1: error: struct 'RateCtl' was previously declared as a class; this is valid, but may result in linker errors under the Microsoft C++ ABI [-Werror,-Wmismatched-tags]
struct RateCtl;
^
/usr/include/mjpegtools/mpeg2enc/ratectl.hh:50:7: note: previous use is here
class RateCtl
^
/usr/include/mjpegtools/mpeg2enc/encoderparams.hh:82:1: note: did you mean class here?
struct RateCtl;
^~~~~~
class
../subprojects/gst-plugins-bad/ext/aom/gstav1enc.c:415:34: warning: implicit conversion from enumeration type 'GstAV1EncEndUsageMode' to different enumeration type 'enum aom_rc_mode' [-Wenum-conversion]
av1enc->aom_cfg.rc_end_usage = DEFAULT_END_USAGE;
~ ^~~~~~~~~~~~~~~~~
../subprojects/gst-plugins-bad/ext/aom/gstav1enc.c:162:41: note: expanded from macro 'DEFAULT_END_USAGE'
#define DEFAULT_END_USAGE GST_AV1_ENC_END_USAGE_VBR
^~~~~~~~~~~~~~~~~~~~~~~~~
_decrypt_start() failure will lead to decryption failure eventually
but catching it earlier if possible. The decrpytion start failure means
that the hls plugin was built without crypto library or crypto library
does not want to accept given key and IV.
crypto libraries are not required for hlssink and hlssink2.
Also, hlsdemux with nonencrypted stream can work without crpyto.
Make an error only when users set "hls-crpyto" with non-auto option explicitly,
but no crpyto library was found.
On Windows, if libusrsctp and GStreamer are built with different
C runtimes (CRTs), we cannot free memory allocated inside libusrsctp
with the `free()` function from GStreamer's CRT.
`usrsctp_freedumpbuffer()` simply calls `free()`, but because of the
way DLLs work on Windows, it will always call the free function from
the correct CRT.
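For illustration, a hedged sketch of the pattern (usrsctp_dumppacket() and
usrsctp_freedumpbuffer() are the upstream usrsctp calls; the surrounding
helper is illustrative):

  #include <usrsctp.h>
  #include <glib.h>

  /* Dump an SCTP packet and release the string with usrsctp's own helper
   * so the allocation and the free happen in the same C runtime. */
  static void
  log_sctp_packet (const void *buf, size_t len, int outbound)
  {
    char *dump = usrsctp_dumppacket (buf, len, outbound);

    if (dump != NULL) {
      g_debug ("%s", dump);
      /* Not free(): on Windows that could be a different CRT's allocator. */
      usrsctp_freedumpbuffer (dump);
    }
  }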
Don't fixate the profile caps, which would simply choose the first profile
from the list. Instead, store all profiles allowed by the peer and try them
until x265 accepts one of them.
x265 does not allow the user to configure a picture size smaller than
at least one CU size, and maxCUSize must be 16, 32, or 64.
Therefore, the CU size must be set according to the input resolution,
and the input resolution cannot be less than 16.
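A hedged sketch of the kind of selection this implies (the exact policy in
the element may differ):

  /* Pick the largest legal CTU size that still fits the picture; x265 only
   * accepts 16, 32 or 64, and the picture must hold at least one CU. */
  static guint
  pick_ctu_size (guint width, guint height)
  {
    guint min_dim = MIN (width, height);

    if (min_dim >= 64)
      return 64;
    if (min_dim >= 32)
      return 32;
    return 16;  /* inputs smaller than 16x16 are rejected earlier */
  }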
When negotiating a data channel, Chrome versions as recent as 75 still use
SDP based on version 05 of the SCTP SDP draft, for example:
m=application 9 DTLS/SCTP 5000
a=sctpmap:5000 webrtc-datachannel 1024
Implement support for parsing the SCTP port out of an SDP message with the
sctpmap attribute. Fixes data channel negotiation with the Chrome browser.
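A minimal sketch of the fallback using the GstSDP API (error handling
trimmed; the helper name is illustrative):

  #include <gst/sdp/gstsdpmessage.h>

  /* Fall back to the draft-05 style "a=sctpmap:<port> webrtc-datachannel ..."
   * attribute when the media has no "sctp-port" attribute. */
  static guint
  get_sctp_port_from_media (const GstSDPMedia * media)
  {
    const gchar *val = gst_sdp_media_get_attribute_val (media, "sctp-port");

    if (val == NULL) {
      /* value looks like "5000 webrtc-datachannel 1024" */
      val = gst_sdp_media_get_attribute_val (media, "sctpmap");
    }

    return val ? (guint) g_ascii_strtoull (val, NULL, 10) : 0;
  }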
WebKit's websrc depends on the main-thread for download completion
rendezvous. This exposed a number of deadlocks in adaptivedemux due to
it holding the MANIFEST_LOCK during network requests, and also needing
to hold it to change_state and resolve queries, which frequently occur
during these download windows.
Make demux->running MT-safe so that it can be accessed without using the
MANIFEST_LOCK. In case a source is downloading and requires a main-thread
notification for completion of the fragment download, a state change
during this download window will deadlock unless we cancel the downloads
and ensure they are not restarted before we finish the state change.
Also make demux->priv->have_manifest MT-safe. A duration query happening
in the window described above can deadlock for the same reason. Other
src queries (like SEEKING) that happen in this window also could
deadlock, but I haven't hit this scenario.
Increase granularity of API_LOCK'ing in change_state as well. We need to
cancel downloads before trying to take this lock, since sink events
(EOS) will hold it before starting a fragment download.
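As a sketch, an MT-safe flag of this kind can be read and written with GLib
atomics instead of the MANIFEST_LOCK (the field access shown here follows
the description above and is illustrative):

  /* Readers no longer need the MANIFEST_LOCK just to check the flag. */
  static gboolean
  demux_is_running (GstAdaptiveDemux * demux)
  {
    return g_atomic_int_get (&demux->running);
  }

  static void
  demux_set_running (GstAdaptiveDemux * demux, gboolean running)
  {
    g_atomic_int_set (&demux->running, running);
  }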
The row-based multi-threading control was introduced after version 1.0.0 of
libaom was released. Add a guard to check that the relevant control
definition is declared. Fixes #1025
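A hedged sketch of such a guard; HAVE_AOM_ROW_MT is a hypothetical macro the
build system would define when it finds the AV1E_SET_ROW_MT control in
aomcx.h, and the property plumbing is illustrative:

  #ifdef HAVE_AOM_ROW_MT
    /* Only available in libaom releases newer than 1.0.0. */
    if (aom_codec_control (&av1enc->encoder, AV1E_SET_ROW_MT,
            av1enc->row_mt ? 1 : 0) != AOM_CODEC_OK)
      GST_WARNING_OBJECT (av1enc, "Failed to set row based multi-threading");
  #endif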
There's no reason for it to inherit from GstObject apart from
locking, which is easily replaced, and inheriting from
GInitiallyUnowned made introspection awkward and needlessly
complicated.
We pass-through the video as is, only putting a GstMeta on it from the
caption sinkpad.
This fixes negotiation problems caused by not passing through caps
queries in both directions.
Also handle CAPS/ACCEPT_CAPS queries directly for the caption pad
instead of proxying.
Make the declaration and definition of the function consistent.
Note that GstBaseTransform::set_caps should return gboolean:
Compiling C object subprojects/gst-plugins-bad/ext/vulkan/f3f9d6b@@gstvulkan@sha/vkviewconvert.c.obj.
../subprojects/gst-plugins-bad/ext/vulkan/vkviewconvert.c(644):
warning C4133: '=': incompatible types - from 'GstFlowReturn (__cdecl *)(GstBaseTransform *,GstCaps *,GstCaps *)'
to 'gboolean (__cdecl *)(GstBaseTransform *,GstCaps *,GstCaps *)'
The SPS parsing functions take a parse_vui_param flag
to skip VUI parsing, but there's no indication in the output
SPS struct that the VUI was skipped.
The only caller that ever passed FALSE seems to be the
important gst_h264_parser_parse_nal() function, meaning the
cached SPS were always silently invalid. That needs changing
anyway, so in practice no one ever passes FALSE.
I don't see any use for saving a few microseconds in
order to silently produce garbage, and since this is still
unstable API, let's remove the parse_vui_param.
This patch adds to the CVF depayloader the capability to regroup fragmented
H.264 FU-A packets.
After all packets are regrouped, they are added to the "stash" of H.264
NAL units that will be sent as soon as an AVTP packet with M bit set is
found (usually, the last fragment).
Unrecognized fragments (such as a first fragment seen without the Start
bit set) are discarded - and any NAL units in the "stash" are sent
downstream, as if a SEQNUM discontinuity had happened.
This patch introduces the AVTP Compressed Video Format (CVF) depayloader
specified in IEEE 1722-2016 section 8. Currently, this depayloader only
supports H.264 encapsulation described in section 8.5.
It is also worth noting that only single NAL units are handled: aggregated
and fragmented payloads are not.
As stated in AVTP CVF payloader patch, AVTP timestamp is used to define
outgoing buffer DTS, while the H264_TIMESTAMP defines outgoing buffer
PTS.
When an AVTP packet is received, the extracted H.264 NAL unit is added to
a "stash" (the out_buffer) of H.264 NAL units. This "stash" is pushed
downstream as a single buffer (with NAL units aggregated according to the
format used in GStreamer, based on ISO/IEC 14496-15) as soon as we get the
AVTP packet with the M bit set.
This patch groups NAL units using a fixed NAL size length, sent downstream
in the `codec_data` capability.
The "stash" of NAL units can be prematurely sent downstream if a
discontinuity (a missing SEQNUM) happens.
This patch reuses the infra provided by gstavtpbasedepayload.c.
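For illustration, a minimal sketch of the length-prefixed aggregation
mentioned above, assuming a 4-byte NAL size length (the actual size comes
from `codec_data`):

  /* Wrap one extracted NAL unit with a big-endian size prefix so it can be
   * appended to the "stash" in the ISO/IEC 14496-15 style layout. */
  static GstBuffer *
  prefix_nal_with_size (GstBuffer * nal)
  {
    gsize nal_size = gst_buffer_get_size (nal);
    GstBuffer *prefix = gst_buffer_new_allocate (NULL, 4, NULL);
    GstMapInfo map;

    gst_buffer_map (prefix, &map, GST_MAP_WRITE);
    GST_WRITE_UINT32_BE (map.data, (guint32) nal_size);
    gst_buffer_unmap (prefix, &map);

    /* gst_buffer_append takes ownership of both buffers. */
    return gst_buffer_append (prefix, nal);
  }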
Based on the `mtu` property, the CVF payloader is now capable of properly
fragmenting H.264 NAL units that are bigger than the MTU into several AVTP
packets.
AVTP spec defines two methods for fragmenting H.264 packets, but this
patch only generates non-interleaved FU-A fragments.
Usually, only the last NAL unit from a group of NAL units in a single
buffer will be big enough to be fragmented. Nevertheless, only the last
AVTP packet sent for a group of NAL units will have the M bit set (this
means that the AVTP packet for the last fragment will only have the M
bit set if there are no more NAL units in the group).
This patch introduces the AVTP Compressed Video Format (CVF) payloader
specified in IEEE 1722-2016 section 8. Currently, this payloader only
supports the H.264 encapsulation described in section 8.5.
It is also worth noting that only single NAL units are encapsulated: no
aggregation or fragmentation is performed by the payloader.
An interesting characteristic of the CVF H.264 spec is that it defines an
H264_TIMESTAMP in addition to the AVTP timestamp. The latter is translated
to the GST_BUFFER_DTS while the former is translated to the
GST_BUFFER_PTS. From the AVTP CVF H.264 spec, it is clear that the AVTP
timestamp is related to the decoding order, while the H264_TIMESTAMP is
ancillary information for the H.264 decoder.
Upon receiving a buffer containing a group of NAL units, the avtpcvfpay
element will extract each NAL unit and payload them into individual AVTP
packets. The last AVTP packet generated for a group of NAL units will
have the M bit set, so the depayloader is able to properly regroup them.
The exact format of the buffer of NAL units is described in the
'codec_data' capability, which is parsed by avtpcvfpay in the same
way as is done in rtph264pay.
This patch reuses the infra provided by gstavtpbasepayload.c.
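A hedged sketch of reading the NAL length size from an avcC-style
'codec_data', similar in spirit to rtph264pay (error handling trimmed):

  /* In ISO/IEC 14496-15 "avcC" codec_data, the low two bits of byte 4 hold
   * lengthSizeMinusOne, giving the size of each NAL length prefix. */
  static guint
  parse_nal_length_size (GstBuffer * codec_data)
  {
    GstMapInfo map;
    guint nal_length_size = 4;  /* fallback */

    if (gst_buffer_map (codec_data, &map, GST_MAP_READ)) {
      if (map.size >= 5)
        nal_length_size = (map.data[4] & 0x03) + 1;
      gst_buffer_unmap (codec_data, &map);
    }
    return nal_length_size;
  }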
This patch introduces the avtpsrc element which implements a typical
network source. The avtpsrc element receives AVTPDUs encapsulated in
Ethernet frames and pushes them downstream in the GStreamer pipeline.
The implementation is pretty straightforward since most of the burden is
carried by the GstPushSrc class.
Like the avtpsink element, applications that utilize this element
must have the CAP_NET_RAW capability, since it is required by Linux to
open sockets in the AF_PACKET domain.
This patch introduces the avtpsink element, which implements a typical
network sink. The implementation is pretty straightforward since most of
the burden is carried by the GstBaseSink class.
The avtpsink element defines three new properties: 1) network interface
from where AVTPDU should be transmitted, 2) destination MAC address
(usually a multicast address), and 3) socket priority (SO_PRIORITY).
Socket setup and teardown are done in start/stop virtual methods while
AVTPDU transmission is carried out by render(). AVTPDUs are encapsulated
into Ethernet frames and transmitted to the network via AF_PACKET socket
domain. Linux requires the CAP_NET_RAW capability in order to open an
AF_PACKET socket, so applications that utilize this element must have
it. For further info about the AF_PACKET socket domain see packet(7).
Finally, AVTPDUs are expected to be transmitted at specific times -
according to the GstBuffer presentation timestamp - so the 'sync'
property from GstBaseSink is set to TRUE by default.
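A hedged sketch of the socket setup described above (interface handling and
error reporting trimmed; ETH_P_TSN is the IEEE 1722 EtherType):

  #include <sys/socket.h>
  #include <linux/if_packet.h>
  #include <linux/if_ether.h>
  #include <net/if.h>
  #include <arpa/inet.h>
  #include <unistd.h>

  /* Open an AF_PACKET socket bound to one interface and apply SO_PRIORITY.
   * This requires CAP_NET_RAW, as noted above. */
  static int
  open_avtp_socket (const char *ifname, int priority)
  {
    int fd = socket (AF_PACKET, SOCK_DGRAM, htons (ETH_P_TSN));
    struct sockaddr_ll addr = { 0 };

    if (fd < 0)
      return -1;

    addr.sll_family = AF_PACKET;
    addr.sll_protocol = htons (ETH_P_TSN);
    addr.sll_ifindex = if_nametoindex (ifname);

    if (bind (fd, (struct sockaddr *) &addr, sizeof (addr)) < 0 ||
        setsockopt (fd, SOL_SOCKET, SO_PRIORITY, &priority,
            sizeof (priority)) < 0) {
      close (fd);
      return -1;
    }
    return fd;
  }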
This patch introduces the AAF depayloader element, the counterpart of
the AAF payloader. As expected, this element inputs AVTPDUs, outputs
raw audio data, and supports AAF PCM encapsulation only.
The AAF depayloader srcpad produces a fixed format that is encoded
within the AVTPDU. Once the first AVTPDU is received by the element, the
audio features (e.g. sample format, rate, number of channels) are decoded
and the srcpad caps are set accordingly. Also, at this point, the
element pushes a SEGMENT event downstream defining the segment according
to the AVTP presentation time.
All AVTP depayloaders will share some common code. For that reason, this
patch introduces the GstAvtpBaseDepayload abstract class that implements
common depayloader functionalities. AAF-specific functionalities are
implemented in the derived class GstAvtpAafDepay.
This patch introduces the AVTP Audio Format (AAF) payloader element from
the AVTP plugin. The element inputs audio raw data and outputs AVTP
packets (aka AVTPDUs), implementing a typical protocol payloader element
from GStreamer.
AAF is one of the formats available to transport audio data in an AVTP
system. AAF is specified in IEEE 1722-2016 section 7 and provides two
encapsulation modes: PCM and AES3. This patch implements the PCM
encapsulation mode only.
The AAF payloader working mechanism consists of building the AAF header,
prepending it to the GstBuffer received on the sink pad, and pushing the
buffer downstream. Payloader parameters such as stream ID, maximum
transit time, time uncertainty, and timestamping mode are passed via
element properties. AAF doesn't support all possible sample format and
sampling rate values, so the sink pad caps template of the payloader is
a subset of audio/x-raw. Additionally, this patch implements only the
"normal" timestamping mode from AAF. "Sparse" mode should be implemented
in the future.
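A minimal sketch of the prepend-and-push mechanism described above;
AAF_HEADER_SIZE and fill_aaf_header() are hypothetical stand-ins for the
24-octet IEEE 1722 stream data header and the libavtp calls that populate it:

  #define AAF_HEADER_SIZE 24  /* hypothetical: IEEE 1722 stream data header */

  static GstFlowReturn
  payload_and_push (GstElement * self, GstPad * srcpad, GstBuffer * audio)
  {
    GstBuffer *header = gst_buffer_new_allocate (NULL, AAF_HEADER_SIZE, NULL);
    GstBuffer *pdu;

    /* fill_aaf_header() stands in for setting stream ID, timestamp,
     * format, stream_data_length, etc. via libavtp. */
    fill_aaf_header (self, header, gst_buffer_get_size (audio));

    /* gst_buffer_append takes ownership of both buffers: the result is the
     * AAF header followed by the audio payload, i.e. a complete AVTPDU. */
    pdu = gst_buffer_append (header, audio);

    return gst_pad_push (srcpad, pdu);
  }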
Upcoming patches will introduce other AVTP payloader elements that will
have some common code. For that reason, this patch introduces the
GstAvtpBasePayload abstract class that implements common payloader
functionalities, and the GstAvtpAafPay class that extends the
GstAvtpBasePayload class, implementing AAF-specific functionalities.
The AAF payloader element is most likely to be used with the AVTP sink
element (to be introduced by a later patch), but it could also be used
with the UDP sink element to implement AVTP over UDP as described in IEEE
1722-2016 Annex J.
This element was inspired by RTP payloader elements.
This patch introduces the bootstrap code from the AVTP plugin (plugin
definition and init) as well as the build system files. Upcoming patches
will introduce payloaders, source and sink elements provided by the AVTP
plugin. These elements can be utilized by a GStreamer pipeline to
implement TSN audio/video applications.
Regarding the plugin build system files, both autotools and meson files
are introduced. The AVTP plugin lands in ext/ since it has an
external dependency on libavtp, an open source AVTP packetization
library. For further information about libavtp check [1].
[1] https://github.com/AVnu/libavtp
The agent itself will take a ref in the property setter, so we would be
left with two references to the certificate object, when actually there
should be only one.
When WPEBackend-fdo >= 1.3.0 is detected, the threaded view now relies on
the wpe_fdo_egl_exported_image API instead of the EGLImageKHR-based API,
which is going to be deprecated in 2.26. The GLib sources created by the
view now use the default priority as well; the custom priority is no
longer required.
Regression introduced by b4bdcf15b7
This commit prevents the handshake from reaching dtlsdec when
the receive state of the receive bin is set to DROP (for example
when transceivers are sendonly).
This preserves the intent of the commit, by blocking the bin
at its sinks until the receive state is no longer BLOCK, but
makes sure the handshake still goes through, by only dropping
data at the src pads, as was the case before.
The timestamp/PTS alone is meaningless without the segment, and usually
applications care about the running-time or stream-time.
This also keeps the messages in sync with the spectrum and level
elements.
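A minimal sketch of the conversion (the segment here is the one last
received on the sink pad; field names in the posted message are
illustrative):

  GstClockTime pts = GST_BUFFER_PTS (buffer);
  GstClockTime running_time =
      gst_segment_to_running_time (&segment, GST_FORMAT_TIME, pts);
  GstClockTime stream_time =
      gst_segment_to_stream_time (&segment, GST_FORMAT_TIME, pts);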
Header data must be forwarded downstream, but if the demuxer has not
finished typefinding (e.g. ts, mp4, etc.), this header data can be cleared
by _stream_clear_pending_data(). Moreover, even though the demuxer has
finished downloading the header data, there is still fragment data to be
downloaded, so the fragment sequence shouldn't be advanced yet at that
point.
https://bugzilla.gnome.org/show_bug.cgi?id=776928
Including gl.h from WPEThreadedView.h leads to GST_LEVEL_DEFAULT being
detected as redefined. The proposed fix is to include config.h from the C++
implementation file and disable the gl.h inclusion in the header by using
forward declarations.
1. The spec indicates that the notification should occur near the end of
'setting the description' processing
2. The current location with the drop of the lock could cause the 'check
if negotiation is needed' logic to execute and become confused about
the state of the webrtcbin's current local descriptions.
In the bad case, the following assertions could be hit:
g_assert (trans->mline < gst_sdp_message_medias_len (webrtc->current_local_description->sdp));
g_assert (trans->mline < gst_sdp_message_medias_len (webrtc->current_remote_description->sdp));
Moving the signalling state change later in the set description task
means that checking for a renegotiation will early abort as the
signalling state is not STABLE before the session description and
transceivers have been updated.
The start of each segment is relative to the Period start, minus
the presentation time offset.
As specified in section 5.3.9.6 of the MPEG DASH specification:
The value of the @t attribute minus the value of the
@presentationTimeOffset specifies the MPD start time of
the first Segment in the series.
dashdemux was not taking presentationTimeOffset into account and in
some methods was not taking the Period start time into account either.
This commit modifies the segment->start value to always be
relative to the MPD start time (zero for VOD,
availabilityStartTime for live streams). This makes all uses of
the segment list consistent.
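In effect (a sketch; variable names are illustrative and timescale handling
is simplified):

  /* MPD start time of the first segment in the series, per 5.3.9.6:
   * @t minus @presentationTimeOffset, scaled from timescale units to
   * nanoseconds, anchored at the Period start (zero for VOD,
   * availabilityStartTime for live). */
  GstClockTime mpd_start = gst_util_uint64_scale (
      t - presentation_time_offset, GST_SECOND, timescale);
  segment->start = period_start + mpd_start;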
Fixes #841
If both data channels become ready simultaneously, then the two integer
read-add-update cycles can execute concurrently and only ever increment
once instead of the required twice. Use an atomic add instead.
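A sketch of the difference (the counter name is illustrative):

  /* Racy read-add-update: both threads can observe the same value and
   * store back the same +1 result. */
  priv->data_channels_ready = priv->data_channels_ready + 1;

  /* Atomic increment: each data channel becoming ready is counted once. */
  g_atomic_int_add (&priv->data_channels_ready, 1);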
It is very possible for badly behaving signalling or peers to send
us ICE candidates before we receive an SDP. While we had consideration
for that on the first set SDP, subsequent SDPs could result in
misconfigured ICE transports. Expand the previous code to also take
reconfigurations into account.
Limitations:
- No transport changes at all (ICE, DTLS)
- Codec changes are untested and probably don't work
- Stream removal doesn't remove transports (i.e. non-bundled transports
will stay around until webrtcbin is shutdown)
- Unified Plan SDP only. No Plan-B support.
Currently, if one was to set -Dhls-crypto to either libgcrypt or openssl
instead of auto, the following lines would fail because hls_crypto_dep is not
yet set:
if not hls_crypto_dep.found() and ['auto', 'libgcrypt'].contains(hls_crypto)
if not hls_crypto_dep.found() and ['auto', 'openssl'].contains(hls_crypto)
Instead, change "if not hls_crypto_dep.found()" to "if not have_hls_crypto"
which fixes the error.
Use a timeout to limit the amount of time we wait for the compositor to
send the initial configure event. Compositors are supposed to emit a
configure event before any wl_buffer can be attached. The problem is
that Weston strongly enforces this, while gnome-shell simply does not
emit such an event.
When a buffer is already used by the compositor, we don't need to attach
it and hold one more reference. Just check used_by_compositor and return
if it is true. The assertion/error log is not needed; this is normal
behavior.