gstreamer/ext/avtp/gstavtp.c
/*
* GStreamer AVTP Plugin
* Copyright (C) 2019 Intel Corporation
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later
* version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the
* Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
* Boston, MA 02110-1301 USA
*/
/**
* plugin-avtp:
*
* ## Audio Video Transport Protocol (AVTP) Plugin
*
* The AVTP plugin implements typical Talker and Listener functionalities that
* can be leveraged by GStreamer-based applications in order to implement TSN
* audio/video applications.
*
* ### Dependencies
*
* The plugin uses libavtp to handle AVTP packetization. The libavtp source
* code, along with instructions to build and install it, can be found at
* https://github.com/AVnu/libavtp.
*
* If libavtp isn't detected at configure time, the plugin isn't built.
*
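* A typical way to build and install libavtp looks like the following (a
* sketch, assuming a meson/ninja setup and default installation paths; adjust
* to your distribution):
*
* $ git clone https://github.com/AVnu/libavtp.git
* $ cd libavtp
* $ meson setup build
* $ ninja -C build
* $ sudo ninja -C build install
*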
* ### The application/x-avtp mime type
*
* For valid AVTPDUs encapsulated in GstBuffers, we use caps with the
* application/x-avtp mime type.
*
* The AVTP mime type is very simple and has no fields.
*
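* Since the caps carry no fields, they can be stated explicitly on a link with
* a plain capsfilter when desired. An illustrative (hypothetical) fragment:
*
* $ gst-launch-1.0 avtpsrc ifname=$IFNAME ! application/x-avtp ! \
*       avtpaafdepay ! fakesink
*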
* ### gPTP Setup
*
* The Linuxptp project provides the ptp4l daemon, which synchronizes the PTP
* clock from the NIC, and the pmc tool, which communicates with ptp4l to
* get/set runtime settings. The project also provides the phc2sys daemon,
* which synchronizes the PTP clock and the system clock.
*
* The AVTP plugin requires that the system clock is synchronized with the PTP
* clock and that the TAI offset is properly set in the kernel. ptp4l and
* phc2sys can be set up in many different ways; below we provide an example
* that fulfills the plugin requirements. For further information, check the
* ptp4l(8) and phc2sys(8) man pages.
*
* In the following instructions, replace $IFNAME with your PTP-capable NIC
* interface. The gPTP.cfg file mentioned below can be found in
* /usr/share/doc/linuxptp/ (the exact location depends on your distro).
*
* Synchronize PTP clock with PTP time:
*
* $ ptp4l -f gPTP.cfg -i $IFNAME
*
* Enable TAI offset to be automatically set by phc2sys:
*
* $ pmc -u -t 1 -b 0 'SET GRANDMASTER_SETTINGS_NP \
* clockClass 248 clockAccuracy 0xfe \
* offsetScaledLogVariance 0xffff \
* currentUtcOffset 37 leap61 0 leap59 0 \
* currentUtcOffsetValid 1 ptpTimescale 1 \
* timeTraceable 1 frequencyTraceable 0 timeSource 0xa0'
*
* Synchronize system clock with PTP clock:
*
* $ phc2sys -f gPTP.cfg -s $IFNAME -c CLOCK_REALTIME -w
*
* The commands above should be run on both AVTP Talker and Listener hosts.
*
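* One way to sanity-check the synchronization (illustrative commands; ptp4l
* must be running) is to query ptp4l via pmc and inspect the reported port
* state and master offset:
*
* $ pmc -u -b 0 'GET PORT_DATA_SET'
* $ pmc -u -b 0 'GET TIME_STATUS_NP'
*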
* With clocks properly synchronized, applications using the AVTP plugin
* should use GstSystemClock with GST_CLOCK_TYPE_REALTIME as the pipeline
* clock.
*
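* A minimal sketch of how an application could set such a clock (error
* handling omitted; "pipeline" is assumed to be an existing GstPipeline):
*
* GstClock *clock = g_object_new (GST_TYPE_SYSTEM_CLOCK,
*     "clock-type", GST_CLOCK_TYPE_REALTIME, NULL);
* gst_pipeline_use_clock (GST_PIPELINE (pipeline), clock);
* gst_object_unref (clock);
*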
* ### Clock Reference Format (CRF)
*
* Even though the systems are synchronized by PTP, different talkers may send
* media streams that are out of phase or whose frequencies do not exactly
* match. This is particularly important when a single listener processes data
* from multiple talkers. The systems in this scenario can benefit from a
* common clock distributed among them.
*
* This can be achieved by using the avtpcrfsync element which implements CRF
* as described in Chapter 10 of IEEE 1722-2016. avtpcrfcheck can also be used
* to validate that the adjustment conforms to the criteria specified in the
* spec. For further details, look at the documentation for the respective
* elements.
*
* ### Traffic Control Setup
*
* FQTSS (Forwarding and Queuing Enhancements for Time-Sensitive Streams) can
* be enabled on Linux with the help of the mqprio and cbs qdiscs provided by
* Linux Traffic Control. Below we provide an example of how to configure those
* qdiscs in order to transmit a 1280x720@30fps CVF H.264 stream. For further
* information on how to configure these qdiscs, check the tc-mqprio(8) and
* tc-cbs(8) man pages.
*
* On the host that will run as AVTP Talker (pipeline that generates the video
* stream), run the following commands:
*
* Configure the mqprio qdisc (replace $MQPRIO_HANDLE_ID with an unused handle ID):
*
* $ tc qdisc add dev $IFNAME parent root handle $MQPRIO_HANDLE_ID mqprio \
* num_tc 3 map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
* queues 1@0 1@1 2@2 hw 0
*
* Configure the cbs qdisc (replace $CBS_HANDLE_ID with an unused handle ID):
*
* $ tc qdisc replace dev $IFNAME parent $MQPRIO_HANDLE_ID:1 \
* handle $CBS_HANDLE_ID cbs idleslope 27756 sendslope -972244 \
* hicredit 42 locredit -1499 offload 1
*
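* As a quick sanity check on the values above (see tc-cbs(8) for the exact
* formulas; a 1 Gbit/s link is assumed):
*
* sendslope = idleslope - port_transmit_rate = 27756 - 1000000 = -972244
*
* where idleslope is the bandwidth, in kbit/s, reserved for the stream, and
* hicredit/locredit bound the credits the stream may accumulate.
*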
* Also, the plugin implements a transmission scheduling mechanism that relies
* on the ETF qdisc, so make sure it is properly configured in your system. It
* can be configured in many ways; an example follows.
*
* $ tc qdisc add dev $IFNAME parent $CBS_HANDLE_ID:1 etf \
* clockid CLOCK_TAI delta 500000 offload
*
* No Traffic Control configuration is required on the host running as AVTP
* Listener.
*
* ### Capabilities
*
* The `avtpsink` and `avtpsrc` elements open `AF_PACKET` sockets, which
* require the `CAP_NET_RAW` capability. Therefore, applications must have
* that capability in order to successfully use these elements. For instance,
* one can use:
*
* $ sudo setcap cap_net_raw+ep <application>
*
* Applications can drop this capability after the sockets are open, that is,
* after the `avtpsrc` or `avtpsink` elements transition to the PAUSED state.
* See the setcap(8) man page for more information.
*
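* A minimal sketch of how an application could drop the capability from C
* once the sockets are open (assumes libcap, i.e. linking with -lcap, and
* that the process actually holds cap_net_raw; error handling omitted):
*
* #include <sys/capability.h>
*
* cap_t caps = cap_get_proc ();
* cap_value_t to_drop[] = { CAP_NET_RAW };
* cap_set_flag (caps, CAP_EFFECTIVE, 1, to_drop, CAP_CLEAR);
* cap_set_flag (caps, CAP_PERMITTED, 1, to_drop, CAP_CLEAR);
* cap_set_proc (caps);
* cap_free (caps);
*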
* ### Elements configuration
*
* Each element has its own configuration properties, with some being common
* to several elements. Basic properties are:
*
* * streamid (avtpaafpay, avtpcvfpay, avtpaafdepay, avtpcvfdepay,
* avtpcrfsync, avtpcrfcheck): Stream ID associated with the stream.
*
* * ifname (avtpsink, avtpsrc, avtpcrfsync, avtpcrfcheck): Network interface
* used to send/receive AVTP packets.
*
* * dst-macaddr (avtpsink, avtpsrc): Destination MAC address for the stream.
*
* * priority (avtpsink): Priority used by the plugin to transmit AVTP
* traffic.
*
* * mtt (avtpaafpay, avtpcvfpay): Maximum Transit Time, in nanoseconds, as
* defined in AVTP spec.
*
* * tu (avtpaafpay, avtpcvfpay): Maximum Time Uncertainty, in nanoseconds, as
* defined in AVTP spec.
*
* * processing-deadline (avtpaafpay, avtpcvfpay, avtpsink): Maximum amount of
* time, in nanoseconds, that the pipeline is expected to take to process any
* buffer. This value should be the same on the payloader and the sink, as
* this time is also taken into consideration to define the correct
* presentation time of the packets on the AVTP listener side. It should be
* as low as possible (zero if possible).
*
* * tstamp-mode (avtpaafpay): AAF timestamping mode, as defined in AVTP spec.
*
* * mtu (avtpcvfpay): Maximum Transmit Unit of the underlying network, used
* to determine when to fragment a CVF packet and how big it should be.
*
* Check each element documentation for more details.
*
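* As an illustration of how these properties fit together, a hypothetical AAF
* audio talker pipeline could look like the one below (the property values
* and the S16BE/48 kHz format are assumptions for illustration, not
* recommendations; the capability and clock requirements described above
* still apply):
*
* $ gst-launch-1.0 audiotestsrc is-live=true ! audioconvert ! \
*       audio/x-raw,format=S16BE,rate=48000,channels=2 ! \
*       avtpaafpay streamid=0xAABBCCDDEEFF0000 mtt=50000000 tu=1000000 \
*       processing-deadline=20000000 ! avtpsink ifname=$IFNAME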
*
* ### Running a sample pipeline
*
* The following pipelines assume a hypothetical `-k ptp` flag that forces the
* pipeline clock to be GstPtpClock. A real application would programmatically
* define GstPtpClock as the pipeline clock (see next section). It is also
* assumed that `gst-launch-1.0` has CAP_NET_RAW capability.
*
* On the AVTP talker, the following pipeline can be used to generate an H.264
* stream to be sent over the network using AVTP:
*
* $ gst-launch-1.0 -k ptp videotestsrc is-live=true ! clockoverlay ! \
* x264enc ! avtpcvfpay processing-deadline=20000000 ! \
* avtpcrfsync ifname=$IFNAME ! avtpsink ifname=$IFNAME
*
* On the AVTP listener host, the following pipeline can be used to get the
* AVTP stream, depacketize it and show it on the screen:
*
* $ gst-launch-1.0 -k ptp avtpsrc ifname=$IFNAME ! \
* avtpcrfcheck ifname=$IFNAME ! avtpcvfdepay ! \
* vaapih264dec ! videoconvert ! clockoverlay halignment=right ! \
* queue ! autovideosink
*
* ### Pipeline clock
*
* The AVTP plugin elements require that the pipeline clock is in sync with
* the network PTP clock. As GStreamer has a GstPtpClock, using it should be
* the simplest way of achieving that.
*
* However, as there's no way of forcing a clock on a pipeline with the
* gst-launch-1.0 application, even for quick tests it's necessary to write an
* application. One can refer to the GStreamer "hello world" application,
* remembering to set the pipeline clock to GstPtpClock before setting the
* pipeline to the "PLAYING" state. Some code like:
*
* GstClock *clk = gst_ptp_clock_new("ptp-clock", 0);
* gst_clock_wait_for_sync(clk, GST_CLOCK_TIME_NONE);
* gst_pipeline_use_clock (GST_PIPELINE (pipeline), clk);
*
* would do the trick.
*
* ### Disclaimer
*
* It's out of scope for the AVTP plugin to verify how it is invoked, should
* malicious software do so for Denial of Service attempts or other compromise
* attempts.
*
*/
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
#include <gst/gst.h>
#include "gstavtpaafdepay.h"
#include "gstavtpaafpay.h"
#include "gstavtpcvfdepay.h"
#include "gstavtpcvfpay.h"
#include "gstavtpsink.h"
#include "gstavtpsrc.h"
#include "gstavtpcrfsync.h"
#include "gstavtpcrfcheck.h"
static gboolean
plugin_init (GstPlugin * plugin)
{
  if (!gst_avtp_aaf_pay_plugin_init (plugin))
    return FALSE;
  if (!gst_avtp_aaf_depay_plugin_init (plugin))
    return FALSE;
  if (!gst_avtp_sink_plugin_init (plugin))
    return FALSE;
  if (!gst_avtp_src_plugin_init (plugin))
    return FALSE;
  if (!gst_avtp_cvf_pay_plugin_init (plugin))
    return FALSE;
  if (!gst_avtp_cvf_depay_plugin_init (plugin))
    return FALSE;
  if (!gst_avtp_crf_sync_plugin_init (plugin))
    return FALSE;
  if (!gst_avtp_crf_check_plugin_init (plugin))
    return FALSE;
  return TRUE;
}
GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR,
avtp, "Audio/Video Transport Protocol (AVTP) plugin",
plugin_init, VERSION, GST_LICENSE, GST_PACKAGE_NAME, GST_PACKAGE_ORIGIN);