Instead of storing the raw cc_data, store the 2 cea608 fields individually
as well as the ccp data.
Simply copying the input cc_data to the output cc_data violates a
number of requirements in the cea708 specification, the most prominent
being that cea608 triples must be placed at the beginning of each cdp.
We also need to comply with the framerate-dependent limits for both the
cea608 and the ccp data, which may involve splitting or merging some
cea608 data but not ccp data, or vice versa.
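A minimal sketch of the kind of splitting this implies, assuming the
standard cc_data triple layout (cc_valid/cc_type in the first byte of
each triple); the struct and function names are invented for
illustration and are not the actual ccconverter code:

```c
#include <glib.h>

typedef struct {
  GByteArray *field1;           /* CEA-608 field 1 byte pairs */
  GByteArray *field2;           /* CEA-608 field 2 byte pairs */
  GByteArray *ccp;              /* CCP (DTVCC) data */
} CCBuffers;

static void
split_cc_data (CCBuffers * out, const guint8 * cc_data, gsize len)
{
  gsize i;

  for (i = 0; i + 2 < len; i += 3) {
    gboolean cc_valid = (cc_data[i] & 0x04) != 0;
    guint cc_type = cc_data[i] & 0x03;

    if (!cc_valid)
      continue;

    switch (cc_type) {
      case 0x00:                /* CEA-608 field 1 pair */
        g_byte_array_append (out->field1, &cc_data[i + 1], 2);
        break;
      case 0x01:                /* CEA-608 field 2 pair */
        g_byte_array_append (out->field2, &cc_data[i + 1], 2);
        break;
      case 0x02:                /* CCP continuation data */
      case 0x03:                /* CCP packet start */
        g_byte_array_append (out->ccp, &cc_data[i], 3);
        break;
    }
  }
}
```

With the data stored this way, the output cdp can be rebuilt with the
cea608 triples first and the framerate-dependent limits applied to each
kind of data independently.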
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1116>
This stops stripping the four-byte start code. This had been fixed and
then broken again, because the fix was causing a timestamp shift. We
now call gst_base_parse_set_ts_at_offset() with the offset of the first
NAL to ensure that fixing a moderately broken input stream won't affect
the timestamps. We also fix the unit test, removing a comment about the
stripping behaviour not being correct.
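A short sketch of how the new timestamp anchoring is meant to work;
`nal_offset` is a placeholder for the offset the parser finds, not an
actual variable in h264parse:

```c
#include <gst/base/gstbaseparse.h>

static void
anchor_ts_at_first_nal (GstBaseParse * parse, gsize nal_offset)
{
  /* Tell GstBaseParse that the incoming timestamps apply to the data
   * starting at nal_offset rather than at offset 0, so keeping the
   * leading junk bytes doesn't shift the timestamps. */
  gst_base_parse_set_ts_at_offset (parse, nal_offset);
}
```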
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1251>
TSN streams are expected to send packets to the network at a
well-defined "pace", which is arbitrarily defined for each stream. This
pace is defined by the "measurement interval" property of a stream.
When the AVTP CVF payloader element - avtpcvfpay - fragments a video
frame that is too big to be sent to the network, it currently defines
that all fragments should be transmitted at the same time (via the DTS
of the generated GstBuffers, as the sink will use those to time the
transmission of the AVTPDUs). This doesn't comply with the stream
definition, which also limits how many packets can be sent in a given
measurement interval.
This patch solves that by spreading the DTS of the GstBuffers
containing the AVTPDUs in time. Two new properties,
"measurement-interval" and "max-interval-frames", were added to the
avtpcvfpay element so that it knows the stream measurement interval and
how many AVTPDUs it can send in each one. More details on the method
used to properly spread the DTS/PTS according to the measurement
interval can be found in a code comment inside this patch; a simplified
sketch is shown below. Tests were also added for the new properties and
behaviour.
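A simplified sketch of the spreading idea (the element's actual
computation lives in the code comment mentioned above; the helper below
is only illustrative):

```c
#include <gst/gst.h>

/* Fragment i of a frame is assigned to the measurement interval it
 * belongs to, with at most max_interval_frames fragments per interval. */
static GstClockTime
fragment_dts (GstClockTime base_dts, guint fragment_index,
    guint max_interval_frames, GstClockTime measurement_interval)
{
  guint interval_index = fragment_index / max_interval_frames;

  return base_dts + (GstClockTime) interval_index * measurement_interval;
}
```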
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1004>
This is the first version of AV1 parser implementation in GStreamer.
A test file is also provided with several test cases. It contains a
test sequence taken from the aom testdata set, with one key and one
inter-frame. The same test sequence has been reencoded to annexb.
The test data is taken from the aom testdata set (and re-encoded for
annexb) as well as from handcrafted test cases. Once reference test
data is available, the testing could be improved as well.
Co-author: He Junyan <junyan.he@hotmail.com>
Co-author: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/785>
This adds tests to validate whether the avtpcrfsync element applies the
adjustment correctly.
The infrastructure needed to include additional source files when
compiling is also added. This change is exactly the same as the one in
gst-plugins-good.
We can't be sure about the reference count if the muxer is currently
running, which can happen in the test_reappearing_pad test. An
additional reference might temporarily be owned by the srcpad task of
tsmux while iterating over the pads.
Until now, all streams in tsmux had to be present before the element
produced its first buffer. Now they can appear at any point during the
stream, or even disappear and reappear later using the same PID, as
sketched below.
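A usage sketch of the new behaviour, assuming the usual mpegtsmux
convention that the number in the requested pad name becomes the stream
PID:

```c
#include <gst/gst.h>

static void
reuse_pid_example (GstElement * mux)
{
  /* The number in the requested pad name is used as the stream PID. */
  GstPad *pad = gst_element_get_request_pad (mux, "sink_42");

  /* ... the stream runs for a while, then goes away ... */
  gst_element_release_request_pad (mux, pad);
  gst_object_unref (pad);

  /* Later the stream reappears and can reuse the same PID;
   * keep this ref for as long as the stream is active. */
  pad = gst_element_get_request_pad (mux, "sink_42");
}
```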
`pipe()` hasn't been used since 15927b6511,
and `socketpair()` from `#include <sys/socket.h>` is used only in the
examples. In practice, you can probably also use anything else that
allows you to create fd pairs, such as named pipes or anonymous pipes.
We use the cross-platform GstPollFD API in the plugin.
Some raw h264 encoded files trigger the assignment of a wrong PTS to
buffers when some SEI data is provided. This change prevents that from
happening, and a test covering the behaviour is added.
Add static or dynamic mpd with:
- baseURL
- period
- adaptation_set
- representation
- SegmentList
- SegmentURL
- SegmentTemplate
Support multiple audio and video streams.
Pass the DashIF.org conformance tests.
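A minimal usage sketch from C; the pipeline below is illustrative only
(whether dashsink takes parsed H.264 directly and where it writes the
MPD depend on the element's pad templates and default properties):

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *pipeline;
  GstMessage *msg;
  GError *error = NULL;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "videotestsrc num-buffers=300 ! x264enc ! h264parse ! dashsink",
      &error);
  if (!pipeline) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    g_clear_error (&error);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  msg = gst_bus_timed_pop_filtered (GST_ELEMENT_BUS (pipeline),
      GST_CLOCK_TIME_NONE, GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  gst_message_unref (msg);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```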
The SVT-HEVC (Scalable Video Technology[0] for HEVC) Encoder is an
open source video coding technology[1] that is highly optimized for
Intel Xeon Scalable processors and Intel Xeon D processors.
[0] https://01.org/svt
[1] https://github.com/OpenVisualCloud/SVT-HEVC
According to the H.264 ITU standard from 06/19, GST_H264_PROFILE_HIGH_422
(profile_idc = 122) with constraint_set1_flag = 0 and
constraint_set3_flag = 0 can be mapped to high-4:2:2 or high-4:4:4.
GST_H264_PROFILE_HIGH_422 with constraint_set1_flag = 0 and
constraint_set3_flag = 1 can be mapped to high-4:2:2, high-4:4:4,
high-4:2:2-intra or high-4:4:4-intra.
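Illustratively, that mapping could be expressed as below (a hypothetical
helper, not the actual gsth264parser code), for profile_idc 122 with
constraint_set1_flag == 0:

```c
#include <glib.h>

static const gchar **
high_422_candidate_profiles (gboolean constraint_set3_flag)
{
  static const gchar *non_intra[] = {
    "high-4:2:2", "high-4:4:4", NULL
  };
  static const gchar *with_intra[] = {
    "high-4:2:2", "high-4:4:4", "high-4:2:2-intra", "high-4:4:4-intra",
    NULL
  };

  return constraint_set3_flag ? with_intra : non_intra;
}
```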
Weak refs don't quite work correctly here, as there is always a race
with taking the lock between find_view() and remove_view(). If
find_view() returns a view that is about to be removed by
remove_view(), then we have an interesting situation.
In theory, the number and type of views for an image are relatively
constant and should not change once they've been set up, which means
that it is actually practical to perform pool-like reference counting
here, where the image holds a pool of different views that it can give
out as necessary.
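A hypothetical sketch of that pool idea (names invented for
illustration, not the actual Vulkan library code): the image owns its
views, find_view() returns them with an extra ref taken under the lock,
and views only go away together with the image, so a concurrent removal
can no longer free a view that was just handed out.

```c
#include <glib.h>

typedef struct {
  gint ref_count;
  /* ... the VkImageView handle, create info, etc. ... */
} ViewInfo;

typedef struct {
  GMutex lock;
  GPtrArray *views;             /* pool of ViewInfo *, owned by the image */
} ImageViewPool;

static ViewInfo *
pool_find_view (ImageViewPool * pool, GCompareFunc matches, gpointer data)
{
  ViewInfo *found = NULL;
  guint i;

  g_mutex_lock (&pool->lock);
  for (i = 0; i < pool->views->len; i++) {
    ViewInfo *view = g_ptr_array_index (pool->views, i);

    if (matches (view, data) == 0) {
      /* take the extra ref while still holding the lock */
      g_atomic_int_inc (&view->ref_count);
      found = view;
      break;
    }
  }
  g_mutex_unlock (&pool->lock);

  return found;
}
```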
The current code would change any non-OK return from gst_pad_push() to
GST_FLOW_ERROR, thus hiding meaningful returns such as GST_FLOW_EOS.
Tests were also added.
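A sketch of the fix (generic, not the element's exact code): return
whatever gst_pad_push() reported instead of rewriting it.

```c
#include <gst/gst.h>

static GstFlowReturn
push_buffer (GstPad * srcpad, GstBuffer * buffer)
{
  GstFlowReturn ret = gst_pad_push (srcpad, buffer);

  if (ret != GST_FLOW_OK)
    g_debug ("pushing buffer returned %s", gst_flow_get_name (ret));

  /* propagate GST_FLOW_EOS, GST_FLOW_FLUSHING, etc. instead of
   * collapsing everything into GST_FLOW_ERROR */
  return ret;
}
```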
Exposure mode property, extra colour tone values (aqua, emboss, sketch, neon), extra scene modes (backlight, flowers, AR, HDR).
Missing vmethods for exposure mode, analog gain, lens focus, colour temperature, and min & max exposure time.
Contributions by Mohammed Sameer <msameer@foolab.org> and Adam Pigg <adam@piggz.co.uk>
To allow curlhttpsrc to support DASH streams that use the on-demand
profile, it needs to support HTTP Range GETs. In GStreamer, the RANGE
is specified by issuing a GST_FORMAT_BYTES seek to set the start and
end of the range. curlhttpsrc needs to implement seek and set the
appropriate curl options to make it add the Range header to the
request.
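A sketch of how such a seek could be translated into libcurl options
(not the actual curlhttpsrc code; the inclusive/exclusive end
adjustment is left out):

```c
#include <curl/curl.h>
#include <glib.h>

static void
set_range (CURL * handle, guint64 start, guint64 stop)
{
  gchar *range;

  if (stop != (guint64) -1)
    range = g_strdup_printf ("%" G_GUINT64_FORMAT "-%" G_GUINT64_FORMAT,
        start, stop);
  else
    range = g_strdup_printf ("%" G_GUINT64_FORMAT "-", start);

  /* libcurl adds the corresponding "Range:" header to the request */
  curl_easy_setopt (handle, CURLOPT_RANGE, range);
  g_free (range);
}
```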
Sometimes one wants to force a clock on some pipelines - for instance,
when testing TSN-related pipelines, one usually uses GstPtpClock or
CLOCK_REALTIME (assuming the system realtime clock is in sync with the
network one). Until now, one needed to write an application for that -
not difficult, but quite boring if one just wants to test something.
This patch presents a new element to help with that: clockselect.
clockselect is a pipeline with two properties to select a clock. One
property, "clock-id", allows choosing between the "monotonic",
"realtime", "ptp" and "default" clocks - where "default" keeps the
pipeline behaviour of choosing a clock based on its elements. The other
property, "ptp-domain", selects which PTP domain should be used.
Some very simple tests were also added for this new element.
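A usage sketch from C, based only on the properties named above
(gst_util_set_object_arg() is used so the example works whether
"clock-id" is a string or an enum property):

```c
#include <gst/gst.h>

static GstElement *
make_ptp_pipeline (void)
{
  GstElement *pipeline = gst_element_factory_make ("clockselect", NULL);
  GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
  GstElement *sink = gst_element_factory_make ("fakesink", NULL);

  /* use the PTP clock from domain 0 instead of the default clock */
  gst_util_set_object_arg (G_OBJECT (pipeline), "clock-id", "ptp");
  g_object_set (pipeline, "ptp-domain", 0, NULL);

  /* clockselect is itself a pipeline, so elements are added directly */
  gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
  gst_element_link (src, sink);

  return pipeline;
}
```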
This adds two properties:
* scte-35-pid: If not 0, enables the SCTE-35 support for the current
program. This will write the proper PMT and send SCTE-35 NULL
commands (i.e. heartbeats) at a regular interval
* scte-35-null-interval: This specifies the interval at which the
NULL commands should be sent
Sending SCTE-35 commands is done by creating the appropriate SCTE-35
GstMpegtsSection and then sending it to the muxer. See the associated
example and the sketch below.
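A sketch of what such an injection could look like, assuming the
SCTE-35 helpers from the GStreamer mpegts library (exact arguments and
PID handling should be taken from the associated example):

```c
#include <gst/mpegts/mpegts.h>

static void
send_splice_out (GstElement * tsmux, guint32 event_id,
    GstClockTime splice_time, GstClockTime duration)
{
  GstMpegtsSCTESIT *sit;
  GstMpegtsSection *section;

  /* scte-35-pid must be set on the muxer for the section to be written */
  sit = gst_mpegts_scte_splice_out_new (event_id, splice_time, duration);
  section = gst_mpegts_section_from_scte_sit (sit, 500);
  gst_mpegts_section_send_event (section, tsmux);
  gst_mpegts_section_unref (section);
}
```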
x265 does not allow the user to configure a picture size smaller than
one CU, and maxCUSize must be 16, 32, or 64.
Therefore, the CU size must be set according to the input resolution,
and the input resolution cannot be less than 16.
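One illustrative way to derive a valid maxCUSize from the input
resolution under that constraint (the actual x265enc code may differ):

```c
#include <glib.h>

static guint
pick_max_cu_size (guint width, guint height)
{
  guint min_dim = MIN (width, height);

  /* maxCUSize must be 16, 32 or 64 and must not exceed either picture
   * dimension; inputs smaller than 16 cannot be encoded at all. */
  if (min_dim >= 64)
    return 64;
  else if (min_dim >= 32)
    return 32;
  else
    return 16;
}
```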
... and add our stub cuda header.
The newly introduced stub cuda.h file defines minimal types in order to
build the nvcodec plugin without a system-installed CUDA toolkit
dependency. This makes cross-compilation possible.
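A hypothetical flavour of what such a stub header provides, i.e. just
enough typedefs for the plugin to compile without the toolkit installed
(the real stub's contents may differ):

```c
/* stub cuda.h - minimal types only, no CUDA toolkit required */
typedef enum { CUDA_SUCCESS = 0 } CUresult;
typedef int CUdevice;
typedef void *CUcontext;
typedef void *CUstream;
typedef void *CUgraphicsResource;
```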
... and put them into new nvcodec plugin.
* nvcodec plugin
Each nvenc and nvdec element is now part of the nvcodec plugin
for better interoperability.
Additionally, cuda runtime API header dependencies
(i.e., cuda_runtime_api.h and cuda_gl_interop.h) are removed.
Note that CUDA runtime APIs have the prefix "cuda". Since the 1.16
release with Windows support, only symbols from "cuda.h" and "cudaGL.h"
have been used, except for some defined types. However, those types
could be replaced with types defined by "cuda.h".
* dynamic library loading
The CUDA library will be opened with g_module_open() instead of being
linked at build time. On Windows, nvcuda.dll is installed into the
system path by the CUDA Toolkit installer, and on *nix the user should
ensure that libcuda.so.1 is loadable (i.e., via LD_LIBRARY_PATH or the
default dlopen path). Therefore, the NVIDIA_VIDEO_CODEC_SDK_PATH
build-time environment variable dependency for Windows is removed.
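A sketch of the loading scheme using GModule (the symbol set and error
handling in the real plugin are more extensive):

```c
#include <gmodule.h>

typedef int (*CuInitFunc) (unsigned int flags);

static gboolean
load_cuda (void)
{
  GModule *module;
  CuInitFunc cu_init = NULL;

#ifdef G_OS_WIN32
  module = g_module_open ("nvcuda.dll", G_MODULE_BIND_LAZY);
#else
  module = g_module_open ("libcuda.so.1", G_MODULE_BIND_LAZY);
#endif
  if (!module)
    return FALSE;

  if (!g_module_symbol (module, "cuInit", (gpointer *) & cu_init))
    return FALSE;

  /* 0 is CUDA_SUCCESS */
  return cu_init (0) == 0;
}
```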