The rtpbin sends signals for all SSRCs. Don't send an EOS when the SSRC
does not match the stream SSRC.
This avoids problems when an SSRC from another receiver times out.
In some scenarios the fakevideosink shouldn't advertise the overlay-composition
meta, for instance so that overlay elements perform the subtitle blending
themselves.
According to the specification, the adaptation field length must be 183 if
there is no payload data and < 183 if the packet contains an adaptation
field and payload data.
Unfortunately some payloaders always set the flag for payload data, even if
the adaptation field length is 183.
Don't return with an error in this case. Clear the payload data flag
instead and parse the adaptation field as usual. This avoids visual
artefacts for such streams.
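A minimal sketch of that tolerance (not the actual demuxer code), assuming a
standard 188-byte TS packet:

```c
#include <glib.h>

/* Sketch only: tolerate payloaders that set the payload flag even though
 * the adaptation field already fills the whole 188-byte packet. */
static gboolean
check_adaptation_field (const guint8 * packet, gboolean * has_payload)
{
  guint8 afc = (packet[3] >> 4) & 0x3;  /* adaptation_field_control */
  guint8 af_length;

  *has_payload = (afc & 0x1) != 0;
  if (!(afc & 0x2))
    return TRUE;                        /* no adaptation field at all */

  af_length = packet[4];
  if (*has_payload && af_length == 183) {
    /* Broken payloader: a 183-byte adaptation field leaves no room for
     * payload, so clear the flag instead of returning an error. */
    *has_payload = FALSE;
  }
  /* ... parse the adaptation field as usual ... */
  return af_length <= 183;
}
```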
For interlaced video:
* set the interlace mode in the src caps
* double the height from SPS in the caps
* set field latency instead of frame latency
Fixes #778
Add a new property for signalling that there is no incoming data
from the peer. This can be useful if users want to stop streaming
when the connection is alive but no packets are arriving.
Both 2 and 4 are supported versions of the AAC ADTS stream format,
so we need to set the correct version to help negotiation,
especially for non-autopluggable pipelines.
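For illustration, a sketch (not the actual aacparse code) of deriving the
version from the ADTS header's ID bit and reflecting it in the caps:

```c
#include <gst/gst.h>

/* Sketch: derive the caps MPEG version from an ADTS fixed header.
 * "data" points at the start of an ADTS frame (0xFFFx syncword). */
static GstCaps *
adts_caps_for_header (const guint8 * data)
{
  /* Byte 1 is 1111VLLP: V is the "ID" bit, 1 = MPEG-2, 0 = MPEG-4. */
  gint mpegversion = (data[1] & 0x08) ? 2 : 4;

  return gst_caps_new_simple ("audio/mpeg",
      "mpegversion", G_TYPE_INT, mpegversion,
      "stream-format", G_TYPE_STRING, "adts", NULL);
}
```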
Some mpeg-ts (HLS, DVB, ...) streams out there have completely broken
PCR streams on which we can't reliably recover correct timestamps.
For those, provide a property that will ignore the program PCR stream
(by faking that it's not present, i.e. PCR PID 0x1fff).
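As a usage sketch, assuming the new property ends up exposed on tsdemux
under the name "ignore-pcr" (an assumption here):

```c
/* Assumed property name: "ignore-pcr" (boolean) on tsdemux. */
GstElement *demux = gst_element_factory_make ("tsdemux", NULL);
g_object_set (demux, "ignore-pcr", TRUE, NULL);
```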
Initially the case "only codec_data is different" was addressed in
https://bugzilla.gnome.org/show_bug.cgi?id=705333 in order to handle
unusual bitstreams, i.e. the case where SPS and PPS are placed in the
bitstream. When SPS/PPS are signalled only via caps by upstream,
however, the updated codec_data is mandatory for the decoder and
therefore we shouldn't ignore it.
According to the following two specs, add support for AC-4 in tsdemux.
1. ETSI TS 103 190-2 V1.2.1 (2018-02): Annex D (normative): AC-4 in MPEG-2 transport streams
2. ETSI EN 300 468 V1.16.1 (2019-08): Annex D (normative): Service information implementation of AC-3, Enhanced AC-3, and AC-4 audio in DVB systems
- Display caps of the pad we actually tried to link.
- Use the template caps as the filter is likely to not have any caps set
yet.
- Log pad name as well.
The approach is quite simple and doesn't take all use cases into account:
it only implements support when we are using the internal timecode we
create ourselves.
Also, the way we compute the sought frame count is naive, but it works
for simple cases.
The filter operates on raw data, so don't allow decodebin to produce
encoded data if a filter is defined.
My use case here is keeping the video stream untouched but applying a
filter on the audio one, while keeping the same audio format.
In case the application has to deal with fussy servers: user agent
sniffing is so last decade.
Adds a property to set the Flash version on both the sink and the src.
The default stays the same (IIRC, Flash plugin for Linux from 2009).
The former uses a thread-safe way of getting statistics from the
connection without having to protect the fields with a lock.
The latter produces a zeroed statistics structure for use when no
connection exists.
Apply outgoing sizes only after writing the chunk to the peer. This is
important particularly for the set chunk size and allows exposing it
without threading issues.
Move output chunking from gst_rtmp_connection_queue_message into
gst_rtmp_connection_start_write, which effectively moves it from the
streaming thread into the loop thread.
This allows us to handle the outgoing chunk-size message (which is
generated by changing the future chunk-size property) properly, which
could come from any other thread.
Serializes an RTMP message into a series of chunks, all in one buffer.
Similar to what gst_rtmp_connection_queue_message does to serialize
into a GByteArray.
Similar to gst_rtmp_output_stream_write_all_bytes_async, but takes a
GstBuffer instead of a GBytes. It can also return the number of bytes
written, which might be lower in case of an error.
OBJECT_LOCK is used to protect property access only. self->lock is
used to access the RtmpConnection, mostly between the streaming thread
and the loop thread.
To avoid deadlocks involving these two locks, we obey a lock order:
If both self->lock and OBJECT_LOCK are needed, self->lock must be locked
first. Clarify this.
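As a purely illustrative sketch of that order (not the element's actual code;
`self` stands for the rtmp2 element and `self->lock` for its GMutex):

```c
/* Illustrative only: if both locks are needed, self->lock comes first. */
g_mutex_lock (&self->lock);   /* guards the GstRtmpConnection access */
GST_OBJECT_LOCK (self);       /* guards property fields only */

/* ... read properties and touch the connection state ... */

GST_OBJECT_UNLOCK (self);
g_mutex_unlock (&self->lock);
```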
alignment works like in mpegtsmux, joining several MPEG-TS packets into
one buffer. The default value of 0 joins as many as possible for each
incoming buffer, to optimise CPU usage.
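For example (the muxer variable below is hypothetical), alignment=7 keeps each
output buffer at 7 × 188 = 1316 bytes, which fits a typical UDP MTU:

```c
/* alignment=7 groups seven 188-byte packets per output buffer (1316 bytes);
 * alignment=0 (the default) packs as many as possible per incoming buffer. */
g_object_set (mux, "alignment", 7, NULL);
```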
If we have no DTS but a PTS then this means both are the same, and we
should update the last_ts with the PTS. Only if both are unknown do we
not know the current position, and then we should not update it at all.
Previously we would always update the last_ts to GST_CLOCK_TIME_NONE if
the DTS is unknown, which caused the position to jump around and
spurious gap events to be sent.
Instead of doing it on each packet, and based on the distance to the
previous SCR instead of on the DTS.
Previously we would send gap events for audio all the time if the SCR
distance was 400ms: the threshold for audio is 300ms, and by only ever
updating the position when the SCR updates we would always be 100ms
above the threshold and send needless gap events.
This fixes audio glitches on various files caused by gap events.
Some raw H.264 encoded files trigger the assignment of a wrong PTS to
buffers when some SEI data is provided. This change prevents that from
happening.
Also ensure this behavior is tested.
We might have some old timecodes that are in the future now and have to
drop those to make sure that our queue is correctly ordered and we don't
have multiple timecodes for the same running time.
Directly read them out of the decoder as soon as we have passed audio
to it, and then store them in a queue that we handle internally together
with their timestamps. This cleans up memory management and gives us
proper control over the queue, instead of guessing how the queue inside
the LTC decoder actually works and when it overflows.
And also introduce 6 instead of 2 frames of latency compared to the LTC
audio input as that seems to be an upper bound for how much the LTC
library is lagging behind.
As H.265/H.264 bitstreams can contain multiple slices,
mastering_display_info_state and content_light_level_state
should be changed only on the first slice segment.
Fixes #1152
... by seeking to the target offset determined by the new seek segment,
rather than that of the previous segment. The latter would typically
seek back to the start for a non-accurate seek, and lead to a lot
of skipping in case of an accurate seek.
If one of the inputs is live, add a latency of 2 frames to the video
stream and wait on the clock for that much time to pass to allow for the
LTC audio to be ahead.
In case of live LTC, don't do any waiting but only ensure that we don't
overflow the LTC queue.
Also in non-live LTC audio mode, flush too old items from the LTC queue
if the video is actually ahead instead of potentially waiting forever.
This could've happened if there was a bigger gap in the video stream.
According to the H.264 ITU standard from 06/19, GST_H264_PROFILE_HIGH_422
(profile_idc = 122) with constraint_set1_flag = 0 and
constraint_set3_flag = 0 can be mapped to high-4:2:2 or high-4:4:4.
GST_H264_PROFILE_HIGH_422 with constraint_set1_flag = 0 and
constraint_set3_flag = 1 can be mapped to high-4:2:2, high-4:4:4,
high-4:2:2-intra or high-4:4:4-intra.
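An illustrative helper (not the actual h264parse code) listing the candidate
caps profiles for profile_idc 122 with constraint_set1_flag = 0:

```c
/* Candidate caps profiles for profile_idc 122 when constraint_set1_flag
 * is 0, depending on constraint_set3_flag (illustrative helper). */
static const gchar *const *
high_422_profile_candidates (gboolean constraint_set3_flag)
{
  static const gchar *const without_csf3[] = {
    "high-4:2:2", "high-4:4:4", NULL
  };
  static const gchar *const with_csf3[] = {
    "high-4:2:2", "high-4:4:4", "high-4:2:2-intra", "high-4:4:4-intra", NULL
  };

  return constraint_set3_flag ? with_csf3 : without_csf3;
}
```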
The previous implementation had a highly reproducible race: if, after
a track switch, the formerly active track pad completed a buffer chain
(now returning not-linked), the flow combiner had all of its pads in
not-linked state, propagating it as an error and stopping the pipeline.
By resetting the flow combiner in response to RECONFIGURE events that
race is made impossible.
Incrementing it afterwards will always lead to phase_index >= 1 and we
will never be at the beginning (0) of the phase again, and thus never
reset timestamp tracking accordingly.
This was broken in bea13ef43b in 2010, and
causes interlace to run into integer overflows after 2^31 frames or
about 5 hours at 29.97fps. Due to the use of wrong integer types, this
then causes negative numbers to be used in calculations and all
calculations spectacularly fail, causing all following buffers to
have the timestamp of the first buffer minus one nanosecond.
Allowing the UDP elements to bind to an interface is needed in more
complex networks where there are multiple network interfaces without a
default gateway.
From the documentation of gst_base_parse_set_infer_ts, it should be
disabled for non-audio data. Currently we just disable it for all video
parsers that have reordered data: h264, h265, mpeg, mpeg4, vc1. It was
already disabled in h263.
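The change amounts to a call like the following in each affected parser's
setup path (the `parse` variable is whatever GstBaseParse subclass instance
is at hand):

```c
/* Don't infer DTS from PTS: with reordered video frames the inferred
 * DTS would be wrong. */
gst_base_parse_set_infer_ts (GST_BASE_PARSE (parse), FALSE);
```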
Currently tsdemux timestamps only the PTS, and only issues the DTS if
it's different. When no DTS is issued, parsers tend to estimate the next DTS
based on the previous DTS and the duration, which can accumulate
rounding errors.
Clean up path objects nicely when shutting down,
first by dropping pointers to elements during dispose,
and then by making sure to drop the ref to the path object
when finalizing the switch bin.
Fixes valgrind checks in the unit test.
Coverity rightly complains that checking a pointer for NULL after
dereferencing it is pointless.
Remove the check, and to be safe, assert that gst_buffer_add_meta
returns non-NULL.
CID 1455485
The message buffers are created using `gst_rtmp_message_new` and thus
always contain a GstRtmpMeta. Add checks to appease Coverity's static
analysis.
CID 1455596
CID 1455384
According to the spec, the least significant bit is reserved and should
always be set to 1. However, some non-compliant files have been found.
Considering how unimportant this reserved bit is, relax the validation.
Related to #660
Packets of a given PID are meant to have sequential continuity counters
(modulo 16). If they are not sequential, this is the sign of a broken
stream, which we then consider as a discontinuity.
But if that new packet is a frame start (PUSI is true), then we can resume
from that packet without any damage.
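Roughly, as a sketch with hypothetical helper names:

```c
guint8 expected_cc = (stream->last_cc + 1) & 0x0f;

if (cc != expected_cc) {
  if (pusi) {
    /* Broken stream, but this packet starts a new payload unit:
     * resume from here without propagating damage. */
    resume_from_packet (stream, packet);   /* hypothetical helper */
  } else {
    mark_discontinuity (stream);           /* hypothetical helper */
  }
}
stream->last_cc = cc;
```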
Sometimes, one wants to force a clock on some pipelines - for instance,
when testing TSN related pipelines, one usually uses GstPtpClock or
CLOCK_REALTIME (assuming system realtime clock is in sync with network
one). Until now, one needed to write an application for that - not
difficult, but quite boring if one just wants to test something. This
patch presents a new element to help with that: clockselect.
clockselect is a pipeline with two properties to select a clock. One
property, "clock-id", enables one to choose between "monotonic",
"realtime", "ptp" or "default" clock - where default keeps pipeline
behaviour of choosing a clock based on its elements. The other property,
"ptp-domain" gives one the choice of which PTP domain should be used.
Some very simple tests also added for this new element.
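A minimal application sketch (clockselect acts as the top-level pipeline;
gst_util_set_object_arg() lets the enum be set from the nicks listed above):

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *pipeline, *src, *sink;

  gst_init (&argc, &argv);

  /* clockselect is itself a pipeline; pick the realtime clock for it. */
  pipeline = gst_element_factory_make ("clockselect", NULL);
  gst_util_set_object_arg (G_OBJECT (pipeline), "clock-id", "realtime");

  src = gst_element_factory_make ("videotestsrc", NULL);
  sink = gst_element_factory_make ("fakesink", NULL);
  gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
  gst_element_link (src, sink);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* ... run a main loop, then shut down ... */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```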
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
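For illustration only (signal name, parameters and the signals[] array are
made up):

```c
/* Passing NULL as the marshaller lets GLib pick its built-in marshaller
 * plus the matching va (valist) marshaller, avoiding GValue boxing and
 * libffi on emission. */
signals[SIGNAL_EXAMPLE] = g_signal_new ("example", G_TYPE_FROM_CLASS (klass),
    G_SIGNAL_RUN_LAST, 0, NULL, NULL,
    NULL /* was: g_cclosure_marshal_generic */,
    G_TYPE_NONE, 1, GST_TYPE_BUFFER);
```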
The peer is already gone when pad_removed_cb() is called, so the ghost
pad cannot be removed there. Use g_object_set_data() instead to remember
the ghost pad.
Copied from similar code in GstRTPBin.
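The pattern looks roughly like this (the key name and variables are
illustrative, not the exact code):

```c
/* When exposing the pad: remember the ghost pad on the rtpbin pad itself. */
g_object_set_data (G_OBJECT (rtpbin_pad), "rtp-ghost-pad", ghost_pad);

/* In pad_removed_cb(): the peer is already gone, so look it up again. */
GstPad *ghost = g_object_get_data (G_OBJECT (pad), "rtp-ghost-pad");
if (ghost != NULL)
  gst_element_remove_pad (GST_ELEMENT (self), ghost);
```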
Since we are not requiring that the profile equals
GST_JPEG2000_PARSE_PROFILE_BC_SINGLE (as the standard requires), we can
allow the profile to be NULL. We relax this condition because OpenJPEG
can't create broadcast profiles.
1. only recalculate src codec format if sink caps change
2. use correct value for "jp2c" magic in J2C box ID
3. only parse J2K magic once, and store result
4. more sanity checks comparing caps to parsed codec
This adds two properties:
* scte-35-pid: If not 0, enables the SCTE-35 support for the current
program. This will write the proper PMT and send SCTE-35 NULL
commands (i.e. heartbeats) at a regular interval
* scte-35-null-interval: This specifies the interval at which the
NULL commands should be sent
Sending SCTE-35 commands is done by creating the appropriate SCTE-35
GstMpegtsSection and then sending it to the muxer. See the
associated example.
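A rough usage sketch: the property names are the ones introduced here, while
the use of gst_mpegts_section_send_event() from the mpegts library to deliver
the section is an assumption about the mechanism:

```c
#include <gst/mpegts/mpegts.h>

/* Enable SCTE-35 support for the current program (the PID value is
 * arbitrary). */
g_object_set (mpegtsmux, "scte-35-pid", 500, NULL);

/* Later: build the appropriate SCTE-35 GstMpegtsSection and hand it to
 * the muxer (assuming the usual section-to-element event helper). */
gst_mpegts_section_send_event (section, mpegtsmux);
```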
We were unconditionally adding top-level descriptors in the PMT which
were only related to Blu-ray support for the PS3 (from 10 years ago).
These should be re-added conditionally.
This went unnoticed for 6 years :( The issue is that for short
sections (without subtables and CRC), we would always fail when
checking whether we had enough data or not and then default to the
long section checking.
Using the long section checking would then cause interesting side-effects
for short sections (such as believing they were already seen and therefore
would be dropped/ignored).
This allows selecting whether we continue updating our last known
upstream timecode whenever a new one arrives or instead only keep the
last known one and from there on count up.
This uses the last known upstream timecode (counted up per frame), or
otherwise zero if none was known.
The normal last-known timestamp uses the internal timecode as fallback
if no upstream timecode was ever known.
The convert-error signal is emitted whenever we get a GstFlowReturn
other than GST_FLOW_OK. The handler can then decide what to convert that
into - for instance, return the same GstFlowReturn to not convert it.
The default handler will act according to the ignore-error,
ignore-notlinked, ignore-notnegotiated and convert-to properties. If a
handler is connected, these properties are ignored.
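A hypothetical handler sketch, assuming the signal passes the offending
GstFlowReturn and takes the converted value from the handler's return (the
exact signature isn't spelled out above):

```c
/* Assumed handler signature: element, flow return to convert, user data;
 * the returned value is what the element would use instead. */
static GstFlowReturn
on_convert_error (GstElement * element, GstFlowReturn ret, gpointer user_data)
{
  if (ret == GST_FLOW_NOT_LINKED)
    return GST_FLOW_OK;   /* swallow not-linked */
  return ret;             /* keep everything else as-is */
}

/* Somewhere in the application's setup code: */
g_signal_connect (errorignore, "convert-error",
    G_CALLBACK (on_convert_error), NULL);
```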
In file included from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstbin.h:27,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gst.h:35,
from ../gst/rtp/gstrtpsink.h:23,
from ../gst/rtp/gstrtpsink.c:49:
In function ‘gst_rtp_sink_start’,
inlined from ‘gst_rtp_sink_change_state’ at ../gst/rtp/gstrtpsink.c:509:11:
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstelement.h:422:18: error: ‘%s’ directive argument is null [-Werror=format-overflow=]
422 | gchar *__txt = _gst_element_error_printf text; \
../gst/rtp/gstrtpsink.c:476:3: note: in expansion of macro ‘GST_ELEMENT_ERROR’
476 | GST_ELEMENT_ERROR (self, RESOURCE, NOT_FOUND,
| ^~~~~~~~~~~~~~~~~
../gst/rtp/gstrtpsink.c: In function ‘gst_rtp_sink_change_state’:
../gst/rtp/gstrtpsink.c:477:37: note: format string is defined here
477 | ("Could not resolve hostname '%s'", remote_addr),
| ^~
In file included from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstbin.h:27,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gst.h:35,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/rtp/gstrtpdefs.h:27,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/rtp/rtp.h:25,
from ../gst/rist/gstristsink.c:72:
In function ‘gst_rist_sink_setup_rtcp_socket’,
inlined from ‘gst_rist_sink_start’ at ../gst/rist/gstristsink.c:658:10,
inlined from ‘gst_rist_sink_change_state’ at ../gst/rist/gstristsink.c:801:13:
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstelement.h:422:18: error: ‘%s’ directive argument is null [-Werror=format-overflow=]
422 | gchar *__txt = _gst_element_error_printf text; \
../gst/rist/gstristsink.c:595:3: note: in expansion of macro ‘GST_ELEMENT_ERROR’
595 | GST_ELEMENT_ERROR (sink, RESOURCE, NOT_FOUND,
| ^~~~~~~~~~~~~~~~~
../gst/rist/gstristsink.c: In function ‘gst_rist_sink_change_state’:
../gst/rist/gstristsink.c:596:37: note: format string is defined here
596 | ("Could not resolve hostname '%s'", remote_addr),
| ^~
Allows for "low latency" mpeg-ts mode which is not standard, but somewhat common.
For this to work the sender has to put timestamps at a higher frequency than the spec requires.
PES packets with size 0 are unbounded, and
could therefore overflow the 32-bit size
accumulator.
Add a 32MB limit, which is larger than
any PES packet should ever get. If one does,
then output a 32MB chunk and continue.
A guint32 greater than 2^31 would be interpreted as negative by
gst_util_uint64_scale_int() and critical. Use the 64-bit integer version
of the function instead.
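The fix boils down to something like this (variable names are illustrative):

```c
/* gst_util_uint64_scale_int() takes gint arguments, so a guint32 above
 * G_MAXINT32 wraps to a negative value and triggers a critical; the full
 * 64-bit variant does not have that restriction. */
timestamp = gst_util_uint64_scale (value, GST_SECOND, rate);
```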
Don't signal a pipeline error when processing incomplete
J2K PES packets that are too small. That can happen normally
during a DISCONT and shouldn't shut down the whole pipeline.
Remove some custom and incomplete seek calculation
logic in favour of gst_segment_do_seek(), and
short-circuit any actual seeking or recalculation
if the position didn't change and just send an updated
segment directly.
This removes the custom seeking logic in favour of
using standard core seek handling.
The default behaviour of rtponviftimestamp is to drop buffers
outside the segment. This creates obvious problems for reverse
playback.
The ONVIF specification unfortunately doesn't describe how to handle
that specific use case, but we can expose a property to let the
user disable the dropping behaviour, and forward these buffers with
a G_MAXUINT64 ONVIF timestamp.
Also modify rtponvifparse to handle such timestamps appropriately.
We reject caps with other framerates as it's impossible to generate
timecodes unless we actually know a constant framerate. Reflect this
also in the pad template caps.
Instead of using a static hardcoded PCR interval, allow the user
to configure it.
Also revert the default back to a 40 ms interval; it was changed
in recent patches for no good reason.
There's no point in working with invalid LTC timestamps as all future
calculations will be wrong based on this, and invalid LTC timestamps can
sometimes be read via the audio input.
This patch just enforces boundaries for the access to the
standard_deviation array (64 floats). Such a case can be
seen with a corrupted stream, where there's no hope of
obtaining a valid decoded frame anyway.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1002
The D bit is meant to be set whenever there is a discontinuity
in transmission, and directly maps to the DISCONT flag.
The E bit is not meant to be set on every buffer preceding a
discontinuity, but only on the last buffer of a contiguous section
of recording. This has to be signaled through the unfortunately-named
"discont" field of the custom NtpOffset event.
Based on a patch by
Georg Lippitsch <glippitsch@toolsonair.com>
Vivia Nikolaidou <vivia@toolsonair.com>
Using libltc from https://github.com/x42/libltc
We now have a single property to select the timecode source that should
be applied, and for each timecode source the timecode is updated at
every frame. Then based on a set mode, the timecode is added to the
frame if none exists already or all existing timecodes are removed and
the timecode is added.
In addition, the real-time clock is considered a proper timecode source
now, instead of only allowing it to initialize the timecode once in the
beginning; and instead of just taking the current time, we now take the
current time at the clock time of the video frame.
The SPS parsing functions take a parse_vui_param flag
to skip VUI parsing, but there's no indication in the output
SPS struct that the VUI was skipped.
The only caller that ever passed FALSE seems to be the
important gst_h264_parser_parse_nal() function, so the
cached SPS were always silently invalid. That needs changing
anyway, and means no one ever really passes FALSE.
I don't see any use for saving a few microseconds in
order to silently produce garbage, and since this is still
unstable API, let's remove the parse_vui_param flag.
Fix a recently introduced segfault. Don't de-reference a NULL
SPS pointer when attempting to update source caps before SPS
has been seen in the stream.
Interleaved frames can be fragmented across
incoming buffers. Thus, we can have multiple
frames within a single input buffer, as well as
incomplete frames. Now the parser preserves its
state and handles both situations.
Fixes #991
The length of the TCP payload is the IP plus TCP header length
subtracted from the IP datagram length specified in the IP header.
Prior to this, the size was calculated incorrectly, considering
all data after the TCP header as payload, up to the end of the packet.
Fixes #995
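In sketch form, over the raw IPv4/TCP header bytes (not the exact pcapparse
code):

```c
/* IPv4: total length at offset 2, header length from the IHL nibble.
 * TCP: data offset in the upper nibble of byte 12, in 32-bit words. */
guint16 ip_total_len = GST_READ_UINT16_BE (ip_hdr + 2);
guint ip_hdr_len = (ip_hdr[0] & 0x0f) * 4;
guint tcp_hdr_len = ((tcp_hdr[12] >> 4) & 0x0f) * 4;

guint payload_len = ip_total_len - ip_hdr_len - tcp_hdr_len;
```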
If recording is set to FALSE after the last audio or video buffer and
before the EOS event then recording stop is never signalled.
Similarly, we should signal recording stop once both audio and video are
EOS, regardless of the recording property, as there's nothing to be
recorded anymore.
We generally always prefer the information from upstream for other
metadata (pixel-aspect-ratio, etc.) and should also do so here.
Other parsers (h264parse) already do the same.
When SPS parsing fails we use a fallback SPS from the caps. Since we
have got an SPS, we need to update the parser state and header as in
the case where the SPS was successfully parsed.
Closes #503
Set more unhandled flags in the general_constraint_indicator_flags field.
The field is required for building the "Codecs" parameter as defined in
ISO/IEC 14496-15 Annex E. The resulting "Codecs" string might be used
in various places (e.g., HLS/DASH manifests, browsers, players, etc.).
This is a re-implementation of the RTP elements that were submitted in
2013 to handle RTP streams. The elements handle a correct connection
for the bi-directional use of the RTCP sockets.
https://bugzilla.gnome.org/show_bug.cgi?id=703111
The rtpsink and rtpsrc elements add a URI interface so that streams
can be decoded with decodebin using rtp:// URIs.
The code can be used as follows:
```
gst-launch-1.0 videotestsrc ! x264enc ! rtph264pay config-interval=3 ! rtpsink uri=rtp://239.1.1.1:1234
gst-launch-1.0 videotestsrc ! x264enc ! rtph264pay config-interval=1 ! rtpsink uri=rtp://239.1.2.3:5000
gst-launch-1.0 rtpsrc uri=rtp://239.1.2.3:5000?encoding-name=H264 ! rtph264depay ! avdec_h264 ! videoconvert ! xvimagesink
gst-launch-1.0 videotestsrc ! avenc_mpeg4 ! rtpmp4vpay config-interval=1 ! rtpsink uri=rtp://239.1.2.3:5000
gst-launch-1.0 rtpsrc uri=rtp://239.1.2.3:5000?encoding-name=MP4V-ES ! rtpmp4vdepay ! avdec_mpeg4 ! videoconvert ! xvimagesink
```
rtpmanagerbad: add pkg-config
rtpmanagerbad: Rtp should be uppercase
rtpmanagerbad: add G_OS_WIN32 for shielding unix headers
rtpmanagerbad: remove Since from documentation
rtpmanagerbad: rename lib name from nrtp to rtpmanagerbad
rtpmanagerbad: sync meson.build with other modules
rtpmanagerbad: add Makefile.am
rtpmanagerbad: use GstElement to count pads
rtpmanagerbad: use gst_bin_set_suppressed_flags
rtpmanagerbad: check element creation
rtpmanagerbad: post message when trying to access missing rtpbin
rtpmanagerbad: return FALSE with g_return tests
rtpmanagerbad: use gsocket multicast check
rtpmanagerbad: use gst_caps_new_empty_simple iso gst_caps_from_string
rtpmanagerbad: sync with gstrtppayloads.h
rtpmanagerbad: correct media type X-GST
rtpmanagerbad: test if a compatible pad was found
rtpmanagerbad: remove evil copy of GstRTPPayloadInfo
rtpmanagerbad: add gio_dep to meson
rtpmanagerbad: revert to old glib boilerplate
GStreamer 1.16 does not yet support the newer GLib templates, so revert.
rtpmanagerbad: return GST_STATE_CHANGE_NO_PREROLL for live sources
for live sources, NO_PREROLL should be returned for PLAYING->PAUSED and
READY->PAUSED transitions.
rtpmanagerbad: use GstElement pad counting
rtpmanagerbad: just use template name to request pad
rtpmanagerbad: remove commented code
rtpmanagerbad: use funnel to send multiple streams on one socket
rtpmanagerbad: avoid beaches
beaches should only be used during the summer, so rewrite the code to
return explicitly and avoid beaches during the winter.
rtpmanagerbad: add copyright to test code
rtpmanagerbad: g_free is NULL safe
rtpmanagerbad: do not trace rtpbin
rtpmanagerbad: return NULL explicitly
rtpmanagerbad: warn when data port is not even
According to RFC 3550, RTP data should be sent on even ports, while RTCP
is sent on the following odd port.
rtpmanagerbad: document port allocation in rtpsink/src
rtpmanagerbad: improve uri description
rtpmanagerbad: add comment re-use socket
rtpmanagerbad: rename gst_object_set_properties_from_uri_query
rtpmanagerbad: loan prop/val setter from rist
rtpmanagerbad: rtpsrc: fix uninitialised pointer
rtpmanagerbad: fix silly typo
rtpmanagerbad: test for empty key/value
rtpmanagerbad: rtpsrc: deprecate ssrc collision to INFO
rtpmanagerbad: sync debug with rist
rtpmanagerbad: small strings allocated on stack
rtpmanagerbad: correct rename
rtpmanagerbad: add locking on prop setters/getters
Locking is added because the URI allows access to the properties too.
rtpmanagerbad: allow for RTCP through NAT
rtpmanagerbad: move gio to header file
rtpmanagerbad: free small strings too
rtpmanagerbad: ttl_mc for ttl on dynudpsink
rtpmanagerbad: add comments on the URI registered
rtpmanagerbad: correct macro after file rename
rtpmanagerbad: code style
rtpmanagerbad: handle wrong URIs in setter
rtpmanagerbad: nit URI notation correction
In a URI, the first key/value pair should not have an ampersand; the
parser did not die on it, though.
As sections can be provided by the user through send_event
when the element state is NULL, their lifetime is expected
to match that of the muxer, and they must be preserved when
the state changes
We can have multiple TsMuxPacketInfo objects for the same PID
with user-provided sections; for example, ATSC requires multiple
tables with the same PID.
This might be necessary temporarily for changing the previous settings.
Make it an actual error if the settings are like this while processing a
buffer.