If the input has a misplaced filler zero byte (e.g. a filler without a
four-byte start code on the next NAL), we would end up using the same
timestamp twice. Ask the base class to read the timestamp from the buffer
where the NAL actually starts.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1251>
This will stop stripping the four-byte start code. This was fixed and then
broken again, as it was causing a timestamp shift. We now call
gst_base_parse_set_ts_at_offset() with the offset of the first NAL to ensure
that fixing a moderately broken input stream won't affect the timestamps. We
also fix the unit test, removing a comment about the stripping behaviour not
being correct.
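A minimal sketch of the idea (the start-code scan is simplified, and the
surrounding code is illustrative rather than the actual h264parse code):

    #include <gst/base/gstbaseparse.h>

    /* Return the offset of the first four-byte start code (00 00 00 01),
     * or 0 if none is found; real code also handles three-byte codes. */
    static gsize
    find_first_nal_offset (GstBuffer * buffer)
    {
      GstMapInfo map;
      gsize offset = 0, i;

      if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
        for (i = 0; i + 4 <= map.size; i++) {
          if (map.data[i] == 0 && map.data[i + 1] == 0 &&
              map.data[i + 2] == 0 && map.data[i + 3] == 1) {
            offset = i;
            break;
          }
        }
        gst_buffer_unmap (buffer, &map);
      }
      return offset;
    }

    static void
    anchor_timestamps (GstBaseParse * parse, GstBuffer * buffer)
    {
      /* Take the timestamps from the buffer position where the NAL really
       * starts, so leading filler bytes don't make us reuse a timestamp. */
      gst_base_parse_set_ts_at_offset (parse, find_first_nal_offset (buffer));
    }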
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1251>
The volatile is not needed here and causes compiler warnings
with newer GLib versions.
gstautoconvert.c: In function ‘gst_auto_convert_dispose’ (and elsewhere):
glib/gatomic.h:108:3: warning: initialization discards ‘volatile’ qualifier from pointer target type [-Wdiscarded-qualifiers]
gstautoconvert.c:224:24: note: in expansion of macro ‘g_atomic_pointer_get’
224 | GList *factories = g_atomic_pointer_get (&autoconvert->factories);
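The fix is to drop the qualifier from the field; a minimal sketch with a
hypothetical struct (not the actual gstautoconvert.c code):

    #include <glib.h>

    typedef struct {
      /* Plain pointer: the g_atomic_pointer_* accessors provide the
       * required semantics on their own, no volatile needed. */
      GList *factories;
    } AutoConvert;

    static GList *
    get_factories (AutoConvert * self)
    {
      return g_atomic_pointer_get (&self->factories);
    }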
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1237>
Otherwise we may end up pushing incomplete caps, which causes a renegotiation.
Note that this has the effect that caps are no longer pushed twice in the
presence of a valid framerate in the headers.
Otherwise we may end up pushing incomplete caps. Note that this has the side
effect that caps are no longer pushed twice in the presence of a VUI with a
valid framerate.
There is some code to fix up broken streams that relies on the SEI location;
this code is meant to locate SUFFIX SEIs only. This should prevent
unwanted side effects when a SUFFIX SEI is used.
Waiting for the next NAL increases the latency. If alignment=nal/au
has been negotiated, assume that the buffer contains a complete
NAL and don't expect a second start code. This way, nal -> nal,
au -> au and au -> nal no longer introduce latency.
As a side effect, the collect_pad() function was no longer able to poke at the
following NAL. This call is now moved before processing the NAL, so
it looks at the current NAL before it's ingested into the parser
state, in order to determine if the end of an AU has been reached. The AUD
injection state has been adapted to support this.
This change will break pipelines if alignment=nal is used without respecting the
alignment. Effectively, the parser will no longer fix the broken alignment,
which will result in a parser error and the termination of the pipeline. Such
an issue existed in the tsdemux element and might exist in any forks of that code.
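For reference, a sketch of the new contract in caps terms: with

    video/x-h264,stream-format=byte-stream,alignment=nal

negotiated on the sink pad, each input buffer must contain exactly one
complete NAL (start code included), and the parser no longer waits for the
next start code before outputting it.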
Related to https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1193
Until now, all streams in tsmux had to be present when the element
output its first buffer. Now they can appear at any point during the
stream, or even disappear and reappear later using the same PID.
Per the specification, section 2.14.2: "For PES packetization, no specific
data alignment constraints apply". So we should not advertise NAL
alignment.
This bug was introduced at the same moment the alignment field was introduced,
10 years ago. The plan was that alignment=none (or no alignment field) would
be used for mpegtsdemux, but no one noticed the error. The reason is that, at
the same moment, everything dealing with H264 started defaulting to AU
alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=606662#c22
This patch has the side effect that a parser is now needed after the
tsdemux element. The following pipeline will not negotiate anymore, as the
mpegtsmux element requires alignment={nal,au}.
... ! tsdemux ! mpegtsmux ! ...
Likewise, anyone who forked tsdemux should update their code to
fix this bug.
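Inserting a parser restores the required alignment, e.g. (assuming an H264
stream; h264parse outputs alignment=au by default):
... ! tsdemux ! h264parse ! mpegtsmux ! ...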
The comparison was not testing anything meaningful. This fixes the comparison
so we now update the caps whenever the values differ. This was detected by
Coverity.
CID 1461291
We need to do this without holding the lock as the `g_async_queue_pop`
waits on the loop thread to deliver the stats. The loop thread might
attempt to take the lock as well, leading to a deadlock.
Taking a reference to the connection should be enough to keep this
safe.
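A sketch of the pattern (names are illustrative, not the exact element code):

    GstRtmpConnection *connection;
    GstStructure *stats = NULL;

    g_mutex_lock (&self->lock);
    connection = self->connection ? g_object_ref (self->connection) : NULL;
    g_mutex_unlock (&self->lock);

    if (connection != NULL) {
      /* May block on g_async_queue_pop internally; safe now that the
       * loop thread can take self->lock while producing the stats. */
      stats = get_connection_stats (connection);  /* hypothetical helper */
      g_object_unref (connection);
    }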
Add new property "update-timecode" to allow updating timecode
in picture timing SEI depending on timecode meta. Since the picture
timing SEI message requires proper VUI setting but we don't support
re-writing SPS, this might not work for some streams
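Usage sketch (assuming the property lands on h264parse):

    g_object_set (parser, "update-timecode", TRUE, NULL);

The parser then rewrites the picture timing SEI from the
GstVideoTimeCodeMeta on the input buffer.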
The rtpbin sends signals for all SSRCs. Don't send an EOS when the SSRC
does not match the stream SSRC.
This avoids problems when an SSRC from another receiver times out.
In some scenarios fakevideosink shouldn't advertise the overlay-composition
meta, for instance so that overlay elements perform subtitle blending
themselves.
According to the specification, the adaptation field length must be 183 if
there is no payload data and < 183 if the packet contains an adaptation
field and payload data.
Unfortunately some payloaders always set the flag for payload data, even if
the adaptation field length is 183.
Don't return with an error in this case. Clear the payload data flag
instead and parse the adaptation field as usual. This avoids visual
artefacts for such streams.
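A sketch of the tolerated case (names simplified from the TS packet header):

    /* af_length == 183 leaves no room for payload in a 188-byte packet
     * (4 bytes header + 1 byte length field + 183 bytes adaptation
     * field), so ignore a payload flag that is set anyway. */
    if (af_length == 183 && (adaptation_field_control & 0x1)) {
      adaptation_field_control &= ~0x1;     /* clear the payload bit */
    }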
For interlaced video:
* set the interlace mode in the src caps,
* double the height from the SPS in the caps,
* set field latency instead of frame latency.
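The height doubling follows from the H264 syntax; a sketch, with cropping
omitted (field names as in GstH264SPS):

    /* For interlaced content frame_mbs_only_flag is 0, and the SPS codes
     * the height in field macroblock rows, hence the doubling: */
    height = (2 - sps->frame_mbs_only_flag)
        * (sps->pic_height_in_map_units_minus1 + 1) * 16;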
Fixes #778
Add a new property to signal that there is no incoming data
from the peer. This can be useful if users want to stop the streaming
when the connection is alive but no packets are arriving.
Both 2 and 4 are supported versions of the AAC ADTS stream format,
so we need to set the correct version to help negotiation,
especially for non-autopluggable pipelines.
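For example (sketch; the version would be taken from the parsed ADTS header):

    caps = gst_caps_new_simple ("audio/mpeg",
        "mpegversion", G_TYPE_INT, 4,   /* or 2, per the ADTS header */
        "stream-format", G_TYPE_STRING, "adts", NULL);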
Some mpeg-ts (HLS, DVB, ...) streams out there have completely broken
PCR streams from which we can't reliably recover correct timestamps.
For those, provide a property that will ignore the program PCR stream
(by faking that it's not present, PCR PID 0x1fff).
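Usage sketch (assuming the property is named "ignore-pcr"):

    g_object_set (demux, "ignore-pcr", TRUE, NULL);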
Initially the case "only codec_data is different" was addressed in
https://bugzilla.gnome.org/show_bug.cgi?id=705333 in order for
unusual bitstreams to be handled, i.e. the case where SPS and PPS
are placed in the bitstream. When SPS/PPS are signalled only via caps
by upstream, however, the updated codec_data is mandatory for the decoder,
and therefore we shouldn't ignore it.
According to the following two specs, add support for AC4 in tsdemux.
1. ETSI TS 103 190-2 V1.2.1 (2018-02): Annex D (normative): AC-4 in MPEG-2 transport streams
2. ETSI EN 300 468 V1.16.1 (2019-08): Annex D (normative): Service information implementation of AC-3, Enhanced AC-3, and AC-4 audio in DVB systems
- Display caps of the pad we actually tried to link.
- Use the template caps as the filter is likely to not have any caps set
yet.
- Log pad name as well.
The approach is quite simple and doesn't take all use cases into account;
it only implements support for the case where we are using the internal
timecode we create ourselves.
Also, the way we compute the sought frame count is naive, but it works
for simple cases.
The filter operates on raw data, so don't allow decodebin to produce
encoded data if a filter is defined.
My use case here is keeping the video stream untouched but applying a filter
on the audio one, while keeping the same audio format.
In case the application has to deal with fussy servers; user agent
sniffing is so last decade.
Adds a property to set the Flash version on both the sink and the src.
The default stays the same (IIRC, the Flash plugin for Linux from 2009).
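Usage sketch (assuming the property is named "flash-version"; the value
shown is only an example):

    g_object_set (element, "flash-version", "LNX 10,0,32,18", NULL);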
The former uses a thread-safe way of getting statistics from the
connection without having to protect the fields with a lock.
The latter produces a zeroed statistics structure for use when no
connection exists.
Apply outgoing sizes only after writing the chunk to the peer. This is
particularly important for the set chunk size, and allows exposing it
without threading issues.
Move output chunking from gst_rtmp_connection_queue_message into
gst_rtmp_connection_start_write, which effectively moves it from the
streaming thread into the loop thread.
This allows us to handle the outgoing chunk-size message (which is
generated by changing the future chunk-size property) properly, which
could come from any other thread.
Serializes an RTMP message into a series of chunks, all in one buffer.
Similar to what gst_rtmp_connection_queue_message does to serialize
into a GByteArray.
Similar to gst_rtmp_output_stream_write_all_bytes_async, but takes a
GstBuffer instead of a GBytes. It can also return the number of bytes
written, which might be lower in case of an error.
OBJECT_LOCK is used to protect property access only. self->lock is
used to access the RtmpConnection, mostly between the streaming thread
and the loop thread.
To avoid deadlocks involving these two locks, we obey a lock order:
If both self->lock and OBJECT_LOCK are needed, self->lock must be locked
first. Clarify this.
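A sketch of the documented order when both locks are needed:

    /* self->lock first, then OBJECT_LOCK; unlock in reverse order. */
    g_mutex_lock (&self->lock);
    GST_OBJECT_LOCK (self);
    /* ... access connection state and properties ... */
    GST_OBJECT_UNLOCK (self);
    g_mutex_unlock (&self->lock);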
alignment works like in mpegtsmux, joining several MpegTS packets into
one buffer. Default value of 0 joins as many as possible for each
incoming buffer, to optimise CPU usage.
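For example, to fit the usual 7 packets (7 x 188 = 1316 bytes) into a
single UDP datagram:

    g_object_set (element, "alignment", 7, NULL);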
If we have no DTS but a PTS, then this means both are the same, and we
should update the last_ts with the PTS. Only if both are unknown do we
not know the current position, and then we should not update it at all.
Previously we would always update the last_ts to GST_CLOCK_TIME_NONE if
the DTS was unknown, which caused the position to jump around and
spurious gap events to be sent.
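In sketch form, the updated rule:

    if (GST_CLOCK_TIME_IS_VALID (dts))
      last_ts = dts;
    else if (GST_CLOCK_TIME_IS_VALID (pts))
      last_ts = pts;   /* no DTS: it is implied to be equal to the PTS */
    /* neither valid: keep the previous last_ts, don't reset it to NONE */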
Instead of doing it on each packet, and based on the distance to the
previous SCR instead of on the DTS.
Previously we would send gap events for audio all the time if the SCR
distance was 400ms: the threshold for audio is 300ms, and by only
ever updating the position when the SCR updated we would always be 100ms
above the threshold and send needless gap events.
This fixes audio glitches in various files caused by gap events.
Some raw H264 encoded files trigger the assignment of wrong PTS to buffers
when some SEI data is provided. This change prevents that from happening.
Also ensure this behavior is being tested.
We might have some old timecodes that are in the future now and have to
drop those to make sure that our queue is correctly ordered and we don't
have multiple timecodes for the same running time.
Directly read them out of the decoder as soon as we have passed it audio,
and then store them in a queue that we handle internally together with
their timestamps. This cleans up memory management and gives us proper
control over the queue, instead of guessing how the queue inside the LTC
decoder actually works and when it overflows.
Also introduce 6 instead of 2 frames of latency compared to the LTC
audio input, as that seems to be an upper bound for how much the LTC
library is lagging behind.
As the H265/H264 bitstream can support multiple slices,
mastering_display_info_state and content_light_level_state
should be changed only on the first slice segment.
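A sketch of the guard (H265 syntax; the H264 equivalent would test
first_mb_in_slice == 0):

    if (slice_hdr->first_slice_segment_in_pic_flag) {
      /* Only the first slice segment of a picture may update the
       * mastering display info and content light level state. */
      update_hdr_state (self, sei);   /* hypothetical helper */
    }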
Fixes #1152
... by seeking to the target offset determined by the new seek segment,
rather than that of the previous segment. The latter would typically
seek back to the start for a non-accurate seek, and lead to a lot
of skipping in case of an accurate seek.
If one of the inputs is live, add a latency of 2 frames to the video
stream and wait on the clock for that much time to pass to allow for the
LTC audio to be ahead.
In case of live LTC, don't do any waiting but only ensure that we don't
overflow the LTC queue.
Also in non-live LTC audio mode, flush too old items from the LTC queue
if the video is actually ahead instead of potentially waiting forever.
This could've happened if there was a bigger gap in the video stream.
According to the H264 ITU specification from 06/19, GST_H264_PROFILE_HIGH_422
(profile_idc = 122) with constraint_set1_flag = 0 and
constraint_set3_flag = 0 can be mapped to high-4:2:2 or high-4:4:4.
GST_H264_PROFILE_HIGH_422 with constraint_set1_flag = 0 and
constraint_set3_flag = 1 can be mapped to high-4:2:2, high-4:4:4,
high-4:2:2-intra or high-4:4:4-intra.
The previous implementation had a race that was very easy to reproduce:
if, after a track switch, the formerly active track pad completed a buffer
chain (now returning not-linked), the flow combiner had all of its pads in
not-linked state and propagated that as an error, stopping the pipeline.
By resetting the flow combiner in response to RECONFIGURE events, that
race is made impossible.
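A sketch of the fix, in the pad event handler, using the core
GstFlowCombiner API:

    case GST_EVENT_RECONFIGURE:
      /* Forget stale not-linked results from the previously active pad. */
      gst_flow_combiner_reset (self->flow_combiner);
      break;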
Incrementing it afterwards means that we always have phase_index >= 1 and
are never at the beginning (0) of the phase again, and thus never
reset timestamp tracking accordingly.
This was broken in bea13ef43b in 2010, and
causes interlace to run into integer overflows after 2^31 frames or
about 5 hours at 29.97fps. Due to the usage of wrong types for the integers,
this then causes negative numbers to be used in calculations and all
calculations spectacularly fail, leading to all following buffers
having the timestamp of the first buffer minus one nanosecond.