Because we did not properly account for the time when the last RTX was
sent when scheduling a new one, it could easily happen that, when the
requested packet has a PTS that is slightly old (but not too old once
the jitterbuffer latency is added), both its calculated second and
third (etc.) timeouts had already passed. This would lead to a burst
of RTX requests, which works completely against their purpose and can
spend a lot more bandwidth than needed.
This has been properly reproduced in the test:
test_rtx_not_bursting_requests
The good news is that slightly rethinking the logic around
re-requesting RTX made it a lot simpler to understand, and allows us
to remove two members of RtpTimer that no longer serve any purpose
after the refactoring. If desirable, the whole "delay" concept could be
removed from the timers completely and simply added to the timeout
by the caller of the API, but that can be a change for another time.
The only external change (other than the improved behavior around
bursting RTX) is that the "delay" field now strictly represents the
delay between the PTS of the RTX-requested packet and the time at which
it is requested, whereas before this calculation was closer to a
theoretical delay. This is visible in three other RTX tests where the
delay had to be adjusted slightly. I am confident, however, that this
change is correct.
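A minimal sketch of the rescheduling idea, with illustrative names
(last_rtx_time, rtx_retry_timeout) rather than the actual
rtpjitterbuffer fields: base the next RTX timeout on when the previous
request was sent, so a slightly old PTS cannot make several retry
timeouts expire at once and burst out requests.

    #include <gst/gst.h>

    /* Hypothetical helper: schedule the next retransmission request
     * relative to the time the previous request was actually sent,
     * never relative to the (possibly old) PTS of the packet. */
    static GstClockTime
    next_rtx_timeout (GstClockTime now, GstClockTime last_rtx_time,
        GstClockTime rtx_retry_timeout)
    {
      /* First request for this packet: fire right away. */
      if (last_rtx_time == GST_CLOCK_TIME_NONE)
        return now;

      /* Re-request: one full retry period after the previous request,
       * and never in the past. */
      return MAX (now, last_rtx_time + rtx_retry_timeout);
    }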
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/789>
- introduce two new properties:
* temporal-scalability-layer-flags:
Provide fine-grained control of layer encoding to the
outside world. The length of the flags sequence should be a
multiple of the periodicity, and the sequence is indexed by a
running count of encoded frames modulo the sequence length.
* temporal-scalability-layer-sync-flags:
Specify the pattern of inter-layer synchronisation (i.e.
which of the frames generated by the layer encoding
specification represent an inter-layer synchronisation).
There must be one entry per entry in
temporal-scalability-layer-flags.
- apply temporal scalability settings and expose as buffer
metadata.
This allows the codec to allocate a given frame to the correct
internal bitrate allocator. Additionally, all the
non-bitstream metadata needed to payload a temporally scaled
stream is now attached to each output buffer as a
GstVideoVP8Meta.
- add unit test for temporally scaled encoding.
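A small sketch (not part of the change itself) of how an application
might verify at runtime that the vp8enc it is running against exposes
the two new properties, e.g. when it may also be deployed against older
gst-plugins-good versions:

    #include <gst/gst.h>

    /* Assumes gst_init() has already been called. */
    static gboolean
    encoder_has_layer_flag_properties (void)
    {
      GstElement *enc = gst_element_factory_make ("vp8enc", NULL);
      GObjectClass *klass;
      gboolean ok;

      if (enc == NULL)
        return FALSE;

      klass = G_OBJECT_GET_CLASS (enc);
      ok = g_object_class_find_property (klass,
              "temporal-scalability-layer-flags") != NULL
          && g_object_class_find_property (klass,
              "temporal-scalability-layer-sync-flags") != NULL;

      gst_object_unref (enc);
      return ok;
    }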
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/728>
Scenario:
- gap event causes h264parse to push made-up caps that may fail checks
inside qtmux (e.g. missing codec_data).
- the caps event has already been marked as received and is sticky on
the sink pad
- gst_qt_mux_pad_can_renegotiate() will retrieve the failed caps event
using gst_pad_get_current_caps() and reject the correct updated caps
with codec_data.
- Failure!
Keep track of the configured caps ourselves instead of relying on the
sticky event on the pad.
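A sketch of the idea, with illustrative names rather than the actual
qtmux code: remember the caps that were actually accepted instead of
trusting gst_pad_get_current_caps(), which may return a sticky caps
event that was never validated.

    #include <gst/gst.h>

    typedef struct
    {
      GstCaps *configured_caps;   /* caps we validated and accepted */
    } MuxPadState;

    static void
    mux_pad_store_accepted_caps (MuxPadState * state, GstCaps * caps)
    {
      gst_caps_replace (&state->configured_caps, caps);
    }

    static gboolean
    mux_pad_can_renegotiate (MuxPadState * state, GstCaps * new_caps)
    {
      if (state->configured_caps == NULL)
        return TRUE;              /* nothing accepted yet */

      /* Only allow caps compatible with what was already accepted. */
      return gst_caps_is_subset (new_caps, state->configured_caps);
    }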
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/732>
If we have not received a FU with a start bit set, any subsequent FU
data is not useful at all and would result in an invalid stream.
This behavior follows from multiple requirements in
RFC 3984 Section 5.8 and RFC 7798 Section 4.4.3. The following are
excerpts from RFC 3984; RFC 7798 contains similar language.
Transmitting a fragmented NAL unit in a single FU is forbidden:
A fragmented NAL unit MUST NOT be transmitted in one FU; i.e., the
Start bit and End bit MUST NOT both be set to one in the same FU
header.
and dropping is possible:
If a fragmentation unit is lost, the receiver SHOULD discard all
following fragmentation units in transmission order corresponding to
the same fragmented NAL unit.
The jump-in-seqnum case is handled by dropping, which the specification
supports, instead of implementing the forbidden_zero_bit mangling:
If a fragmentation unit is lost, the receiver SHOULD discard all
following fragmentation units in transmission order corresponding to
the same fragmented NAL unit.
A receiver in an endpoint or in a MANE MAY aggregate the first n-1
fragments of a NAL unit to an (incomplete) NAL unit, even if fragment
n of that NAL unit is not received. In this case, the
forbidden_zero_bit of the NAL unit MUST be set to one to indicate a
syntax violation.
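A sketch of the resulting rule, not the actual depayloader code: FU
payloads are discarded until an FU with the Start bit set begins a new
fragmented NAL unit, and an FU with both Start and End set is rejected
outright.

    #include <glib.h>

    typedef struct
    {
      gboolean fu_started;        /* saw an FU header with S=1 */
    } FuState;

    static gboolean
    accept_fu (FuState * state, guint8 fu_header)
    {
      gboolean start = (fu_header & 0x80) != 0;   /* S bit */
      gboolean end = (fu_header & 0x40) != 0;     /* E bit */

      if (start && end)
        return FALSE;           /* forbidden: whole NAL unit in one FU */

      if (start)
        state->fu_started = TRUE;
      else if (!state->fu_started)
        return FALSE;           /* no start seen: discard this fragment */

      if (end)
        state->fu_started = FALSE;      /* fragmented NAL unit complete */

      return TRUE;
    }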
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/730>
Used by some proprietary software for their fragmented files.
This also adds some support for multi-stream fragmented files.
The flow is as follows:
1. The first 'fragment' is written as a self-contained fragmented
mdat+moov complete with an edit list and durations, tags, etc.
2. Subsequent fragments are written with a mdat+moof and each stream is
interleaved as data arrives (currently ignoring the interleave-*
properties). data-offsets in both the traf and the trun ensure
data is read from the correct place on demuxing. Data/chunk offsets
are also kept for writing out the final moov.
3. On finalisation, the initial moov is invalidated to a hoov and the
size of the first mdat is extended to cover the entire file contents.
Then a moov is written out as it regularly would be in moov-at-end mode (the
default).
This results in a file that is playable throughout while leaving a
finalised file on completion for players that do not understand
fragmented mp4.
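A very rough sketch of the finalisation step, using plain stdio rather
than the actual qtmux code; the offsets and the 32-bit mdat size are
illustrative only:

    #include <glib.h>
    #include <stdio.h>

    static gboolean
    finalise_robust_file (FILE * f, long initial_moov_offset,
        long first_mdat_offset, guint32 new_mdat_size)
    {
      guint32 size_be = GUINT32_TO_BE (new_mdat_size);

      /* Invalidate the initial moov by renaming its fourcc to 'hoov'. */
      if (fseek (f, initial_moov_offset + 4, SEEK_SET) != 0)
        return FALSE;
      if (fwrite ("hoov", 1, 4, f) != 4)
        return FALSE;

      /* Extend the first mdat so it covers all the interleaved fragment
       * data; the final moov is then appended at the end of the file as
       * in regular moov-at-end mode. */
      if (fseek (f, first_mdat_offset, SEEK_SET) != 0)
        return FALSE;
      if (fwrite (&size_be, 1, 4, f) != 4)
        return FALSE;

      return TRUE;
    }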
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/643>
- make test_encode_simple cope with libvpx built with
CONFIG_REALTIME_ONLY. Sadly, there's no way to detect this at
runtime beyond trying to set lag-in-frames to >0, pushing a
buffer and catching the GST_FLOW_NOT_NEGOTIATED return.
- fix bitrot in test_encode_simple_when_bitrate_set_to_zero.
- port test_encode_simple to GstHarness and introduce a separate
test for the lag-in-frames property.
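A sketch of that runtime check with GstHarness (caps and buffer size
are assumptions, not the actual test code):

    #include <gst/check/gstharness.h>

    /* Returns FALSE when libvpx was built with CONFIG_REALTIME_ONLY,
     * which only shows up as a flow error on the first pushed buffer. */
    static gboolean
    vpx_supports_lag_in_frames (void)
    {
      GstHarness *h = gst_harness_new_parse ("vp8enc lag-in-frames=5");
      GstFlowReturn ret;

      gst_harness_set_src_caps_str (h,
          "video/x-raw,format=I420,width=320,height=240,framerate=25/1");

      ret = gst_harness_push (h,
          gst_harness_create_buffer (h, 320 * 240 * 3 / 2));

      gst_harness_teardown (h);
      return ret != GST_FLOW_NOT_NEGOTIATED;
    }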
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/708>
Previously, the stsd entries read from the file were trusted
completely, so a maliciously crafted file could choose the length of
the stsd entries arbitrarily and cause qtdemux to try to allocate up to
2 GB of memory (half of a 32-bit max int).
This patch fixes this by sanity-checking the stsd entry sizes against
the size of the entire stsd atom.
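A sketch of the sanity check with illustrative names, not the actual
qtdemux code: an entry declared inside the stsd atom can never be
larger than the bytes remaining in that atom.

    #include <glib.h>

    static gboolean
    stsd_entry_size_is_sane (guint32 entry_size, guint32 stsd_remaining)
    {
      /* Minimum sample entry: 4 bytes size + 4 bytes fourcc. */
      if (entry_size < 8)
        return FALSE;

      return entry_size <= stsd_remaining;
    }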
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/670>
During trak parsing, we need to check for the existence of stsd
entries, otherwise we end up with a NULL pointer to them. It is
entirely possible for the stsd to exist but have no entries, which the
previous checks did not take into account.
This patch adds a simple check to ensure that all files that do not
contain an stsd entry are deemed corrupt, and adds a test case to
prevent a regression.
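A sketch of the added check (illustrative, not the actual qtdemux
code): a trak whose stsd declares zero entries has no usable sample
description, so the stream is flagged as corrupt instead of later
dereferencing a NULL stsd_entries pointer.

    #include <glib.h>

    static gboolean
    trak_has_sample_description (guint32 stsd_entry_count)
    {
      return stsd_entry_count > 0;
    }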
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/670>
As part of this, also change the default bitrate value to 0 (the
default was 256000 previously). In reality, if the property was not
set, the bitrate value would be scaled according to the resolution,
which is not very intuitive behavior; it is better to use 0 for this
purpose. Together with the newly introduced "bits-per-pixel" property,
0 now means that the bitrate is assigned according to the
resolution/framerate.
The default bitrates are now
- 1.2Mbps for VP8 720p@30fps
- 0.8Mbps for VP9 720p@30fps
and scaled accordingly for different resolutions/framerates.
Previously the default bitrate was also not scaled according to the
framerate but only took the resolution into account.
This also fixes the side effect of setting bitrate to 0: previously the
encoder would not produce any data at all.
Addition from Sebastian Dröge <sebastian@centricular.com> to assume
30fps if no framerate is given in the caps instead of not calculating
any bitrate at all.
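A sketch of the scaling; the bits-per-pixel constants below are derived
from the figures quoted above (1.2 Mbps VP8 / 0.8 Mbps VP9 at
1280x720@30) and are illustrative, not copied from the element:

    #include <glib.h>

    static guint
    default_bitrate (guint width, guint height, guint fps_n, guint fps_d,
        gdouble bits_per_pixel)
    {
      if (fps_n == 0 || fps_d == 0) {
        /* No framerate in the caps: assume 30 fps instead of not
         * calculating any bitrate at all. */
        fps_n = 30;
        fps_d = 1;
      }

      return (guint) (bits_per_pixel * width * height * fps_n / fps_d);
    }

    /* default_bitrate (1280, 720, 30, 1, 0.0434) ~= 1200000 bps (VP8)
     * default_bitrate (1280, 720, 30, 1, 0.0289) ~=  800000 bps (VP9) */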
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/611>
For VP8 it's possible to signal width or height to be 0, but it does
not make sense to do so. For VP9 it's impossible. Hence, we most
likely have a corrupt stream. Trying to negotiate caps downstream with
either width or height as 0 will fail with something like
gst_video_decoder_negotiate_default: assertion 'GST_VIDEO_INFO_WIDTH (&state->info) != 0' failed
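A sketch of the guard (illustrative, not the actual vpxdec code): bail
out on a zero dimension before attempting caps negotiation downstream.

    #include <gst/gst.h>

    static GstFlowReturn
    check_frame_dimensions (guint width, guint height)
    {
      if (width == 0 || height == 0) {
        GST_WARNING ("stream reports a %ux%u frame, treating as corrupt",
            width, height);
        return GST_FLOW_ERROR;
      }
      return GST_FLOW_OK;
    }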
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/610>
The problem was this:
Due to the highly irregular arrival of RTX packets, the max-misorder
variable could be pushed very low (-10).
If you then at some point get a big gap in the sequence numbers (62 in
the test), you end up sending RTX requests for some of those packets,
and then, if the sender answers those requests, you are going to get a
bunch of RTX packets arriving (-13 and then 5 more packets in the test).
Now, if max-misorder is pushed very low at this point, these RTX packets
will trigger the handle_big_gap_buffer() logic, and because they arrive
so neatly in order (as they would, since they have been requested like
that), gst_rtp_jitter_buffer_reset() will be called, and two things
will happen:
1. priv->next_seqnum will be set to the first RTX packet
2. the 5 RTX packets will be pushed into the chain() function
However, at this point these RTX packets are no longer valid: the
jitterbuffer has already pushed lost-events for them, so they will now
be dropped on the floor and never make it to the waiting loop-function.
And since we now have a priv->next_seqnum that will never arrive
in the loop-function, the jitterbuffer is stalled forever and will
not push out another buffer.
The proposed fixes:
1. Don't use RTX packets in the calculation of the packet-rate.
2. Don't use RTX packets in the large-gap logic, as they are likely to
be dropped.
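A sketch of the two fixes with illustrative names, not the actual
rtpjitterbuffer code: retransmitted packets feed neither the
packet-rate estimator nor the big-gap/reset heuristic.

    #include <glib.h>

    typedef struct
    {
      gboolean update_packet_rate;  /* feed the packet-rate estimator */
      gboolean check_big_gap;       /* run handle_big_gap_buffer() logic */
    } PacketActions;

    static PacketActions
    classify_packet (gboolean is_rtx)
    {
      PacketActions a;

      a.update_packet_rate = !is_rtx;   /* fix 1 */
      a.check_big_gap = !is_rtx;        /* fix 2 */

      return a;
    }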
gst_buffer_map () results in memcopying when a GstBuffer contains
more than one GstMemory.
This has quite an impact on performance on systems with a limited
amount of resources. With this patch the whole GstBuffer is no longer
mapped at once; instead, each individual GstMemory is iterated and
mapped separately.
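A sketch of the per-memory iteration (the processing step is a
placeholder): map each GstMemory on its own instead of calling
gst_buffer_map(), which would merge, and potentially memcpy, all
memories into one contiguous mapping.

    #include <gst/gst.h>

    static gsize
    scan_buffer (GstBuffer * buf)
    {
      guint i, n = gst_buffer_n_memory (buf);
      gsize total = 0;

      for (i = 0; i < n; i++) {
        GstMemory *mem = gst_buffer_peek_memory (buf, i);
        GstMapInfo map;

        if (!gst_memory_map (mem, &map, GST_MAP_READ))
          continue;

        /* ... process map.data / map.size here ... */
        total += map.size;

        gst_memory_unmap (mem, &map);
      }

      return total;
    }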
There is a use-case where a server re-payloads Opus passing through it.
The problem was that the payloader requires channels in the caps, but
this is not something the depayloader can parse out of the stream,
meaning caps negotiation would fail.
Removing the requirement for channels in the template caps fixes this.
Mainly generalize all the latest tests that have found various stalls
in the jitterbuffer, so that they only consist of a series of packets
with various seqnum/rtptime/rtx combinations, arriving at a specific time.
This means future tests can be more easily written to prove certain
behavior does not cause stalls.
Also fix the warning on Windows:
warning C4244: 'initializing': conversion from 'double' to 'gint', possible loss of data
This is a concept that only applies when a buffer arrives in the chain
function after it has already been scheduled as part of a "multi"-lost
timer.
However, "multi"-lost timers are now a thing of the past, making this
whole concept superfluous; such a buffer is now simply counted as
"late", having already been pushed out (albeit as a lost-event).
There is a problem with the code today: a single timer will be
scheduled for a series of lost packets, and then, if the first packet
in that series arrives, it will cause a rescheduling of that timer from
a "multi"-timer to a single-timer, leaving a lot of the packets in that
timer unaccounted for and creating a situation where the jitterbuffer
will never again push out another packet.
This patch solves the problem by not scheduling those lost packets as
another timer; instead it asks to have the lost-event pushed straight
out.
This very much goes with the intent of the code here: These packets are
so desperately late that no cure exists, and we might as well get the
lost-event out of the way and get on with it.
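A sketch of the shape of such an immediately-pushed lost event; the
structure and field names are illustrative and may not match the exact
event the jitterbuffer emits:

    #include <gst/gst.h>

    static GstEvent *
    make_lost_event (guint16 seqnum, GstClockTime timestamp,
        GstClockTime duration, guint num_packets)
    {
      return gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM,
          gst_structure_new ("GstRTPPacketLost",
              "seqnum", G_TYPE_UINT, (guint) seqnum,
              "timestamp", G_TYPE_UINT64, timestamp,
              "duration", G_TYPE_UINT64, duration,
              "num-packets", G_TYPE_UINT, num_packets, NULL));
    }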
This change has some interesting knock-on effects, presented in later
commits. It completely removes the concept of "already-lost", which is
why that test has been disabled in this commit, to be removed later.
gst_buffer_map () results in memcopying when a GstBuffer contains
more than one GstMemory and when AVC (length-prefixed) alignment is used.
This has quite an impact on performance on systems with a limited
amount of resources. With this patch the whole GstBuffer is no longer
mapped at once; instead, each individual GstMemory is iterated and
mapped separately.
When calling gst_rtp_jitter_buffer_reset() you pass in a seqnum, which
is considered the starting point of a new stream.
However, the old behavior would unref the corresponding buffer,
basically lying to the thread that pushes out buffers by telling it
that it can expect this buffer when it would never arrive. The
resulting effect was that no more buffers were pushed out of the
jitterbuffer, and it would buffer incoming data indefinitely.
By instead inserting the buffer in the gap_packets queue, the _reset()
function will take responsibility for using that as the first buffer
of the new stream.
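A sketch of the change with illustrative names: instead of dropping the
buffer that triggered the reset, queue it so the reset path can feed it
back in as the first buffer of the new stream.

    #include <gst/gst.h>

    static void
    queue_reset_buffer (GQueue * gap_packets, GstBuffer * buffer)
    {
      /* Previously: gst_buffer_unref (buffer); the pushing thread then
       * waited for a seqnum that could never arrive, stalling the
       * element. */
      g_queue_push_tail (gap_packets, buffer);
    }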
Fixes #703
In order to concatenate fragments, splitmuxsrc offsets
the start PTS of each fragment to 0 to align it with the
previous file. This means that DTS can go negative for
the first fragment, with really bad results.
Add a fixed offset to outgoing timestamp ranges to
avoid that.
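A rough sketch of the idea; the offset constant is illustrative, not
the value splitmuxsrc actually uses:

    #include <gst/gst.h>

    #define SPLITMUX_TS_OFFSET (1 * GST_SECOND)   /* illustrative value */

    static void
    offset_outgoing_timestamps (GstBuffer * buf)
    {
      if (GST_BUFFER_PTS_IS_VALID (buf))
        GST_BUFFER_PTS (buf) += SPLITMUX_TS_OFFSET;
      if (GST_BUFFER_DTS_IS_VALID (buf))
        GST_BUFFER_DTS (buf) += SPLITMUX_TS_OFFSET;
    }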
When using the testclock as the clock in a test, it is sometimes
observed that the clock entry is not registered in time by the
aggregator, so a deadlock occurs between the aggregator and the test
thread.
* Organize GstRtpFunnelPad and GstRtpFunnel separately
* Use G_GNUC_UNUSED instead of (void) casts
* Don't call an event "caps"
* Use semicolons after GST_END_TEST (helps gst-indent)
When used for processing bundled media streams within rtpbin, the
rtpssrcdemux element may receive bad RTP and RTCP packets; these should
not be treated as a fatal error.
The property is useful against attacks where the sender changes the
SSRC for every RTP packet. The property of the same name introduced in
rtpbin was not enough, because we can still end up with thousands of
pads allocated in rtpssrcdemux.
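A sketch of configuring the limit from an application, assuming the new
property follows rtpbin's "max-streams" naming (check the element
documentation for the actual name and default):

    #include <gst/gst.h>

    static void
    limit_ssrc_demux_streams (GstElement * rtpssrcdemux)
    {
      g_object_set (rtpssrcdemux, "max-streams", 64, NULL);
    }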