- make test_encode_simple cope with libvpx built with
CONFIG_REALTIME_ONLY. Sadly, there's no way to detect this at
runtime beyond trying to set lag-in-frames to >0, pushing a
buffer and catching the GST_FLOW_NOT_NEGOTIATED return (see the
sketch after this list).
- fix bitrot in test_encode_simple_when_bitrate_set_to_zero.
- port test_encode_simple to GstHarness and introduce a separate
test for the lag-in-frames property.
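A minimal sketch of what that runtime detection can look like with
GstHarness; the element name, caps and buffer size below are assumptions
for illustration, not the literal test code:

    #include <gst/check/gstharness.h>

    /* returns TRUE if libvpx appears to be built with CONFIG_REALTIME_ONLY,
     * i.e. asking for lag-in-frames > 0 makes the encoder fail to set up */
    static gboolean
    encoder_is_realtime_only (void)
    {
      GstHarness *h = gst_harness_new ("vp8enc");
      GstBuffer *buf;
      GstFlowReturn ret;
      gboolean realtime_only;

      g_object_set (h->element, "lag-in-frames", 5, NULL);
      gst_harness_set_src_caps_str (h,
          "video/x-raw,format=I420,width=320,height=240,framerate=25/1");

      /* push one blank frame; with CONFIG_REALTIME_ONLY the encoder cannot
       * honour lag-in-frames and the push fails to negotiate */
      buf = gst_harness_create_buffer (h, 320 * 240 * 3 / 2);
      ret = gst_harness_push (h, buf);
      realtime_only = (ret == GST_FLOW_NOT_NEGOTIATED);

      gst_harness_teardown (h);
      return realtime_only;
    }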
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/708>
Previously, the user input for stsd entries was trusted completely, so
a maliciously crafted file could choose the length of the stsd entries
arbitrarily and cause qtdemux to try to allocate up to 2GB of memory
(half of a 32-bit max int).
This patch fixes this by sanity checking the stsd input against the
size of the entire stsd atom.
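Roughly the kind of check meant here (hypothetical variable names, not the
actual qtdemux code):

    /* the declared size of an stsd entry may never exceed the number of
     * bytes actually remaining in the stsd atom */
    guint32 remaining = stsd_size - (guint32) (entry - stsd_data);

    if (entry_size < 8 || entry_size > remaining)
      goto corrupt_file;   /* reject instead of allocating entry_size bytes */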
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/670>
During trak parsing, we need to check for the existence of stsd_entries,
otherwise, we end up with a NULL pointer to them. It is entirely
possible for the stsd to exist, but for it to have no entries, which the
previous checks did not take into account.
This patch adds a simple check to ensure that all files that do not
contain a stsd entry are deemed corrupt, and adds a test case to prevent
a regression.
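The added check is essentially of this shape (hypothetical field names):

    /* an stsd atom without any entries gives us nothing to build a stream
     * from, so flag the file as corrupt instead of dereferencing NULL */
    if (stream->stsd_entries == NULL || stream->stsd_entries_length == 0)
      goto corrupt_file;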
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/670>
Set up our plugin include list for tests in such a way that
we don't pull in *all* plugins from -bad but only the one
used in the splitmuxsink unit test, i.e. the timecode plugin,
so we don't accidentally use other encoders/decoders such as
nvenc/dec.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/617>
As part of this, also change the default bitrate value to 0. The default
value was previously 256000. In reality, if the property was not set, the
bitrate would be scaled according to the resolution, which is not
very intuitive behavior. It is better to use 0 for this purpose. Now,
together with the newly introduced "bits-per-pixel" property, 0 means to
assign the bitrate according to resolution/framerate.
The default bitrates are now
- 1.2Mbps for VP8 720p@30fps
- 0.8Mbps for VP9 720p@30fps
and scaled accordingly for different resolutions/framerates.
Previously the default bitrate was also not scaled according to the
framerate but only took the resolution into account.
This also fixes the side effect of setting bitrate to 0. Previously the
encoder would not produce any data at all.
Addition from Sebastian Dröge <sebastian@centricular.com> to assume
30fps if no framerate is given in the caps instead of not calculating
any bitrate at all.
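As a rough illustration of the scaling described above, with the
bits-per-pixel constants back-computed from the 720p@30fps figures (an
assumption; the exact defaults live in the element code):

    /* bitrate == 0: derive the target bitrate from bits-per-pixel
     *   VP8: 0.0434 * 1280 * 720 * 30 ~= 1.2 Mbps
     *   VP9: 0.0289 * 1280 * 720 * 30 ~= 0.8 Mbps */
    gdouble fps = (fps_n > 0 && fps_d > 0) ?
        (gdouble) fps_n / fps_d : 30.0;   /* assume 30fps if caps lack it */
    guint target_bitrate = (guint) (bits_per_pixel * width * height * fps);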
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/611>
For VP8 it's possible to signal width or height to be 0, but it does
not make sense to do so. For VP9 it's impossible. Hence, we most
likely have a corrupt stream. Trying to negotiate caps downstream with
either width or height as 0 will fail with something like
gst_video_decoder_negotiate_default: assertion 'GST_VIDEO_INFO_WIDTH (&state->info) != 0' failed
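The guard amounts to something like this (a sketch, not the literal patch):

    /* a parsed width or height of 0 cannot be negotiated downstream,
     * so treat the frame as coming from a corrupt stream */
    if (width == 0 || height == 0)
      return GST_FLOW_ERROR;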
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/610>
If core is built as a subproject (e.g. as in gst-build), make sure to use
the gst-plugin-scanner from the built subproject. Without this, gstreamer
might accidentally use the gst-plugin-scanner from the install prefix if
that exists, which in turn might drag in gst library versions we didn't
mean to drag in. Those gst library versions might then be older than
what our current build needs, and might cause our newly-built plugins
to get blacklisted in the test registry because they rely on a symbol
that the wrongly pulled-in gst lib doesn't have.
This should fix running of unit tests in gst-build when invoking
meson test or ninja test from outside the devenv for the case where
there is an older or different-version gst-plugin-scanner installed
in the install prefix.
In case no gst-plugin-scanner is installed in the install prefix, this
will fix "GStreamer-WARNING: External plugin loader failed. This most
likely means that the plugin loader helper binary was not found or
could not be run. You might need to set the GST_PLUGIN_SCANNER
environment variable if your setup is unusual." warnings when running
the unit tests.
In the case where we find GStreamer core via pkg-config we use
a newly-added pkg-config var "pluginscannerdir" to get the right
directory. This has the benefit of working transparently for both
installed and uninstalled pkg-config files/setups.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/603>
The problem was this:
Due to the highly irregular arrival of RTX-packets, the max-misorder
variable could be pushed very low (-10).
If you then at some point get a big gap in the sequence-numbers (62 in the
test) you end up sending RTX-requests for some of those packets, and then,
if the sender answers those requests, you are going to get a bunch of
RTX-packets arriving (-13 and then 5 more packets in the test).
Now, if max-misorder is pushed very low at this point, these RTX-packets
will trigger the handle_big_gap_buffer() logic, and because they arrive
so neatly in order (as they would, since they have been requested like
that), gst_rtp_jitter_buffer_reset() will be called, and two things
will happen:
1. priv->next_seqnum will be set to the first RTX packet
2. the 5 RTX-packet will be pushed into the chain() function
However, at this point these RTX-packets are no longer valid: the
jitterbuffer has already pushed lost-events for them, so they will now
be dropped on the floor and never make it to the waiting loop-function.
And, since we now have a priv->next_seqnum that will never arrive
in the loop-function, the jitterbuffer is now stalled forever, and will
not push out another buffer.
The proposed fixes:
1. Don't use RTX packets in the calculation of the packet-rate.
2. Don't use RTX packets in the large-gap logic, as they are likely to be dropped.
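Both fixes boil down to early-outs for retransmitted packets; a condensed,
self-contained sketch of the two decisions (helper names are illustrative,
not the actual rtpjitterbuffer code):

    #include <gst/gst.h>
    #include <gst/rtp/gstrtpbuffer.h>

    /* 1. an RTX packet must not feed the packet-rate estimate */
    static gboolean
    counts_for_packet_rate (GstBuffer * buf)
    {
      return !GST_BUFFER_FLAG_IS_SET (buf, GST_RTP_BUFFER_FLAG_RETRANSMISSION);
    }

    /* 2. an RTX packet must not trigger the big-gap/reset logic: it was
     * requested, so an apparent "gap" followed by in-order arrivals is
     * expected and not a sign of a new stream */
    static gboolean
    may_trigger_big_gap (GstBuffer * buf, gint gap, gint max_misorder)
    {
      if (GST_BUFFER_FLAG_IS_SET (buf, GST_RTP_BUFFER_FLAG_RETRANSMISSION))
        return FALSE;
      return gap > max_misorder;
    }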
gst_buffer_map () results in memcopying when a GstBuffer contains
more than one GstMemory.
This has quite an impact on performance on systems with a limited
amount of resources. With this patch the whole GstBuffer is no longer
mapped at once; instead, each individual GstMemory is iterated over
and mapped separately.
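The pattern, roughly, is a per-memory loop instead of one big
gst_buffer_map () (process_chunk () is a hypothetical consumer):

    guint i, n_mem = gst_buffer_n_memory (buffer);

    for (i = 0; i < n_mem; i++) {
      GstMemory *mem = gst_buffer_peek_memory (buffer, i);
      GstMapInfo map;

      if (!gst_memory_map (mem, &map, GST_MAP_READ))
        return GST_FLOW_ERROR;

      /* only this GstMemory is mapped; the buffer's memories are never
       * merged, so no copy takes place */
      process_chunk (map.data, map.size);

      gst_memory_unmap (mem, &map);
    }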
There is a use-case for a server to re-payload Opus going through it.
The problem was that the payloader requires channels in the caps, but
this is not something the depayloader can parse out of the stream,
meaning caps negotiation would fail.
Removing the requirement for channels in the template caps fixes this.
Mainly generalize all the latest tests that have found various stalls
in the jitterbuffer, so that they only consist of a series of packets
with various seqnum/rtptime/rtx combinations, arriving at a specific time.
This means future tests can be more easily written to prove certain
behavior does not cause stalls.
Also fix the warning on Windows:
warning C4244: 'initializing': conversion from 'double' to 'gint', possible loss of data
This is a concept that only applies when a buffer arrives in the chain
function, and it has already been scheduled as part of a "multi"-lost
timer.
However, "multi"-lost timers are now a thing of the past, making this
whole concept superflous, and this buffer is now simply counted as "late",
having already been pushed out (albeit as a lost-event).
There is a problem with the code today, where a single timer will
be scheduled for a series of lost packets, and then if the first packet
in that series arrives, it will cause a rescheduling of that timer, going
from a "multi"-timer to a single-timer, causing a lot of the packets
in that timer to be unaccounted for, and creating a situation where
the jitterbuffer will never again push out another packet.
This patch solves the problem by not scheduling those lost packets
as another timer, but instead asking to have the lost-event pushed
straight out.
This very much goes with the intent of the code here: These packets are
so desperately late that no cure exists, and we might as well get the
lost-event out of the way and get on with it.
This change has some interesting knock-on effects, presented in
later commits. It completely removes the concept of "already-lost",
which is why that test has been disabled in this commit, to be
removed later.
gst_buffer_map () results in memcopying when a GstBuffer contains
more than one GstMemory and when AVC (length-prefixed) alignment is used.
This has quite an impact on performance on systems with a limited amount
of resources. With this patch the whole GstBuffer is no longer mapped at
once; instead, each individual GstMemory is iterated over and mapped
separately.
When calling gst_rtp_jitter_buffer_reset you pass in a seqnum.
This is considered the starting-point for a new stream.
However, the old behavior would unref the buffer carrying this seqnum,
basically lying to the thread that is pushing out buffers by telling it
that it can expect this buffer, when it would in fact never arrive. The
resulting effect was that no more buffers were pushed out of the
jitterbuffer, and it would buffer incoming data indefinitely.
By instead inserting the buffer in the gap_packets queue, the _reset()
function will take responsibility for using that as the first buffer
of the new stream.
Fixes #703
In order to concatenate fragments, splitmuxsrc offsets
the PTS of each fragment so that it starts at 0, aligning it with the
previous file. This means that DTS can go negative for
the first fragment, with really bad results.
Add a fixed offset to outgoing timestamp ranges to
avoid that.
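With made-up numbers, the problem and the fix look like this:

    /* first fragment uses B-frame reordering, so after shifting PTS to
     * start at 0 the corresponding DTS would be negative and cannot be
     * stored as an (unsigned) GstClockTime */
    GstClockTime pts = 0;
    GstClockTimeDiff dts = -40 * GST_MSECOND;
    GstClockTime fixed_offset = GST_SECOND;  /* assumed value for illustration */

    pts += fixed_offset;   /* 1000 ms */
    dts += fixed_offset;   /*  960 ms: valid again, alignment preserved */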
When using the testclock as the clock in a test, it is sometimes observed
that the clock entry is not registered in time by the aggregator, so a
deadlock occurs between the aggregator and the test thread.
* Organize GstRtpFunnelPad and GstRtpFunnel separately
* Use G_GNUC_UNUSED instead of (void) casts
* Don't call an event "caps"
* Use semicolons after GST_END_TEST (helps gst-indent)
When used for processing bundled media streams within rtpbin, the rtpssrcdemux
element may receive bad RTP and RTCP packets; these should not be treated as a
fatal error.
The property is useful against attacks where the sender changes SSRC for
every RTP packet. The property with the same name introduced in rtpbin
was not enough, because we can still end up with thousands of pads
allocated in rtpssrcdemux.
When connected to an upstream rtpfunnel element, payload-type,
ssrc and clock-rate will not be present in the received caps.
rtprtxsend can already deal with only the clock-rate being
present there. A new property is exposed to allow users to
provide a payload-type -> clock-rate map; this enables the
use of the max-size-time property for bundled streams.
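A sketch of how an application might supply such a map; the property name
"clock-rate-map" and the structure name are assumptions here, check the
rtprtxsend documentation for the actual names:

    GstStructure *map = gst_structure_new ("application/x-rtp-clock-rate-map",
        "96", G_TYPE_UINT, 90000,   /* e.g. VP8 */
        "97", G_TYPE_UINT, 48000,   /* e.g. Opus */
        NULL);

    g_object_set (rtprtxsend, "clock-rate-map", map, NULL);
    gst_structure_free (map);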
The key is to make sure the jitterbuffer is set to NULL *before* the
ptdemux.
The race that existed would basically happen when ptdemux had reached
READY, and the jitterbuffer would then push a buffer, triggering a new
pad with a new payloadtype being added and ghosted to the rtpbin itself.
However, the srcpad of the ptdemux would now be inactive, and all the
sticky events pushed on it would be swallowed, not allowing any to reach
the ghost-pad. Then the buffer in-flight would come to the ghostpad,
and we would assert that a buffer arrived before the necessary
events.
By simply re-ordering the state-changes, we ensure that there will be
no buffer racing into the ptdemux while its state is being changed,
and the problem disappears completely.
Notice also that there is no point in disconnecting the signals on the
ptdemux before this point, since we need the push-thread to settle
down before we can do this in a non-racy way.
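In terms of the shutdown sequence the reordering amounts to something like
this (simplified; the field names are illustrative):

    /* take the jitterbuffer down first so it stops pushing buffers ... */
    gst_element_set_state (stream->buffer, GST_STATE_NULL);

    /* ... and only then the ptdemux: no buffer can now race into it while
     * its pads are being deactivated */
    gst_element_set_state (stream->demux, GST_STATE_NULL);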