CPU waits are more expensive and are only required if the CPU is ever going to
access the data. GPU waits perform inter-context synchronisation and are cheaper
as they don't require CPU intervention.
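As a minimal sketch of the distinction (the helper and its flow are
illustrative; gst_gl_sync_meta_wait() and gst_gl_sync_meta_wait_cpu()
are the real GstGLSyncMeta entry points):

    #include <gst/gl/gl.h>

    /* Illustrative: prefer a GPU-side wait unless the CPU will
     * actually read the data. */
    static void
    wait_for_buffer (GstBuffer * buffer, GstGLContext * context,
        gboolean cpu_will_access)
    {
      GstGLSyncMeta *sync = gst_buffer_get_gl_sync_meta (buffer);

      if (sync == NULL)
        return;

      if (cpu_will_access) {
        /* Expensive: blocks until the GPU is done so the CPU can
         * safely map and read the data. */
        gst_gl_sync_meta_wait_cpu (sync, context);
      } else {
        /* Cheaper: queues a wait in the destination context's GL
         * command stream; no CPU blocking involved. */
        gst_gl_sync_meta_wait (sync, context);
      }
    }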
Adds unit tests similar to the ones that we have for DASH and HLS.
Tests:
* manifest parsing finishes successfully
* some queries (duration, seekable, latency)
* seeking with various values and flags
Similar to HLS, but DASH has the extra issue that it can have
multiple streams, so snapping can be tricky as the streams usually
won't be aligned.
For now, those tests only handle the case of a single stream.
https://bugzilla.gnome.org/show_bug.cgi?id=759158
Adaptive demuxers need to start downloading from specific positions
(fragments) for every stream, which means the streams can each
snap-seek to a different position when requested. Snap seeking in
this case is done in 2 steps (see the sketch after this list):
1) do the snap seeking on the pad that received the seek event and
get the final position
2) use this position to do a regular seek on the other streams to
make sure they all start from the same position
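A rough sketch of that flow; demux_do_stream_seek() is a hypothetical
stand-in for the adaptivedemux internals, and only the control flow
mirrors the steps above:

    /* Hypothetical: helper name invented for illustration. */
    static void
    demux_handle_snap_seek (GstAdaptiveDemux * demux,
        GstAdaptiveDemuxStream * seek_stream, gboolean forward,
        GstSeekFlags flags, GstClockTime ts)
    {
      GstClockTime final_ts = GST_CLOCK_TIME_NONE;
      GList *iter;

      /* Step 1: snap-seek on the stream whose pad received the seek
       * event and learn the position it actually snapped to. */
      demux_do_stream_seek (demux, seek_stream, forward, flags, ts,
          &final_ts);

      /* Step 2: regular (non-snap) seek on every other stream to the
       * snapped position so all streams start from the same place. */
      flags &= ~(GST_SEEK_FLAG_SNAP_BEFORE | GST_SEEK_FLAG_SNAP_AFTER);
      for (iter = demux->streams; iter; iter = iter->next) {
        GstAdaptiveDemuxStream *stream = iter->data;

        if (stream != seek_stream)
          demux_do_stream_seek (demux, stream, forward, flags,
              final_ts, NULL);
      }
    }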
More arguments were added to the stream_seek function, allowing better control
of how seeking is done. Knowing the flags and the playback direction allows
subclasses to handle snap-seeking.
It also adds a new return parameter that reports the final selected
seek position, which is used to align the other streams.
https://bugzilla.gnome.org/show_bug.cgi?id=759158
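For reference, the expanded vfunc ends up with roughly this shape
(the exact prototype should be checked against gstadaptivedemux.h):

    /* Direction and flags let subclasses implement snapping; final_ts
     * reports the position actually selected so the other streams can
     * be aligned to it. */
    GstFlowReturn (*stream_seek) (GstAdaptiveDemuxStream * stream,
                                  gboolean       forward,
                                  GstSeekFlags   flags,
                                  GstClockTime   ts,
                                  GstClockTime * final_ts);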
Since commit b77f8e172a the new value
assigned to mview_mode hasn't been used. That commit changed the following
"if" check to an "else if", which means the original value of mview_mode
is used.
All uses of query->context in gstglquery.c assume it exists. We can
assume the same before unreffing it. Furthermore, gst_object_unref()
will just return if it were ever NULL.
When converting from avc to byte-stream, there will not be any codec_data
in the src caps. Remove it before the equality check to avoid sending caps
events downstream on every SPS/PPS change.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
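A minimal sketch of the idea using the stock GstCaps/GstStructure
API; the surrounding h264parse logic is omitted and the helper name
is invented:

    #include <gst/gst.h>

    /* Strip codec_data before comparing, so avc-to-byte-stream output
     * doesn't look like a caps change on every SPS/PPS update. */
    static gboolean
    caps_equal_ignoring_codec_data (GstCaps * old_caps, GstCaps * new_caps)
    {
      GstCaps *a = gst_caps_copy (old_caps);
      GstCaps *b = gst_caps_copy (new_caps);
      gboolean equal;

      gst_structure_remove_field (gst_caps_get_structure (a, 0),
          "codec_data");
      gst_structure_remove_field (gst_caps_get_structure (b, 0),
          "codec_data");

      equal = gst_caps_is_equal (a, b);

      gst_caps_unref (a);
      gst_caps_unref (b);
      return equal;
    }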
If we have a stream that contains an unchanging SPS/PPS for every video
frame, we don't need to constantly query downstream for its supported
caps if the current caps are compatible with the negotiated caps.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
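A hedged sketch of the short-circuit; gst_caps_is_subset() stands in
for whatever compatibility test the real code performs:

    /* Skip the downstream query when the caps we would set are
     * already covered by what was negotiated on the pad. */
    static gboolean
    need_renegotiation (GstPad * srcpad, GstCaps * new_caps)
    {
      GstCaps *current = gst_pad_get_current_caps (srcpad);
      gboolean renegotiate = TRUE;

      if (current != NULL) {
        renegotiate = !gst_caps_is_subset (new_caps, current);
        gst_caps_unref (current);
      }
      return renegotiate;
    }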
Improves the responsiveness of the pipeline when resource usage is
close to or above the limits of the hardware.
Any subclass that does not want QoS enabled can disable it itself.
https://bugzilla.gnome.org/show_bug.cgi?id=761519
When we are not waiting, we need to pass -1 to signal that we just want
to check whether the frame was rendered or not. This avoids waiting for
frames that will never be rendered.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
When not rendering the video frame, e.g. when freeing an unreleased sync frame,
we will not receive a frame listener callback.
Reduces the number of 'on_frame_available miss detected' messages when
dropping frames.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
FEC may only be used when PLC is enabled on the audio decoder,
as it relies on empty buffers to generate audio from the next
buffer. Hooking into the gap events doesn't work, as the audio
decoder does not like more buffers being output than it sends.
The length of data to generate using FEC from the next packet
is determined by rounding the gap duration to the nearest 2.5 ms
boundary (the Opus frame granularity). This ensures that duration
imprecision does not quantize the length to 2.5 milliseconds less
than what is actually available, which would cause the Opus API to
fail decoding. Such duration imprecision is common in live cases.
The buffer to consider when determining the length of audio
to be decoded is the previous buffer when using FEC, and the
new buffer otherwise. In the FEC case, this means we determine
the amount of audio from the previous buffer, whether it was
missing or not (and take the data either from that buffer, or
from the current one if the previous one was missing).
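At the libopus level the FEC path boils down to a decode_fec=1 call;
the decoder state, 48 kHz rate and rounding helper below are
assumptions for illustration:

    #include <stdint.h>
    #include <opus/opus.h>

    #define RATE  48000             /* samples per second */
    #define BLOCK 120               /* samples in 2.5 ms at 48 kHz */

    /* Conceal a gap of gap_ns nanoseconds using the FEC data carried
     * in the *next* packet.  The length is rounded to the nearest
     * 2.5 ms boundary, so imprecise gap durations (common in live
     * pipelines) are not quantized 2.5 ms short of what the packet
     * can provide, which would make opus_decode() fail. */
    static int
    decode_missing_with_fec (OpusDecoder * dec,
        const unsigned char *next_packet, opus_int32 next_len,
        opus_int16 * pcm, uint64_t gap_ns)
    {
      /* blocks = round (gap_ns / 2.5 ms); RATE / BLOCK == 400 */
      int blocks = (int) ((gap_ns * (RATE / BLOCK) + 500000000)
          / 1000000000);
      int samples = blocks * BLOCK;

      /* decode_fec = 1: reconstruct the lost audio from redundancy
       * embedded in the following packet. */
      return opus_decode (dec, next_packet, next_len, pcm, samples, 1);
    }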