Similar to HLS, but DASH has the extra complication that it can have
multiple streams, so snap-seeking can be tricky as the streams usually
won't be aligned.
For now, these tests only handle the single-stream case.
https://bugzilla.gnome.org/show_bug.cgi?id=759158
Adaptive demuxers need to start downloading from specific positions
(fragments) for every stream, which means that each stream can
snap-seek to a different position when requested. Snap seeking in
this case will be done in 2 steps:
1) do the snap seeking on the pad that received the seek event and
get the final position
2) use this position to do a regular seek on the other streams to
make sure they all start from the same position
More arguments were added to the stream_seek function, allowing better
control of how seeking is done: knowing the seek flags and the playback
direction lets subclasses handle snap-seeking themselves.
A new out parameter was also added to report the finally selected seek
position, which is used to align the other streams.
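A rough sketch of the resulting vfunc shape (a sketch only; argument
names are illustrative, not necessarily the exact ones used):

  /* The subclass snap-seeks its own stream and reports the position it
   * actually selected through the out parameter; the base class then
   * performs a regular seek to that position on the remaining streams
   * so they all start from the same place. */
  GstFlowReturn (*stream_seek) (GstAdaptiveDemuxStream * stream,
      gboolean forward,          /* playback direction */
      GstSeekFlags flags,        /* e.g. GST_SEEK_FLAG_SNAP_BEFORE */
      GstClockTime target_ts,    /* requested seek position */
      GstClockTime * final_ts);  /* position actually selected (out) */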
https://bugzilla.gnome.org/show_bug.cgi?id=759158
Since commit b77f8e172a the new value
assigned to mview_mode hasn't been used. That commit changed the following
"if" check to an "else if", which means the original value of mview_mode
is used.
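The pattern, in illustrative form only (identifiers are made up, not
the actual parser code):

  if (have_new_info) {
    mview_mode = new_mode;  /* with the "else if" below this value is
                             * never read any more: a dead store */
  } else if (mview_mode == GST_VIDEO_MULTIVIEW_MODE_NONE) {
    /* only reached when the branch above did not run, so this always
     * sees the original mview_mode */
  }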
All uses of query->context in gstglquery.c assume it exists, so we can
assume the same before unreffing it. Furthermore, gst_object_unref()
will just bail out if it ever turned out to be NULL.
When converting from avc to byte-stream, there will not be any codec_data
in the src caps. Remove it before the equality check to avoid sending caps
events downstream on every SPS/PPS change.
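A sketch of the idea (variable names are hypothetical, not the
parser's actual code):

  /* Drop codec_data from the previously set src caps before comparing,
   * so a new SPS/PPS alone does not force a caps event in byte-stream
   * mode, where the new caps carry no codec_data anyway. */
  GstCaps *stripped = gst_caps_copy (prev_src_caps);

  gst_structure_remove_field (gst_caps_get_structure (stripped, 0),
      "codec_data");
  if (!gst_caps_is_equal (stripped, new_src_caps))
    gst_pad_set_caps (srcpad, new_src_caps);
  gst_caps_unref (stripped);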
https://bugzilla.gnome.org/show_bug.cgi?id=761014
If we have a stream that contains an unchanging SPS/PPS in every video
frame, we don't need to constantly query downstream for its supported
caps if the current caps are compatible with the negotiated caps.
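A sketch of the short-circuit (variable names are hypothetical):

  /* Only query downstream again when the caps we want are not already
   * covered by what was negotiated on the src pad. */
  GstCaps *negotiated = gst_pad_get_current_caps (srcpad);

  if (negotiated && gst_caps_can_intersect (negotiated, wanted_caps)) {
    /* compatible: keep the current caps, skip the downstream query */
  } else {
    GstCaps *allowed = gst_pad_peer_query_caps (srcpad, wanted_caps);

    /* ... renegotiate based on 'allowed' ... */
    if (allowed)
      gst_caps_unref (allowed);
  }
  if (negotiated)
    gst_caps_unref (negotiated);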
https://bugzilla.gnome.org/show_bug.cgi?id=761014
Improves the responsiveness of the pipeline when resource usage is
close to or above the limits of the hardware.
Any subclass that does not want qos enabled can disable it itself.
https://bugzilla.gnome.org/show_bug.cgi?id=761519
When we are not waiting, we need to pass -1 to signal that we just want
to check whether or not the frame was rendered. This avoids waiting for
frames that will never be rendered.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
When not rendering the video frame, e.g. when freeing an unreleased sync frame,
we will not receive a frame listener callback.
Reduces the number of 'on_frame_available miss detected' messages when
dropping frames.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
FEC may only be used when PLC is enabled on the audio decoder,
as it relies on empty buffers to generate audio from the next
buffer. Hooking into the gap events doesn't work, as the audio
decoder does not like more buffers being output than it sends.
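From an application's point of view the two go together; a usage
sketch, assuming opusdec's "plc" and "use-inband-fec" properties:

  GstElement *dec = gst_element_factory_make ("opusdec", NULL);

  /* In-band FEC only kicks in when PLC is enabled, since the decoder
   * reconstructs the lost audio while handling the gap/empty buffer. */
  g_object_set (dec, "plc", TRUE, "use-inband-fec", TRUE, NULL);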
The length of data to generate using FEC from the next packet is
determined by rounding the gap duration to the nearest value rather
than truncating it. This ensures that duration imprecision does not
quantize the length to 2.5 milliseconds less than what is actually
available, which would cause the Opus API to fail decoding. Such
duration imprecision is common in live cases.
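A minimal sketch of the rounding, using a hypothetical helper (not the
decoder's actual code):

  #include <gst/gst.h>

  /* Round (gap * rate / GST_SECOND) to the nearest sample count
   * instead of truncating, so a gap reported slightly short does not
   * lose a whole 2.5 ms Opus frame and make the decode call fail. */
  static guint64
  gap_duration_to_samples (GstClockTime gap, gint rate)
  {
    return gst_util_uint64_scale_round (gap, rate, GST_SECOND);
  }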
The buffer to consider when determining the length of audio to be
decoded is the previous buffer when using FEC, and the new buffer
otherwise. In the FEC case this means we determine the amount of
audio from the previous buffer, whether it was missing or not, and
get the data from that previous buffer, or from the current one if
the previous one was missing.
This reverts commit 96b9666d59.
This reverts commit d11385d167.
This breaks texture sharing with the applemedia elements, as
CVOpenGLESTextureCache seems to have an arbitrary restriction to
GLES2 only.
We may need them to transform into a different set of formats.
Fixes YUV->YUV with two glcolorconverts, e.g.:
format=I420 ! glcolorconvert ! glcolorconvert ! format=NV12
1.0 / width does not offset by one pixel in rectangular textures (which use
unnormalized coordinates).
Provide the actual pixel offset as a uniform to the shader.
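A sketch of the idea (uniform and variable names are hypothetical):

  gfloat poffset_x, poffset_y;

  /* For normalized GL_TEXTURE_2D coordinates one texel is 1/width, but
   * rectangle textures are addressed in pixels, so one texel is 1.0. */
  if (tex_target == GST_GL_TEXTURE_TARGET_RECTANGLE) {
    poffset_x = 1.0f;
    poffset_y = 1.0f;
  } else {
    poffset_x = 1.0f / (gfloat) width;
    poffset_y = 1.0f / (gfloat) height;
  }
  gst_gl_shader_set_uniform_1f (shader, "poffset_x", poffset_x);
  gst_gl_shader_set_uniform_1f (shader, "poffset_y", poffset_y);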
1. Correctly describe which caps we can transform to/from,
i.e. no YUV->YUV, GRAY->YUV or YUV->GRAY (except for passthrough).
2. Prefer similar formats and ignore incompatible formats on fixation.
Useful when gst_gl_window.c::gst_gl_window_new is not used, which is
the case when using a custom GstGLWindow
(e.g. GstGLWindowGPUProcess from Chromium).
This was never ported, and it doesn't look like we want or need it in
this form; the same can now be done with the libgstvideo sample
conversion utility API, but better and in a more flexible way.