Make sure the manifest update loop is stopped before proceeding with
resetting the manifest data. Otherwise the update loop may still try to
use that data, which leads to a segfault.
https://bugzilla.gnome.org/show_bug.cgi?id=783028
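A minimal sketch of the stop-before-reset ordering described above,
assuming the update loop runs on a GstTask; the struct, field names and
reset logic are illustrative, not the actual adaptivedemux code:

    #include <gst/gst.h>

    /* Illustrative state only; not the real adaptivedemux structures. */
    typedef struct {
      GstTask *updates_task;   /* runs the manifest update loop */
      gpointer manifest;       /* manifest data shared with that loop */
    } Demux;

    static void
    demux_reset_manifest (Demux * demux)
    {
      /* Stop and join the update task first, so its loop function can no
       * longer dereference the manifest while (or after) we reset it. */
      gst_task_stop (demux->updates_task);
      gst_task_join (demux->updates_task);

      /* Only now is it safe to drop the manifest data. */
      g_free (demux->manifest);
      demux->manifest = NULL;
    }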
As we release the MANIFEST_LOCK in stop_tasks,
demux->priv->old_streams can be set; we need to free these,
otherwise we may end up trying to dispose of elements in the
READY state.
https://bugzilla.gnome.org/show_bug.cgi?id=783256
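Roughly what the clean-up above amounts to; the helper name and the
per-stream free function are assumptions, not the real code:

    #include <glib.h>

    /* Drop streams that were deferred while the MANIFEST_LOCK was
     * released in stop_tasks(); "free_stream" stands in for whatever
     * actually frees one stream. */
    static void
    clear_old_streams (GList ** old_streams, GDestroyNotify free_stream)
    {
      if (*old_streams != NULL) {
        g_list_free_full (*old_streams, free_stream);
        *old_streams = NULL;
      }
    }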
When an accurate seek is requested on a live stream, only request the
exact value for the "starting position" (i.e. start in forward playback
and stop in reverse playback).
https://bugzilla.gnome.org/show_bug.cgi?id=782698
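A sketch of that intent with assumed names and a simplified range
clamp; only the starting edge keeps the requested value exactly:

    #include <gst/gst.h>

    /* Clamp only the non-starting edge of an accurate live seek to the
     * seekable range; the starting edge (start when rate > 0, stop when
     * rate < 0) keeps the exact requested value. */
    static void
    clamp_accurate_live_seek (gdouble rate, GstClockTime * start,
        GstClockTime * stop, GstClockTime range_start, GstClockTime range_stop)
    {
      if (rate > 0) {
        if (GST_CLOCK_TIME_IS_VALID (*stop))
          *stop = MIN (*stop, range_stop);
      } else {
        if (GST_CLOCK_TIME_IS_VALID (*start))
          *start = MAX (*start, range_start);
      }
    }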
And to make that 100% obvious, only use variables declared within the
switch cases instead of function-wide ones.
Also remove a useless variable that was only used once.
CID #1409857
The live seeking range was only checked when doing actual seeks. This
assumed that the rate would always be 1.0 (i.e. playback would advance
in real time, and therefore fragments would always be available since
the seeking window moves at the same rate).
With non-1.0 rates this assumption no longer holds, so we also need to
check whether we are still within the live seeking range when advancing.
https://bugzilla.gnome.org/show_bug.cgi?id=783075
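The check boils down to something like the sketch below (names
assumed); it would be consulted when advancing to the next fragment
rather than only at seek time:

    #include <gst/gst.h>

    /* TRUE if the next fragment position is still inside the live
     * seeking range [range_start, range_stop].  With rate != 1.0 the
     * playback position can fall behind (or run ahead of) the window,
     * which itself keeps moving in real time. */
    static gboolean
    position_in_live_range (GstClockTime position,
        GstClockTime range_start, GstClockTime range_stop)
    {
      return position >= range_start && position <= range_stop;
    }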
What we want is to retry downloading the fragment on 4xx/5xx errors;
however, returning EOS causes us to wait for a manifest update for live
streams (which may take a really long time) or to stop everything for
non-live ones.
Change that to only return EOS/ERROR once we've reached the error limit.
https://bugzilla.gnome.org/show_bug.cgi?id=776609
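A sketch of that retry policy; the counter, limit and helper are
illustrative, not the actual adaptivedemux fields:

    #include <gst/gst.h>

    #define MAX_DOWNLOAD_ERRORS 3   /* assumed limit */

    /* Called after a failed (4xx/5xx) fragment download.  Returns
     * GST_FLOW_OK while retries remain, so the caller downloads the same
     * fragment again; only once the limit is reached does it propagate
     * the EOS/ERROR the caller would previously have returned at once. */
    static GstFlowReturn
    handle_download_error (guint * error_count, GstFlowReturn final_ret)
    {
      if (++(*error_count) <= MAX_DOWNLOAD_ERRORS)
        return GST_FLOW_OK;

      return final_ret;   /* GST_FLOW_EOS or GST_FLOW_ERROR */
    }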
GL_RGB565 is a sized internal glformat; the corresponding glformat
should be GL_RGB and the type GL_UNSIGNED_SHORT_5_6_5. Otherwise
GL_INVALID_ENUM is raised when creating the texture.
https://bugzilla.gnome.org/show_bug.cgi?id=783066
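For example, the texture allocation has to pair the formats like this
(a GLES3/desktop GL header that accepts sized internal formats is
assumed):

    #include <GLES3/gl3.h>

    /* GL_RGB565 is the sized internal format; the matching external
     * format/type pair is GL_RGB + GL_UNSIGNED_SHORT_5_6_5.  A
     * mismatched external format/type raises GL_INVALID_ENUM here. */
    static void
    alloc_rgb565_texture (GLuint tex, GLsizei width, GLsizei height)
    {
      glBindTexture (GL_TEXTURE_2D, tex);
      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB565, width, height, 0,
          GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);
    }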
This ensures that they really get processed in order with buffers. Just
waiting for the queue to be empty is sometimes not enough, as the
buffers are dropped from the pad before the result is pushed to the
next element, which sometimes results in surprising re-ordering.
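A rough sketch of the ordering idea; that the queued items are
serialized events travelling alongside buffers is an assumption here,
and the handlers are only hinted at in comments:

    #include <gst/gst.h>

    /* Pop the next queued item and dispatch it by type.  Because events
     * travel through the same queue as buffers, they cannot overtake
     * buffers that arrived before them. */
    static void
    process_next_item (GQueue * queue)
    {
      GstMiniObject *item = g_queue_pop_head (queue);

      if (item == NULL)
        return;

      if (GST_IS_BUFFER (item)) {
        /* ... aggregate the buffer ... */
        gst_buffer_unref (GST_BUFFER (item));
      } else if (GST_IS_EVENT (item)) {
        /* ... handle the serialized event only now ... */
        gst_event_unref (GST_EVENT (item));
      }
    }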
In the case where an aggregator is created and pads are requested but
only linked later, we ended up never updating the upstream latency.
This was because latency queries on unlinked pads succeed, so we never
did a new query once a live source had been linked, and the thread was
never started.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
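A sketch of one way to pick up the latency late, e.g. from the pad's
"linked" signal; this is illustrative only, not the actual aggregator
change:

    #include <gst/gst.h>

    static void
    sinkpad_linked_cb (GstPad * pad, GstPad * peer, gpointer user_data)
    {
      GstQuery *query = gst_query_new_latency ();

      /* Now that the pad has a peer, the latency query can actually reach
       * the source; an unlinked pad would have "succeeded" with defaults. */
      if (gst_pad_peer_query (pad, query)) {
        gboolean live;
        GstClockTime min, max;

        gst_query_parse_latency (query, &live, &min, &max);
        if (live) {
          /* ... a live source appeared: (re)configure latency/timeout ... */
        }
      }
      gst_query_unref (query);
    }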
When caps change while streaming, the new caps were getting processed
immediately in videoaggregator, but the next buffer in the queue, the
one that corresponds to these new caps, was not necessarily used
immediately, which sometimes resulted in using an old buffer with the
new caps. There used to be a separate buffer_vinfo for mapping the
buffer with its own caps, but in compositor the GstVideoConverter was
still using the wrong info, which led to invalid reads and corrupt
output.
The approach here is safer: we delay using the new caps until we
actually select the next buffer in the queue for use. This way we also
eliminate the need for buffer_vinfo, since pad->info is always in sync
with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
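A sketch of that deferral; the PadState fields and the
is_first_buffer_of_caps flag are illustrative, not the real
videoaggregator structures:

    #include <gst/video/video.h>

    typedef struct {
      GstVideoInfo info;        /* format of the currently selected buffer */
      GstCaps *pending_caps;    /* caps received on the pad, not yet in use */
      GstBuffer *selected;
    } PadState;

    static void
    select_next_buffer (PadState * pad, GstBuffer * buf,
        gboolean is_first_buffer_of_caps)
    {
      /* Only switch pad->info when the selected buffer is the first one
       * that actually carries the new format. */
      if (pad->pending_caps && is_first_buffer_of_caps) {
        gst_video_info_from_caps (&pad->info, pad->pending_caps);
        gst_caps_replace (&pad->pending_caps, NULL);
      }
      gst_buffer_replace (&pad->selected, buf);
    }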
The function needs to be unlocked if any data is received, but only an
actual buffer should end the first-buffer processing; synchronized
events don't matter for the first-buffer processing.
https://bugzilla.gnome.org/show_bug.cgi?id=781673
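A sketch of the wait loop this describes, with assumed state variables;
any incoming data wakes the waiter, but only a real buffer ends the
first-buffer phase:

    #include <gst/gst.h>

    static void
    wait_for_first_buffer (GMutex * lock, GCond * cond,
        GstMiniObject ** last_item, gboolean * flushing)
    {
      g_mutex_lock (lock);
      while (!*flushing) {
        if (*last_item != NULL && GST_IS_BUFFER (*last_item))
          break;                  /* only an actual buffer ends the wait */
        g_cond_wait (cond, lock); /* woken for events too; keep waiting */
      }
      g_mutex_unlock (lock);
    }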
With the macOS/iOS implementations, the active thread can change
multiple times over the life of a pipeline, which exposes a race in the
thread tracking.
Fix this by taking a ref on the active thread while the context is
active.
https://bugzilla.gnome.org/show_bug.cgi?id=779202
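A sketch of the ref pattern, under the assumption that the active
thread is stored as a GThread; the ContextState struct is illustrative,
not the GstGLContext internals:

    #include <gst/gst.h>

    typedef struct {
      GThread *active_thread;
    } ContextState;

    static void
    context_set_active (ContextState * state, gboolean activate)
    {
      if (activate) {
        /* Hold a ref for as long as the context is active, so the thread
         * tracking cannot race with the thread changing underneath us. */
        state->active_thread = g_thread_ref (g_thread_self ());
      } else if (state->active_thread != NULL) {
        g_thread_unref (state->active_thread);
        state->active_thread = NULL;
      }
    }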