This is a workaround for a regression introduced by
f4190a49c0
(adaptivedemux: Check live seeking range more often)
The goal of the previous commit was to be able to cope with non-1.0
rates on live streams which have a "seeking window" (i.e. the server
keeps around quite a bit of the live stream so you can seek back into
it).
Without that commit, two different kinds of issues would happen:
* When doing reverse playback, you would never check whether you
are outside of the seekable region, and would then continuously
try to download fragments that are no longer present.
* When doing fast forward, you would end up requesting fragments
which are not present yet.
In order to determine whether one was *really* outside of the seekable
window, we check whether the current stream position is still
within the seekable region.
The *problem* though with that commit is that it assumes that subclasses
will return continuously updated seeking ranges (i.e. dependent on the
current time), which is *NOT* the case.
For example:
* dashdemux does use the current UTC to determine the seekable region
* hlsdemux uses the values from the last updated manifest
Therefore, if one downloads fragments faster than realtime, with HLS
we would end up at the end of the last manifest's seekable range, and
the previous commit would consider the stream to have ended... which
is not the case.
In the long run, we need to figure out a way to cope with non-1.0
rates on live streams for all types of stream (including HLS).
https://bugzilla.gnome.org/show_bug.cgi?id=783075
This is a race that was exposed by the {hls|dash}.scrub_forward_seeking
validate test.
The "race" is that a subclass might want to change format, causing
a new stream to be created (but not exposed/switched yet) and put on the
prepared_streams list. That stream will have values (including pending
segment) from the pre-seek state.
Before the stream is exposed/switched, a new seek comes in and the values
of the already-exposed streams get updated ... but the prepared (not yet
exposed) ones don't, causing them to push out wrong segments once they
are exposed.
https://bugzilla.gnome.org/show_bug.cgi?id=773159
Add a function to install the default RGBA pad templates,
but don't make them required so that there can be
GstGLFilter sub-classes with different input/output
caps if they want. Remove the hard-coded RGBA restriction in
the set_caps_features call, as it will be taken care
of by intersecting with the pad templates.
Update all the sub-classes to match
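As a rough illustration, a sub-class that keeps the default RGBA caps might
now look like this in its class_init. This is only a sketch, assuming the
helper is exposed as gst_gl_filter_add_rgba_pad_templates(); GstGLMyEffect
is a hypothetical sub-class used purely for illustration:

    #include <gst/gl/gl.h>

    /* Hypothetical sub-class used only for illustration. */
    typedef GstGLFilter GstGLMyEffect;
    typedef GstGLFilterClass GstGLMyEffectClass;

    G_DEFINE_TYPE (GstGLMyEffect, gst_gl_my_effect, GST_TYPE_GL_FILTER);

    static void
    gst_gl_my_effect_class_init (GstGLMyEffectClass * klass)
    {
      /* Default case: install the stock RGBA sink/src pad templates. */
      gst_gl_filter_add_rgba_pad_templates (GST_GL_FILTER_CLASS (klass));

      /* A sub-class with different input/output caps would instead add its
       * own templates with gst_element_class_add_pad_template(); the
       * intersection with those templates replaces the old hard-coded RGBA
       * restriction in set_caps_features. */
    }

    static void
    gst_gl_my_effect_init (GstGLMyEffect * self)
    {
    }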
Adaptive demuxers are special demuxers that run their own sources
internally. In this patch we flag the demuxer as being a source in order
to receive the downstream events. We then handle the EOS event by
resetting the internal state and pushing EOS on all pads. This handling
is done asynchronously to avoid blocking the user's thread.
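A rough sketch of the idea, not the actual adaptivedemux code
(handle_eos_async() and demux_send_event() are illustrative names;
gst_element_call_async() and GST_ELEMENT_FLAG_SOURCE are the standard
GStreamer APIs being relied on):

    #include <gst/gst.h>

    /* Runs on a helper thread: reset internal state and push EOS on all
     * source pads (details omitted in this sketch). */
    static void
    handle_eos_async (GstElement * demux, gpointer user_data)
    {
      /* reset internal state, then push GST_EVENT_EOS on every src pad */
    }

    static gboolean
    demux_send_event (GstElement * demux, GstEvent * event)
    {
      if (GST_EVENT_TYPE (event) == GST_EVENT_EOS) {
        /* Defer the work so the application's thread is not blocked. */
        gst_element_call_async (demux, handle_eos_async, NULL, NULL);
        gst_event_unref (event);
        return TRUE;
      }

      /* other events would be chained up to the parent class here */
      gst_event_unref (event);
      return FALSE;
    }

    static void
    demux_constructed (GstElement * demux)
    {
      /* Being flagged as a source makes downstream events (EOS, ...) reach
       * the element's send_event implementation. */
      GST_OBJECT_FLAG_SET (demux, GST_ELEMENT_FLAG_SOURCE);
    }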
https://bugzilla.gnome.org/show_bug.cgi?id=723868
When the pad has received EOS, its buffer may still be mixed
any number of times when the pad's framerate is lower
than the output framerate.
This was introduced by my patch in
https://bugzilla.gnome.org/show_bug.cgi?id=782962; this patch
also correctly addresses the initial issue.
On the Raspberry Pi, no pkg-config file is provided for the bcm_host
library. We are using AC_CHECK_LIB to detect this lib with autotools;
cc.find_library() is the closest meson equivalent.
https://bugzilla.gnome.org/show_bug.cgi?id=784026
We have to pass the "height" as height = vmeta->offset[1] / width to the
API, which of course does not work well for formats with only a single
plane. Use the whole memory size instead of the offset in that case.
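A minimal sketch of that calculation (plane0_height() and its arguments are
illustrative; only vmeta->offset[1] / width and the fall-back to the memory
size come from the change itself):

    #include <gst/video/video.h>

    /* Sketch: the "height" the API expects is the number of lines in the
     * first plane. For single-plane formats offset[1] is meaningless, so
     * use the size of the backing memory instead. */
    static guint
    plane0_height (GstVideoMeta * vmeta, GstMemory * mem, guint width)
    {
      if (vmeta->n_planes > 1)
        return vmeta->offset[1] / width;

      return gst_memory_get_sizes (mem, NULL, NULL) / width;
    }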
The previous commit let demux call gst_uri_downloader_cancel() in _demux_reset().
Note that _demux_reset() is called during the PAUSED_TO_READY and READY_TO_PAUSED
transitions, and it sets "cancelled" on the uridownloader, which blocks any
further use of it. The issue is that subclasses can use the uridownloader not
only for manifest updates during live streaming, but also for fetching other
manifests, such as the variant and rendition m3u8 files of an HLS stream. So to
unblock it, demux should clear "cancelled" before processing the initial manifest.
https://bugzilla.gnome.org/show_bug.cgi?id=783401
before broadcasting preroll.
The deadlock was as follows:
-> The subclass pushes a buffer on a newly-created stream in T1
-> We take the preroll lock in T1, to handle_preroll
-> The demuxer is stopped in T2, we take the MANIFEST_LOCK
-> T1 starts blocking because it received a reconfigure event
and needs to take the MANIFEST_LOCK
-> T2 deadlocks because it now wants the preroll_lock.
https://bugzilla.gnome.org/show_bug.cgi?id=783255
Make sure the manifest update loop is stopped before proceeding with the
resetting of the manifest data. Otherwise, the update loop will try to
use it, which leads to a segfault.
https://bugzilla.gnome.org/show_bug.cgi?id=783028
As we release the MANIFEST_LOCK in stop_tasks,
demux->priv->old_streams can be set; we need to free these,
otherwise we may end up trying to dispose of elements in the
READY state.
https://bugzilla.gnome.org/show_bug.cgi?id=783256
When an accurate seek is requested on a live stream, only request the
exact value for the "starting position" (i.e. start in forward playback
and stop in reverse playback).
https://bugzilla.gnome.org/show_bug.cgi?id=782698
And to make that 100% obvious, only use variables declared within the
switch cases instead of function-wide ones.
Also remove a useless one-time-use-only variable.
CID #1409857
The live seeking range was only checked when doing actual seeks. This was
assuming that the rate would always be 1.0 (i.e. the playback would
advance in realtime, and therefore fragments would always be available
since the seeking window moves at the same rate).
With non-1.0 rates, this assumption no longer holds, and therefore we need
to check whether we are still within the live seeking range when advancing.
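As a rough illustration of the added check (names are placeholders, not the
actual adaptivedemux code):

    #include <gst/gst.h>

    /* Sketch: with rate != 1.0 the live seeking window can run away from
     * the playback position, so re-check it whenever a fragment is advanced. */
    static gboolean
    position_still_in_live_range (gdouble rate, GstClockTime position,
        GstClockTime range_start, GstClockTime range_stop)
    {
      if (rate < 0 && position < range_start)
        return FALSE;           /* reverse playback fell behind the window */
      if (rate > 0 && position > range_stop)
        return FALSE;           /* fast forward ran past "now" */
      return TRUE;
    }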
https://bugzilla.gnome.org/show_bug.cgi?id=783075
What we want is to retry downloading the fragment on 4xx/5xx errors;
however, returning EOS causes us to wait for a manifest update for live
streams (which may take a really long time) or stops everything for non-live.
Change that to only return EOS/ERROR once we've reached the error limit.
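A minimal sketch of that behaviour (MAX_DOWNLOAD_ERROR_COUNT, the function
name and the use of GST_FLOW_CUSTOM_ERROR as a "retry" value are
illustrative only):

    #include <gst/gst.h>

    #define MAX_DOWNLOAD_ERROR_COUNT 3      /* illustrative limit */

    /* Called when a fragment download failed with a 4xx/5xx error. */
    static GstFlowReturn
    handle_fragment_download_error (guint * error_count)
    {
      *error_count += 1;

      if (*error_count < MAX_DOWNLOAD_ERROR_COUNT)
        return GST_FLOW_CUSTOM_ERROR;       /* keep retrying the fragment */

      /* Only now give up: EOS means waiting for a manifest update on live
       * streams, or the end of the stream for non-live ones. */
      return GST_FLOW_EOS;
    }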
https://bugzilla.gnome.org/show_bug.cgi?id=776609
GL_RGB565 is a sized internal format; the corresponding format
should be GL_RGB and the type GL_UNSIGNED_SHORT_5_6_5. Otherwise
GL_INVALID_ENUM is returned when creating the texture.
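For reference, a minimal sketch of a valid combination (the helper function
itself is only an illustration):

    #include <GLES3/gl3.h>

    /* With the sized internal format GL_RGB565, the matching client format
     * is GL_RGB and the type is GL_UNSIGNED_SHORT_5_6_5; any other
     * combination yields GL_INVALID_ENUM. */
    static GLuint
    create_rgb565_texture (GLsizei width, GLsizei height)
    {
      GLuint tex;

      glGenTextures (1, &tex);
      glBindTexture (GL_TEXTURE_2D, tex);
      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB565, width, height, 0,
          GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);

      return tex;
    }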
https://bugzilla.gnome.org/show_bug.cgi?id=783066
This ensures that they really get processed in order with
buffers. Just waiting for the queue to be empty is sometimes not
enough as the buffers are dropped from the pad before the result is
pushed to the next element, sometimes resulting in surprising
re-ordering.
In the case where an aggregator is created and pads are requested but only
linked later, we would end up never updating the upstream latency.
This was because latency queries on pads that are not linked succeed,
so we never did a new query once a live source had been linked, and thus
the thread was never started.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
When caps change while streaming, the new caps were getting processed
immediately in videoaggregator, but the next buffer in the queue that
corresponds to these new caps was not necessarily used immediately,
which sometimes resulted in using an old buffer with the new caps. There
used to be a separate buffer_vinfo for mapping the buffer with its
own caps, but in compositor the GstVideoConverter was still using the
wrong info, which resulted in invalid reads and corrupt output.
The approach here is safer: we delay using the new caps
until we actually select the next buffer in the queue for use.
This way we also eliminate the need for buffer_vinfo, since
pad->info is always in sync with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
The function needs to be unlocked if any data is received, but
first-buffer processing should only end on an actual buffer; synchronized
events don't matter for the first-buffer processing.
https://bugzilla.gnome.org/show_bug.cgi?id=781673
With the macOS/iOS implementations, the active thread can change
multiple times over the life of a pipeline which would expose a race in
the thread tracking.
Fix by taking a ref on the active thread while the context is active.
https://bugzilla.gnome.org/show_bug.cgi?id=779202
Otherwise fall back to glDrawBuffers. Also check if glReadBuffer exists
before using it.
glDrawBuffer does not exist for GLES, only glDrawBuffers does.
https://bugzilla.gnome.org/show_bug.cgi?id=782376
This commit fixes the following assumptions with live seeking:
1) start was always valid and of type GST_SEEK_TYPE_SET
2) direction was always forward
3) stop should be offset when handling non-accurate seeks before
the range start position.
In order to handle more live seeking use-cases (including reverse playback),
only do non-accurate start/stop value clamping for GST_SEEK_TYPE_SET values.
Also add a bit more debug output to help diagnose such issues.
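A rough sketch of the new rule for non-accurate seeks (the helper name and
the CLAMP-based formulation are illustrative; range_start/range_stop are the
live seek range):

    #include <gst/gst.h>

    /* Only clamp values the caller actually provided as absolute positions;
     * GST_SEEK_TYPE_NONE/END values are left for later resolution. */
    static void
    clamp_live_seek_value (GstSeekType type, gint64 * value,
        gint64 range_start, gint64 range_stop)
    {
      if (type != GST_SEEK_TYPE_SET)
        return;

      *value = CLAMP (*value, range_start, range_stop);
    }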
https://bugzilla.gnome.org/show_bug.cgi?id=782330
When dealing with live streams, we can't rely on GstSegment calculation
since it uses the segment duration to calculate the absolute values.
But since we are dealing with live *and* we know the ranges, we can
compute the absolute seeking values using the range stop (i.e. "now")
as the END position.
This allows seeking back to "live" by using start_type GST_SEEK_TYPE_END
and start 0.
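From an application this could look like the following sketch (the element
the seek is sent to and the seek flags are up to the application):

    #include <gst/gst.h>

    /* Jump back to the live position, i.e. the end of the seekable range. */
    static gboolean
    seek_back_to_live (GstElement * pipeline)
    {
      return gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME,
          GST_SEEK_FLAG_FLUSH, GST_SEEK_TYPE_END, 0,
          GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);
    }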
https://bugzilla.gnome.org/show_bug.cgi?id=782228
The allowed live seek ranges returned by subclasses are "inclusive", that is
to say that the "range_stop" value they return is the highest acceptable position
one can seek to (i.e. "now").
Allow seeking to exactly that value
Rationale is to allow the manifest update task to continue running while
seeks are occurring. Otherwise, if the user reliably performs a seek
before the manifest is updated, then as the manifest task is reset on
seeks (and thus the time to wait between manifest updates), the manifest
would never be updated.
This fix makes the manifest update task free-running, continuously
updating even during seeks.
Some actions (QoS, reconfigure, ...) might take place before we finish pushing out flush_start.
One problem would be that:
1) The QoS handling in adaptivedemux takes the MANIFEST_LOCK.
   That QoS event comes from basesink with its PREROLL_LOCK taken.
2) FLUSH_START is sent from adaptivedemux with the MANIFEST_LOCK taken, and the basesink flushing handler needs to take the PREROLL_LOCK.
=> deadlock
https://bugzilla.gnome.org/show_bug.cgi?id=781320
meson's configure_file emits only a comment like /* #undef ... */
for values which are unset in the configuration_data. For
gstglconfig.h, this differs from the autotools build where the
preprocessor definitions are always either 0 or 1. So loop over a
list of variables and set them to 0 by default.
Also sync up the gstglconfig.h.meson file with the additional
macros defined by the autotools build.
https://bugzilla.gnome.org/show_bug.cgi?id=781043
Windows aren't always removed in time, and it turns out to be
very, very hard to remove a window in a way that's not racy and
not deadlocky. Since the window itself doesn't leak, freeing
the list on object destruction is enough.
https://bugzilla.gnome.org/show_bug.cgi?id=781018