Scenario:
A manifest starts out in live mode, but the recording is then finalized
and a subsequent update changes the state to a non-live manifest once
the server has finished recording/transcoding and has the full
list of fragments.
Without this patch, the manifest update task is never stopped on the
live->non-live transition and will busy loop, burning through one CPU
core.
https://bugzilla.gnome.org/show_bug.cgi?id=786275
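A minimal sketch of the check implied above, as it could look in the
manifest update path; gst_adaptive_demux_is_live() is the existing helper,
while stop_manifest_update_task() stands in for whatever actually stops and
joins the task:

    static void
    after_manifest_update (GstAdaptiveDemux * demux, gboolean was_live)
    {
      if (was_live && !gst_adaptive_demux_is_live (demux)) {
        GST_DEBUG_OBJECT (demux,
            "Manifest transitioned from live to non-live, stopping updates");
        /* hypothetical helper that stops and joins the update task */
        stop_manifest_update_task (demux);
      }
    }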
This commit ensures that the idle probe which GstAdaptiveDemuxStream
adds to the upstream source pad is removed after use. Previously a new
probe was added to the pad whenever a fragment was downloaded, meaning
the number of pad probe callbacks being executed increased continually.
https://bugzilla.gnome.org/show_bug.cgi?id=785957
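A minimal sketch of the self-removing idle probe pattern (function and
variable names are illustrative, not the exact ones in
GstAdaptiveDemuxStream):

    static GstPadProbeReturn
    _src_pad_idle_probe (GstPad * pad, GstPadProbeInfo * info,
        gpointer user_data)
    {
      /* ... per-fragment work done while the upstream source pad is idle ... */

      /* Returning GST_PAD_PROBE_REMOVE detaches this probe once it has run,
       * so the probe added for the next fragment does not pile up on old ones */
      return GST_PAD_PROBE_REMOVE;
    }

    static void
    install_idle_probe (GstPad * upstream_src_pad, gpointer stream)
    {
      /* called once per fragment download */
      gst_pad_add_probe (upstream_src_pad, GST_PAD_PROBE_TYPE_IDLE,
          _src_pad_idle_probe, stream, NULL);
    }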
There can be twice as many stream tasks running as there are output
pads for playback of variant HLS playlists. Half of them are the
current pads, and the other half are the pads that are about to be
switched to due to a bitrate change.
The old code only stopped the current streams, which could result
in a deadlock when stopping the pipeline. The changes force stopping
and joining of any prepared streams too.
https://bugzilla.gnome.org/show_bug.cgi?id=785987
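A minimal sketch of the shutdown path described above; the list and field
names follow GstAdaptiveDemux but are illustrative here:

    static void
    stop_and_join_all_stream_tasks (GstAdaptiveDemux * demux)
    {
      GList *iter;

      /* streams that currently own the exposed pads */
      for (iter = demux->streams; iter; iter = iter->next) {
        GstAdaptiveDemuxStream *stream = iter->data;
        gst_task_stop (stream->download_task);
        gst_task_join (stream->download_task);
      }

      /* previously missed: streams prepared for an upcoming bitrate switch */
      for (iter = demux->prepared_streams; iter; iter = iter->next) {
        GstAdaptiveDemuxStream *stream = iter->data;
        gst_task_stop (stream->download_task);
        gst_task_join (stream->download_task);
      }
    }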
This is a workaround for a regression introduced by
f4190a49c0
(adaptivedemux: Check live seeking range more often)
The goal of the previous commit was to be able to cope with non-1.0
rates on live streams which have a "seeking window" (i.e. the server
keeps around quite a bit of the live stream so you can seek back into
it).
Without that commit, two different kinds of issues would happen:
* When doing reverse playback, you would never check whether you
are outside of the seekable region, and would then continuously
try to download fragments that are no longer present.
* When doing fast forward, you would end up requesting fragments
which are not present yet.
In order to determine whether one was *really* outside of the seekable
window, we check whether the current stream position is still
within the seekable region.
The *problem* though with that commit is that it assumes that subclasses
will return continuously updated seeking ranges (i.e. dependent on the
current time), which is *NOT* the case.
For example:
* dashdemux does use the current UTC to determine the seekable region
* hlsdemux uses the values from the last updated manifest
Therefore, if one downloads fragments faster than realtime with HLS,
we would end up at the end of the last manifest's seekable range, and
the previous commit would consider the stream as ended... which
is not the case.
In the long run, we need to figure out a way to cope with non-1.0
rates on live streams for all types of stream (including HLS).
https://bugzilla.gnome.org/show_bug.cgi?id=783075
This is a race that was exposed by the {hls|dash}.scrub_forward_seeking
validate test.
The "race" is that a subclass might want to change format, causing
a new stream to be created (but not exposed/switched yet) and put on the
prepared_streams list. That stream will have values (including pending
segment) from the pre-seek state.
Before the stream is exposed/switched, a new seek comes in and the current
stream values get updated ... but the prepared streams that are about to
be switched to don't get updated, causing them to push out wrong segments
once they are exposed.
https://bugzilla.gnome.org/show_bug.cgi?id=773159
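A minimal sketch of the fix direction, assuming the seek segment simply
needs to be copied onto the prepared streams as well (helper name and
fields are illustrative):

    static void
    update_segments_on_all_streams (GstAdaptiveDemux * demux,
        const GstSegment * seek_segment)
    {
      GList *iter;

      for (iter = demux->streams; iter; iter = iter->next) {
        GstAdaptiveDemuxStream *stream = iter->data;
        gst_segment_copy_into (seek_segment, &stream->segment);
      }

      /* these would otherwise keep their pre-seek pending segment and push a
       * wrong SEGMENT event once they are exposed after the format switch */
      for (iter = demux->prepared_streams; iter; iter = iter->next) {
        GstAdaptiveDemuxStream *stream = iter->data;
        gst_segment_copy_into (seek_segment, &stream->segment);
      }
    }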
Adaptive demuxers are special demuxers that run their own sources
internally. In this patch we flag the demuxer as being a source in order
to receive the downstream events. We then handle the EOS event by
resetting the internal state and pushing EOS on all pads. This handling
is done asynchronously to avoid blocking the user's thread.
https://bugzilla.gnome.org/show_bug.cgi?id=723868
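A minimal sketch of the flagging part; GstBin's default send_event()
forwards downstream events (such as EOS) to elements flagged as sources.
Where exactly the flag is set is an assumption here:

    static void
    gst_adaptive_demux_init (GstAdaptiveDemux * demux)
    {
      /* ... existing init ... */

      /* receive downstream events (e.g. EOS) sent to the containing bin */
      GST_OBJECT_FLAG_SET (demux, GST_ELEMENT_FLAG_SOURCE);
    }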
A previous commit let the demuxer call gst_uri_downloader_cancel() in
_demux_reset(). Note that _demux_reset() is called during both the
PAUSED_TO_READY and READY_TO_PAUSED transitions, and it sets "cancelled"
on the uridownloader, which blocks any further use of it. The issue is
that a subclass can use the uridownloader not only for manifest updates
during live streaming, but also for fetching other manifests such as the
variant and rendition m3u8 playlists of an HLS stream. So, to unblock it,
the demuxer should clear "cancelled" before processing the initial manifest.
https://bugzilla.gnome.org/show_bug.cgi?id=783401
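A minimal sketch, assuming gst_uri_downloader_reset() is the call that
clears the "cancelled" flag set by gst_uri_downloader_cancel():

    static void
    unblock_downloader_for_initial_manifest (GstAdaptiveDemux * demux)
    {
      /* make the uridownloader usable again before the subclass fetches
       * e.g. variant/rendition m3u8 playlists from the initial manifest */
      gst_uri_downloader_reset (demux->downloader);
    }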
Release the MANIFEST_LOCK before broadcasting preroll.
The deadlock was as follows:
-> The subclass pushes a buffer on a newly-created stream in T1
-> We take the preroll lock in T1, to handle_preroll
-> The demuxer is stopped in T2, we take the MANIFEST_LOCK
-> T1 starts blocking because it received a reconfigure event
and needs to take the MANIFEST_LOCK
-> T2 deadlocks because it now wants the preroll_lock.
https://bugzilla.gnome.org/show_bug.cgi?id=783255
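One way to break that cycle, as a sketch (macro and field names follow the
adaptivedemux internals but are illustrative): do not hold the MANIFEST_LOCK
while taking the preroll lock to broadcast.

    static void
    wake_up_prerolling_streams (GstAdaptiveDemux * demux)
    {
      /* holding the MANIFEST_LOCK while contending for the preroll lock is
       * exactly the T1/T2 cycle described above */
      GST_MANIFEST_UNLOCK (demux);

      g_mutex_lock (&demux->priv->preroll_lock);
      g_cond_broadcast (&demux->priv->preroll_cond);
      g_mutex_unlock (&demux->priv->preroll_lock);

      GST_MANIFEST_LOCK (demux);
    }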
Make sure the manifest update loop is stopped before proceeding with the
resetting of the manifest data. Otherwise, the update loop will try to
use it, which leads to a segfault.
https://bugzilla.gnome.org/show_bug.cgi?id=783028
As we release the MANIFEST_LOCK in stop_tasks,
demux->priv->old_streams can be set; we need to free these,
otherwise we may end up trying to dispose of elements in the
READY state.
https://bugzilla.gnome.org/show_bug.cgi?id=783256
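A minimal sketch of the cleanup, assuming gst_adaptive_demux_stream_free()
is the existing per-stream cleanup helper:

    static void
    free_old_streams (GstAdaptiveDemux * demux)
    {
      /* streams queued on old_streams while the MANIFEST_LOCK was released */
      g_list_free_full (demux->priv->old_streams,
          (GDestroyNotify) gst_adaptive_demux_stream_free);
      demux->priv->old_streams = NULL;
    }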
When an accurate seek is requested on a live stream, only request the
exact value for the "starting position" (i.e. start in forward playback
and stop in reverse playback).
https://bugzilla.gnome.org/show_bug.cgi?id=782698
The live seeking range was only checked when doing actual seeks. This was
assuming that the rate would always be 1.0 (i.e. the playback would
advance in realtime, and therefore fragments would always be available
since the seeking window moves at the same rate).
With non-1.0 rates, this is no longer valid, and therefore we need
to check whether we are still within the live seeking range when advancing.
https://bugzilla.gnome.org/show_bug.cgi?id=783075
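A minimal sketch of the check done when advancing, assuming a helper that
wraps the subclass's get_live_seek_range() vfunc:

    static gboolean
    position_within_live_window (GstAdaptiveDemux * demux,
        GstClockTime position)
    {
      gint64 range_start, range_stop;

      if (!gst_adaptive_demux_is_live (demux))
        return TRUE;

      /* illustrative wrapper around the subclass's get_live_seek_range() */
      if (!gst_adaptive_demux_get_live_seek_range (demux, &range_start,
              &range_stop))
        return TRUE;

      return position >= (GstClockTime) range_start &&
          position <= (GstClockTime) range_stop;
    }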
What we want is to retry downloading the fragment on 4xx/5xx errors.
However, returning EOS will cause us to wait for a manifest update for live
streams (which may take a really long time) or stop everything for non-live.
Change that to only return EOS/ERROR once we've reached the error limit.
https://bugzilla.gnome.org/show_bug.cgi?id=776609
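A minimal sketch of the error handling, with illustrative names; the
fragment download is retried until a configured failure limit is reached:

    #define MAX_DOWNLOAD_ERROR_COUNT 3      /* illustrative limit */

    static GstFlowReturn
    on_fragment_download_error (GstAdaptiveDemuxStream * stream,
        GstFlowReturn error_ret)
    {
      stream->download_error_count++;

      if (stream->download_error_count <= MAX_DOWNLOAD_ERROR_COUNT) {
        /* retry the same fragment (for live, after the next manifest update) */
        return GST_FLOW_OK;
      }

      /* only now propagate EOS/ERROR */
      return error_ret;
    }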
This commit fixes the following assumptions with live seeking:
1) start was always valid and of type GST_SEEK_TYPE_SET
2) direction was always forward
3) stop should be offset when handling non-accurate seeks before
the range start position.
In order to handle more live seeking use-cases (including reverse playback),
only do non-accurate start/stop value clamping for GST_SEEK_TYPE_SET values.
Also add a few more debug lines to help diagnose issues.
https://bugzilla.gnome.org/show_bug.cgi?id=782330
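A minimal sketch of the clamping rule, with illustrative names;
range_start/range_stop are the values reported by the subclass:

    static void
    clamp_non_accurate_live_seek (GstSeekFlags flags,
        GstSeekType start_type, gint64 * start,
        GstSeekType stop_type, gint64 * stop,
        gint64 range_start, gint64 range_stop)
    {
      if (flags & GST_SEEK_FLAG_ACCURATE)
        return;                 /* accurate seeks keep the requested values */

      /* only clamp values that were actually given as absolute positions */
      if (start_type == GST_SEEK_TYPE_SET && *start < range_start)
        *start = range_start;
      if (stop_type == GST_SEEK_TYPE_SET && *stop > range_stop)
        *stop = range_stop;
    }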
When dealing with live streams, we can't rely on GstSegment calculation
since it uses the segment duration to calculate the absolute values.
But since we are dealing with live *and* we know the ranges, we can
compute the absolute seeking values using the range stop (i.e. "now")
as the END position.
Allows seeking back to "live" by using start_type:GST_SEEK_TYPE_END
and start:0
https://bugzilla.gnome.org/show_bug.cgi?id=782228
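An application-side usage sketch; "pipeline" is whatever element the
application performs the seek on:

    static gboolean
    seek_back_to_live (GstElement * pipeline)
    {
      /* 0 relative to the END of the live seek range, i.e. "now" */
      return gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME,
          GST_SEEK_FLAG_FLUSH,
          GST_SEEK_TYPE_END, 0,
          GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);
    }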
The allowed live seek ranges returned by subclasses are "inclusive", that is
to say that the "range_stop" value they return is the highest acceptable position
one can seek to (i.e. "now").
Allow seeking to exactly that value
Rationale is to allow the manifest update task to continue running while
seeks are occurring. Otherwise, if the user reliably performs a seek
before the manifest is updated, then, since the manifest update task (and
thus the time to wait before the next manifest update) is reset on every
seek, the manifest would never be updated.
This fix makes the manifest update task free-running so that it continues
to update even during seeks.
Some actions (QoS, reconfigure, ...) might take place before we finish
pushing out flush_start.
One problem would be that:
1) The QoS handling in adaptivedemux takes the MANIFEST_LOCK, and that
QoS event comes from basesink with its PREROLL_LOCK taken.
2) FLUSH_START is sent from adaptivedemux with the MANIFEST_LOCK taken,
and the basesink flushing handler needs to take the PREROLL_LOCK.
=> deadlock
https://bugzilla.gnome.org/show_bug.cgi?id=781320
By the time the demuxer is waiting for a manifest update, the target
fragment sequence has already been advanced. So checking
stream_has_next_fragment() means looking for the fragment after the
target fragment. This might cause unexpected buffering if each fragment
has a large duration and the manifest only lists a limited number of
fragments.
https://bugzilla.gnome.org/show_bug.cgi?id=780494
When there are new pads pending for a bitrate switch, don't allow
EOS through from the old streams. It will be sent when the new pads are
ready, just before the old streams are removed.
This fixes racy bitrate switching with hlsdemux in urisourcebin,
where old pads EOS before the new pads appear and the entire pipeline
can EOS if those events propagate fast enough.
For duration queries on live streams, adaptivedemux ignores the query.
The problem then is that the query is answered by the downstream
qtdemux element, with the duration of the currently passing fragment.
This commit changes the behaviour of adaptivedemux to answer the duration
queries for live streams, returning GST_CLOCK_TIME_NONE.
https://bugzilla.gnome.org/show_bug.cgi?id=753879
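A minimal sketch of the new query handling, using the existing
gst_adaptive_demux_is_live() helper:

    static gboolean
    handle_duration_query (GstAdaptiveDemux * demux, GstQuery * query)
    {
      GstFormat fmt;

      gst_query_parse_duration (query, &fmt, NULL);

      if (fmt == GST_FORMAT_TIME && gst_adaptive_demux_is_live (demux)) {
        /* live: the duration is unknown, say so instead of letting a
         * downstream demuxer answer with a single fragment's duration */
        gst_query_set_duration (query, GST_FORMAT_TIME, GST_CLOCK_TIME_NONE);
        return TRUE;
      }

      return FALSE;             /* fall back to the default handling */
    }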
If we need to send EOS on a pad that hasn't prerolled, generate
an error on the bus instead, otherwise the app will have no idea.
Fixes the HLS testFragmentNotFound test, which is waiting
for either EOS or an error.
To ensure that pads have caps when they are exposed, do
the exposing when all pending streams have prerolled an
output buffer, and only then EOS and remove any old pads.
Improves the switching sequence by making caps available
as soon as a pad appears.
With fixes from Seungha Yang <sh.yang@lge.com>
https://bugzilla.gnome.org/show_bug.cgi?id=758257
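A minimal sketch of the EOS-or-error decision; the "prerolled" flag name
is illustrative:

    static void
    stream_end_of_stream (GstAdaptiveDemux * demux,
        GstAdaptiveDemuxStream * stream)
    {
      if (!stream->prerolled) {
        /* the pad never produced a buffer, so the sink will never preroll
         * and the app would wait forever; post an error on the bus instead */
        GST_ELEMENT_ERROR (demux, STREAM, FAILED,
            ("Stream ended before any data could be produced"), (NULL));
        return;
      }

      gst_pad_push_event (stream->pad, gst_event_new_eos ());
    }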
send_event() of the parent class (i.e., GstBinClass) iterates over the
srcpads to send the SEEK event, and performing the seek once per srcpad
is inefficient. So let's drop duplicated SEEK events by checking the seqnum.
https://bugzilla.gnome.org/show_bug.cgi?id=776612
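A minimal sketch of the seqnum check; the stored field name is illustrative:

    static gboolean
    demux_handle_seek (GstAdaptiveDemux * demux, GstEvent * seek_event)
    {
      guint32 seqnum = gst_event_get_seqnum (seek_event);

      if (seqnum == demux->priv->last_seek_seqnum) {
        GST_DEBUG_OBJECT (demux, "Dropping duplicated SEEK event (seqnum %u)",
            seqnum);
        return TRUE;
      }

      demux->priv->last_seek_seqnum = seqnum;
      /* ... perform the actual seek once ... */
      return TRUE;
    }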
The reason we previously used queue2 was to calculate the download rate,
but that wasn't entirely correct, so we now calculate it before the
queue. We therefore just need a simple queue.
When an MSS server hosts a live stream, the fragments listed in the
manifest usually don't have accurate timestamps and duration, except
for the first fragment, which additionally stores timing information
for the few upcoming fragments. In this scenario it is useless to
periodically fetch and update the manifest and the fragments list can
be incrementally built by parsing the first/current fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=755036
This changes the failure case to require a number of consecutive
failures, rather than failures spread out over the entire stream.
Fixes the case where fetching the manifest failed only intermittently.
https://bugzilla.gnome.org/show_bug.cgi?id=774177
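A minimal sketch of the counting change, with an illustrative field name;
the counter is cleared on every successful download so that only
consecutive failures accumulate:

    static void
    track_download_result (GstAdaptiveDemuxStream * stream, gboolean succeeded)
    {
      if (succeeded)
        stream->download_error_count = 0;   /* intermittent failures no longer add up */
      else
        stream->download_error_count++;
    }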