When an MSS server hosts a live stream, the fragments listed in the
manifest usually don't have accurate timestamps and durations, except
for the first fragment, which additionally stores timing information
for the few upcoming fragments. In this scenario it is useless to
periodically fetch and update the manifest, and the fragment list can
instead be built incrementally by parsing the first/current fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=755036
This changes the failure case to require a number of consecutive
failures, rather than counting failures spread out over the entire
stream. Fixes the case where fetching the manifest failed only
intermittently.
https://bugzilla.gnome.org/show_bug.cgi?id=774177
Formats that need to update the manifest to learn about new fragments
as they're being written by the server would never receive an updated
fragment list after a seek event.
https://bugzilla.gnome.org/show_bug.cgi?id=774177
With MSVC, this gives the following warning:
warning C4305: 'function': truncation from 'double' to 'gfloat'
Apparently, MSVC does not figure out what type to use for constants
based on the assignment. This warning is very spammy, so let's try to
fix it.
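A minimal illustration of the pattern and the fix (hypothetical
variable names); an explicit 'f' suffix types the constant as float
instead of double:

    /* Before: an untyped floating point constant is a double, so
     * assigning it to a gfloat triggers C4305 with MSVC. */
    gfloat threshold = 0.5;        /* warning C4305 */

    /* After: the 'f' suffix makes the constant a float. */
    gfloat threshold_ok = 0.5f;    /* no warning */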
In order to calculate the *actual* bitrate for downloading a fragment
we need to take into account the time since we requested the fragment.
Without this, the bitrate calculations (previously reported by queue2)
would be biased since they wouldn't take into account the request latency
(that is, the time between the moment we request a specific URI and
the moment we receive the first byte of that request).
An example where it would be biased is a high-bandwidth but
high-latency network. If you download 5MB in 500ms, but it takes 200ms
to get the first byte, queue2 would report 80Mbit/s (5MB in 500ms),
but taking the request into account it is only 57Mbit/s (5MB in 700ms).
While this would not cause too many issues if the above fragment
represented a much longer duration (5s of content), it would cause
issues with short ones (say 1s, or keyframe-only requests which are
even shorter), where the code would expect to be able to download up
to 80Mbit/s whereas, taking the request time into account, the
achievable rate is much lower (and we would therefore end up making
requests too late).
Also calculate the request latency for debugging purposes and further
usage (it could allow us to figure out the maximum request rate for
example).
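A rough sketch of the calculation, with illustrative variable names
(not the actual fields used in the code):

    /* Measure from the moment the request was issued, not from the
     * first received byte, so the request latency is included. */
    GstClockTime request_latency = first_byte_time - request_time;
    GstClockTime download_time = last_byte_time - request_time;
    guint64 bits = bytes_downloaded * 8;
    /* e.g. 5MB in 700ms (500ms transfer + 200ms latency) ~ 57Mbit/s */
    guint64 bitrate = gst_util_uint64_scale (bits, GST_SECOND,
        download_time);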
https://bugzilla.gnome.org/show_bug.cgi?id=733959
https://bugzilla.gnome.org/show_bug.cgi?id=772330
Also scale the bitrate with the absolute playback rate (if it's bigger
than 1.0) to get the real bitrate required by the faster playback.
In my tests this allowed playing a stream at 10x speed without
buffering, as the lowest bitrate is chosen, instead of staying
on/selecting the highest bitrate and then buffering all the time.
It was previously disabled for reasons that were never well specified,
and which seem no longer valid nowadays.
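A minimal sketch of the scaling, assuming illustrative names; a
variant with nominal bitrate B is consumed at B * |rate| when playing
faster than realtime:

    gdouble rate = ABS (demux->segment.rate);
    if (rate > 1.0) {
      /* weight the variant's bitrate by the playback rate before
       * comparing it against the measured bandwidth */
      required_bitrate = (guint64) (variant_bitrate * rate);
    }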
Prevent the manifest update loop from looping endlessly
after a seek event, by clearing the variable that tells
the task function not to immediately exit.
The new streams should not be exposed until all streams are done with the
current fragment. The old code is incorrect and actually only checked the
current stream. Fix this by properly checking all streams.
Also, ignore the current stream. The code is only reached when the
current stream has finished downloading and, since commit
07f49f15b1 ("adaptivedemux: On EOS, handle it before waking download
loop"), download_finished is set after
gst_adaptive_demux_stream_advance_fragment_unlocked() is called.
Without this HLS playback with multiple streams is broken, because the new
streams are never exposed.
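A sketch of the corrected check (names are illustrative): iterate over
all streams and bail out while any other stream is still busy with its
current fragment:

    GList *iter;
    for (iter = demux->streams; iter; iter = g_list_next (iter)) {
      GstAdaptiveDemuxStream *other = iter->data;
      /* the current stream is known to be done, skip it */
      if (other != stream && !other->download_finished)
        return;                 /* wait for the last stream to finish */
    }
    /* every stream finished its fragment: expose the new streams */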
https://bugzilla.gnome.org/show_bug.cgi?id=770075
This allows gradually downloading part of a fragment when the final
size is not known in advance and only part of it should be downloaded,
for example when only the moof should be parsed and/or a single
keyframe should be downloaded.
https://bugzilla.gnome.org/show_bug.cgi?id=741104
This helps catch those 404 server errors in live streams when
seeking to the very beginning, as the server will handle a
request with some delay, which can cause it to drop the fragment
before sending it.
https://bugzilla.gnome.org/show_bug.cgi?id=753751
To allow adaptivedemux to make retry decisions, it needs to know what
sort of HTTP error has occurred. For example, the retry logic for a
410 error is different from a 504 error.
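As a hedged sketch of the kind of distinction this enables (the
status handling below is illustrative, not the actual policy):

    /* Decide whether retrying the same URI can help. */
    static gboolean
    should_retry_same_uri (guint http_status)
    {
      switch (http_status) {
        case 410:              /* Gone: refresh the manifest instead */
          return FALSE;
        case 504:              /* Gateway Timeout: transient, retry */
          return TRUE;
        default:               /* treat other 5xx errors as transient */
          return http_status >= 500;
      }
    }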
https://bugzilla.gnome.org/show_bug.cgi?id=753751
Some derived classes (at least dashdemux) expose a seeking range
based on wall clock. This means that a subsequent seek to the start
of this range will be before the allowed range.
To solve this, seeks without the ACCURATE flag are allowed to seek
before the start for live streams, in which case the segment is
shifted to start at the start of the new seek range. If there is
an end position, it is shifted too, to keep the duration constant.
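A sketch of the shift, with illustrative variable names:

    if (!(flags & GST_SEEK_FLAG_ACCURATE) && is_live &&
        start < range_start) {
      GstClockTime shift = range_start - start;
      start += shift;               /* clamp to the seekable range */
      if (stop != GST_CLOCK_TIME_NONE)
        stop += shift;              /* keep the duration constant */
    }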
https://bugzilla.gnome.org/show_bug.cgi?id=753751
Make state changes of internal elements more reliable by locking
their state, and ensuring that they aren't blocked pushing data
downstream before trying to set their state.
Add a boolean to avoid starting tasks when the main
thread is busy trying to shut the element down.
Try harder to make switching pads work better by
making sure concurrent downloads are finished before exposing
a new set of pads.
Release the manifest lock when signalling no-more-pads, as
that can call back into adaptivedemux again
If other stream fragments are still downloading but new streams
have been scheduled, don't expose them yet - wait until the last
one finishes. Otherwise, we can cancel a partially downloaded
auxiliary stream and cause a gap.
Drop the manifest lock when performing actions that might
call back into adaptivedemux and trigger deadlocks, such
as adding/removing pads or sending in-band events (EOS).
Unlock the manifest lock when changing the child bin state to
NULL, as it might call back to acquire the manifest lock when
shutting down pads.
Drop the manifest lock while pushing events.
In the case of KEY_UNIT and TRICKMODE_KEY_UNITS seeks, we want to
"snap" to the closest fragment.
Without this, we end up pushing out a segment which does not match
the first fragment timestamp being pushed out, resulting in one or
more buffers being eventually dropped because they are out of segment.
The gst_adaptive_demux_wait_until() function can be woken up either
by its end_time being reached, or from other threads that want to
interrupt the waiting thread.
If the thread is interrupted, it needs to cancel the pending clock
callback by unscheduling it. However, the callback might already have
been activated and be waiting for the mutex to become available; in
this case, the call to unschedule does not stop the callback from
executing.
The solution to this race is to use a reference counted object that
is decremented by both the gst_adaptive_demux_wait_until() function and the
call to gst_clock_id_wait_async (). In this way, the GstAdaptiveDemuxTimer
object is only deleted when both the gst_adaptive_demux_wait_until() function
and the async callback are finished with the object.
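In outline, the object looks roughly like this (a sketch; the real
struct has more fields):

    typedef struct {
      gint ref_count;       /* one ref for the waiter, one for the
                             * clock callback */
      GMutex *mutex;
      GCond *cond;
      gboolean fired;
    } GstAdaptiveDemuxTimer;

    static void
    gst_adaptive_demux_timer_unref (GstAdaptiveDemuxTimer * timer)
    {
      /* freed only when both the waiter and the callback are done */
      if (g_atomic_int_dec_and_test (&timer->ref_count))
        g_slice_free (GstAdaptiveDemuxTimer, timer);
    }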
https://bugzilla.gnome.org/show_bug.cgi?id=765728
There are several places in adaptivedemux where it waits for
time to pass, for example to wait until it should next download
a fragment. The problem with this approach is that it means that
unit tests are forced to execute in realtime.
This commit replaces the use of g_cond_wait_until() with single
shot GstClockID that signals the condition variable. Under normal
usage, this behaves exactly as before. A unit test can replace the
system clock with a GstTestClock, allowing the test to control the
timing in adaptivedemux.
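A sketch of the replacement pattern, reusing the timer object sketched
above (interruption handling omitted for brevity):

    static gboolean
    on_timeout (GstClock * clock, GstClockTime time, GstClockID id,
        gpointer user_data)
    {
      GstAdaptiveDemuxTimer *timer = user_data;
      g_mutex_lock (timer->mutex);
      timer->fired = TRUE;
      g_cond_signal (timer->cond);  /* wake the waiting thread */
      g_mutex_unlock (timer->mutex);
      return TRUE;
    }

    static void
    wait_until (GstClock * clock, GstAdaptiveDemuxTimer * timer,
        GstClockTime end_time)
    {
      GstClockID id = gst_clock_new_single_shot_id (clock, end_time);
      gst_clock_id_wait_async (id, on_timeout, timer, NULL);
      g_mutex_lock (timer->mutex);
      while (!timer->fired)
        g_cond_wait (timer->cond, timer->mutex);
      g_mutex_unlock (timer->mutex);
      gst_clock_id_unschedule (id); /* harmless if it already fired */
      gst_clock_id_unref (id);
    }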
https://bugzilla.gnome.org/show_bug.cgi?id=762147
A realtime clock is used in many places, such as deciding which
fragment to select at start up and deciding how long to sleep
before a fragment becomes available. For example, dashdemux needs to
sample the client's estimate of UTC when selecting where to start
in a live DASH stream.
The problem with dashdemux calculating the client's idea of UTC is
that it makes it difficult to create unit tests, because the passage
of time is a factor in the test.
This commit changes dashdemux and adaptivedemux to use the
GstSystemClock, so that a unit test can replace the system clock when
it needs to be able to control the clock.
This commit makes no change to the behaviour under normal usage, as
GstSystemClock is based upon the system time.
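For instance, a unit test can then do something along these lines
(a sketch):

    #include <gst/check/gsttestclock.h>

    static void
    run_with_test_clock (void)
    {
      GstClock *clock = gst_test_clock_new ();
      /* gst_system_clock_obtain() now returns the test clock */
      gst_system_clock_set_default (clock);
      /* ... run the code under test, then advance "time" at will ... */
      gst_test_clock_advance_time (GST_TEST_CLOCK (clock),
          5 * GST_SECOND);
      gst_system_clock_set_default (NULL);  /* restore the real clock */
      gst_object_unref (clock);
    }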
https://bugzilla.gnome.org/show_bug.cgi?id=762147
Happens e.g. if a RECONFIGURE event is sent from downstream while we're
switching pads at this very moment. The old pad is gone and the stream has a
new pad.
https://bugzilla.gnome.org/show_bug.cgi?id=764404
When the start_type is GST_SEEK_TYPE_NONE for a forward seek (or the
stop_type for a reverse seek) in a snap seeking operation, the element
should use the current position and then snap as requested.
Also fixes an uninitialized variable warning from clang about the
'ts' variable.
Adaptive demuxers need to start downloading from specific positions
(fragments) for every stream, this means that all streams can snap-seek
to a different position when requested. Snap seeking in this case will
be done in 2 steps:
1) do the snap seeking on the pad that received the seek event and
get the final position
2) use this position to do a regular seek on the other streams to
make sure they all start from the same position
More arguments were added to the stream_seek function, allowing better control
of how seeking is done. Knowing the flags and the playback direction allows
subclasses to handle snap-seeking.
It also adds a new return parameter informing of the final selected
seeking position, which is used to align the other streams.
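The extended vfunc then looks roughly like this (a sketch; the exact
prototype in the header may differ slightly):

    GstFlowReturn (*stream_seek) (GstAdaptiveDemuxStream * stream,
        gboolean forward, GstSeekFlags flags,
        GstClockTime ts, GstClockTime * final_ts);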
https://bugzilla.gnome.org/show_bug.cgi?id=759158
The gst_adaptive_demux_stream_update_source() function creates
a new GstPad called internal_pad. This pad is not freed when releasing
the stream.
The solution is to set GST_PAD_FLAG_NEED_PARENT so that the chain
functions do not get called when the pad has no parent and then
remove the parent in the gst_adaptive_demux_stream_free() function.
This causes the refcount of the pad to drop to zero.
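Sketched out (illustrative):

    /* at creation: chain/event functions refuse to run parentless */
    GST_OBJECT_FLAG_SET (internal_pad, GST_PAD_FLAG_NEED_PARENT);
    gst_object_set_parent (GST_OBJECT (internal_pad),
        GST_OBJECT (demux));

    /* in gst_adaptive_demux_stream_free(): unparenting drops the
     * parent's reference, bringing the pad's refcount to zero */
    gst_object_unparent (GST_OBJECT (internal_pad));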
https://bugzilla.gnome.org/show_bug.cgi?id=760982
Handling the ghostpad and its internal pad was causing more issues
than helping because of their coupled activation/deactivation
actions.
As we have to install custom chain, event and query functions, it is
better to use a floating sink pad internally in the demuxer and just
use those pad functions to push through a standard pad in the demuxer.
https://bugzilla.gnome.org/show_bug.cgi?id=757951
Fixed adaptivedemux seeking without flushing when the seek only wants
to update the stop position. This required protecting the segment
variables with a new mutex so that the seeking thread and the
download threads could safely manipulate the segment and
events related to it.
This lock is only taken/released when starting a new download, when
the first fragment of a segment is received and when seeking so,
hopefully, it won't hurt performance.
Avoids downloading and pushing a full segment just to get 1 nanosecond
of data. This happens frequently when seeking is done with flags
that adjust to boundaries or when the start is aligned with segment
starts. The latter is common when segment durations are a multiple of
a second.
For reverse playback, set the position to segment.stop when starting
and don't set the position to the fragment end timestamp when a
fragment finishes; just leave it at the fragment start.
This no longer does anything, and it was marked as CONSTRUCT_ONLY
which means someone would really have to go out of their way to
be able to set this, which would only be done in very custom
scenarios, if ever, and those will likely target a specific version
of GStreamer, so there is probably not much point in keeping it
deprecated for a while before removing it.
Bitrate estimation is now handled through a queue2 element added after
the source element used to download fragments.
Original hlsdemux patch by Duncan Palmer <dpalmer@digisoft.tv>
https://bugzilla.gnome.org/show_bug.cgi?id=733959
The gst_adaptive_demux_stream_free() function tries to stop the
stream's download task. For this, it signals the task. But it fails to
also set stream->download_finished = TRUE, so the task will go back to
sleep and only exit when the download is finished.
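The fix amounts to something like this (a sketch; the condition
variable name is illustrative):

    stream->download_finished = TRUE;   /* let the task loop exit */
    g_cond_signal (&stream->fragment_download_cond);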
https://bugzilla.gnome.org/show_bug.cgi?id=755121
dashdemux seeks each live stream to its current fragment in the beginning, but
the base class does not know about this. Update the demuxer segment with this
seek so we generate the correct SEGMENT event and can actually play the
stream.
This needs some refactoring at some point.
https://bugzilla.gnome.org/show_bug.cgi?id=755047
Each period will start again with pts 0 + period presentation offset, which is
also going to be the presentation time inside the container stream if any.
However all periods together should form a continuous timeline, with regard to
stream time and running time.
To make this possible we keep track of the "user requested segment",
i.e. the seek events, inside the demuxer without adjusting anything,
and use this demuxer segment only as a reference for the modified
per-stream segments.
These per-stream segments will have their segment.start at the pts
that would be produced for this stream in this period, and their
segment.base/time adjusted so that this pts maps to the running and
stream time this period should have in the context of all other
periods.
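Schematically (illustrative names):

    /* first pts of this stream in the new period */
    stream->segment.start = period_pts;
    /* continuous stream time across periods */
    stream->segment.time = stream_time_at_period_start;
    /* continuous running time across periods */
    stream->segment.base = running_time_at_period_start;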
https://bugzilla.gnome.org/show_bug.cgi?id=754222
There are several cases where a HLS server could temporarily have wrong
fragments, or reconfigure the playlist. In those cases, when we get
fragment download failures, we *really* want to wait a bit (for the next
playlist update) before retrying to get fragments.
Previously this method first checked whether there were next fragments
(according to the previous manifest update) before waiting for the
next update. The problem is that a temporary failure on the server is
uncorrelated to whether the manifest contains next fragments or not.
If a ContentProtection element is present in an AdaptationSet element,
send Protection events on the source pad, so that qtdemux can use this
information to correctly generate its source caps for DASH CENC
encrypted streams.
This allows qtdemux to support CENC encrypted DASH streams where the
content protection specific information is carried in the MPD file
rather than in pssh boxes in the initialisation segments.
This commit adds a new function to the adaptivedemux base class to allow
a GstEvent to be queued for a stream. The queued events are sent the
next time a buffer is pushed for that stream.
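Usage is roughly as follows (a sketch; gst_event_new_protection() is
core GStreamer API, the queueing call follows the description above):

    /* system_id and pssi_data come from the parsed MPD */
    GstEvent *event = gst_event_new_protection (system_id, pssi_data,
        "dash/mpd");
    gst_adaptive_demux_stream_queue_event (stream, event);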
https://bugzilla.gnome.org/show_bug.cgi?id=705991
Sometimes the last fragment does not exist because of rounding errors
with the durations. Just finish the stream gracefully instead of
erroring out.
Segment start/time/position/base should only be modified if this is the first
time we send a segment, otherwise we will override values from the seek
segment if new streams have to be exposed as part of the seek.
Segment base should be calculated from the segment start based on the stream's
own segment, not the demuxer's segment. Both might differ slightly because of
the presentationTimeOffset.
Always add the presentationTimeOffset (relative to the period start, not
timestamp 0) to the segment start after resetting the stream's segment based
on the demuxer's segment (i.e. after seeks or stream restart). Also make sure
to keep the stream's segment up to date and not just send a new segment event
without storing the segment in the stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
Also include the presentation offset in the last known position for
each stream and, while at it, keep track of the latest known position
inside the demuxer segment too.
It's going to return EOS if the period ended or otherwise there is just no
next fragment left. If we don't store the last return value, it will always
stay OK and gst_adaptive_demux_combine_flows() will always return OK instead
of EOS once all streams are done.
This partially fixes period changes in DASH by at least trying to
switch instead of just stopping. What is still left is that after a
period change with DASH the times all start at 0 again instead of
continuing.
It's true that we shouldn't consider errors fatal immediately, but if we
always ignore them we will loop infinitely on live streams with segments
that can't be downloaded at all.
Even for "live" streams we are not live in the GStreamer meaning of the word.
We don't produce buffers that are timestamped based on their "capture time"
and our clock, but just based on whatever timestamps the stream might contain.
Also even if we wanted to claim to be live, that wouldn't work well as we
would have to return GST_STATE_CHANGE_NO_PREROLL when going from READY to
PAUSED, which we can't. We first need data to know if we are "live" or not.
It would deadlock, as we would then join() the update task from
itself. Instead just post an actual error message on the bus and only
stop the update task. The application is then responsible for shutting
down the element, and thus all the other tasks and everything else,
based on the error message it gets.
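Schematically (a sketch; the task field name is illustrative):

    /* post the error for the application instead of joining ourselves */
    GST_ELEMENT_ERROR (demux, STREAM, FAILED,
        ("Could not update the playlist"), (NULL));
    gst_task_stop (demux->priv->updates_task);  /* stop, don't join */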