The gst_adaptive_demux_wait_until() function can be woken up either
by its end_time being reached, or from other threads that want to
interrupt the waiting thread.
If the thread is interrupted, it needs to cancel its async clock callback
by unscheduling the clock callback. However, the callback task might already
have been activated, but is waiting for the mutex to become available. In this
case, the call to unschedule does not stop the callback from executing.
The solution to this second issue is to use a reference counted object that
is decremented by both the gst_adaptive_demux_wait_until() function and the
call to gst_clock_id_wait_async (). In this way, the GstAdaptiveDemuxTimer
object is only deleted when both the gst_adaptive_demux_wait_until() function
and the async callback are finished with the object.
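A minimal sketch of the refcounting idea (the GstAdaptiveDemuxTimer name is
from the patch, but the fields and helper shown here are only illustrative):

    typedef struct {
      gint ref_count;   /* one ref held by the waiter, one by the clock callback */
      GMutex lock;
      GCond cond;
      gboolean fired;
    } GstAdaptiveDemuxTimer;

    static void
    gst_adaptive_demux_timer_unref (GstAdaptiveDemuxTimer * timer)
    {
      /* The timer is freed only once both the waiting thread and the async
       * clock callback have dropped their reference, so a callback that was
       * already dispatched can still use it safely. */
      if (g_atomic_int_dec_and_test (&timer->ref_count)) {
        g_mutex_clear (&timer->lock);
        g_cond_clear (&timer->cond);
        g_slice_free (GstAdaptiveDemuxTimer, timer);
      }
    }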
https://bugzilla.gnome.org/show_bug.cgi?id=765728
There are several places in adaptivedemux where it waits for
time to pass, for example to wait until it should next download
a fragment. The problem with this approach is that it means that
unit tests are forced to execute in realtime.
This commit replaces the use of g_cond_wait_until() with single
shot GstClockID that signals the condition variable. Under normal
usage, this behaves exactly as before. A unit test can replace the
system clock with a GstTestClock, allowing the test to control the
timing in adaptivedemux.
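A rough sketch of the pattern, assuming a context struct holding the
mutex/cond pair (all names below are illustrative, not the actual code):

    typedef struct {
      GMutex lock;
      GCond cond;
      gboolean fired;
      gboolean interrupted;
    } WaitCtx;

    static gboolean
    wait_cb (GstClock * clock, GstClockTime time, GstClockID id, gpointer user_data)
    {
      WaitCtx *ctx = user_data;

      g_mutex_lock (&ctx->lock);
      ctx->fired = TRUE;
      g_cond_signal (&ctx->cond);   /* wake the thread blocked in g_cond_wait() */
      g_mutex_unlock (&ctx->lock);
      return TRUE;
    }

    /* Instead of g_cond_wait_until(), schedule a single-shot clock callback
     * that signals the condition variable when end_time is reached. */
    id = gst_clock_new_single_shot_id (clock, end_time);
    gst_clock_id_wait_async (id, wait_cb, ctx, NULL);
    while (!ctx->fired && !ctx->interrupted)
      g_cond_wait (&ctx->cond, &ctx->lock);
    gst_clock_id_unschedule (id);
    gst_clock_id_unref (id);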
https://bugzilla.gnome.org/show_bug.cgi?id=762147
A realtime clock is used in many places, such as deciding which
fragment to select at start up and deciding how long to sleep
before a fragment becomes available. For example, dashdemux needs to
sample the client's estimate of UTC when selecting where to start
in a live DASH stream.
The problem with dashdemux calculating the client's idea of UTC is
that it makes it difficult to create unit tests, because the passage
of time is a factor in the test.
This commit changes dashdemux and adaptivedemux to use the
GstSystemClock, so that a unit test can replace the system clock when
it needs to be able to control the clock.
This commit makes no change to the behaviour under normal usage, as
GstSystemClock is based upon the system time.
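For example, a unit test could install a GstTestClock as the system clock
before creating the demuxer (a sketch, not taken from the actual tests):

    #include <gst/check/gsttestclock.h>

    GstClock *clock = gst_test_clock_new ();

    /* Make gst_system_clock_obtain() return the test clock, so
     * adaptivedemux/dashdemux run against controllable time. */
    gst_system_clock_set_default (clock);

    /* ... build and start the pipeline ... */

    /* Advance time deterministically instead of sleeping. */
    gst_test_clock_advance_time (GST_TEST_CLOCK (clock), 500 * GST_MSECOND);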
https://bugzilla.gnome.org/show_bug.cgi?id=762147
Happens e.g. if a RECONFIGURE event is sent from downstream while we're
switching pads at this very moment. The old pad is gone and the stream has a
new pad.
https://bugzilla.gnome.org/show_bug.cgi?id=764404
When the start_type is GST_SEEK_TYPE_NONE for a forward seek
(or the stop_type for a reverse seek) in a snap seeking operation,
the element should use the current position and then snap as requested.
Also fixes uninitialized variable complaint by clang about
'ts' variable.
Adaptive demuxers need to start downloading from specific positions
(fragments) for every stream, this means that all streams can snap-seek
to a different position when requested. Snap seeking in this case will
be done in 2 steps:
1) do the snap seeking on the pad that received the seek event and
get the final position
2) use this position to do a regular seek on the other streams to
make sure they all start from the same position
More arguments were added to the stream_seek function, allowing better control
of how seeking is done. Knowing the flags and the playback direction allows
subclasses to handle snap-seeking.
It also adds a new out parameter that reports the final selected
seeking position, which is used to align the other streams.
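The extended vfunc then has roughly this shape (exact parameter names may
differ from the actual base class header):

    GstFlowReturn (*stream_seek) (GstAdaptiveDemuxStream * stream,
                                  gboolean forward,         /* playback direction */
                                  GstSeekFlags flags,       /* e.g. GST_SEEK_FLAG_SNAP_BEFORE */
                                  GstClockTime target_ts,   /* requested position */
                                  GstClockTime * final_ts); /* snapped position actually chosen */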
https://bugzilla.gnome.org/show_bug.cgi?id=759158
The gst_adaptive_demux_stream_update_source() function creates
a new GstPad called internal_pad. This pad is not freed when releasing
the stream.
The solution is to set GST_PAD_FLAG_NEED_PARENT so that the chain
functions do not get called when the pad has no parent, and then to
remove the parent in the gst_adaptive_demux_stream_free() function. This
causes the refcount of the pad to drop to zero.
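Roughly, the fix amounts to the following (a sketch, not the literal patch;
using the demuxer as the pad's parent is an assumption here):

    /* When creating the internal pad: chain/event functions must not run
     * once the pad has lost its parent. */
    GST_OBJECT_FLAG_SET (internal_pad, GST_PAD_FLAG_NEED_PARENT);
    gst_object_set_parent (GST_OBJECT (internal_pad), GST_OBJECT (demux));

    /* In gst_adaptive_demux_stream_free(): dropping the parent also drops
     * the parent's reference, so the pad's refcount can reach zero. */
    gst_object_unparent (GST_OBJECT (stream->internal_pad));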
https://bugzilla.gnome.org/show_bug.cgi?id=760982
Handling the ghostpad and its internal pad was causing more issues
than helping because of their coupled activation/deactivation
actions.
As we have to install custom chain, event and query functions, it is
better to use a floating sink pad internally in the demuxer and just
use those pad functions to push through a standard pad in the demuxer.
https://bugzilla.gnome.org/show_bug.cgi?id=757951
Fixed adaptivedemux seeking without flushing that just wants
to update stop position. This required protecting the segment
variables with a new mutex so that the seeking thread and the
download threads could safely manipulate the segment and
events related to it.
This lock is only taken when starting a new download, when the
first fragment of a segment is received, and when seeking so,
hopefully, it won't hurt performance.
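A sketch of the idea, with an illustrative lock name (the real field and
locking points are inside adaptivedemux):

    /* Guard every access to demux->segment that is shared between the
     * seeking thread and the download threads. */
    g_mutex_lock (&demux->segment_lock);
    demux->segment.stop = seek_stop;
    event = gst_event_new_segment (&demux->segment);
    g_mutex_unlock (&demux->segment_lock);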
Avoids downloading and pushing a full segment just to get 1 nanosecond
of data. This happens frequently when seeking is done with flags
that adjust to boundaries or when the start is aligned with segment
starts. The latter is common when the segment duration is a multiple of
a second.
For reverse, set position to segment.stop when starting and also
don't set the position to fragment end timestamp when it finishes,
just leave it at the fragment start.
This no longer does anything. It was also marked as CONSTRUCT_ONLY,
which means someone would really have to go out of their way to be
able to set it; that would only be done in very custom scenarios, if
ever, and those will likely target a specific version of GStreamer,
so there is probably not much point keeping it deprecated for a while
before removing it.
Bitrate estimation is now handled through a queue2 element added after
the source element used to download fragments.
Original hlsdemux patch by Duncan Palmer <dpalmer@digisoft.tv>
https://bugzilla.gnome.org/show_bug.cgi?id=733959
The gst_adaptive_demux_stream_free function is trying to stop the stream's
download task. For this, it signals the task, but it fails to also set
stream->download_finished to TRUE, so the task will go back to sleep and
only exit when the download is finished.
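A sketch of the missing step (lock and condition names are illustrative;
only the download_finished flag comes from the description above):

    /* Before signalling the download task to wake up, also mark the
     * download as finished so the task exits instead of sleeping again. */
    g_mutex_lock (&stream->fragment_download_lock);
    stream->download_finished = TRUE;
    g_cond_signal (&stream->fragment_download_cond);
    g_mutex_unlock (&stream->fragment_download_lock);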
https://bugzilla.gnome.org/show_bug.cgi?id=755121
dashdemux seeks each live stream to its current fragment in the beginning, but
the base class does not know about this. Update the demuxer segment with this
seek so we generate the correct SEGMENT event and can actually play the
stream.
This needs some refactoring at some point.
https://bugzilla.gnome.org/show_bug.cgi?id=755047
Each period will start again with pts 0 + period presentation offset, which is
also going to be the presentation time inside the container stream if any.
However all periods together should form a continuous timeline, with regard to
stream time and running time.
To make this possible we keep track of the "user requested segment", i.e.
the seek events, inside the demuxer without adjusting anything and taking this
demuxer segment only as orientation for modified segments per stream.
These per-stream segments will have their segment.start at the pts that would be
produced for this stream in this period, and the segment.base/time adjusted so
that this pts maps to the running and stream time this period should have in
the context of all other periods.
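Schematically, the per-stream segment is set up so that the first pts of the
period maps back onto the global timeline (field and variable names here are
only illustrative):

    stream->segment.start = period_start_pts + presentation_time_offset;
    stream->segment.time  = period_stream_time;   /* stream time this pts should map to  */
    stream->segment.base  = period_running_time;  /* running time this pts should map to */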
https://bugzilla.gnome.org/show_bug.cgi?id=754222
There are several cases where a HLS server could temporarily have wrong
fragments, or reconfigure the playlist. In those cases, when we get
fragment download failures, we *really* want to wait a bit (for the next
playlist update) before retrying to get fragments.
Previously this method was first checking whether there were next fragments
(according to the previous manifest update) before waiting for the next update.
The problem is that a temporary failure on the server is uncorrelated to
whether the manifest contains next fragments or not.
If a ContentProtection element is present in an AdaptationSet element,
send Protection events on the source pad, so that qtdemux can use this
information to correctly generate its source caps for DASH CENC
encrypted streams.
This allows qtdemux to support CENC encrypted DASH streams where the
content protection specific information is carried in the MPD file
rather than in pssh boxes in the initialisation segments.
This commit adds a new function to the adaptivedemux base class to allow
a GstEvent to be queued for a stream. The queued events are sent the
next time a buffer is pushed for that stream.
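From a subclass, usage would look roughly like this (the helper name and the
"dash/mpd" origin string are assumptions based on the description above):

    /* pssi: a GstBuffer carrying the ContentProtection payload, built elsewhere.
     * The event is queued and pushed just before the next buffer for the stream. */
    GstEvent *event = gst_event_new_protection (scheme_id_uri, pssi, "dash/mpd");
    gst_adaptive_demux_stream_queue_event (stream, event);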
https://bugzilla.gnome.org/show_bug.cgi?id=705991
Sometimes the last fragment does not exist because of rounding errors with the
durations. Just finish the stream gracefully instead of erroring out.
Segment start/time/position/base should only be modified if this is the first
time we send a segment, otherwise we will override values from the seek
segment if new streams have to be exposed as part of the seek.
Segment base should be calculated from the segment start based on the stream's
own segment, not the demuxer's segment. Both might differ slightly because of
the presentationTimeOffset.
Always add the presentationTimeOffset (relative to the period start, not
timestamp 0) to the segment start after resetting the stream's segment based
on the demuxer's segment (i.e. after seeks or stream restart). Also make sure
to keep the stream's segment up to date and not just send a new segment event
without storing the segment in the stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
And include the presentation offset in the last known position for each
stream; and, while we are at it, also keep track of the latest known position
inside the demuxer segment.
It's going to return EOS if the period ended or otherwise there is just no
next fragment left. If we don't store the last return value, it will always
stay OK and gst_adaptive_demux_combine_flows() will always return OK instead
of EOS once all streams are done.
This partially fixes period changes in DASH by at least trying to switch
instead of just stopping. What is still left is that after a period change
with DASH the times all start at 0 again instead of continuing.
It's true that we shouldn't consider errors fatal immediately, but if we
always ignore them we will loop infinitely on live streams with segments
that can't be downloaded at all.
Even for "live" streams we are not live in the GStreamer meaning of the word.
We don't produce buffers that are timestamped based on their "capture time"
and our clock, but just based on whatever timestamps the stream might contain.
Also even if we wanted to claim to be live, that wouldn't work well as we
would have to return GST_STATE_CHANGE_NO_PREROLL when going from READY to
PAUSED, which we can't. We first need data to know if we are "live" or not.
It would deadlock as we would then join() the update task from itself. Instead
just post an actual error message on the bus and only stop the update task.
The application is then responsible for shutting down the element, and thus
all the other tasks and everything, based on the error message it gets.
It might return OK from subclasses and it could cause a bitrate
renegotiation. For DASH and MSS that is ok as they won't expose
new pads as part of this, but it can cause issues for HLS as
it will expose new pads, leading to pads that will only get EOS,
which causes decodebin to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=745905
Asks the subclass for a potential time offset to apply to each
separate stream. In DASH, streams can have "presentation time offsets",
which can be different for each stream.
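The queried vfunc has roughly this shape (exact name and signature may differ
from the actual base class header):

    GstClockTime (*get_presentation_offset) (GstAdaptiveDemux * demux,
                                             GstAdaptiveDemuxStream * stream);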
https://bugzilla.gnome.org/show_bug.cgi?id=745455
Move the property from subclasses to adaptivedemux. It allows
selecting the percentage of the measured bitrate to be used when
selecting stream bitrates.
Allows setting a bitrate directly instead of measuring it internally
based on the received chunks. The connection-speed property was removed
from mssdemux and hlsdemux as it is now in the base class.
And use the average to go up in resolution, and the last fragment
bitrate to go down.
This allows the demuxer to react rapidly to bitrate loss, and
be conservative for bitrate improvements.
+ Add a construct only property to define the number of fragments
to consider when calculating the average moving bitrate.
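A sketch of the selection logic described above (names are illustrative, not
the actual implementation):

    static guint64
    select_target_bitrate (const guint64 * fragment_bitrates, guint num_fragments,
        guint64 last_fragment_bitrate, guint64 current_bitrate)
    {
      guint64 sum = 0, average;
      guint i;

      for (i = 0; i < num_fragments; i++)
        sum += fragment_bitrates[i];
      average = num_fragments ? sum / num_fragments : 0;

      /* React quickly to bitrate loss, be conservative about going up. */
      return (last_fragment_bitrate < current_bitrate) ?
          last_fragment_bitrate : average;
    }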
https://bugzilla.gnome.org/show_bug.cgi?id=742979
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is exposed now. The
default implementation gets data from the stream adapter (all available)
and pushes it.
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request
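Sketch of the default implementations described above (field names and the
advance call are approximations of the base class API, not verbatim code):

    static GstFlowReturn
    default_data_received (GstAdaptiveDemux * demux, GstAdaptiveDemuxStream * stream)
    {
      /* Take everything currently in the stream adapter and push it. */
      GstBuffer *buffer = gst_adapter_take_buffer (stream->adapter,
          gst_adapter_available (stream->adapter));
      return gst_adaptive_demux_stream_push_buffer (stream, buffer);
    }

    static GstFlowReturn
    default_finish_fragment (GstAdaptiveDemux * demux, GstAdaptiveDemuxStream * stream)
    {
      /* The default only advances to the next fragment. */
      return gst_adaptive_demux_stream_advance_fragment (demux, stream,
          stream->fragment.duration);
    }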
If we say it is the first segment after a new period it will resync
the segment.start value and all buffers will be late for the new period
we are trying to play. Otherwise we want to keep the segment.start with
the previous value to allow the running time to smoothly increase
Check if there is a next fragment before advancing to avoid causing
a bitrate switch (and maybe exposing new pads) only to push EOS.
That would cause playback to stop with an error instead of properly
finishing with an EOS message.
The subsegment boundary return tells the adaptivedemux that it can
try to switch to another representation as the stream is at a suitable
position for starting from another bitrate.
In order to get some subsegment information, subclasses might want
to download only the headers to have enough data (the index)
to decide where to start downloading from the subsegment.
This allows the subclasses to know whether the downloaded chunks are
part of the header or of the index, so they can parse the parts that
are of interest to them.
Segment start only needs to be updated when starting the streams
or after a seek; doing it during bitrate changes will cause the
running time to become discontinuous (jump back to a previous ts)
and QoS will drop buffers.