This can happen if the playlists have moved because the variant playlist
is now redirected to another target. This currently only works as long
as the referenced playlists don't change their location relative to the
variant playlist, and the new location is purely due to a new path
resulting from a new redirection target of the variant playlist, or a
new redirection target of the playlist itself.
https://bugzilla.gnome.org/show_bug.cgi?id=731164
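A minimal sketch of that kind of URI rewrite, assuming the media playlist
keeps the same path relative to the variant playlist; update_media_uri()
and its parameters are hypothetical names, not the actual hlsdemux code:

    #include <glib.h>
    #include <string.h>

    static gchar *
    update_media_uri (const gchar * old_variant_uri,
        const gchar * new_variant_uri, const gchar * old_media_uri)
    {
      const gchar *old_slash = strrchr (old_variant_uri, '/');
      const gchar *new_slash = strrchr (new_variant_uri, '/');

      if (old_slash == NULL || new_slash == NULL)
        return g_strdup (old_media_uri);

      /* Only rewrite URIs that lived below the old variant's base path */
      if (strncmp (old_media_uri, old_variant_uri,
              old_slash - old_variant_uri + 1) != 0)
        return g_strdup (old_media_uri);

      /* New base path from the redirect target + unchanged relative part */
      return g_strdup_printf ("%.*s%s",
          (int) (new_slash - new_variant_uri + 1), new_variant_uri,
          old_media_uri + (old_slash - old_variant_uri + 1));
    }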
data must not be freed here at all; it's a pointer to an arbitrary
position inside the current line. Also don't reuse the data variable
for anything else, as that will cause crashes with playlists that have
the I-frame playlist URI followed by other attributes.
CID 1212127
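A minimal sketch of the ownership rule described above; the parsing here
is simplified and the function name is hypothetical:

    #include <glib.h>
    #include <string.h>

    static void
    handle_iframe_stream_inf (const gchar * line)
    {
      /* 'uri' points into 'line': it is not a separate allocation, so it
       * must not be freed, and the variable must not be reused as scratch
       * storage while other attributes on the same line are still parsed. */
      const gchar *uri = strstr (line, "URI=\"");

      if (uri != NULL) {
        gchar *copy;

        uri += strlen ("URI=\"");
        /* Copy only if the value has to outlive the line */
        copy = g_strndup (uri, strcspn (uri, "\""));
        g_print ("I-frame playlist URI: %s\n", copy);
        g_free (copy);
      }
    }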
Only the first buffer of a fragment has its timestamp set, so only
update segment.position when pushing those buffers, to avoid setting
the position to GST_CLOCK_TIME_NONE.
https://bugzilla.gnome.org/show_bug.cgi?id=729364
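A minimal sketch of that check, assuming a plain GstSegment and srcpad;
not the actual hlsdemux code:

    #include <gst/gst.h>

    static GstFlowReturn
    push_fragment_buffer (GstPad * srcpad, GstSegment * segment,
        GstBuffer * buf)
    {
      /* Only the first buffer of a fragment carries a timestamp; skip the
       * update for the others so segment->position never becomes
       * GST_CLOCK_TIME_NONE. */
      if (GST_BUFFER_TIMESTAMP_IS_VALID (buf))
        segment->position = GST_BUFFER_TIMESTAMP (buf);

      return gst_pad_push (srcpad, buf);
    }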
Otherwise we will never recover from previous errors, and especially
will never start again after a flushing seek if downstream returned
GST_FLOW_FLUSHING to us.
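A minimal sketch of the idea, with a hypothetical state struct: forget
the previous flow return when the flushing seek happens so downloading
can resume afterwards:

    #include <gst/gst.h>

    typedef struct
    {
      GstFlowReturn last_ret;   /* sticky result of the last push */
    } DemuxState;

    static void
    handle_flush_stop (DemuxState * state)
    {
      /* Without this, a stored GST_FLOW_FLUSHING (or error) from before
       * the seek would keep every following download from being pushed. */
      state->last_ret = GST_FLOW_OK;
    }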
hlsdemux can't rely on the source to push flushes on a seek in READY,
as that might not make sense. So always flush the internal proxy pads
by pushing flush events from the source's src pad.
Also, as the seek is not required anymore, only seek if there really is
a byte range to be used. And store a ref to the source's src pad to
avoid doing get_static_pad for every fragment.
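A minimal sketch of flushing through the source's src pad; the struct
and function names are placeholders:

    #include <gst/gst.h>

    typedef struct
    {
      GstElement *src;          /* internal source element */
      GstPad *src_srcpad;       /* cached ref, taken once, not per fragment */
    } InternalSource;

    static void
    flush_internal_source (InternalSource * s)
    {
      if (s->src_srcpad == NULL)
        s->src_srcpad = gst_element_get_static_pad (s->src, "src");

      /* The flush travels through the internal proxy pads regardless of
       * whether the source itself would flush on a seek in READY. */
      gst_pad_push_event (s->src_srcpad, gst_event_new_flush_start ());
      gst_pad_push_event (s->src_srcpad, gst_event_new_flush_stop (TRUE));
    }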
In the decryption scenario, a buffer is always stored to be sent later,
waiting for more data or EOS, so that the final bytes can be stripped
if requested. In case an error happened, this buffer can be ignored and
not pushed downstream.
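A minimal sketch, with placeholder names, of dropping that pending
buffer when the download ended in error:

    #include <gst/gst.h>

    static void
    finish_fragment (GstFlowReturn last_ret, GstBuffer ** pending)
    {
      if (last_ret != GST_FLOW_OK) {
        /* Downstream should not see a partial, possibly bogus tail */
        gst_buffer_replace (pending, NULL);
        return;
      }
      /* ... otherwise strip the final bytes and push *pending as usual */
    }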
Handle some more error cases:
1) When the source element fails to go to ready
2) When decryption fails
3) When there is no source to handle a specific URI
4) When the URI is invalid
Set up a message handling function to catch errors from the internal
source and store the last return code to identify error situations
when returning from a fragment download.
Also move the duration increase to after the download, when we know
whether it was successful or not.
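A minimal sketch of such a message handler, with hypothetical names; it
would be attached with gst_bus_set_sync_handler() on the internal
source's bus:

    #include <gst/gst.h>

    typedef struct
    {
      GstFlowReturn last_ret;
    } DownloadState;

    static GstBusSyncReply
    internal_source_message (GstBus * bus, GstMessage * msg,
        gpointer user_data)
    {
      DownloadState *state = user_data;

      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR) {
        GError *err = NULL;
        gchar *debug = NULL;

        gst_message_parse_error (msg, &err, &debug);
        GST_WARNING ("Internal source error: %s (%s)", err->message,
            debug ? debug : "no debug");
        g_clear_error (&err);
        g_free (debug);

        /* Checked when returning from the fragment download, so the
         * failure is not treated as a successful download. */
        state->last_ret = GST_FLOW_ERROR;
      }

      return GST_BUS_PASS;
    }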
When using the internal source, hlsdemux doesn't know the caps of the
input before adding the pad, so remove the arguments that would pass
them, as they are always NULL.
And use a specific flag to signal when a pad switch is required. Using
the discont flag is a bad idea now, because when a fragment download
fails it would lead to exposing a pad group without any data, causing
decodebin to abort.
When receiving EOS from the internal src, increase the current position
by the fragment duration to allow correctly restoring the download
position if the bitrate changes.
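A minimal sketch of that EOS handling, with placeholder field names:

    #include <gst/gst.h>

    typedef struct
    {
      GstClockTime current_position;
      GstClockTime current_fragment_duration;
    } DownloadPosition;

    static void
    handle_internal_src_eos (DownloadPosition * pos)
    {
      /* Advancing by the fragment duration keeps the download position
       * correct when a bitrate switch restarts from another playlist. */
      if (GST_CLOCK_TIME_IS_VALID (pos->current_fragment_duration))
        pos->current_position += pos->current_fragment_duration;
    }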
Use the same properties as uridownloader to keep connections alive
between consecutive fragment downloads:
1) set keep-alive property to true
2) keep the element in READY instead of in NULL
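A minimal sketch of both points, only setting the property when the
source element actually has it (souphttpsrc does):

    #include <gst/gst.h>

    static void
    configure_persistent_source (GstElement * src)
    {
      if (g_object_class_find_property (G_OBJECT_GET_CLASS (src),
              "keep-alive"))
        g_object_set (src, "keep-alive", TRUE, NULL);
    }

    /* and between fragment downloads, keep the connection around:
     *   gst_element_set_state (src, GST_STATE_READY);
     * instead of
     *   gst_element_set_state (src, GST_STATE_NULL);
     */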
Measure the download bitrate to be able to select the best playlist.
As the buffers are pushed directly downstream, the push might block, so
the time is only measured from the start of the download until the pad
push, and measuring starts again after the push returns.
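A minimal sketch of measuring only the download time and pausing the
measurement around the possibly blocking push; the struct and names are
placeholders, not the actual hlsdemux code:

    #include <gst/gst.h>

    typedef struct
    {
      gint64 download_start;        /* from g_get_monotonic_time (), us */
      gint64 download_time;         /* accumulated download time, us */
      guint64 download_bytes;
    } BitrateMeasure;

    static GstFlowReturn
    measure_and_push (BitrateMeasure * m, GstPad * srcpad, GstBuffer * buf)
    {
      GstFlowReturn ret;

      /* Stop the clock before pushing: the push may block downstream and
       * must not count as download time. */
      m->download_time += g_get_monotonic_time () - m->download_start;
      m->download_bytes += gst_buffer_get_size (buf);

      ret = gst_pad_push (srcpad, buf);

      /* Resume measuring once downstream has returned */
      m->download_start = g_get_monotonic_time ();
      return ret;
    }

    /* bitrate in bits per second, e.g. at the end of the fragment:
     *   bitrate = 8 * m->download_bytes * G_USEC_PER_SEC / m->download_time
     */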
Now the decryption is done buffer by buffer instead of on the whole
fragment at once. As it expects multiples of 16 bytes, a GstAdapter was
added to properly chunk the buffers.
Also, the last buffer must be resized depending on the value of the
last byte of the fragment, so hlsdemux always keeps a pending buffer,
as it doesn't yet know whether it is the last one.
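A minimal sketch of the chunking and of the final unpadding; the
decryption itself is only a placeholder here, not the real AES-128-CBC
code used by hlsdemux:

    #include <gst/gst.h>
    #include <gst/base/gstadapter.h>

    /* Placeholder: a real implementation would AES-128-CBC decrypt the
     * block-aligned buffer here. */
    static GstBuffer *
    decrypt_block_aligned (GstBuffer * in)
    {
      return in;
    }

    /* Push 'buf' into 'adapter', decrypt every complete 16-byte block and
     * return the previously decrypted buffer; the newest decrypted buffer
     * stays pending in '*pending' because only the very last buffer of
     * the fragment must be shortened by the padding value stored in its
     * last byte. */
    static GstBuffer *
    decrypt_step (GstAdapter * adapter, GstBuffer ** pending, GstBuffer * buf)
    {
      GstBuffer *out, *ready;
      gsize avail;

      gst_adapter_push (adapter, buf);
      avail = gst_adapter_available (adapter) & ~((gsize) 15);
      if (avail == 0)
        return NULL;

      out = decrypt_block_aligned (gst_adapter_take_buffer (adapter, avail));
      ready = *pending;
      *pending = out;
      return ready;
    }

    /* At EOS, unpad the pending buffer before pushing it:
     *   GstMapInfo map;
     *   guint8 pad;
     *   gst_buffer_map (pending, &map, GST_MAP_READ);
     *   pad = map.data[map.size - 1];
     *   gst_buffer_unmap (pending, &map);
     *   gst_buffer_resize (pending, 0, gst_buffer_get_size (pending) - pad);
     */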
The GstElement is directly linked into a ghost pad and its buffers are
pushed downstream as they are received. This way the buffers stay
small, instead of being a whole fragment, which usually causes extra
latency and makes buffering harder.
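A minimal sketch of exposing the internal element through a ghost pad;
the real hlsdemux wiring (proxy pads, pad groups) is more involved and
the names here are placeholders:

    #include <gst/gst.h>

    static GstPad *
    expose_internal_source (GstElement * demux, GstElement * internal_src)
    {
      GstPad *target = gst_element_get_static_pad (internal_src, "src");
      GstPad *ghost = gst_ghost_pad_new ("src", target);

      gst_object_unref (target);
      gst_pad_set_active (ghost, TRUE);
      gst_element_add_pad (demux, ghost);

      /* Each buffer from the source now flows out as soon as it arrives,
       * instead of being accumulated into a whole fragment first. */
      return ghost;
    }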
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
But only add this for non-live playlists. For live playlists we already
have another thread that is periodically updating playlists.
Reason for this is that sometimes downloading a fragment can fail because
the URIs have changed or expired since last time.
Sequence numbers in different playlists are not guaranteed to be the same for the
same position, e.g. fragments could have different durations in different playlists.
In theory we should do exactly the same for live playlists, but unfortunately we can't,
because that kind of seeking requires the complete playlist since we started playback.
For live playlists, however, the server drops fragments from the beginning over time
and we have no absolute time references.
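A minimal sketch of matching the position in another playlist by
accumulated duration instead of by sequence number; the fragment type
and field names are placeholders:

    #include <gst/gst.h>

    typedef struct
    {
      GstClockTime duration;
    } Fragment;

    /* Walk the new playlist's fragment list and return the entry whose
     * time range covers 'position'. */
    static GList *
    find_fragment_for_position (GList * fragments, GstClockTime position)
    {
      GstClockTime current = 0;
      GList *l;

      for (l = fragments; l != NULL; l = l->next) {
        Fragment *f = l->data;

        if (position < current + f->duration)
          return l;
        current += f->duration;
      }

      return NULL;      /* position is past the end of the playlist */
    }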
Recent refactoring causes this code to be called with either a NULL
fragment or a non-NULL fragment. In the former case, we don't have
a buffer. In the latter case, the original code dealing with DISCONT
assumed the buffer was valid. Testing for a NULL buffer here thus
does not seem to change the intent, and fixes:
Coverity 1195147