Clear error as soon as we determine that the download failed,
otherwise there are code paths where we might return without
ever clearing it, which would then leak the GError. Also, we
can pass a NULL GError pointer to _fetch_uri(), so just do that
instead of passing one that we're going to just free again
right away anyway.
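A minimal sketch of that pattern, with a hypothetical stand-in for
_fetch_uri():

    #include <glib.h>

    /* hypothetical stand-in for _fetch_uri(); the GError is optional */
    extern gboolean fetch_uri (const gchar * uri, GError ** error);

    static gboolean
    download_fragment (const gchar * uri)
    {
      GError *err = NULL;

      if (!fetch_uri (uri, &err)) {
        g_warning ("download failed: %s", err ? err->message : "unknown");
        /* clear right here, so no later return path can leak it */
        g_clear_error (&err);

        /* for the retry we don't care about the error details at all,
         * so just pass NULL instead of an error we'd free right away */
        return fetch_uri (uri, NULL);
      }

      return TRUE;
    }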
Setting the seek flags to GST_SEEK_FLAG_SNAP_* will change the seek
target time to a segment boundary.
Based on original work by Ben Willers <bwillers@digisoft.tv>.
https://bugzilla.gnome.org/show_bug.cgi?id=759108
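For reference, a seek using one of the snap flags might look like this
(the pipeline and target position are made up for the example):

    #include <gst/gst.h>

    /* ask for 30s, but let the demuxer snap the target to the segment
     * boundary at or before the requested position */
    static gboolean
    seek_snap_before (GstElement * pipeline)
    {
      return gst_element_seek_simple (pipeline, GST_FORMAT_TIME,
          GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_SNAP_BEFORE, 30 * GST_SECOND);
    }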
If the connection speed is set, the playlist matching that connection
speed is selected as the current playlist.
The problem is that the current variant of the main playlist still
points to the previously set variant.
If the previously set variant doesn't correspond to the current
playlist, this causes an unnecessary change to the very same playlist
after the first fragment is downloaded, because the current variant
was never updated.
To fix this, we need to make sure that the current variant
of the main playlist corresponds to the current playlist.
https://bugzilla.gnome.org/show_bug.cgi?id=758946
Don't jump backward to 3 files from the end of the playlist
when switching variants - it just means we downloaded
fragments fast and caught up to the end of the playlist.
Disable that by treating a variant switch as a playlist
update, not as a restart due to a seek or similar.
If the stream is discont, we must provide a timestamp in any case. Elements
like tsdemux are not going to output anything if we give a NONE timestamp
after a discont.
Also, marking a stream as discont if a playlist change was not successful
would lead to the situation above, but in that case we are not required
to mark the stream discont at all, as we're still on the old playlist.
For live streams, we want to make sure there's a certain distance
between the sequence to play and the last (earliest) fragment.
The problem is that it assumes there are at least 3 fragments in
the playlist, which might not always be the case (like in the case
of a server restarting and gradually adding fragments).
In order to avoid ending up with negative sequence numbers (which
will just loop forever), limit the new target sequence number to
the highest of:
* either the first sequence number of the playlist (fallback)
* or 3 fragments from the last one (standard behaviour)
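A sketch of the clamping, with illustrative names:

    #include <glib.h>

    /* stay 3 fragments away from the end of the live playlist, but never
     * go below the first sequence number actually present in it */
    static gint64
    clamp_target_sequence (gint64 first_sequence, gint64 last_sequence)
    {
      return MAX (first_sequence, last_sequence - 3);
    }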
This reverts commit 4ca3a22b6b.
connection-speed=0 is used as a special value in the hlsdemux
property to mean 'automatic' selection; m3u8.c doesn't need
to know about that, as it should be as simple as possible.
The reverted patch hid this automatic selection, documented in hlsdemux,
inside the m3u8 logic, and I think that makes the code harder to
understand.
Reverting it also makes the hlsdemux unit tests work again.
https://bugzilla.gnome.org/show_bug.cgi?id=749328
We should only refresh the currently selected variant playlist (if any,
otherwise the main playlist), not the main playlist. And only try to
refresh the main playlist if updating the variant playlist fails.
Some servers (Wowza) use the request of the main playlist to create a
"session", which is then part of the URI of the variant playlist and
also the fragments. Refreshing the main playlist would generate a new
session, which the server usually rate-limits. After a few retries
the server just kicks us out.
Also as a side effect we now use the same downloader for all playlists, so
that we only have 2 instead of 3 connections to the server. And also
previously we just ignored the downloaded data from the main playlist that
the base class gave to us.
When the segment is very short it might be the case that the
typefinding fails and when finishing the segment hlsdemux would
consider the remaining data (pending_buffer) as an encryption
leftover.
This patch fixes it and makes sure an error is properly posted
if typefind fails, by refactoring buffer handling into a function
and using it from the data_received and finish_fragment functions.
We also have to update the current_file GList pointer in the M3U playlist
client, otherwise we are just continuing playback from the current position
instead of seeking.
Move the property from subclasses to adaptivedemux; it allows
selecting the percentage of the measured bitrate to be used when
selecting stream bitrates.
Allows setting a bitrate directly instead of measuring it internally
based on the received chunks. The connection-speed property was removed
from mssdemux and hlsdemux as it is now in the base class.
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is exposed now. The
default implementation gets data from the stream adapter (all available)
and pushes it.
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request.
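A rough sketch of what the default data_received implementation amounts
to, assuming the base class hands us the stream's adapter and a
push_buffer() callback (names are approximate, not the real API):

    #include <gst/gst.h>
    #include <gst/base/gstadapter.h>

    typedef GstFlowReturn (*PushBufferFunc) (GstBuffer * buffer,
        gpointer user_data);

    /* default: take all available data from the adapter and push it */
    static GstFlowReturn
    data_received_default (GstAdapter * adapter, PushBufferFunc push_buffer,
        gpointer user_data)
    {
      gsize avail = gst_adapter_available (adapter);

      if (avail == 0)
        return GST_FLOW_OK;

      return push_buffer (gst_adapter_take_buffer (adapter, avail), user_data);
    }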
The duration values in playlists are approximate only, and for
playlist versions 2 and older they are only rounded integer values.
They cannot be used to timestamp buffers. This resulted in playback
gaps and skips because the actual duration of fragments is slightly
different. The solution is to only set the pts of the very first
buffer processed, not for each fragment.
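In simplified form the policy looks like this (names illustrative):

    #include <gst/gst.h>

    /* Only the very first buffer gets an explicit PTS; everything after
     * that is left without a timestamp, since playlist durations are too
     * coarse to timestamp each fragment. */
    static void
    timestamp_buffer (GstBuffer * buffer, gboolean * first_buffer_done,
        GstClockTime start_pts)
    {
      if (!*first_buffer_done) {
        GST_BUFFER_PTS (buffer) = start_pts;
        *first_buffer_done = TRUE;
      } else {
        GST_BUFFER_PTS (buffer) = GST_CLOCK_TIME_NONE;
      }
    }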
hlsdemux assumes that seeking is not allowed for live streams,
however seeking is possible if there are sufficient fragments in the
manifest. For example the BBC have live streams that contain 2 hours
of fragments.
The seek code for both live and on-demand is common code. The
difference between them is that an offset has to be calculated
for the timecode of the first fragment in the live playlist.
When hlsdemux starts to play a live stream, the possible seek range
is between 0 and A seconds. After some time has passed, the beginning of
the stream will no longer be available in the playlist and the seek
range is between B and C seconds.
Seek range:
start 0 ........... A
later B ........... C
This commit adds code to keep a note of the B and C values
and the highest sequence number it has seen. Every time it updates the
media playlist, it walks the list of fragments, seeing if there is a
fragment with sequence number > highest_seen_sequence. If so, the values
of B and C are updated. The value of B is used when timestamping
buffers.
It also makes sure the seek range is never closer than three fragments
from the end of the playlist - see 6.3.3. "Playing the Playlist file"
of the HLS draft.
https://bugzilla.gnome.org/show_bug.cgi?id=725435
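A hedged sketch of that bookkeeping; the fragment structure, the field
names and the assumption that the playlist keeps a roughly constant
number of entries are all illustrative, not the actual implementation:

    #include <gst/gst.h>

    typedef struct
    {
      gint64 sequence;          /* media sequence number                  */
      GstClockTime duration;    /* approximate duration from the playlist */
    } Fragment;

    /* Called on every media playlist update. highest_seen_sequence must
     * have been initialised from the very first playlist. range_start is
     * "B" and range_stop is "C" in the description above. */
    static void
    update_seek_range (GList * fragments, gint64 * highest_seen_sequence,
        GstClockTime * range_start, GstClockTime * range_stop)
    {
      GstClockTime playlist_duration = 0;
      GList *l;
      guint i;

      for (l = fragments; l != NULL; l = l->next) {
        Fragment *frag = l->data;

        playlist_duration += frag->duration;

        if (frag->sequence > *highest_seen_sequence) {
          /* a fragment was added at the live edge, so (assuming a
           * constant playlist length) one dropped out at the start and
           * the start of the seek range moves forward */
          *range_start += frag->duration;
          *highest_seen_sequence = frag->sequence;
        }
      }

      /* never allow seeking closer than three fragments to the end of
       * the playlist (6.3.3 of the HLS draft) */
      *range_stop = *range_start + playlist_duration;
      for (l = g_list_last (fragments), i = 0; l != NULL && i < 3;
          l = l->prev, i++)
        *range_stop -= ((Fragment *) l->data)->duration;
    }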
With only a small amount of data, the stream might be mistyped, which
would cause the pipeline to fail. For example, if you have AAC inside
mpegts, with only a small amount of data the AAC samples would cause
the typefinder to think it is AAC and not mpegts.
https://bugzilla.gnome.org/show_bug.cgi?id=736061
If typefind fails, check to see if the buffer is too short for
typefind. If this is the case, prepend the decrypted buffer to the
pending buffer and try again the next time around.
https://bugzilla.gnome.org/show_bug.cgi?id=740458
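A minimal sketch of that retry, assuming a hypothetical minimum-size
threshold and a pending-buffer field:

    #include <gst/gst.h>

    #define MIN_TYPEFIND_SIZE 1024  /* hypothetical "too short" limit */

    /* Returns TRUE if we should keep the data and retry with more later,
     * FALSE if the typefind failure is a genuine error. */
    static gboolean
    handle_typefind_failure (GstBuffer ** pending_buffer, GstBuffer * decrypted)
    {
      if (gst_buffer_get_size (decrypted) >= MIN_TYPEFIND_SIZE)
        return FALSE;

      /* prepend the decrypted data to whatever is already pending and
       * try typefinding again on the next chunk */
      if (*pending_buffer != NULL)
        *pending_buffer = gst_buffer_append (decrypted, *pending_buffer);
      else
        *pending_buffer = decrypted;

      return TRUE;
    }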
In gst_hls_demux_get_next_fragment() the next fragment URI gets
stored in next_fragment_uri, but the gst_hls_demux_updates_loop()
can at any time update the playlist, rendering this string invalid.
Therefore, any data (like key, iv, URIs) that is taken from a
GstM3U8Client needs to be copied. In addition, accessing the
internals of a GstM3U8Client requires locking.
https://bugzilla.gnome.org/show_bug.cgi?id=737793
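The pattern, roughly, with a simplified stand-in for the real
GstM3U8Client internals:

    #include <glib.h>

    typedef struct
    {
      GMutex lock;
      gchar *next_fragment_uri;
      gchar *key_uri;
    } Client;

    /* Copy everything we need while holding the client's lock, so the
     * updates loop can't free or replace the strings while we still
     * use them. */
    static void
    copy_next_fragment_info (Client * client, gchar ** uri, gchar ** key_uri)
    {
      g_mutex_lock (&client->lock);
      *uri = g_strdup (client->next_fragment_uri);
      *key_uri = g_strdup (client->key_uri);
      g_mutex_unlock (&client->lock);
    }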
If EOS or ERROR happens before the download loop thread has reached its
g_cond_wait() call, then the g_cond_signal doesn't have any effect and
the download loop thread gets stuck later.
https://bugzilla.gnome.org/show_bug.cgi?id=735663
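The usual fix for such a lost wakeup is to pair the condition variable
with a flag that is re-checked under the mutex, sketched here with
made-up names:

    #include <glib.h>

    typedef struct
    {
      GMutex lock;
      GCond cond;
      gboolean eos_or_error;
    } DownloadState;

    /* signalling side: set the flag before signalling, under the lock */
    static void
    notify_eos_or_error (DownloadState * state)
    {
      g_mutex_lock (&state->lock);
      state->eos_or_error = TRUE;
      g_cond_signal (&state->cond);
      g_mutex_unlock (&state->lock);
    }

    /* download loop: only wait while the flag is unset, so a signal that
     * arrived before we got here is never missed */
    static void
    wait_for_eos_or_error (DownloadState * state)
    {
      g_mutex_lock (&state->lock);
      while (!state->eos_or_error)
        g_cond_wait (&state->cond, &state->lock);
      g_mutex_unlock (&state->lock);
    }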
The internal pad still keeps its EOS flag and event as it can be assigned
after the flush-start/stop pair is sent. The EOS is assigned from the streaming
thread so this is racy.
To be sure to clear it, it has to be done after setting the source to READY to
be sure that its streaming thread isn't running.
https://bugzilla.gnome.org/show_bug.cgi?id=736012
Previously we only refetched the playlist if downloading a fragment
had failed once. We should also do that if it failed a second or third
time; chances are that the playlist was updated by now and contains new URIs.
Instead, always use the low-bandwidth playlist, which makes things go
smoother, as the current heuristic is rather tuned for normal playback
and currently does not behave properly.
https://bugzilla.gnome.org/show_bug.cgi?id=734445
Only reset the decryption engine on the first buffer of a fragment,
not again for the second buffer. This fixes corrupting the second
buffer of a fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=731968
This can happen if the playlists have moved due to the variant playlist
now being redirected to another target. This currently only works as long
as the referenced playlists don't change in relation to the variant
playlist, and the new location is purely due to a new path triggered by a
new redirection target of the variant playlist, or a new redirection
target of the playlist itself.
https://bugzilla.gnome.org/show_bug.cgi?id=731164
Only the first buffer of a fragment has its timestamp set, so only
update the segment.position when pushing those buffers to avoid
having GST_CLOCK_TIME_NONE set as the position.
https://bugzilla.gnome.org/show_bug.cgi?id=729364
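In simplified form the check amounts to this (names illustrative):

    #include <gst/gst.h>

    /* only the first buffer of a fragment carries a timestamp, so only
     * then do we move segment.position; this keeps it from ever becoming
     * GST_CLOCK_TIME_NONE */
    static void
    update_position (GstSegment * segment, GstBuffer * buffer)
    {
      if (GST_BUFFER_PTS_IS_VALID (buffer))
        segment->position = GST_BUFFER_PTS (buffer);
    }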
Otherwise we will never recover from previous errors, and especially
will never start again after a flushing seek if downstream returned
GST_FLOW_FLUSHING to us.
hlsdemux can't rely on the source to push flushes on a seek in READY,
as that might not make sense. So always resort to flushing the
internal proxy pads by pushing flush events from the source's src pad.
Also as the seeking is not required anymore, only seek if there is
really a byte range to be used. And store a ref to the source's
src pad to avoid doing get_static_pad for every fragment.
In the decryption scenario, a buffer is always stored to be sent later,
waiting for more data or EOS so that the final bytes can be stripped if
requested. In case an error happened, this buffer can be ignored and
not pushed downstream.
Handle some more error cases:
1) When the source element fails to go to ready
2) When decryption fails
3) When there is no source to handle a specific URI
4) When the URI is invalid
Set up a message handling function to catch errors from the internal
source and store the last return code to identify error situations
when returning from a fragment download.
Also moves the duration increase to after the download when we
know if it was successful or not.
When using the internal source, hlsdemux doesn't know the caps of
the input before adding the pad, so remove the arguments that would
use that as it is always NULL.
And use a specific flag to signal when a pad switch is required.
Using the discont flag is a bad idea now because when a fragment
download fails it will lead to exposing a pad group without any
data, causing decodebin to abort.
When receiving EOS from the internal src, increase the current position
by the fragment duration to allow correctly restoring the download
position if the bitrate changes.
Use the same properties as uridownloader to keep connections alive
between consecutive fragment downloads.
1) set keep-alive property to true
2) keep the element in READY instead of in NULL
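With souphttpsrc this amounts to roughly the following (assuming the
source exposes the keep-alive and location properties, as souphttpsrc
does):

    #include <gst/gst.h>

    /* keep the connection open between fragments: enable keep-alive and
     * only drop the source to READY, never to NULL */
    static void
    prepare_next_fragment (GstElement * src, const gchar * next_uri)
    {
      g_object_set (src, "keep-alive", TRUE, NULL);

      gst_element_set_state (src, GST_STATE_READY);
      g_object_set (src, "location", next_uri, NULL);
      gst_element_set_state (src, GST_STATE_PLAYING);
    }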
Measure the download bitrate to be able to select
the best playlist.
As the buffers are pushed directly downstream, the push might block.
The time is therefore only measured from the start of the download
until the pad push, and measurement is started again after the push
returns.
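A sketch of pausing the measurement around the push (counters and names
are illustrative):

    #include <gst/gst.h>

    typedef struct
    {
      guint64 bytes;            /* bytes downloaded so far                  */
      gint64 download_time_us;  /* time spent downloading, excluding pushes */
      gint64 clock_start_us;    /* when the current measurement started     */
    } BitrateMeasure;

    /* gst_pad_push() may block on downstream, so stop the clock around it
     * to avoid skewing the measured bitrate */
    static GstFlowReturn
    push_measured (BitrateMeasure * m, GstPad * srcpad, GstBuffer * buffer)
    {
      GstFlowReturn ret;

      m->bytes += gst_buffer_get_size (buffer);
      m->download_time_us += g_get_monotonic_time () - m->clock_start_us;

      ret = gst_pad_push (srcpad, buffer);

      /* restart the clock once downstream has returned */
      m->clock_start_us = g_get_monotonic_time ();
      return ret;
    }

    /* bitrate in bits/s: bytes * 8 * G_USEC_PER_SEC / download_time_us */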
Now the decryption is done buffer by buffer instead of on the
whole fragment at once. As it expects multiples of 16 bytes a
GstAdapter was added to properly chunk the buffers.
Also the last buffer must be resized depending on the value of the
last byte of the fragment, so hlsdemux always keeps a pending buffer
as it doesn't know if it is the last one yet.
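A simplified sketch of the chunking and the final trim; the actual
AES-128-CBC decryption is left abstract and all names are illustrative:

    #include <gst/gst.h>
    #include <gst/base/gstadapter.h>

    /* hypothetical decryption of a buffer whose size is a multiple of 16 */
    extern GstBuffer *decrypt_16byte_multiple (GstBuffer * encrypted);

    /* Push encrypted data through the adapter so decryption always sees a
     * multiple of 16 bytes, and always hold back the newest decrypted
     * buffer, since only the last one of the fragment must be trimmed. */
    static GstBuffer *
    decrypt_chunk (GstAdapter * adapter, GstBuffer * input, GstBuffer ** pending)
    {
      GstBuffer *ready, *decrypted;
      gsize avail;

      gst_adapter_push (adapter, input);
      avail = gst_adapter_available (adapter) & ~((gsize) 15);
      if (avail == 0)
        return NULL;

      decrypted =
          decrypt_16byte_multiple (gst_adapter_take_buffer (adapter, avail));

      ready = *pending;         /* may be NULL on the first chunk */
      *pending = decrypted;
      return ready;             /* safe to push downstream now */
    }

    /* at the end of the fragment, strip the padding indicated by the last
     * byte from the held-back buffer */
    static GstBuffer *
    finish_fragment (GstBuffer ** pending)
    {
      GstBuffer *last = *pending;
      GstMapInfo map;
      guint8 padding;

      *pending = NULL;
      if (last == NULL)
        return NULL;

      gst_buffer_map (last, &map, GST_MAP_READ);
      padding = map.data[map.size - 1];
      gst_buffer_unmap (last, &map);

      gst_buffer_resize (last, 0, gst_buffer_get_size (last) - padding);
      return last;
    }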
The GstElement is directly linked into a ghost pad and
its buffers are pushed downstream as they are received. This way the
buffers stay small, instead of being a whole fragment, which usually
causes extra latency and makes buffering harder.
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
But only add this for non-live playlists. For live playlists we already
have another thread that is periodically updating playlists.
Reason for this is that sometimes downloading a fragment can fail because
the URIs have changed or expired since last time.
Sequence numbers in different playlists are not guaranteed to be the same for the
same position, e.g. fragments could have different durations in different playlists.
In theory we should do exactly the same for live playlists, but unfortunately we can't
because doing this kind of seeking requires the complete playlist since we started
playback. For live playlists, however, the server drops fragments from
the beginning over time, and we have no absolute time references.
Recent refactoring causes this code to be called with either a NULL
fragment, or a non-NULL fragment. In the former case, we don't have
a buffer. In the latter case, the original code dealing with DISCONT
assumed the buffer was valid. Testing for a NULL buffer here thus
does not seem to change the intent, and fixes:
Coverity 1195147