Only reset the decryption engine on the first buffer of a fragment,
not again for the second buffer. This fixes corruption of the second
buffer of a fragment.
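A minimal sketch of the intended flow (the first_buffer flag and the
decrypt-start helper are hypothetical names):

    /* Reset the cipher state only when a new fragment starts */
    if (demux->first_buffer) {
      gst_hls_demux_decrypt_start (demux, key_data, iv_data);
      demux->first_buffer = FALSE;
    }
    /* Later buffers of the same fragment reuse the running CBC state,
     * so the chaining (and thus the decrypted data) stays intact. */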
https://bugzilla.gnome.org/show_bug.cgi?id=731968
This can happen if the playlists have moved because the variant
playlist is now redirected to another target. This currently only works
as long as the referenced playlists don't change relative to the
variant playlist, and the new location is purely the result of a new
path introduced by a new redirection target of the variant playlist, or
a new redirection target of the playlist itself.
https://bugzilla.gnome.org/show_bug.cgi?id=731164
Only the first buffer of a fragment has its timestamp set, so only
update segment.position when pushing those buffers, to avoid setting
the position to GST_CLOCK_TIME_NONE.
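Roughly, assuming the usual GstSegment on the demuxer, the guard looks
like this sketch:

    /* Only the fragment's first buffer carries a valid timestamp */
    if (GST_BUFFER_PTS_IS_VALID (buffer))
      demux->segment.position = GST_BUFFER_PTS (buffer);
    ret = gst_pad_push (demux->srcpad, buffer);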
https://bugzilla.gnome.org/show_bug.cgi?id=729364
Otherwise we will never recover from previous errors, and especially
will never start again after a flushing seek if downstream returned
GST_FLOW_FLUSHING to us.
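A sketch of the reset, with last_ret as a hypothetical field holding
the last combined flow return:

    case GST_EVENT_FLUSH_STOP:
      /* Forget earlier errors such as GST_FLOW_FLUSHING so the
       * download loop can start again after the seek. */
      demux->last_ret = GST_FLOW_OK;
      break;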
hlsdemux can't rely on the source to push flushes on a seek in READY,
as that might not make sense. So always flush the internal proxy pads
by pushing flush events from the source's src pad.
Also, as seeking is no longer required, only seek if there really is a
byte range to be used. And store a ref to the source's src pad to avoid
doing get_static_pad for every fragment.
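A sketch of flushing from that stored pad reference (src_srcpad being
the cached ref):

    /* Flush the internal proxy pads ourselves instead of relying on
     * the source element to do it for a seek in READY. */
    gst_pad_push_event (demux->src_srcpad, gst_event_new_flush_start ());
    gst_pad_push_event (demux->src_srcpad, gst_event_new_flush_stop (TRUE));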
In the decryption scenario, a buffer is always stored to be sent later,
waiting for more data or EOS so that the final bytes can be stripped if
requested. In case an error happened, this buffer can be ignored and
not pushed downstream.
Handle some more error cases:
1) When the source element fails to go to ready
2) When decryption fails
3) When there is no source to handle a specific URI
4) When the URI is invalid
Set up a message handling function to catch errors from the internal
source and store the last return code to identify error situations
when returning from a fragment download.
Also move the duration increase to after the download, when we know
whether it was successful or not.
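A sketch of such a handler installed on the internal source's bus (the
last_ret field is a hypothetical name for the stored return code):

    static GstBusSyncReply
    on_source_message (GstBus * bus, GstMessage * msg, gpointer user_data)
    {
      GstHLSDemux *demux = user_data;

      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR)
        demux->last_ret = GST_FLOW_ERROR;   /* remember the failure */

      gst_message_unref (msg);
      return GST_BUS_DROP;                  /* don't forward further */
    }

    /* installed with:
     * gst_bus_set_sync_handler (bus, on_source_message, demux, NULL); */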
When using the internal source, hlsdemux doesn't know the caps of the
input before adding the pad, so remove the arguments that would use
them, as they are always NULL.
And use a specific flag to signal when a pad switch is required. Using
the discont flag is a bad idea now, because when a fragment download
fails it would lead to exposing a pad group without any data, causing
decodebin to abort.
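A sketch of the flag-based switch (names hypothetical):

    /* Switch pads only when explicitly requested, not on DISCONT */
    if (demux->need_pad_switch) {
      gst_hls_demux_switch_pads (demux);   /* expose the new pad group */
      demux->need_pad_switch = FALSE;
    }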
When receiving EOS from the internal src, increase the current position
by the fragment duration to allow correctly restoring the download
position if the bitrate changes.
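As a sketch, in the EOS handling (field names hypothetical):

    case GST_EVENT_EOS:
      /* The whole fragment was pushed; advance the position so a
       * bitrate switch resumes from the right fragment. */
      demux->current_position += demux->current_fragment_duration;
      break;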
Use the same properties as uridownloader to keep connections alive
between consecutive fragment downloads:
1) set the keep-alive property to TRUE
2) keep the element in READY instead of in NULL
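Roughly, assuming a souphttpsrc-style source that exposes keep-alive
(a sketch, not the exact code):

    /* 1) ask the element to reuse its connection, if supported */
    if (g_object_class_find_property (G_OBJECT_GET_CLASS (src),
            "keep-alive"))
      g_object_set (src, "keep-alive", TRUE, NULL);

    /* 2) between fragments, go to READY instead of NULL so the
     * connection stays open */
    gst_element_set_state (src, GST_STATE_READY);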
Measure the download bitrate to be able to select the best playlist.
As the buffers are pushed directly downstream, the push might block, so
the time is only measured from the start of the download until the pad
push, and the measurement is restarted after the push returns.
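A sketch of the bookkeeping (field and variable names hypothetical):

    /* Stop the clock before pushing; the push may block downstream */
    demux->download_time += g_get_monotonic_time () - demux->download_start;
    demux->bytes_downloaded += gst_buffer_get_size (buffer);

    ret = gst_pad_push (demux->srcpad, buffer);

    /* Restart the clock once the push returns */
    demux->download_start = g_get_monotonic_time ();

    /* bits per second, used later to pick the playlist */
    bitrate = gst_util_uint64_scale (demux->bytes_downloaded * 8,
        G_USEC_PER_SEC, demux->download_time);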
Now the decryption is done buffer by buffer instead of on the whole
fragment at once. As the cipher expects multiples of 16 bytes, a
GstAdapter was added to chunk the buffers properly.
Also, the last buffer must be resized depending on the value of the
last byte of the fragment, so hlsdemux always keeps a pending buffer,
as it doesn't know yet whether it is the last one.
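A sketch of the chunking, assuming the 16-byte AES block size and
hypothetical helper/field names:

    gsize avail;

    gst_adapter_push (demux->adapter, buffer);

    /* Only take a multiple of 16 bytes; the rest waits for more data */
    avail = gst_adapter_available (demux->adapter) & ~15;
    if (avail > 0) {
      GstBuffer *chunk = gst_adapter_take_buffer (demux->adapter, avail);
      GstBuffer *decrypted = decrypt_buffer (demux, chunk);

      /* Hold the newest data back: if it turns out to be the last
       * buffer of the fragment, its padding must be stripped first. */
      if (demux->pending_buffer)
        ret = gst_pad_push (demux->srcpad, demux->pending_buffer);
      demux->pending_buffer = decrypted;
    }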
The GstElement is linked directly into a ghost pad and its buffers are
pushed downstream as they are received. This way the buffers stay
small, rather than being a whole fragment, which usually causes extra
latency and makes buffering harder.
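In sketch form, the pad setup looks roughly like this:

    /* Proxy the internal source's src pad through a ghost pad, so
     * buffers are pushed downstream as soon as they are received. */
    GstPad *pad = gst_element_get_static_pad (demux->src, "src");
    GstPad *ghost = gst_ghost_pad_new ("src", pad);

    gst_pad_set_active (ghost, TRUE);
    gst_element_add_pad (GST_ELEMENT (demux), ghost);
    gst_object_unref (pad);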
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
But only add this for non-live playlists. For live playlists we already
have another thread that is periodically updating playlists.
The reason for this is that downloading a fragment can sometimes fail
because the URIs have changed or expired since last time.
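A sketch of the retry path (function names approximate the hlsdemux
helpers, signatures simplified):

    if (!gst_hls_demux_get_next_fragment (demux, &err)) {
      if (!gst_m3u8_client_is_live (demux->client)) {
        /* URIs may have changed or expired; refresh and retry once */
        gst_hls_demux_update_playlist (demux, FALSE, &err);
        gst_hls_demux_get_next_fragment (demux, &err);
      }
    }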
Sequence numbers in different playlists are not guaranteed to be the same for the
same position, e.g. fragments could have different durations in different playlists.
In theory we should do exactly the same for live playlists, but
unfortunately we can't, because that kind of seeking requires the
complete playlist since we started playback. For live playlists,
however, the server drops fragments from the beginning over time, and
we have no absolute time references.
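For non-live playlists the target fragment can thus be found by walking
the playlist and summing durations, sketched here with the m3u8 types
(field names approximate):

    GstClockTime pos = 0;
    GList *walk;

    /* Find the fragment that contains the seek position */
    for (walk = playlist->files; walk; walk = walk->next) {
      GstM3U8MediaFile *file = walk->data;

      if (pos + file->duration > seek_time)
        break;              /* the seek lands inside this fragment */
      pos += file->duration;
    }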
Recent refactoring causes this code to be called with either a NULL
fragment or a non-NULL fragment. In the former case, we don't have a
buffer. In the latter case, the original code dealing with DISCONT
assumed the buffer was valid. Testing for a NULL buffer here thus does
not seem to change the intent, and fixes:
Coverity 1195147
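The added guard is essentially this sketch:

    /* Only touch the DISCONT flag when there actually is a buffer */
    if (buffer != NULL && demux->discont) {
      GST_BUFFER_FLAG_SET (buffer, GST_BUFFER_FLAG_DISCONT);
      demux->discont = FALSE;
    }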
...instead of adding them from the start of the playlist every time.
This, among other things, fixes timestamps for live streams, where the
playlist is a kind of ring buffer of fragments, and thus adding from
the beginning of the playlist would miss the past fragments.
https://bugzilla.gnome.org/show_bug.cgi?id=724983
We now download fragments as fast as possible and push them downstream
while another thread is just responsible for updating live playlists
every now and then.
This simplifies the code a lot and together with the new buffering
mode for adaptive streams in multiqueue makes streams start much faster.
Also simplify threading a bit and hopefully make the GstTask usage safer.
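A sketch of the two-task split (loop function names are assumptions):

    /* One task downloads fragments and pushes them downstream... */
    demux->stream_task =
        gst_task_new ((GstTaskFunction) gst_hls_demux_stream_loop, demux,
        NULL);
    gst_task_set_lock (demux->stream_task, &demux->stream_lock);

    /* ...while another only refreshes live playlists periodically. */
    demux->updates_task =
        gst_task_new ((GstTaskFunction) gst_hls_demux_updates_loop, demux,
        NULL);
    gst_task_set_lock (demux->updates_task, &demux->updates_lock);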
And only afterwards wait until a fragment was played. Otherwise we're keeping
our cache most of the time at "fragments-cache" fragments minus one.
Also allow setting "fragments-cache" to 1 now to start playback even faster.
nettle is used by newer versions of gnutls, while older versions of gnutls
used libgcrypt. Support both for now as not every distro has nettle yet.
nettle is preferred as it is more efficient to use and much smaller.
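In sketch form, decrypting a chunk of 'length' bytes (a multiple of 16)
under either backend looks like this (key/IV handling simplified):

    #ifdef HAVE_NETTLE
    #include <nettle/aes.h>
    #include <nettle/cbc.h>

    struct CBC_CTX (struct aes_ctx, AES_BLOCK_SIZE) ctx;

    aes_set_decrypt_key (&ctx.ctx, 16, key);
    CBC_SET_IV (&ctx, iv);
    CBC_DECRYPT (&ctx, aes_decrypt, length, dest, src);
    #else
    #include <gcrypt.h>

    gcry_cipher_hd_t handle;

    gcry_cipher_open (&handle, GCRY_CIPHER_AES128, GCRY_CIPHER_MODE_CBC, 0);
    gcry_cipher_setkey (handle, key, 16);
    gcry_cipher_setiv (handle, iv, 16);
    gcry_cipher_decrypt (handle, dest, length, src, length);
    gcry_cipher_close (handle);
    #endif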
This patch fixes three memory leaks in hlsdemux, one that occurs
during normal operation and two that occur during error conditions.
The gst_hls_demux_get_next_fragment function calls
gst_fragment_get_buffer which increments the reference count
on the buffer but gst_hls_demux_get_next_fragment never calls unref on
the buffer. This means that the reference count for each downloaded
fragment never gets to zero and so its memory is never released.
This patch adds a call to gst_buffer_unref after the flags have been
updated on the buffer.
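The fix, in sketch form:

    buffer = gst_fragment_get_buffer (download);
    /* ... update the buffer flags as needed ... */
    /* gst_fragment_get_buffer returned an extra reference; drop it so
     * the refcount can actually reach zero later. */
    gst_buffer_unref (buffer);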
There is a leak-on-error in gst_hls_demux_decrypt_fragment if it fails
to download the key file. If the key fails to download, NULL is
returned without unreffing the encrypted fragment. The
semantics of gst_hls_demux_decrypt_fragment is that it takes ownership
of the encrypted fragment and releases it before returning.
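Sketched out, the error path becomes (variable names approximate):

    key_fragment = gst_uri_downloader_fetch_uri (demux->downloader, key_uri);
    if (key_fragment == NULL) {
      /* We own the encrypted fragment; release it on this path too */
      g_object_unref (encrypted_fragment);
      return NULL;
    }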
There is a leak-on-error in gst_hls_src_buf_to_utf8_playlist in the
unlikely event that the gst_buffer_map fails. In the "happy path"
operation of gst_hls_src_buf_to_utf8_playlist the buffer gets an unref
before the function returns, therefore the error condition must do the
same.
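That is, a sketch of the error path:

    if (!gst_buffer_map (buf, &info, GST_MAP_READ)) {
      /* Mirror the happy path, which unrefs the buffer before
       * returning, so no reference is leaked on failure. */
      gst_buffer_unref (buf);
      return NULL;
    }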
https://bugzilla.gnome.org/show_bug.cgi?id=710881
On some live HLS streams, gst_hls_demux_switch_playlist causes
assertion failures because it tries to dereference a NULL fragment.
This is because g_queue_peek_tail sometimes returns NULL, and this case
was not being checked.
This patch does two things:
* move the g_queue_peek_tail inside the semaphore protection
* check if g_queue_peek_tail returns NULL
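In sketch form (lock and field names hypothetical):

    g_mutex_lock (&demux->queue_lock);
    fragment = g_queue_peek_tail (demux->queue);
    if (fragment != NULL) {
      /* safe to dereference while we hold the lock */
      end_time = fragment->stop_time;
    }
    g_mutex_unlock (&demux->queue_lock);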
https://bugzilla.gnome.org/show_bug.cgi?id=708849
Every encrypted fragment will be a multiple of 128 bits; the last byte
contains the number of bytes that were added as padding at the end and
should be removed.
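A sketch of stripping the padding from the final decrypted buffer:

    GstMapInfo map;
    gsize unpadded;

    gst_buffer_map (buffer, &map, GST_MAP_READ);
    /* The last byte holds the number of padding bytes to drop */
    unpadded = map.size - map.data[map.size - 1];
    gst_buffer_unmap (buffer, &map);
    gst_buffer_resize (buffer, 0, unpadded);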
https://bugzilla.gnome.org/show_bug.cgi?id=701673