Fixed up the error-handling code when downloading fragments.
Modified the error-handling code to use positive logic when
testing for cancellation of the download loop.
https://bugzilla.gnome.org/show_bug.cgi?id=701404
There is an issue for live streams where download_loop will keep
downloading segments until it gets a 404 error for a segment
that has not yet been published. This is a problem because this
request for a segment that doesn't exist will propagate all the
way back to the origin server(s). This means that dashdemux causes
extra load on the origin server(s) for segments that aren't yet
available.
This patch uses availabilityStartTime, the period and the host's idea of
UTC to decide whether a fragment is available to be requested from an HTTP
server, and filters out requests for fragments that are not yet available.
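As a rough sketch of the check involved (names and helper are illustrative,
not the actual dashdemux code), a fragment is only requested once the host's
UTC clock has passed availabilityStartTime plus the period start plus the
end time of the fragment:

    #include <gst/gst.h>

    /* Illustrative only: returns TRUE if the wall clock (UTC) has passed
     * availabilityStartTime + period start + fragment end, i.e. the
     * fragment should be published by now and may be requested. */
    static gboolean
    fragment_is_available (GDateTime *availability_start_time,
        GstClockTime period_start, GstClockTime fragment_end)
    {
      GDateTime *now_utc = g_date_time_new_now_utc ();
      GDateTime *available_at;
      gboolean available;

      available_at = g_date_time_add_seconds (availability_start_time,
          (gdouble) (period_start + fragment_end) / GST_SECOND);
      available = g_date_time_compare (now_utc, available_at) >= 0;

      g_date_time_unref (available_at);
      g_date_time_unref (now_utc);

      return available;
    }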
https://bugzilla.gnome.org/show_bug.cgi?id=701404
liveadder sometimes calculates the offsets incorrectly before adding. The
resulting errors can easily be heard when mixing silence with a sine.
I'm not sure what the exact conditions are to trigger this, but it
definitely happens when the buffers of two streams have different durations
and, for one stream, buffer length and duration don't match exactly because
of rounding errors (e.g. duration=0:00:00.021333333).
I have to admit I got lost in the math somewhere, but it seems that not
rounding in gst_live_adder_length_from_duration() causes 1-sample overlaps
in consecutive buffers from the same stream.
When using gst_util_uint64_scale_int_round() instead of just truncating,
the sine sounds correct again.
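For illustration (a minimal sketch, not the actual liveadder code): at
48 kHz a duration of 0:00:00.021333333 is 1023.999984 samples, so truncating
yields 1023 samples while rounding yields the intended 1024.

    #include <gst/gst.h>

    /* Sketch: convert a buffer duration to a length in samples.
     * Truncating can come up one sample short, which makes consecutive
     * buffers from the same stream overlap by one sample. */
    static guint
    length_from_duration (GstClockTime duration, gint rate)
    {
      /* old behaviour (truncates):
       * return gst_util_uint64_scale_int (duration, rate, GST_SECOND); */
      return gst_util_uint64_scale_int_round (duration, rate, GST_SECOND);
    }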
https://bugzilla.gnome.org/show_bug.cgi?id=708345
It is quite possible that we might get PTS/DTS before the first
PCR/Offset observation.
In order to end up with valid timestamps we wait until at least one
stream has been able to get a proper running-time for any PTS/DTS.
Until then, we queue up the pending buffers to push out.
Once we see a first valid timestamp, we re-evaluate the amount of
running-time elapsed (based on the returned initial running-time and amount
of data/DTS queued up) for any given stream.
Taking the largest elapsed time, we set that on the packetizer as the
initial offset and recalculate the running-time PTS/DTS of all pending
buffers.
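A rough sketch of the idea (types and names are illustrative, not the
actual tsdemux code): the elapsed running-time of each stream is estimated
from the DTS range of its queued buffers, and the largest value becomes the
packetizer's initial offset.

    #include <gst/gst.h>

    /* Illustrative only. */
    typedef struct
    {
      GstClockTime first_dts;   /* DTS of the first queued buffer */
      GstClockTime last_dts;    /* DTS of the last queued buffer */
    } SketchStream;

    static GstClockTime
    compute_initial_offset (GList *streams)
    {
      GstClockTime max_elapsed = 0;
      GList *l;

      for (l = streams; l != NULL; l = l->next) {
        SketchStream *stream = l->data;
        GstClockTime elapsed;

        if (stream->first_dts == GST_CLOCK_TIME_NONE ||
            stream->last_dts == GST_CLOCK_TIME_NONE)
          continue;

        /* running-time covered by this stream's queued buffers */
        elapsed = stream->last_dts - stream->first_dts;
        if (elapsed > max_elapsed)
          max_elapsed = elapsed;
      }

      return max_elapsed;
    }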
Note: The buffer queueing system can also be used later on for the
dvb fast start proposal (where we queue up all stream packets before
seeing PAT/PMT and then push them once we know if they belong to the
chosen program).
This allows:
* Better duration estimation
* More accurate PCR location
* Overall more accurate running-time location and calculation
Location and values of PCR are recorded in groups (PCROffsetGroup)
with notable PCR/Offset observations in them (when bitrate changed
for example). PCR and offset are stored as 32-bit values to
reduce memory usage (they are differences against that group's
first_{pcr|offset}).
Those groups each contain a global PCR offset (pcr_offset) which
indicates how far in the stream that group is.
Whenever new PCR values are observed, we store them in a sliding
window estimator (PCROffsetGroupCurrent).
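A simplified sketch of the bookkeeping described above (not the exact
structures used in the packetizer):

    #include <gst/gst.h>

    typedef struct
    {
      guint64  first_pcr;      /* first PCR observed in this group */
      guint64  first_offset;   /* byte offset of that first PCR */
      guint64  pcr_offset;     /* how far in the overall stream this group is */

      /* notable observations, stored as 32-bit deltas against
       * first_pcr / first_offset to keep memory usage down */
      guint32 *pcrs;
      guint32 *offsets;
      guint    nb_observations;
    } SketchPCROffsetGroup;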
When a reset/wrapover/gap is detected, we close the current group with
current values and start a new one (the pcr_offset of that new group
is also calculated).
When a notable change in bitrate is observed (+/- 10%), we record
new values in the current group. This is a compromise between
storing all PCR/offset observations and none, while at the same time
providing better information for running-time<=>offset calculation
in VBR streams.
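The check itself can be as simple as the following sketch (illustrative
only):

    #include <glib.h>

    /* Record a new observation when the local bitrate is more than 10%
     * away from the one implied by the last recorded values. */
    static gboolean
    bitrate_changed_notably (guint64 current_bitrate, guint64 recorded_bitrate)
    {
      guint64 diff = current_bitrate > recorded_bitrate ?
          current_bitrate - recorded_bitrate :
          recorded_bitrate - current_bitrate;

      return diff * 10 > recorded_bitrate;
    }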
Whenever a new non-contiguous group is started (due to seeking for example)
we re-evaluate the pcr_offset of each group. This allows detecting PCR
wrapover/reset as quickly as possible.
When wanting to find the offset of a certain running-time, one can
iterate the groups by looking at the pcr_offset (which in essence *is*
the running-time of that group in the overall stream).
Once a group (or neighbouring groups, if the running-time is between two
groups) is found, one can use the recorded values to find the most
accurate offset.
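In terms of the sketch structure above, the group lookup boils down to
something like this (illustrative only); the observations recorded in the
selected group are then used to refine the byte offset:

    static SketchPCROffsetGroup *
    find_group_for_running_time (SketchPCROffsetGroup **groups,
        guint n_groups, GstClockTime running_time)
    {
      guint i;

      for (i = 0; i < n_groups; i++) {
        /* pcr_offset is in essence the running-time at which the group
         * starts, so stop at the last group not past the target */
        if (i + 1 == n_groups || running_time < groups[i + 1]->pcr_offset)
          return groups[i];
      }

      return NULL;   /* no groups recorded yet */
    }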
Right now this code is only used in pull-mode, but could also
be activated later on for any seekable stream, like live timeshift
with queue2.
Future improvements:
* some heuristics to "compress" the stored values in groups so as to keep
the memory usage down while still keeping a decent amount of notable
points.
* After a seek compare expected and obtained PCR/Offset and if the
difference is too big, re-calculate position with newly observed
values and seek to that more accurate position.
Note that this code will *not* provide keyframe-accurate seeking, but
will allow a much more accurate PCR/running-time/offset location on
any random stream.
For past (observed) values it will be as accurate as can be.
For future values it will be better than the current situation.
Finally, the more you seek, the more accurate your positioning will be.
On some live HLS streams, gst_hls_demux_switch_playlist caused
assertion failures because it tried to dereference a NULL fragment.
This happened because g_queue_peek_tail sometimes returned NULL and
that case was not being checked.
This patch does two things:
* move the g_queue_peek_tail call inside the semaphore protection
* check whether g_queue_peek_tail returns NULL, as sketched below
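Roughly, the peek now looks like this (a minimal sketch with illustrative
names and a plain mutex standing in for the semaphore, not the exact
hlsdemux code):

    #include <gst/gst.h>

    /* Illustrative only: peek the tail of the fragment queue while the
     * lock is held, and bail out when the queue is empty instead of
     * dereferencing a NULL fragment. */
    static gboolean
    peek_last_fragment (GQueue *queue, GMutex *lock, gpointer *fragment)
    {
      g_mutex_lock (lock);
      *fragment = g_queue_peek_tail (queue);
      if (*fragment == NULL) {
        g_mutex_unlock (lock);
        return FALSE;
      }
      /* ... inspect *fragment while still protected ... */
      g_mutex_unlock (lock);
      return TRUE;
    }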
https://bugzilla.gnome.org/show_bug.cgi?id=708849
The previous code could enter an infinite loop because the adapter state
could get out of sync with its mapped data state after sync was lost.
The code was pretty confusing so it's been rewritten to be clearer.
The easiest way to reproduce the infinite loop is to use the breakmydata
element before tsdemux to trigger a resync.
https://bugzilla.gnome.org/show_bug.cgi?id=708161