Otherwise we would usually end up looking for segments after the period.
The seek timestamp is relative to the start of the first period, while we
have to select a segment relative to the current period's start.
We didn't do this for fragments that are generated on demand from a template,
only for the other cases when they were all generated upfront. This caused
fragment timestamps to start from 0 again for each new period.
If not set, the timeShiftBufferDepth has a default value of -1.
The standard says that this should be interpreted as infinite.
The gst_mpd_client_check_time_position function incorrectly compares
timeShiftBufferDepth with 0 instead of -1 to determine if it was set.
https://bugzilla.gnome.org/show_bug.cgi?id=751500
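A minimal sketch of the corrected check (field names follow the text above;
the surrounding clamping logic is assumed):

  /* timeShiftBufferDepth defaults to -1 when absent; per the spec this
   * means an infinite time shift buffer */
  if (client->mpd_node->timeShiftBufferDepth == -1)
    return TRUE;  /* not set: any position within the stream is valid */
  /* otherwise check the position against the time shift buffer window */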
The last parameter of the gst_mpd_client_add_media_segment function is a
duration, but when called from gst_mpd_client_setup_representation the
last argument was wrongly set to PeriodEnd.
https://bugzilla.gnome.org/show_bug.cgi?id=751449
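In other words, the call site should pass an actual duration (a sketch
only; the full argument list is omitted here and the exact expression
depends on the call site):

  /* before: a point in time was passed where a duration is expected */
  gst_mpd_client_add_media_segment (..., PeriodEnd);
  /* after: pass a duration instead */
  gst_mpd_client_add_media_segment (..., PeriodEnd - PeriodStart);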
The period start information, calculated in the
gst_mpd_client_setup_media_presentation function, is stored in
stream_period->start. The information read from the XML file and stored in
stream_period->period->start is not changed. If the XML file does not
contain the period start information, stream_period->period->start will
be -1.
The gst_mpd_client_get_next_segment_availability_end_time function wants to
use the period start time, but incorrectly uses stream_period->period->start
(the value from the XML file, which could be -1) instead of
stream_period->start (the computed value).
https://bugzilla.gnome.org/show_bug.cgi?id=751465
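A sketch of the fix (struct fields as named above):

  /* use the computed period start, which is always valid ... */
  GstClockTime period_start = stream_period->start;
  /* ... not stream_period->period->start, which is -1 when the XML
   * carries no start attribute for this period */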
According to ISO/IEC 23009-1:2014(E), chapter 5.3.2.1:
"The Period extends until the PeriodStart of the next Period, or until
the end of the Media Presentation in the case of the last Period."
This means that a configured value for the optional period duration
attribute should be ignored if the next period contains a start attribute,
or if it is the last period and the MPD contains a
mediaPresentationDuration attribute.
https://bugzilla.gnome.org/show_bug.cgi?id=750797
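A sketch of the resulting precedence when computing the period end
(variable names are illustrative, not the exact parser code):

  if (next_period != NULL && next_period_has_start)
    period_end = next_period_start;            /* duration ignored */
  else if (is_last_period && has_media_presentation_duration)
    period_end = media_presentation_duration;  /* duration ignored */
  else if (has_period_duration)
    period_end = period_start + period_duration;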
Added some warning messages in gst_mpd_client_setup_streaming to help
debug situations in which the function returns FALSE.
Renamed a misspelled variable.
https://bugzilla.gnome.org/show_bug.cgi?id=751149
Corrected some comments in the gstmpdparser.h file.
Moved the gst_mpd_client_get_adaptation_sets function so it is grouped
with the other functions from the AdaptationSet group.
https://bugzilla.gnome.org/show_bug.cgi?id=751149
The gst_mpdparser_get_rep_idx_with_max_bandwidth function assumes that
representations are ordered by bandwidth and incorrectly returns the
first one when it actually wants the one with the minimum bandwidth.
Corrected the gst_mpdparser_get_rep_idx_with_max_bandwidth function to
return the correct representation when the max_bandwidth parameter is 0.
https://bugzilla.gnome.org/show_bug.cgi?id=751153
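A sketch of the intended behaviour (GstRepresentationNode and its
bandwidth field as declared in gstmpdparser.h; the loop itself is
illustrative):

  if (max_bandwidth <= 0) {
    /* no limit given: pick the representation with the lowest bandwidth
     * instead of blindly returning the first entry of the list */
    gint idx = 0, lowest_idx = 0;
    guint lowest = G_MAXUINT;
    GList *l;

    for (l = rep_list; l != NULL; l = l->next, idx++) {
      GstRepresentationNode *rep = (GstRepresentationNode *) l->data;
      if (rep->bandwidth < lowest) {
        lowest = rep->bandwidth;
        lowest_idx = idx;
      }
    }
    return lowest_idx;
  }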
Added a check for a_node->ns before accessing a_node->ns->href in
gst_mpdparser_get_xml_node_namespace. a_node->ns can be NULL if the XML
is missing the default namespace.
https://bugzilla.gnome.org/show_bug.cgi?id=750866
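A sketch of the added guard (libxml2 types; the rest of the function is
omitted):

  /* a_node->ns is NULL when the document declares no default namespace,
   * so check it before dereferencing */
  if (a_node->ns == NULL)
    return NULL;
  return xmlMemStrdup ((const char *) a_node->ns->href);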
If the presentationTimeOffset attribute of a DASH manifest contains
a value that is larger than 2^32, gstmpdparser incorrectly calculates
the stream's presentation time offset. This is due to two bugs:
1: Using gst_mpdparser_get_xml_prop_unsigned_integer rather than
gst_mpdparser_get_xml_prop_unsigned_integer_64 to parse the
attribute
2: gst_mpd_client_setup_representation multiplying the value by
GST_SECOND and then dividing by timescale
https://bugzilla.gnome.org/show_bug.cgi?id=750804
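One way to address both points (a sketch; the signature of the 64-bit
parsing helper is assumed from its name and the destination field name is
illustrative; gst_util_uint64_scale avoids the overflow of
value * GST_SECOND):

  guint64 pto = 0;
  /* 1: parse the attribute as a 64-bit value */
  gst_mpdparser_get_xml_prop_unsigned_integer_64 (a_node,
      "presentationTimeOffset", 0, &pto);
  /* 2: convert to nanoseconds without overflowing 64 bits */
  stream->presentation_time_offset =
      gst_util_uint64_scale (pto, GST_SECOND, timescale);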
This reverts commit 37011e5198.
This change was actually completely unnecessary; the streams in question
are marked as static and are not considered live anyway.
Otherwise we'll only get half of its bits printed on 32-bit architectures.
For this, promote the %d-style format strings to something that accepts
64-bit integers by using G_GINT64_MODIFIER.
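For example (a standard GLib formatting idiom, not code specific to this
patch):

  GST_DEBUG ("position: %" G_GINT64_MODIFIER "d", pos);  /* pos: gint64 */

With a plain %d, a 32-bit platform only consumes half of the 64-bit value
and prints garbage.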
Using format strings from an untrusted source without validation is asking
for problems, and at the very least allows remotely crashing your
application, if not worse.
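The safe pattern is to route untrusted text through a fixed format string
(an illustrative example, not the exact call site):

  GST_WARNING ("%s", untrusted_message);   /* safe   */
  GST_WARNING (untrusted_message);         /* unsafe */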
The functions to get the next fragment, next fragment timestamp and to advance
to the next fragment need to work differently when stream->segments is NULL.
Use logic similar to that introduced by commit 2105a310 to perform these
functions.
https://bugzilla.gnome.org/show_bug.cgi?id=749684
When all fragments have already been downloaded on a live stream,
dashdemux would busy-loop, as the default implementation of
has_next_fragment would return TRUE. Implement it to correctly
signal whether adaptivedemux should wait for the manifest update before
trying to get new fragments.
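A rough sketch of the idea (the helper names here are assumptions based on
the description, not verified API):

  static gboolean
  gst_dash_demux_stream_has_next_fragment (GstAdaptiveDemuxStream * stream)
  {
    /* on live streams, returning FALSE once the current segment list is
     * exhausted makes adaptivedemux wait for the next manifest update
     * instead of busy-looping */
    if (stream_is_live (stream))
      return stream_has_more_segments (stream);
    return TRUE;
  }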
When updating the manifest, the timestamps in it might have changed a little
due to rounding and timescale conversions. If the change makes the timestamp
of the current segment go up, it makes dashdemux reposition to the previous
one, causing one extra unnecessary download.
So, when repositioning, add an extra 10 microseconds to cover for such
rounding issues and increase the chance of falling in the same segment.
Additionally, also improve the time used when the client is already past the
last segment. Instead of using the last segment's starting timestamp, use its
final timestamp to make it reposition to the next one and not to the one that
has already been downloaded.
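Conceptually (a sketch; the field names are assumptions):

  /* bias the reposition target slightly forward so that small rounding
   * differences after a manifest update keep us in the same segment */
  target = current_segment_start + 10 * GST_USECOND;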
These functions for directly getting and setting segment indexes
are no longer useful, as we now need two indexes: the repeat index and the
segment index.
The only operations needed are advance_segment, going back to the
first one, or seeking to a timestamp.
Segments are now stored with their repeat counts instead of expanding
them into multiple segments. This caused advancing to the next segment
using a single index to have to iterate over the whole list every time.
This commit addresses that by storing both the segment index and
the repeat index, making advancing to the next segment just an
increment of either the repeat index or the segment index.
Use a single segment to represent it internally to avoid using too
much memory. This has the drawback of requiring a linear search to
find the correct segment to play, but this can be fixed by using
binary searches or by caching the current position and just looking
for the next one.
https://bugzilla.gnome.org/show_bug.cgi?id=748369
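A sketch of the two-index advance (field names are illustrative, modelled
on the description above):

  /* segment_index selects the entry in the segment list,
   * segment_repeat_index selects which repetition of that entry we are in */
  if (stream->segment_repeat_index < segment->repeat) {
    stream->segment_repeat_index++;   /* next repetition of the same entry */
  } else {
    stream->segment_repeat_index = 0;
    stream->segment_index++;          /* next entry in the list */
  }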
There is a playback error when trying to play content that has the
'application' mimeType. This commit prevents that error by setting up
such streams as text streams.
https://bugzilla.gnome.org/show_bug.cgi?id=747525
Bitrate-limit is already available in the baseclass and, even though
the bandwidth-usage name is better, hls and mss already used
bitrate-limit. This patch deprecates bandwidth-usage and maps
it to the baseclass bitrate-limit property.
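One possible way of mapping the deprecated property in set_property (a
sketch; the property id and value type are assumptions):

  case PROP_BANDWIDTH_USAGE:
    /* deprecated: forward to the baseclass bitrate-limit property */
    g_object_set (object, "bitrate-limit", g_value_get_float (value), NULL);
    break;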
By implementing get_live_seek_range.
As shown by:
gst-validate-1.0 playbin \
uri=http://dev-iplatforms.kw.bbc.co.uk/dash/news24-avc3/news24.php
This patch handles live seeking by setting a live seek range
going from now - timeShiftBufferDepth to now.
The interesting thing with this stream is that one can actually
request fragments all the way back to availabilityStartTime, but it seems
quite clear in the spec that content is only guaranteed to exist up to
timeShiftBufferDepth.
One can test live seeking this way:
gst-validate-1.0 playbin \
uri=http://dev-iplatforms.kw.bbc.co.uk/dash/news24-avc3/news24.php \
--set-scenario seek_back.scenario
with scenario being:
description, seek=true
seek, playback-time=position+5.0, start="position-600.0",
flags=accurate+flush
This example will play the stream, wait for five seconds, then seek back
to a position 10 minutes earlier.
https://bugzilla.gnome.org/show_bug.cgi?id=744362
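The core of the computation looks roughly like this (a sketch; the
variable names are illustrative and assume the MPD values have already
been converted to GstClockTime):

  /* seekable range of the live stream, in stream time */
  *stop = now;                           /* the live edge */
  if (time_shift_buffer_depth == GST_CLOCK_TIME_NONE)
    *start = 0;                          /* not set: infinite DVR window */
  else
    *start = now - time_shift_buffer_depth;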
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is exposed now. The
default implementation gets data from the stream adapter (all available)
and pushes it.
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request.
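A sketch of what the default data_received implementation then boils down
to (the exact function and field names are assumptions based on the
description above):

  static GstFlowReturn
  data_received_default (GstAdaptiveDemux * demux,
      GstAdaptiveDemuxStream * stream)
  {
    /* take everything buffered so far and push it downstream */
    GstBuffer *buffer = gst_adapter_take_buffer (stream->adapter,
        gst_adapter_available (stream->adapter));

    return gst_adaptive_demux_stream_push_buffer (stream, buffer);
  }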