There are several cases where an HLS server could temporarily have wrong
fragments, or reconfigure the playlist. In those cases, when we get
fragment download failures, we *really* want to wait a bit (for the next
playlist update) before retrying to get fragments.
Previously this method first checked whether there were next fragments
(according to the previous manifest update) before waiting for the next update.
The problem is that a temporary failure on the server is uncorrelated to
whether the manifest contains next fragments or not.
We need to keep the active buffer (the one we have retrieved a
texture id from), otherwise it's racy and upstream may upload
new content before we have rendered or during later redisplay.
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
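For illustration, a minimal sketch (assuming the GStreamer 1.x meta API;
foreach_metadata is an illustrative name, not necessarily the function added
by this change) of copying only the metas that have no tags or whose only
tag is the video tag:

    #include <gst/gst.h>
    #include <gst/video/gstvideometa.h>

    static gboolean
    foreach_metadata (GstBuffer * inbuf, GstMeta ** meta, gpointer user_data)
    {
      GstBuffer *outbuf = user_data;
      const GstMetaInfo *info = (*meta)->info;
      const gchar *const *tags = gst_meta_api_type_get_tags (info->api);

      /* copy the meta if it has no tags, or if its only tag is "video" */
      if (!tags || (g_strv_length ((gchar **) tags) == 1
              && gst_meta_api_type_has_tag (info->api,
                  g_quark_from_string (GST_META_TAG_VIDEO_STR)))) {
        GstMetaTransformCopy copy_data = { FALSE, 0, -1 };

        if (info->transform_func)
          info->transform_func (outbuf, *meta, inbuf,
              _gst_meta_transform_copy, &copy_data);
      }
      return TRUE;
    }

    /* usage: gst_buffer_foreach_meta (inbuf, foreach_metadata, outbuf); */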
A race condition in the state change function may cause buffers to be
unreffed while they are still used by the streaming thread in
gst_rtp_h265_pay_send_vps_sps_pps() resulting in a crash. Chain up to the
parent class first in the state change function to make sure streaming
has stopped and only then free those buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=741381
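As a sketch of that ordering (not the exact patch;
gst_rtp_h265_pay_clear_vps_sps_pps() is a hypothetical helper that unrefs
the cached buffers):

    static GstStateChangeReturn
    gst_rtp_h265_pay_change_state (GstElement * element,
        GstStateChange transition)
    {
      GstRtpH265Pay *rtph265pay = GST_RTP_H265_PAY (element);
      GstStateChangeReturn ret;

      /* chain up first: when this returns for PAUSED->READY the pads have
       * been deactivated, so the streaming thread can no longer be inside
       * gst_rtp_h265_pay_send_vps_sps_pps() using those buffers */
      ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);

      switch (transition) {
        case GST_STATE_CHANGE_PAUSED_TO_READY:
          gst_rtp_h265_pay_clear_vps_sps_pps (rtph265pay);  /* hypothetical */
          break;
        default:
          break;
      }

      return ret;
    }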
The SPS struct might be filled out by a call to
gst_h264_parser_parse_subset_sps, which fills out
dynamically allocated data and requires a call
to gst_h264_sps_clear() to free it. Also make sure
to clear out any allocated SPS data when returning
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=753306
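A sketch of the ownership rule (the parser call is abridged and its exact
argument list is an assumption that may vary between GStreamer versions);
the point is that gst_h264_sps_clear() must run on the success and the error
paths alike once the SPS may hold allocated data:

    static gboolean
    parse_and_use_subset_sps (GstH264NalParser * parser, GstH264NalUnit * nalu)
    {
      GstH264SPS sps = { 0, };

      /* argument list abridged; the exact signature varies across versions */
      if (gst_h264_parser_parse_subset_sps (parser, nalu, &sps) !=
          GST_H264_PARSER_OK)
        return FALSE;

      if (!use_subset_sps (&sps)) {   /* hypothetical further processing */
        gst_h264_sps_clear (&sps);    /* clear allocated SPS data on error too */
        return FALSE;
      }

      gst_h264_sps_clear (&sps);      /* frees the dynamically allocated fields */
      return TRUE;
    }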
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
The urn:mpeg:dash:utc:http-head:2014 method of time synchronisation
uses an HTTP HEAD request to a specified URL and then parses the
Date: HTTP response header.
This commit adds support to dashdemux for this method of time
synchronisation by making a HEAD request and then parsing the Date:
response.
This commit adds support to gstfragment to return the HTTP headers
and to uridownloader to support HEAD requests. To avoid creating a
new API, the RANGE get function is re-used (abused?) with start=-1
and end=-1 to indicate a HEAD request.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
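For illustration only (this is not the uridownloader code path, which reuses
the RANGE fetch with start=end=-1), the http-head method amounts to something
like this libsoup 2.4 sketch:

    #include <libsoup/soup.h>

    /* Returns the server's idea of "now" as a Unix timestamp, or -1 on error. */
    static time_t
    fetch_server_time_via_head (const gchar * url)
    {
      SoupSession *session = soup_session_new ();
      SoupMessage *msg = soup_message_new ("HEAD", url);
      time_t result = -1;

      if (msg != NULL
          && SOUP_STATUS_IS_SUCCESSFUL (soup_session_send_message (session, msg))) {
        /* only headers are transferred for a HEAD request */
        const gchar *hdr =
            soup_message_headers_get_one (msg->response_headers, "Date");
        SoupDate *date = hdr ? soup_date_new_from_string (hdr) : NULL;

        if (date != NULL) {
          result = soup_date_to_time_t (date);
          soup_date_free (date);
        }
      }

      g_clear_object (&msg);
      g_object_unref (session);
      return result;
    }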
This commit addresses the following items from the code review:
use a portable way to define NTP_TO_UNIX_EPOCH,
fix memory leak on error, and
add documentation to UTCTiming parse functions
Using LL is not portable, so G_GUINT64_CONSTANT needs to be used instead.
If an error occurs during DNS resolution, the GError was not being
released, causing a memory leak.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
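Sketches of the first two items (2208988800 is the well-known number of
seconds between the NTP epoch, 1900-01-01, and the Unix epoch, 1970-01-01;
the resolver snippet is illustrative rather than the exact dashdemux code):

    #include <gio/gio.h>
    #include <gst/gst.h>

    /* portable 64-bit constant instead of a compiler-specific LL suffix */
    #define NTP_TO_UNIX_EPOCH G_GUINT64_CONSTANT (2208988800)

    /* release the GError on DNS resolution failure instead of leaking it */
    static GList *
    resolve_host (GResolver * resolver, const gchar * host)
    {
      GError *err = NULL;
      GList *addresses = g_resolver_lookup_by_name (resolver, host, NULL, &err);

      if (addresses == NULL) {
        GST_WARNING ("failed to resolve %s: %s", host, err->message);
        g_clear_error (&err);
      }
      return addresses;
    }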
Unless the DASH client can compensate for the difference between its
clock and the clock used by the server, the client might request
fragments that are either not yet on the server or fragments that have
already expired from the server. This is an issue because these
requests can propagate all the way back to the origin.
ISO/IEC 23009-1:2014/Amd 1 [PDAM1] defines a new UTCTiming element to allow
a DASH client to track the clock used by the server generating the
DASH stream. Multiple UTCTiming elements might be present, to indicate
support for multiple methods of UTC time gathering. Each element can
contain a white space separated list of URLs that can be contacted
to discover the UTC time from the server's perspective.
This commit provides parsing of UTCTiming elements, unit tests of this
parsing and a function to poll a time server. This function
supports the following methods:
urn:mpeg:dash:utc:ntp:2014
urn:mpeg:dash:utc:http-xsdate:2014
urn:mpeg:dash:utc:http-iso:2014
urn:mpeg:dash:utc:http-ntp:2014
The manifest update task is used to poll the clock time server,
to save having to create a new thread.
When choosing the starting fragment number and when waiting for a
fragment to become available, the difference between the server's idea
of UTC and the client's idea of UTC is taken into account. For example,
if the server's time is behind the client's idea of UTC, we wait for
longer before requesting a fragment.
[PDAM1]: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=66068
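A rough sketch of that compensation (illustrative, not the dashdemux API),
where clock_drift is the measured difference, server UTC minus client UTC,
in microseconds:

    static GDateTime *
    get_server_now (GTimeSpan clock_drift)
    {
      GDateTime *client_now = g_date_time_new_now_utc ();
      GDateTime *server_now = g_date_time_add (client_now, clock_drift);

      g_date_time_unref (client_now);
      /* a negative drift (server behind the client) yields an earlier
       * "now", so the client ends up waiting longer for a fragment */
      return server_now;
    }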
dashdemux: support NTP time servers in UTCTiming elements
Use the gst_ntp_clock to support the use of an NTP server.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
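A hedged sketch of how such an NTP query can look with the GstNetClock API
(the server address and port below are placeholders):

    #include <gst/net/gstnet.h>

    static GstClockTime
    query_ntp_server_time (void)
    {
      /* address and port are placeholders */
      GstClock *ntp_clock =
          gst_ntp_clock_new ("dash-utc-ntp", "192.0.2.10", 123, 0);
      GstClockTime server_time = GST_CLOCK_TIME_NONE;

      /* block (with a timeout) until the clock has synchronised */
      if (gst_clock_wait_for_sync (ntp_clock, 5 * GST_SECOND))
        server_time = gst_clock_get_time (ntp_clock);

      gst_object_unref (ntp_clock);
      return server_time;
    }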
When playing mts files with embedded subtitles, the buffer is mapped,
but not unmapped at the end resulting in a memory leak.
Also unref event in handle_dvd_event as it takes ownership of the event.
https://bugzilla.gnome.org/show_bug.cgi?id=753539
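The map/unmap leak pattern being fixed, sketched in isolation (buffer is
whatever buffer carries the subtitle payload):

    GstMapInfo map;

    if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
      /* ... parse the subtitle payload from map.data / map.size ... */
      gst_buffer_unmap (buffer, &map);  /* previously missing: leaked the map */
    }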
When playing mts files with embedded subtitles, there are a few event leaks.
Events are supposed to be transfer-full, so if an event is not forwarded
it needs to be freed.
https://bugzilla.gnome.org/show_bug.cgi?id=753539
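A minimal sketch of that rule (handler names are illustrative): a sink-pad
event function owns the event, so whatever it does not pass on to
gst_pad_event_default() must be unreffed:

    static gboolean
    sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
    {
      gboolean res;

      switch (GST_EVENT_TYPE (event)) {
        case GST_EVENT_GAP:
          /* handled locally and not forwarded: drop our reference */
          res = handle_gap_locally (parent, event);   /* hypothetical */
          gst_event_unref (event);
          break;
        default:
          /* gst_pad_event_default() takes ownership of the event */
          res = gst_pad_event_default (pad, parent, event);
          break;
      }
      return res;
    }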
This reverts commit ff11a1a8a0.
It can't be assumed that all buffers in a buffer list have the same SSRC or
are RTP or RTCP only. It has to be checked for every single buffer, and one
basically has to do the processing that is done by the default chain_list
implementation.
h264parse and gstrtph264depay do the same; let's keep the behaviour
consistent. As we now include the codec_data inside the stream, this causes
less caps renegotiation.
https://bugzilla.gnome.org/show_bug.cgi?id=753228
rtph264depay does the same and this fixes decoding of some streams with 32
SPS (or 256 PPS). It is allowed to have SPS ID 0 to 31 (or PPS ID 0 to 255),
but the field in the codec_data for the number of SPS or PPS is only 5
(or 8) bits wide. As such, 32 SPS (or 256 PPS) are interpreted as 0
everywhere. This looks like a mistake in the part of the spec about the
codec_data.
This causes an assertion and would lead to getting a NULL instead
of a buffer. Without proper checking this would easily lead to a
segfault.
Related to rtph264depay: https://bugzilla.gnome.org/show_bug.cgi?id=737199
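The field widths in question come from the avcC codec_data layout, where the
SPS count is stored in 5 bits and the PPS count in 8 bits, so the maximum
legal counts wrap to zero; a sketch:

    /* write the avcC parameter-set counts; both wrap at their field width */
    static void
    write_param_set_counts (guint8 * sps_count_byte, guint8 * pps_count_byte,
        guint num_sps, guint num_pps)
    {
      /* reserved '111' followed by the 5-bit numOfSequenceParameterSets */
      *sps_count_byte = 0xe0 | (num_sps & 0x1f);  /* 32 & 0x1f == 0  */
      /* the 8-bit numOfPictureParameterSets */
      *pps_count_byte = num_pps & 0xff;           /* 256 & 0xff == 0 */
    }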
ChromaLog2WeightDenom = luma_log2_weight_denom + delta_chroma_log2_weight_denom
The value of ChromaLog2WeightDenom should be in the range of 0 to 7, and
the value of luma_log2_weight_denom should also be in the range of 0 to 7.
This means delta_chroma_log2_weight_denom can have values in the range
of -7 to 7.
https://bugzilla.gnome.org/show_bug.cgi?id=753552
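A sketch of the resulting range check (not the exact parser code):

    /* ChromaLog2WeightDenom must stay within 0..7; with luma_log2_weight_denom
     * itself in 0..7, the delta may legitimately be anywhere in -7..7 */
    static gboolean
    chroma_log2_weight_denom_valid (gint luma_log2_weight_denom,
        gint delta_chroma_log2_weight_denom)
    {
      gint chroma_log2_weight_denom =
          luma_log2_weight_denom + delta_chroma_log2_weight_denom;

      return chroma_log2_weight_denom >= 0 && chroma_log2_weight_denom <= 7;
    }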
The current code was ignoring the par/dar aspect when transforming
from window coordinates to stream coordinates resulting in incorrect
coordinates being sent upstream in the navigation events.
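A sketch of the corrected mapping (illustrative, not the sink's actual code),
using the aspect-corrected rectangle the video is actually drawn into rather
than the full window size:

    static void
    window_to_stream_coords (gdouble win_x, gdouble win_y,
        gint disp_x, gint disp_y, gint disp_w, gint disp_h,
        gint stream_width, gint stream_height,
        gdouble * stream_x, gdouble * stream_y)
    {
      /* disp_* describe where the aspect-corrected video lands in the window */
      *stream_x = (win_x - disp_x) * stream_width / disp_w;
      *stream_y = (win_y - disp_y) * stream_height / disp_h;
    }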
String parameters are expected to be passed as (f0r_param_string *),
which actually maps to char**. In the filters this is evaluated as
(*(char**)param), which currently leads to a crash when passing a char*.
Remove the special case for strings: all types, including char*, are
passed as a reference.
https://phabricator.freedesktop.org/T83
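A sketch of the convention (instance and param_index are assumed to come from
the usual frei0r plugin setup; calling the plugin entry point directly is
only for illustration):

    #include <frei0r.h>

    static void
    set_string_param (f0r_instance_t instance, int param_index,
        const char *value)
    {
      /* f0r_param_string is char*, so the plugin dereferences the parameter
       * once: it must be handed a char** (the address of our char*), the
       * same by-reference convention used for every other parameter type */
      f0r_param_string str = (f0r_param_string) value;

      f0r_set_param_value (instance, &str, param_index);
    }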
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without tags or
with only the audio tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
Checking that the vector is not empty and checking that its size is greater
than zero are the same thing, so having both is redundant. Checking that the
vector is not empty is sufficient, therefore the other check is removed.
Adding this private header to the list of sources. We don't want to make
this header public, but we need it in the list of sources, otherwise it
won't be included in the tarball. This fixes make distcheck.
This regression was introduced by commit 1a6fe3db.