Some distributions store OpenCV files in /usr/share/opencv and some others
(and default when building from source) install them in
/usr/share/OpenCV. Support both to find cascade files.
https://bugzilla.gnome.org/show_bug.cgi?id=753651
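A minimal sketch of such a lookup; the helper name and the "haarcascades"
subdirectory are illustrative, not the element's actual code:

    #include <glib.h>

    /* Probe both install prefixes named above for a cascade file. */
    static gchar *
    find_cascade_file (const gchar * name)
    {
      const gchar *dirs[] = { "/usr/share/opencv", "/usr/share/OpenCV" };
      guint i;

      for (i = 0; i < G_N_ELEMENTS (dirs); i++) {
        gchar *path = g_build_filename (dirs[i], "haarcascades", name, NULL);

        if (g_file_test (path, G_FILE_TEST_EXISTS))
          return path;
        g_free (path);
      }
      return NULL;
    }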
It is faster than doing a query that propagates downstream and
should be enough
Elements: faac, gsmenc, opusenc, sbcenc, voamrwbenc, adpcmenc, sirenenc
As we only expose the mapped portion of the frame into the GL
memory object (and not the original padding) we need to
re-calculate the size and offset.
When seeking to the last second of an MPD, the seek would be rejected
because the comparison was < instead of <=.
This fails the important use case of seeking to the end of a file
to play it back in reverse from the end.
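A sketch of the off-by-one, with hypothetical names:

    #include <gst/gst.h>

    /* A target equal to the duration is a valid position, since seeking
     * to the end is exactly how reverse playback starts. */
    static gboolean
    seek_target_is_valid (GstClockTime target, GstClockTime duration)
    {
      /* was "target < duration", which rejected target == duration */
      return target <= duration;
    }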
There are several cases where an HLS server could temporarily have wrong
fragments, or reconfigure the playlist. In those cases, when we get
fragment download failures, we *really* want to wait a bit (for the next
playlist update) before retrying to get fragments.
Previously this method first checked whether there were next fragments
(according to the previous manifest update) before waiting for the next
update. The problem is that a temporary failure on the server is
uncorrelated to whether the manifest contains next fragments or not.
We need to keep the active buffer (the one we have retrieved a
texture id from), otherwise it's racy and upstream may upload
new content before we have rendered it or during later redisplay.
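A sketch of the idea, with an illustrative type and field name:

    #include <gst/gst.h>

    typedef struct
    {
      GstBuffer *active_buffer;       /* illustrative field name */
    } Renderer;

    /* Keep a ref on the buffer backing the current texture; it is only
     * released once a replacement buffer takes its place. */
    static void
    store_active_buffer (Renderer * r, GstBuffer * new_active)
    {
      /* refs new_active and unrefs the previously stored buffer */
      gst_buffer_replace (&r->active_buffer, new_active);
    }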
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
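A sketch of the usual GStreamer pattern for this policy (not the exact
payloader code): copy a meta only if it carries no tags at all, or if its
only tag is the "video" tag.

    #include <gst/gst.h>
    #include <gst/video/video.h>

    static gboolean
    foreach_metadata (GstBuffer * inbuf, GstMeta ** meta, gpointer user_data)
    {
      GstBuffer *outbuf = user_data;
      const GstMetaInfo *info = (*meta)->info;
      const gchar *const *tags = gst_meta_api_type_get_tags (info->api);

      if (info->transform_func == NULL)
        return TRUE;

      if (!tags || (g_strv_length ((gchar **) tags) == 1
              && gst_meta_api_type_has_tag (info->api,
                  g_quark_from_static_string (GST_META_TAG_VIDEO_STR)))) {
        GstMetaTransformCopy copy = { FALSE, 0, -1 };

        info->transform_func (outbuf, *meta, inbuf,
            _gst_meta_transform_copy, &copy);
      }
      return TRUE;
    }

    /* usage: gst_buffer_foreach_meta (inbuf, foreach_metadata, outbuf); */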
A race condition in the state change function may cause buffers to be
unreffed while they are still used by the streaming thread in
gst_rtp_h265_pay_send_vps_sps_pps() resulting in a crash. Chain up to the
parent class first in the state change function to make sure streaming
has stopped and only then free those buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=741381
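A sketch of the ordering fix, assuming the usual G_DEFINE_TYPE boilerplate
and a hypothetical free_cached_parameter_sets() helper:

    #include <gst/gst.h>

    static void free_cached_parameter_sets (GstElement * element);

    static GstStateChangeReturn
    pay_change_state (GstElement * element, GstStateChange transition)
    {
      GstStateChangeReturn ret;

      /* chain up first: once this returns for PAUSED->READY, the
       * streaming thread is guaranteed to have stopped */
      ret = GST_ELEMENT_CLASS (parent_class)->change_state (element,
          transition);

      if (transition == GST_STATE_CHANGE_PAUSED_TO_READY) {
        /* only now is it safe to unref buffers the streaming thread used */
        free_cached_parameter_sets (element);
      }
      return ret;
    }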
The SPS struct might be filled out by a call to
gst_h264_parser_parse_subset_sps, which fills in
dynamically allocated data and therefore requires a call
to gst_h264_sps_clear() to free it. Also make sure
to clear out any allocated SPS data when returning
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=753306
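A simplified sketch of the ownership rule, against the current
codecparsers API:

    #include <gst/codecparsers/gsth264parser.h>

    /* A GstH264SPS filled by gst_h264_parser_parse_subset_sps() owns
     * dynamically allocated data, so every exit path must clear it. */
    static gboolean
    handle_subset_sps (GstH264NalParser * parser, GstH264NalUnit * nalu)
    {
      GstH264SPS sps;

      if (gst_h264_parser_parse_subset_sps (parser, nalu, &sps) !=
          GST_H264_PARSER_OK) {
        gst_h264_sps_clear (&sps); /* free partially-filled data on error */
        return FALSE;
      }

      /* ... use the SPS ... */

      gst_h264_sps_clear (&sps);
      return TRUE;
    }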
The urn:mpeg:dash:utc:http-head:2014 method of time synchronisation
uses an HTTP HEAD request to a specified URL and then parses the
Date: HTTP response header.
This commit adds support to dashdemux for this method of time
synchronisation by making a HEAD request and then parsing the Date:
response.
This commit adds support to gstfragment to return the HTTP headers
and to uridownloader to support HEAD requests. To avoid creating a
new API, the RANGE get function is re-used (abused?) with start=-1
and end=-1 to indicate a HEAD request.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
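A sketch of the convention; the declarations merely stand in for the
uridownloader/gstfragment API and are not the real signatures:

    #include <glib.h>

    typedef struct _Downloader Downloader;
    typedef struct _Fragment Fragment;
    Fragment *fetch_uri_with_range (Downloader * d, const gchar * uri,
        gint64 start, gint64 end);
    const gchar *lookup_response_header (Fragment * f, const gchar * name);

    static const gchar *
    fetch_server_date (Downloader * d, const gchar * url)
    {
      /* start == -1 && end == -1 is the sentinel meaning "HTTP HEAD" */
      Fragment *f = fetch_uri_with_range (d, url, -1, -1);

      return f ? lookup_response_header (f, "Date") : NULL;
    }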
This commit addresses the following items from the code review:
use a portable way to define NTP_TO_UNIX_EPOCH,
fix memory leak on error, and
add documentation to UTCTiming parse functions
Using LL is not portable, so G_GUINT64_CONSTANT needs to be used instead.
If an error occurs during DNS resolution, the GError was not being
released, causing a memory leak.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
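A sketch of both fixes; the define name comes from the commit, the
resolver helper is illustrative. 2208988800 is the well-known offset in
seconds between the NTP epoch (1900) and the Unix epoch (1970).

    #include <gio/gio.h>

    /* was: 2208988800LL -- the LL suffix is not portable */
    #define NTP_TO_UNIX_EPOCH G_GUINT64_CONSTANT (2208988800)

    static GList *
    resolve_ntp_host (const gchar * hostname)
    {
      GError *err = NULL;
      GResolver *resolver = g_resolver_get_default ();
      GList *addrs =
          g_resolver_lookup_by_name (resolver, hostname, NULL, &err);

      if (addrs == NULL)
        g_error_free (err);     /* previously leaked on resolution failure */

      g_object_unref (resolver);
      return addrs;
    }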
Unless the DASH client can compensate for the difference between its
clock and the clock used by the server, the client might request
fragments that either are not yet on the server or have already
expired from it. This is an issue because these requests can propagate
all the way back to the origin.
ISO/IEC 23009-1:2014/Amd 1 [PDAM1] defines a new UTCTiming element to allow
a DASH client to track the clock used by the server generating the
DASH stream. Multiple UTCTiming elements might be present, to indicate
support for multiple methods of UTC time gathering. Each element can
contain a whitespace-separated list of URLs that can be contacted
to discover the UTC time from the server's perspective.
This commit provides parsing of UTCTiming elements, unit tests of this
parsing and a function to poll a time server. This function
supports the following methods:
urn:mpeg:dash:utc:ntp:2014
urn:mpeg:dash:utc:http-xsdate:2014
urn:mpeg:dash:utc:http-iso:2014
urn:mpeg:dash:utc:http-ntp:2014
The manifest update task is used to poll the clock time server,
to save having to create a new thread.
When choosing the starting fragment number and when waiting for a
fragment to become available, the difference between the server's idea
of UTC and the client's idea of UTC is taken into account. For example,
if the server's time is behind the client's idea of UTC, we wait for
longer before requesting a fragment.
[PDAM1]: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=66068
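An illustrative sketch of the compensation, with hypothetical names:

    #include <gst/gst.h>

    /* clock_drift = server_utc - client_utc, as measured via UTCTiming */
    static GstClockTime
    server_now (GstClockTime client_now, GstClockTimeDiff clock_drift)
    {
      return client_now + clock_drift;
    }

    /* If the server is behind (clock_drift < 0), server_now() is earlier
     * than client_now, so the wait until a fragment's availability time
     * becomes correspondingly longer. */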
dashdemux: support NTP time servers in UTCTiming elements
Use GstNtpClock to support the use of an NTP server.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
When playing mts files with embedded subtitles, the buffer is mapped,
but not unmapped at the end resulting in a memory leak.
Also unref event in handle_dvd_event as it takes ownership of the event.
https://bugzilla.gnome.org/show_bug.cgi?id=753539
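A minimal sketch of the pairing rule:

    #include <gst/gst.h>

    /* Every successful gst_buffer_map() must be paired with
     * gst_buffer_unmap(). */
    static void
    parse_subtitle_buffer (GstBuffer * buf)
    {
      GstMapInfo map;

      if (!gst_buffer_map (buf, &map, GST_MAP_READ))
        return;

      /* ... parse map.data / map.size ... */

      gst_buffer_unmap (buf, &map);     /* previously missing => leak */
    }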
When playing mts files with embedded subtitles, there are a few event
leaks. Events are supposed to be transfer-full, so events that are not
forwarded need to be freed.
https://bugzilla.gnome.org/show_bug.cgi?id=753539
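A sketch of the transfer-full rule in a sink-pad event handler (names
illustrative):

    #include <gst/gst.h>

    static gboolean
    sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
    {
      switch (GST_EVENT_TYPE (event)) {
        case GST_EVENT_CUSTOM_DOWNSTREAM:
          /* consumed here, not forwarded: we own it, so drop our ref */
          gst_event_unref (event);
          return TRUE;
        default:
          /* the default handler takes ownership and forwards */
          return gst_pad_event_default (pad, parent, event);
      }
    }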
This reverts commit ff11a1a8a0.
It can't be assumed that all buffers in a buffer list have the same SSRC or
are RTP or RTCP only. It has to be checked for every single buffer, and one
basically has to do the processing that is done by the default chain_list
implementation.
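A sketch of what correct handling requires, mirroring the default
chain_list behaviour (my_chain stands in for the element's regular chain
function):

    #include <gst/gst.h>

    static GstFlowReturn
    my_chain (GstPad * pad, GstObject * parent, GstBuffer * buf);

    static GstFlowReturn
    my_chain_list (GstPad * pad, GstObject * parent, GstBufferList * list)
    {
      guint i, len = gst_buffer_list_length (list);

      for (i = 0; i < len; i++) {
        /* no shortcut possible: SSRC and RTP-vs-RTCP may differ per buffer */
        GstBuffer *buf = gst_buffer_list_get (list, i);
        GstFlowReturn ret = my_chain (pad, parent, gst_buffer_ref (buf));

        if (ret != GST_FLOW_OK) {
          gst_buffer_list_unref (list);
          return ret;
        }
      }
      gst_buffer_list_unref (list);
      return GST_FLOW_OK;
    }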