There are several cases where an HLS server could temporarily have wrong
fragments, or reconfigure the playlist. In those cases, when we get
fragment download failures, we *really* want to wait a bit (for the next
playlist update) before retrying to get fragments.
Previously this method first checked whether there were next fragments
(according to the previous manifest update) before waiting for the next update.
The problem is that a temporary failure on the server is uncorrelated to
whether the manifest contains next fragments or not.
The SPS struct might be filled out by a call to
gst_h264_parser_parse_subset_sps, which fills out
dynamically allocated data and requires a call
to gst_h264_sps_clear() to free it. Also make sure
to clear out any allocated SPS data when returning
an error.
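A minimal sketch of the intended pattern (illustrative only; the exact parser
signature may differ between GStreamer versions):

  #include <gst/codecparsers/gsth264parser.h>

  /* Sketch: parse a subset SPS and always release the dynamically
   * allocated parts, including on the error path. */
  static gboolean
  parse_subset_sps_example (GstH264NalParser * parser, GstH264NalUnit * nalu)
  {
    GstH264SPS sps = { 0, };

    if (gst_h264_parser_parse_subset_sps (parser, nalu, &sps) !=
        GST_H264_PARSER_OK) {
      /* data may already have been allocated before the failure */
      gst_h264_sps_clear (&sps);
      return FALSE;
    }

    /* ... use the SPS ... */

    gst_h264_sps_clear (&sps);  /* frees the dynamically allocated data */
    return TRUE;
  }
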
https://bugzilla.gnome.org/show_bug.cgi?id=753306
The urn:mpeg:dash:utc:http-head:2014 method of time synchronisation
uses an HTTP HEAD request to a specified URL and then parses the
Date: HTTP response header.
This commit adds support to dashdemux for this method of time
synchronisation by making a HEAD request and then parsing the Date:
response.
This commit adds support to gstfragment to return the HTTP headers
and to uridownloader to support HEAD requests. To avoid creating a
new API, the RANGE get function is re-used (abused?) with start=-1
and end=-1 to indicate a HEAD request.
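For illustration only (not the actual dashdemux code), parsing the Date:
header with libsoup could look roughly like this:

  #include <libsoup/soup.h>

  /* Sketch: turn the Date: response header of the HEAD request into a
   * UNIX timestamp that can be compared against the local clock. */
  static gboolean
  parse_http_date_example (const gchar * date_header, time_t * utc_time)
  {
    SoupDate *date = soup_date_new_from_string (date_header);

    if (date == NULL)
      return FALSE;

    *utc_time = soup_date_to_time_t (date);
    soup_date_free (date);
    return TRUE;
  }
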
https://bugzilla.gnome.org/show_bug.cgi?id=752413
ChromaLog2WeightDenom = luma_log2_weight_denom + delta_chroma_log2_weight_denom
The value of ChromaLog2WeightDenom should be in the range of 0 to 7, and the
value of luma_log2_weight_denom should also be in the range of 0 to 7.
This means delta_chroma_log2_weight_denom can have values in the range
of -7 to 7.
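A minimal sketch of the corresponding bounds check (variable names follow the
spec, not the actual gsth265parser.c code):

  #include <glib.h>

  /* Sketch: reject out-of-range values.  luma_log2_weight_denom is already
   * known to be in [0, 7], so ChromaLog2WeightDenom must also end up in
   * [0, 7], which bounds the delta to [-7, 7]. */
  static gboolean
  check_chroma_weight_denom (gint luma_log2_weight_denom,
      gint delta_chroma_log2_weight_denom)
  {
    gint chroma_log2_weight_denom =
        luma_log2_weight_denom + delta_chroma_log2_weight_denom;

    if (delta_chroma_log2_weight_denom < -7 ||
        delta_chroma_log2_weight_denom > 7)
      return FALSE;

    return chroma_log2_weight_denom >= 0 && chroma_log2_weight_denom <= 7;
  }
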
https://bugzilla.gnome.org/show_bug.cgi?id=753552
Adding this private header to the list of sources. We don't want to make
this header public, but we need it in the list of sources otherwise it
won't be included in the tarball. This fixes make distcheck.
This regression was introduced by commit 1a6fe3db
Depending on the byte order we will get BGRA (little endian) or ARGB (big
endian) from the composition overlay buffer, while our GL code expects RGBA.
Add a fragment shader that does this conversion.
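A rough sketch of such a shader (illustrative, not the exact shader added
here):

  #include <glib.h>

  /* Sketch: sample the overlay texture and reorder the channels so the
   * rest of the GL code can keep assuming RGBA.  The swizzle is chosen
   * at compile time from the host byte order. */
  static const gchar *overlay_frag_shader =
    "#ifdef GL_ES\n"
    "precision mediump float;\n"
    "#endif\n"
    "varying vec2 v_texcoord;\n"
    "uniform sampler2D tex;\n"
    "void main () {\n"
  #if G_BYTE_ORDER == G_LITTLE_ENDIAN
    /* buffer is BGRA in memory: swap R and B */
    "  gl_FragColor = texture2D (tex, v_texcoord).bgra;\n"
  #else
    /* buffer is ARGB in memory: rotate the channels */
    "  gl_FragColor = texture2D (tex, v_texcoord).gbar;\n"
  #endif
    "}\n";
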
https://bugzilla.gnome.org/show_bug.cgi?id=752842
In the case where you have a source giving the GstAggregator smaller
buffers than it uses, when it reaches a timeout, it will consume the
first buffer, then try to read another buffer for the pad. If the
previous element is not fast enough, it may get the next buffer even
though it may be queued just before. To prevent that race, the easiest
solution is to move the queue inside the GstAggregatorPad itself. It
also means that there is no need for strange code caused by increasing
the min latency without increasing the max latency proportionally.
This also means queuing the synchronized events and possibly acting
on them on the src task.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
We can't know if the GstGLUpload type is initialized at this point already,
and thus our debug category might not be initialized yet... and cause an
assertion here.
As we don't print debug output for any of the other transform functions, let's
defer this problem for now.
Before, aggregator-based elements always started at running time 0.
Now it's possible to select the first input buffer's running time or to
explicitly set a start-time value.
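For example, using the new properties (a minimal sketch; audiomixer is just a
stand-in for any aggregator-based element):

  #include <gst/gst.h>

  /* Sketch: pick the start-time behaviour on an aggregator-based element. */
  static GstElement *
  make_mixer_example (void)
  {
    GstElement *mixer = gst_element_factory_make ("audiomixer", NULL);

    /* start at the first input buffer's running time ... */
    g_object_set (mixer, "start-time-selection", 1 /* first */, NULL);

    /* ... or at an explicitly chosen running time */
    g_object_set (mixer,
        "start-time-selection", 2 /* set */,
        "start-time", (guint64) (5 * GST_SECOND),
        NULL);

    return mixer;
  }
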
https://bugzilla.gnome.org/show_bug.cgi?id=749966
Adding a pad will add a new upstream that might have a bigger minimum latency,
so we might have to wait longer. Or it might be the first live upstream, in
which case we will have to start deadline based aggregation.
Removing a pad will remove an upstream that might have had the biggest
latency, so we can now stop waiting a bit earlier. Or it might be the last
live upstream, in which case we can stop deadline based aggregation.
The coordinates are relative to the texture dimensions and not the window
dimensions now. There is no need to pass the window dimensions or to update
the overlay if the dimensions change.
https://bugzilla.gnome.org/show_bug.cgi?id=745107
If a ContentProtection element is present in an AdaptationSet element,
send Protection events on the source pad, so that qtdemux can use this
information to correctly generate its source caps for DASH CENC
encrypted streams.
This allows qtdemux to support CENC encrypted DASH streams where the
content protection specific information is carried in the MPD file
rather than in pssh boxes in the initialisation segments.
This commit adds a new function to the adaptivedemux base class to allow
a GstEvent to be queued for a stream. The queued events are sent the
next time a buffer is pushed for that stream.
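A rough sketch of the event construction (the system id, payload and origin
string are illustrative; the new queueing helper is not shown):

  #include <gst/gst.h>

  /* Sketch: build a protection event from a ContentProtection element so
   * that qtdemux can generate the right caps for the encrypted stream. */
  static GstEvent *
  make_protection_event_example (GstBuffer * content_protection_payload)
  {
    return gst_event_new_protection (
        "9a04f079-9840-4286-ab92-e65be0885f95", /* DRM system UUID (example) */
        content_protection_payload,             /* raw ContentProtection data */
        "dash/mpd");                            /* origin of the data */
  }

The resulting event would then be handed to the new queueing function so it
gets pushed ahead of the next buffer of that stream.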
https://bugzilla.gnome.org/show_bug.cgi?id=705991
They require looking up some functions through the platform specific
{glX,egl}GetProcAddress rather than through the default GL library symbol
lookup.
nvidia drivers return the exact version we request on creation in
glGetString (GL_VERSION), so start with the highest known version and
work our way down.
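A minimal sketch of the idea (the version table and creation helper are
placeholders, not the actual GstGLContext code):

  #include <glib.h>

  /* placeholder for the platform specific {glX,wgl,egl} creation call
   * that requests an explicit GL version */
  static gboolean
  try_create_context (int major, int minor)
  {
    return FALSE;   /* real code would attempt context creation here */
  }

  /* Sketch: request the highest known version first and work downwards,
   * since the driver returns exactly the version that was requested. */
  static gboolean
  create_best_context (void)
  {
    static const struct { int major, minor; } versions[] = {
      {4, 5}, {4, 4}, {4, 3}, {4, 2}, {4, 1}, {4, 0},
      {3, 3}, {3, 2}, {3, 1}, {3, 0},
    };
    guint i;

    for (i = 0; i < G_N_ELEMENTS (versions); i++) {
      if (try_create_context (versions[i].major, versions[i].minor))
        return TRUE;
    }
    return FALSE;
  }
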
The previous approach of traversing the other_context weak ref tree was
1. Less performant
2. Incorrect for context destruction removing a link in the tree
Example of 2:
c1 = context_create (NULL)
c2 = context_create (c1)
c3 = context_create (c2)
context_can_share (c1, c3) == TRUE
context_destroy (c2)
unref (c2)
context_can_share (c1, c3) returns FALSE when it should be TRUE!
This does not remove the restriction that context sharedness can only
be tracked between GstGLContexts.
This will deadlock if the main thread is the one who creates the GstGLContext.
All things we call from the main thread should be possible from any thread.
https://bugzilla.gnome.org/show_bug.cgi?id=751101
Sometimes the last fragment does not exist because of rounding errors with the
durations. Just finish the stream gracefully instead of erroring out.
Segment start/time/position/base should only be modified if this is the first
time we send a segment, otherwise we will override values from the seek
segment if new streams have to be exposed as part of the seek.
Segment base should be calculated from the segment start based on the stream's
own segment, not the demuxer's segment. Both might differ slightly because of
the presentationTimeOffset.
Always add the presentationTimeOffset (relative to the period start, not
timestamp 0) to the segment start after resetting the stream's segment based
on the demuxer's segment (i.e. after seeks or stream restart). Also make sure
to keep the stream's segment up to date and not just send a new segment event
without storing the segment in the stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
Also include the presentation offset in the last known position for each
stream and, while at it, keep track of the latest known position inside the
demuxer segment.
It's going to return EOS if the period ended or otherwise there is just no
next fragment left. If we don't store the last return value, it will always
stay OK and gst_adaptive_demux_combine_flows() will always return OK instead
of EOS once all streams are done.
This partially fixes period changes in DASH by at least trying to switch
instead of just stopping. What is still left is that after a period change
with DASH the times all start at 0 again instead of continuing.
The problem here was that after removing the formats and
all the things we could convert, we then intersected these
caps with the template caps.
Hence if a subclass offered permissive sink templates
(e.g. all the possible formats videoconvert handles), but only
one output format, then at negotiation time getcaps returned
caps with the format restricted to that format, even though
we do handle conversion.
https://bugzilla.gnome.org/show_bug.cgi?id=751255
- add data pointer to GstJpegSegment and pass segment
to all parsing functions, rename accordingly
- shorten GstJpegMarkerCode enum type name to GstJpegMarker
- move function gtk-doc blurbs into .c file
- add since markers
- flesh out docs for SOF markers
https://bugzilla.gnome.org/show_bug.cgi?id=673925
Fix scan for next marker code when there is an odd number of filler
(0xff) bytes before the actual marker code. Also optimize the loop
to execute with fewer instructions (~10%).
This fixes parsing for Spectralfan.mov.
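A sketch of the intended scan (illustrative, not the exact gstjpegparser.c
loop):

  #include <glib.h>

  /* Sketch: find the next JPEG marker.  Any number of 0xff fill bytes,
   * odd or even, may precede the actual marker code, so skip them all
   * and only stop at a byte that is neither 0xff nor 0x00. */
  static gssize
  scan_for_marker (const guint8 * data, gsize size, gsize offset)
  {
    while (offset + 1 < size) {
      if (data[offset] != 0xff) {
        offset++;
        continue;
      }
      /* skip runs of 0xff fill bytes */
      while (offset + 1 < size && data[offset + 1] == 0xff)
        offset++;
      if (offset + 1 < size && data[offset + 1] != 0x00)
        return offset;          /* points at the 0xff before the code */
      offset += 2;              /* 0xff 0x00 is a stuffed byte, not a marker */
    }
    return -1;
  }
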
The size of a marker segment is defined to be exclusive of any initial
marker code. So, fix the size for SOI, EOI and APPn segments but also
the size of any possible segment that is usually "reserved" or not
explicitly defined.
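For illustration (not the parser's actual code), the size of a segment would
then be computed as:

  #include <gst/gst.h>

  /* Sketch: size of a JPEG marker segment, excluding the 0xff + marker
   * code bytes.  Markers without a payload (SOI, EOI, TEM, RSTn) get 0. */
  static guint
  jpeg_segment_size_example (guint8 marker, const guint8 * payload,
      gsize available)
  {
    if (marker == 0xd8 || marker == 0xd9 || marker == 0x01 ||
        (marker >= 0xd0 && marker <= 0xd7))
      return 0;

    if (available < 2)
      return 0;

    /* other segments start with a 16-bit big-endian length field that
     * counts the two length bytes themselves but not the marker code */
    return GST_READ_UINT16_BE (payload);
  }
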
https://bugzilla.gnome.org/show_bug.cgi?id=707447