when allocating memory. Fixes crashes with avdec_h265 in the AVX2
code path, which requires 32-byte stride alignment, while the
GstAllocationParams only specified 16-byte alignment.
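A minimal sketch of what the fix amounts to (variable names
hypothetical); note that GstAllocationParams.align is a mask, so
32-byte alignment is expressed as 31:

    GstAllocationParams params;
    GstBuffer *buffer;
    gsize size = 4096;  /* whatever the frame requires */

    gst_allocation_params_init (&params);
    params.align = 31;  /* 'align' is a mask: 31 means 32-byte alignment */
    buffer = gst_buffer_new_allocate (NULL, size, &params);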
https://bugzilla.gnome.org/show_bug.cgi?id=754120
We have to queue buffers based on their running time, not based on
the segment position.
Also return running time from GstAggregator::get_next_time() instead of
a segment position, as required by the API.
Also, only update the segment position after we have pushed a buffer;
otherwise we would push a segment event downstream that already
carries the next position.
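Illustratively (variable names hypothetical), the queueing decision
should key off something like:

    /* compare by running time, not by raw segment position */
    GstClockTime running_time =
        gst_segment_to_running_time (&segment, GST_FORMAT_TIME, position);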
https://bugzilla.gnome.org/show_bug.cgi?id=753196
Each period will start again with PTS 0 plus the period's presentation
offset, which is also going to be the presentation time inside the
container stream, if any. However, all periods together should form a
continuous timeline with regard to stream time and running time.
To make this possible we keep track of the "user requested segment",
i.e. the seek events, inside the demuxer without adjusting anything,
and use this demuxer segment only as a reference for the modified
per-stream segments.
These per-stream segments will have their segment.start at the PTS
that would be produced for this stream in this period, and their
segment.base/time adjusted so that this PTS maps to the running and
stream time this period should have in the context of all other
periods.
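A sketch of the per-stream segment setup, assuming hypothetical
period_pts / period_stream_time / period_running_time values computed
from the preceding periods:

    GstSegment stream_segment;

    gst_segment_init (&stream_segment, GST_FORMAT_TIME);
    stream_segment.start = period_pts;         /* PTS produced in this period */
    stream_segment.time = period_stream_time;  /* stream time it maps to */
    stream_segment.base = period_running_time; /* running time it maps to */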
https://bugzilla.gnome.org/show_bug.cgi?id=754222
Only accept alpha if downstream supports alpha as well. We could
theoretically accept alpha unconditionally if blending were properly
implemented to handle it, but at the moment that feature is missing.
Improves the caps query by also intersecting with the template
caps to filter by what the subclass supports.
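Roughly, the improved query does something like (srcpad and
downstream_caps hypothetical):

    GstCaps *templ, *result;

    templ = gst_pad_get_pad_template_caps (srcpad);
    result = gst_caps_intersect_full (downstream_caps, templ,
        GST_CAPS_INTERSECT_FIRST);
    gst_caps_unref (templ);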
https://bugzilla.gnome.org/show_bug.cgi?id=754465
If short_term_ref_pic_set_sps_flag is FALSE, the ShortTermRefPicSet
structure is supposed to be derived from the slice header, which means
CurrRpsIdx is equal to num_short_term_ref_pic_sets. But the refpicsets
communicated via the SPS header only cover indices up to
num_short_term_ref_pic_sets - 1, so we use the slice_header structure
to reference the last entry, ShortTermRefPicSet[num_short_term_ref_pic_sets].
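In terms of the parser structures (field names approximate), the
selection looks roughly like:

    const GstH265ShortTermRefPicSet *rps;
    guint CurrRpsIdx;

    if (slice_hdr->short_term_ref_pic_set_sps_flag) {
      CurrRpsIdx = slice_hdr->short_term_ref_pic_set_idx;
      rps = &sps->short_term_ref_pic_set[CurrRpsIdx];
    } else {
      /* RPS coded in the slice header itself: index is one past the
       * entries stored in the SPS */
      CurrRpsIdx = sps->num_short_term_ref_pic_sets;
      rps = &slice_hdr->short_term_ref_pic_sets;
    }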
https://bugzilla.gnome.org/show_bug.cgi?id=754834
HAVE_IOS is only defined for the build of this module, so
attempting to use gstgl from iOS would result in incorrect GL
includes.
Use GST_GL_HAVE_PLATFORM_EAGL instead for choosing the iOS GL
header.
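I.e. something along these lines (include paths as on iOS):

    #if GST_GL_HAVE_PLATFORM_EAGL
    # include <OpenGLES/ES2/gl.h>
    # include <OpenGLES/ES2/glext.h>
    #else
    # include <GLES2/gl2.h>
    # include <GLES2/gl2ext.h>
    #endif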
Section 6.5.1: Coding tree block raster and tile scanning conversion
process. Follow equations 6-3 and 6-4.
This provides the correct offset_max in slice_header for parsing
num_entry_point_offsets.
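For reference, equations 6-3 and 6-4 derive the per-tile CTB column
widths and row heights for the uniformly spaced case (variable names
as in the spec text):

    /* Equation 6-3 */
    for (i = 0; i <= num_tile_columns_minus1; i++)
      colWidth[i] = ((i + 1) * PicWidthInCtbsY) / (num_tile_columns_minus1 + 1)
          - (i * PicWidthInCtbsY) / (num_tile_columns_minus1 + 1);

    /* Equation 6-4 */
    for (j = 0; j <= num_tile_rows_minus1; j++)
      rowHeight[j] = ((j + 1) * PicHeightInCtbsY) / (num_tile_rows_minus1 + 1)
          - (j * PicHeightInCtbsY) / (num_tile_rows_minus1 + 1);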
https://bugzilla.gnome.org/show_bug.cgi?id=754024
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
As per 7-42 and 7-43, the ScalingFactor's scanIdx is 0,
which is the "up-right-diagonal" scan. Add APIs for converting
up-right-diagonal to raster and vice versa.
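A sketch of the up-right-diagonal to raster conversion, following the
spec's diagonal scan derivation (the actual exported API names may
differ):

    static void
    uprightdiag_to_raster (const guint8 *diag, guint8 *raster, gint blk)
    {
      gint i = 0, x = 0, y = 0;

      while (i < blk * blk) {
        /* walk one diagonal from bottom-left towards top-right */
        while (y >= 0) {
          if (x < blk && y < blk)
            raster[y * blk + x] = diag[i++];
          y--;
          x++;
        }
        y = x;
        x = 0;
      }
    }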
https://bugzilla.gnome.org/show_bug.cgi?id=754024
Be more strict with the specification: according to 7.4.7.3,
delta_chroma_log2_weight_denom should be in the range of
[(0 - luma_log2_weight_denom), (7 - luma_log2_weight_denom)]
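i.e. validate roughly like this while parsing the weight table (error
handling reduced to a label for the sketch):

    /* 7.4.7.3: delta_chroma_log2_weight_denom range check */
    if (delta_chroma_log2_weight_denom < (0 - luma_log2_weight_denom) ||
        delta_chroma_log2_weight_denom > (7 - luma_log2_weight_denom))
      goto invalid;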
https://bugzilla.gnome.org/show_bug.cgi?id=754024
During the allocation query, when this element is not in passthrough
mode, it must relay the overlay composition meta and its parameters.
Fortunately, base transform can do this for us.
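For illustration, advertising the meta in the allocation query amounts
to:

    /* let upstream know the overlay composition meta is supported */
    gst_query_add_allocation_meta (query,
        GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE, NULL);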
https://bugzilla.gnome.org/show_bug.cgi?id=753850
GL_SHADING_LANGUAGE_VERSION has been available since ES 2.0, but some
Android emulators don't support it. To avoid confusing developers,
the error message needs to be clearer.
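A sketch of the clearer failure path (message wording illustrative):

    const gchar *sl_version =
        (const gchar *) glGetString (GL_SHADING_LANGUAGE_VERSION);

    if (sl_version == NULL) {
      /* seen on some Android emulators, even though ES 2.0 requires it */
      GST_ERROR ("glGetString(GL_SHADING_LANGUAGE_VERSION) returned NULL, "
          "your GL implementation looks broken");
      return FALSE;
    }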
https://bugzilla.gnome.org/show_bug.cgi?id=753905
Removes the redundant GL object creation/deletion on every
decide_allocation call, which is invoked on every caps change.
This reduces the GL state changes required on the reconfigure events
that are sent by glimagesink/xvimagesink.
Instead of checking whether the gstreamer-video-1.0 package is
installed, just assume it is, since we already check for the -base
dependency. With this, replace the GST_VIDEO_* variables in the
makefiles and link directly with libgstvideo.
https://bugzilla.gnome.org/show_bug.cgi?id=753820
As we only expose the mapped portion of the frame into the GL
memory object (and not the original padding), we need to
re-calculate the size and offset.
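Roughly like this, assuming a mapped GstVideoFrame and tightly packed
planes (the component/plane mapping is simplified for the sketch):

    gsize size = 0, offset[GST_VIDEO_MAX_PLANES];
    guint i;

    for (i = 0; i < GST_VIDEO_FRAME_N_PLANES (&frame); i++) {
      offset[i] = size;
      size += GST_VIDEO_FRAME_PLANE_STRIDE (&frame, i) *
          GST_VIDEO_FRAME_COMP_HEIGHT (&frame, i);
    }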
There are several cases where an HLS server could temporarily serve
wrong fragments, or reconfigure the playlist. In those cases, when we
get fragment download failures, we *really* want to wait a bit (for
the next playlist update) before retrying to get fragments.
Previously this method first checked whether there were next fragments
(according to the previous manifest update) before waiting for the
next update. The problem is that a temporary failure on the server is
uncorrelated to whether the manifest contains next fragments or not.
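The resulting control flow is roughly (all helper names hypothetical):

    /* after a fragment download failure: always wait for the next
     * playlist update before concluding anything about next fragments */
    if (!wait_for_next_playlist_update (demux))   /* hypothetical */
      return GST_FLOW_EOS;
    if (!has_next_fragment (demux))               /* hypothetical */
      return GST_FLOW_EOS;
    return retry_fragment_download (demux);       /* hypothetical */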
The SPS struct might be filled out by a call to
gst_h264_parser_parse_subset_sps, which allocates data
dynamically and requires a call to gst_h264_sps_clear()
to free it. Also make sure to clear out any allocated
SPS data when returning an error.
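Sketch of the error path (the parse_vui_params argument follows the
API of this era; surrounding names hypothetical):

    GstH264SPS sps;

    if (gst_h264_parser_parse_subset_sps (parser, nalu, &sps, TRUE) !=
        GST_H264_PARSER_OK) {
      /* free any data the partial parse may have allocated */
      gst_h264_sps_clear (&sps);
      return FALSE;
    }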
https://bugzilla.gnome.org/show_bug.cgi?id=753306
The urn:mpeg:dash:utc:http-head:2014 method of time synchronisation
uses an HTTP HEAD request to a specified URL and then parses the
Date: HTTP response header.
This commit adds support to dashdemux for this method of time
synchronisation by making a HEAD request and then parsing the Date:
response.
This commit adds support to gstfragment to return the HTTP headers
and to uridownloader to support HEAD requests. To avoid creating a
new API, the RANGE get function is re-used (abused?) with start=-1
and end=-1 to indicate a HEAD request.
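For illustration, the HEAD + Date: parse with libsoup 2.x would look
like this (the actual code goes through uridownloader as described;
URL hypothetical):

    const gchar *url = "http://example.com/time";  /* hypothetical */
    const gchar *date_str;
    SoupSession *session = soup_session_new ();
    SoupMessage *msg = soup_message_new ("HEAD", url);
    SoupDate *date = NULL;

    soup_session_send_message (session, msg);
    date_str = soup_message_headers_get_one (msg->response_headers, "Date");
    if (date_str != NULL)
      date = soup_date_new_from_string (date_str);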
https://bugzilla.gnome.org/show_bug.cgi?id=752413
ChromaLog2WeightDenom = luma_log2_weight_denom + delta_chroma_log2_weight_denom
The value of ChromaLog2WeightDenom should be in the range of 0 to 7,
and the value of luma_log2_weight_denom should also be in the range of
0 to 7. Which means delta_chroma_log2_weight_denom can take values in
the range of -7 to 7.
https://bugzilla.gnome.org/show_bug.cgi?id=753552
Adding this private header to the list of sources. We don't want to make
this header public, but we need it in the list of sources; otherwise it
won't be included in the tarball. This fixes make distcheck.
This regression was introduced by commit 1a6fe3db
Depending on the byte order, we will get BGRA (little endian) or ARGB
(big endian) from the composition overlay buffer, while our GL code
expects RGBA. Add a fragment shader that does this conversion.
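A sketch of such a shader for the little endian case (the exact
swizzle depends on how the bytes land in the RGBA texture):

    static const gchar *bgra_to_rgba_frag =
        "uniform sampler2D tex;\n"
        "varying vec2 v_texcoord;\n"
        "void main () {\n"
        "  gl_FragColor = texture2D (tex, v_texcoord).bgra;\n"
        "}\n";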
https://bugzilla.gnome.org/show_bug.cgi?id=752842
In the case where you have a source giving the GstAggregator smaller
buffers than it uses, when it reaches a timeout, it will consume the
first buffer, then try to read another buffer from the pad. If the
previous element is not fast enough, it may not get the next buffer
even though it was queued just beforehand. To prevent that race, the
easiest solution is to move the queue inside the GstAggregatorPad
itself. It also means that there is no need for strange code caused by
increasing the min latency without increasing the max latency
proportionally.
This also means queuing the synchronized events and possibly acting
on them in the src task.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
We can't know whether the GstGLUpload type is initialized at this
point, and thus our debug category might not be initialized yet...
which would cause an assertion here.
As we don't print debug output for any of the other transform functions, let's
defer this problem for now.
Before, aggregator-based elements always started at running time 0;
now it's possible to select the first input buffer's running time or
to explicitly set a start-time value.
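With this change, usage is roughly (property names as added here, enum
values shown as integers for brevity):

    /* start at the running time of the first buffer... */
    g_object_set (agg, "start-time-selection", 1 /* first */, NULL);

    /* ...or at an explicitly chosen running time (in nanoseconds) */
    g_object_set (agg,
        "start-time-selection", 2 /* set */,
        "start-time", (guint64) (2 * GST_SECOND),
        NULL);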
https://bugzilla.gnome.org/show_bug.cgi?id=749966
Adding a pad will add a new upstream that might have a bigger minimum
latency, so we might have to wait longer. Or it might be the first
live upstream, in which case we will have to start deadline-based
aggregation.
Removing a pad will remove an upstream that might have had the biggest
latency, so we can now stop waiting a bit earlier. Or it might be the
last live upstream, in which case we can stop deadline-based
aggregation.