The functions to get the next fragment, to get the next fragment timestamp
and to advance to the next fragment need to work differently when
stream->segments is NULL. Use logic similar to that introduced by
commit 2105a310 to implement them.
https://bugzilla.gnome.org/show_bug.cgi?id=749684
Previously the VPS unit was detected and all following packets were copied
into the header buffer, assuming only SPS and PPS would follow. This is not
always true: other types of NAL units can follow the VPS unit and were also
copied into the header buffer. Now the VPS/SPS/PPS units are explicitly
detected and only those are copied into the header buffer.
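A rough sketch of the detection (illustrative, not the exact code of the
commit), assuming data points at the first byte of a NAL unit header:

    /* H.265 nal_unit_type lives in bits 1-6 of the first header byte */
    guint8 nal_type = (data[0] >> 1) & 0x3f;

    /* only VPS (32), SPS (33) and PPS (34) belong in the header buffer */
    if (nal_type == 32 || nal_type == 33 || nal_type == 34) {
      /* copy this NAL unit into the header buffer */
    } else {
      /* leave any other NAL unit type out of the header buffer */
    }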
1. Set the sync point after the (possible) upload has occurred
2. Wait in the correct GL context (the draw context)
Note: We don't add the GL sync meta to the input buffer as it's not
writable and a copy would be expensive.
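Roughly how the GstGLSyncMeta API is used for this; a sketch in which
upload_context, draw_context and outbuf are placeholders for the element's
own objects:

    /* outbuf is our own writable buffer holding the uploaded texture */
    GstGLSyncMeta *sync = gst_buffer_get_gl_sync_meta (outbuf);
    if (!sync)
      sync = gst_buffer_add_gl_sync_meta (upload_context, outbuf);

    /* 1. record the sync point once the upload has actually happened */
    gst_gl_sync_meta_set_sync_point (sync, upload_context);

    /* ... later, just before sampling the texture ... */

    /* 2. wait in the GL context that will do the drawing */
    gst_gl_sync_meta_wait (sync, draw_context);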
Similar to the change with the same name for glimagesink
1. Set the sync point after the (possible) upload has occurred
2. Wait in the correct GL context (the draw context)
Note: We don't add the GL sync meta to the input buffer as it's not
writable and a copy would be expensive.
The level property has a minimum value of 0, but setting the level to 0
triggers an assertion error: the function icvPyrSegmentation8uC3R fails when
level is set to 0, since the minimum number of levels can't be 0. Hence
change the minimum value of the property to 1.
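A sketch of the property registration with the new minimum (the range,
default and identifiers shown are illustrative):

    g_object_class_install_property (gobject_class, PROP_LEVEL,
        g_param_spec_int ("level", "Level",
            "Number of levels used for the pyramid segmentation",
            1, 4, DEFAULT_LEVEL,
            G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));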
https://bugzilla.gnome.org/show_bug.cgi?id=749525
When all fragments of a live stream have already been downloaded, dashdemux
would busy loop because the default implementation of has_next_fragment
returns TRUE. Implement it so that it correctly signals whether adaptivedemux
should wait for a manifest update before trying to get new fragments.
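A sketch of the intent, with purely illustrative type and field names (the
real code overrides the corresponding adaptivedemux vfunc):

    static gboolean
    dash_stream_has_next_fragment (DashStream * stream)
    {
      /* once every fragment from the current manifest of a live stream has
       * been downloaded, report FALSE so the base class waits for the next
       * manifest update instead of busy looping */
      if (stream->is_live && stream->next_fragment_index >= stream->n_fragments)
        return FALSE;

      return TRUE;
    }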
When updating the manifest, the timestamps in it might have changed slightly
due to rounding and timescale conversions. If the change makes the timestamp
of the current segment go up, dashdemux repositions to the previous segment,
causing one extra unnecessary download. So when repositioning, add an extra
10 microseconds to cover for those rounding issues and increase the chance of
landing in the same segment.
Additionally, improve the time used when the client is already past the last
segment: instead of using the last segment's starting timestamp, use its
final timestamp, so that it repositions to the next segment and not to the
one that has already been downloaded.
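A sketch of both tweaks, where current_position, last_segment_start,
last_segment_duration and reposition_to_segment_containing are illustrative
stand-ins for the actual dashdemux code:

    GstClockTime target;

    if (current_position >= last_segment_start + last_segment_duration) {
      /* already past the last segment: use its end time as the reference so
       * we continue with the next segment once the manifest provides it,
       * instead of re-downloading the segment we already have */
      target = last_segment_start + last_segment_duration;
    } else {
      /* add a small tolerance so rounding and timescale differences in the
       * refreshed manifest don't push us back into the previous segment */
      target = current_position + 10 * GST_USECOND;
    }

    reposition_to_segment_containing (stream, target);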
The functions for directly getting and setting segment indexes are no longer
useful, as we now need two indexes: the repeat index and the segment index.
The only operations needed are advance_segment, going back to the first one,
and seeking to a timestamp.
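A sketch of the remaining interface; names and signatures are illustrative,
not the exact dashdemux ones:

    /* move one repeat or one segment forward (or backward) */
    GstFlowReturn advance_segment       (Stream * stream, gboolean forward);

    /* rewind both the segment index and the repeat index to the start */
    void          seek_to_first_segment (Stream * stream);

    /* position both indexes on the segment that contains 'ts' */
    gboolean      seek_to_time          (Stream * stream, GstClockTime ts);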
Segments are now stored with their repeat counts instead of being expanded
into multiple segments. Previously, advancing to the next segment using a
single index meant iterating over the whole list every time. This commit
addresses that by storing both the segment index and the repeat index, making
advancing to the next segment just an increment of the repeat index or the
segment index.
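Advancing then looks roughly like this (illustrative field names; 'repeat'
here means the segment is played repeat + 1 times in a row):

    GstMediaSegment *segment =
        g_ptr_array_index (stream->segments, stream->segment_index);

    if (stream->repeat_index < segment->repeat) {
      /* stay on the same segment, only bump the repeat counter */
      stream->repeat_index++;
    } else {
      /* move on to the next segment in the list */
      stream->segment_index++;
      stream->repeat_index = 0;
    }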
Use a single segment to represent it internally to avoid using too much
memory. This has the drawback of requiring a linear search to find the
correct segment to play, but that can be fixed by using a binary search or by
caching the current position and just looking for the next one.
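The linear search mentioned above then looks roughly like this (illustrative
names):

    guint i;

    for (i = 0; i < stream->segments->len; i++) {
      GstMediaSegment *seg = g_ptr_array_index (stream->segments, i);
      GstClockTime end = seg->start + (seg->repeat + 1) * seg->duration;

      /* the first segment whose time range covers ts wins */
      if (ts < end) {
        stream->segment_index = i;
        stream->repeat_index =
            ts > seg->start ? (ts - seg->start) / seg->duration : 0;
        break;
      }
    }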
https://bugzilla.gnome.org/show_bug.cgi?id=748369
The custom code is wrong as it ignores the pad templates, which leads to
missing fields in the result. Instead, simply use the default get_caps
implementation, which does this correctly (get the template caps, intersect
with the filter and return the result).
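Roughly what the default implementation boils down to (a sketch, not the
exact core code):

    static GstCaps *
    default_get_caps (GstPad * pad, GstCaps * filter)
    {
      GstCaps *templ = gst_pad_get_pad_template_caps (pad);
      GstCaps *caps;

      if (filter) {
        caps = gst_caps_intersect_full (filter, templ,
            GST_CAPS_INTERSECT_FIRST);
        gst_caps_unref (templ);
      } else {
        caps = templ;
      }

      return caps;
    }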
https://bugzilla.gnome.org/show_bug.cgi?id=749237
Without this, we would fixate to weird pixel-aspect-ratios like 1/2147483647.
But in the end, all the negotiation code in videoaggregator needs a big
cleanup, and videoaggregator needs to get rid of the software-mixer specific
things everywhere.
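A sketch of the kind of fixation meant here (illustrative; the actual
videoaggregator code differs):

    GstStructure *s = gst_caps_get_structure (caps, 0);

    /* prefer a pixel-aspect-ratio close to 1/1 instead of letting the
     * fixation pick an extreme value such as 1/2147483647 */
    gst_structure_fixate_field_nearest_fraction (s,
        "pixel-aspect-ratio", 1, 1);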