The functions that get the next fragment, get the next fragment timestamp and
advance to the next fragment need to work differently when stream->segments is
NULL. Use logic similar to that introduced by commit 2105a310 to implement
them.
https://bugzilla.gnome.org/show_bug.cgi?id=749684
Previously the VPS unit was detected and all following packets were copied
into the header buffer, assuming only SPS and PPS would follow. This is not
always true: other types of NAL units can follow the VPS unit and were copied
into the header buffer as well. Now the VPS/SPS/PPS units are explicitly
detected and copied into the header buffer.
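A minimal sketch of the idea in C (the NAL type values are from the H.265 spec; the helper and the surrounding logic are illustrative, not the actual parser code):

  #include <glib.h>

  /* H.265 nal_unit_type values for the parameter sets */
  #define NAL_VPS 32
  #define NAL_SPS 33
  #define NAL_PPS 34

  static gboolean
  nal_is_parameter_set (guint8 nal_type)
  {
    return nal_type == NAL_VPS || nal_type == NAL_SPS || nal_type == NAL_PPS;
  }

  /* While collecting the stream headers:
   *   if (nal_is_parameter_set (nal_type))
   *     append this NAL unit to the header buffer;
   *   else
   *     stop collecting headers and handle the unit as normal data.
   */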
1. Set the sync point after the (possible) upload has occurred
2. Wait in the correct GL context (the draw context)
Note: We don't add the GL sync meta to the input buffer as it's not
writable and a copy would be expensive.
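A rough sketch of the two steps, assuming the GstGLSyncMeta API; the function and variable names are illustrative, not the actual element code:

  #include <gst/gl/gl.h>

  static void
  sync_after_upload (GstBuffer * uploaded, GstGLContext * upload_context,
      GstGLContext * draw_context)
  {
    GstGLSyncMeta *sync;

    /* 1. add the meta to our own (writable) buffer and set the sync point
     *    only after the upload has actually happened */
    sync = gst_buffer_add_gl_sync_meta (upload_context, uploaded);
    gst_gl_sync_meta_set_sync_point (sync, upload_context);

    /* 2. wait in the context that will actually use the texture (the draw
     *    context), not the upload context */
    gst_gl_sync_meta_wait (sync, draw_context);
  }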
Similar to the change with the same name for glimagesink
1. Set the sync point after the (possible) upload has occurred
2. Wait in the correct GL context (the draw context)
Note: We don't add the GL sync meta to the input buffer as it's not
writable and a copy would be expensive.
The level property has a minimum value of 0, but setting the level to 0
triggers an assertion error: icvPyrSegmentation8uC3R returns false when level
is set to 0, since the minimum level can't be 0, and this results in an error.
Hence change the minimum value to 1.
https://bugzilla.gnome.org/show_bug.cgi?id=749525
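A hedged sketch of what such a property registration looks like, assuming the usual class_init setup; the maximum and default shown are placeholders, only the minimum of 1 is the point:

  #include <glib-object.h>

  static void
  demo_class_init (GObjectClass * gobject_class)
  {
    g_object_class_install_property (gobject_class, 1 /* PROP_LEVEL */,
        g_param_spec_int ("level", "Level",
            "Number of pyramid levels used for segmentation",
            1,  /* minimum: was 0, which icvPyrSegmentation8uC3R rejects */
            8,  /* maximum (placeholder) */
            4,  /* default (placeholder) */
            G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
  }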
When all fragments have already been downloaded on a live stream
dashdemux would busy loop as the default implementation of
has_next_fragment would return TRUE. Implement it to correctly
signal whether adaptivedemux should wait for the manifest update
before trying to get new fragments.
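A simplified sketch of the idea (types and fields are made up for illustration, this is not the actual dashdemux implementation):

  #include <glib.h>

  typedef struct
  {
    gboolean is_live;
    guint current_segment;      /* index of the next segment to download */
    guint num_segments;         /* segments known from the current manifest */
  } DemoLiveStream;

  static gboolean
  demo_stream_has_next_fragment (DemoLiveStream * stream)
  {
    /* For live streams, report FALSE once everything currently known has
     * been downloaded so the base class waits for the next manifest update
     * instead of busy-looping. */
    if (stream->is_live)
      return stream->current_segment < stream->num_segments;

    /* On-demand: the old default of TRUE is fine, EOS is detected when
     * advancing past the last fragment. */
    return TRUE;
  }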
When updating the manifest, the timestamps in it might have changed a little
due to rounding and timescale conversions. If the change makes the timestamp
of the current segment go up, dashdemux repositions to the previous segment,
causing one extra unnecessary download.
So when repositioning, add an extra 10 microseconds to cover for such rounding
issues and increase the chance of landing in the same segment.
Additionally, improve the time used when the client is already past the last
segment. Instead of using the last segment's starting timestamp, use the final
timestamp so that it repositions to the next segment and not to the one that
has already been downloaded.
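A minimal illustration of the repositioning tweak, assuming a GstClockTime position (the helper name is made up):

  #include <gst/gst.h>

  /* Nudge the target forward slightly so that rounding/timescale differences
   * in the refreshed manifest don't push us back into the previous (already
   * downloaded) segment. */
  static GstClockTime
  demo_reposition_target (GstClockTime current_position)
  {
    return current_position + 10 * GST_USECOND;
  }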
These functions for directly getting and setting segment indexes are no longer
useful, as we now need two indexes: the repeat index and the segment index.
The only operations needed are advance_segment, going back to the first
segment, or seeking to a timestamp.
Segments are now stored with their repeat counts instead of being expanded
into multiple segments. This meant that advancing to the next segment using a
single index had to iterate over the whole list every time.
This commit addresses that by storing both the segment index and the repeat
index, making advancing to the next segment a simple increment of either the
repeat index or the segment index.
Use a single segment to represent it internally to avoid using too much
memory. This has the drawback of requiring a linear search to find the correct
segment to play, but that can be fixed later by using a binary search or by
caching the current position and just looking for the next one.
https://bugzilla.gnome.org/show_bug.cgi?id=748369
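A sketch of the two-index bookkeeping described above, with illustrative type and field names rather than the real dashdemux structures:

  #include <gst/gst.h>

  typedef struct
  {
    GstClockTime start;
    GstClockTime duration;
    guint repeat_count;         /* the segment plays repeat_count + 1 times */
  } DemoSegment;

  typedef struct
  {
    GPtrArray *segments;        /* array of DemoSegment */
    guint segment_index;
    guint repeat_index;
  } DemoStream;

  static gboolean
  demo_stream_advance_segment (DemoStream * stream)
  {
    DemoSegment *seg;

    if (stream->segment_index >= stream->segments->len)
      return FALSE;             /* already past the last segment */

    seg = g_ptr_array_index (stream->segments, stream->segment_index);
    if (stream->repeat_index < seg->repeat_count) {
      stream->repeat_index++;   /* next repetition of the same segment */
    } else {
      stream->repeat_index = 0;
      stream->segment_index++;  /* move on to the next stored segment */
    }

    return stream->segment_index < stream->segments->len;
  }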
The custom code is wrong as it ignores the templates, which leads to
missing fields in the result. Instead, simply use the default get_caps
implementation which does it correctly (get the template, intersect
with filter and return).
https://bugzilla.gnome.org/show_bug.cgi?id=749237
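For reference, the default behaviour being relied on boils down to something like this (simplified sketch, not the actual base-class code):

  #include <gst/gst.h>

  static GstCaps *
  demo_get_caps (GstPad * pad, GstCaps * filter)
  {
    /* take the pad template caps ... */
    GstCaps *templ = gst_pad_get_pad_template_caps (pad);
    GstCaps *result;

    /* ... intersect with the filter and return, so no template fields get
     * lost along the way */
    if (filter) {
      result = gst_caps_intersect_full (filter, templ, GST_CAPS_INTERSECT_FIRST);
      gst_caps_unref (templ);
    } else {
      result = templ;
    }

    return result;
  }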
Without this, we will fixate weird pixel-aspect-ratios like 1/2147483647. But
in the end, all the negotiation code in videoaggregator needs a big cleanup
and videoaggregator needs to get rid of the software-mixer specific things
everywhere.
Upstream might not give us a caps event (dtlssrtpdec) because it might be an
RTP/RTCP mixed stream, but we split the two streams anyway and should report
proper caps downstream if possible.
Fixes "sticky event misordering" warnings with dtlssrtpdec.
And provide home-made fallbacks for older GLib versions,
so that we can later find these and remove them when
we bump the GLib requirement (which is certainly going
to happen before 2.0).
https://bugzilla.gnome.org/show_bug.cgi?id=748495
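The pattern being described looks roughly like this; the function used here (g_strv_contains, added in GLib 2.44) is only an example, not necessarily the API this commit wraps:

  #include <glib.h>

  #if !GLIB_CHECK_VERSION(2, 44, 0)
  /* FIXME 2.0: remove this fallback once the GLib requirement is >= 2.44 */
  static gboolean
  gst_g_strv_contains (const gchar * const *strv, const gchar * str)
  {
    for (; *strv != NULL; strv++) {
      if (g_strcmp0 (*strv, str) == 0)
        return TRUE;
    }
    return FALSE;
  }
  #define g_strv_contains gst_g_strv_contains
  #endif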
It's better to just select some random variant playlist instead of stopping;
chances are that it will keep working and we might just have to select a
different variant again later.
We should only refresh the currently selected variant playlist (if any,
otherwise the main playlist), not the main playlist. And only try to
refresh the main playlist if updating the variant playlist fails.
Some servers (Wowza) use the request for the main playlist to create a
"session", which is then part of the URI of the variant playlist and also of
the fragments. Refreshing the main playlist would generate a new session, and
the server usually rate-limits that. After a few retries the server just
kicks us out.
As a side effect we now use the same downloader for all playlists, so that we
only have 2 instead of 3 connections to the server. Previously we also just
ignored the downloaded data from the main playlist that the base class gave
to us.
When the segment is very short it might be the case that typefinding fails,
and when finishing the segment hlsdemux would consider the remaining data
(pending_buffer) as an encryption leftover.
This patch fixes that and makes sure an error is properly posted if
typefinding failed, by refactoring buffer handling into a function and using
it from both the data_received and finish_fragment functions.
We also have to update the current_file GList pointer in the M3U playlist
client, otherwise we are just continuing playback from the current position
instead of seeking.
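A rough sketch of the idea; the structures and field names are assumptions for illustration, not the real M3U8 client types:

  #include <gst/gst.h>

  typedef struct
  {
    GstClockTime timestamp;
    GstClockTime duration;
  } DemoFragment;

  typedef struct
  {
    GList *files;               /* list of DemoFragment */
    GList *current_file;
  } DemoPlaylist;

  static void
  demo_playlist_seek (DemoPlaylist * playlist, GstClockTime target)
  {
    GList *walk;

    for (walk = playlist->files; walk; walk = walk->next) {
      DemoFragment *frag = walk->data;

      /* move current_file to the fragment that contains the seek target */
      if (target < frag->timestamp + frag->duration) {
        playlist->current_file = walk;
        return;
      }
    }

    /* target is past the last fragment */
    playlist->current_file = g_list_last (playlist->files);
  }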