If we have already seen the keyframes that we need to find,
we do not need to bisect to find them.
This will always be the case for audio-only streams,
where each frame acts as a keyframe, but it will occasionally
also happen for streams with video.
https://bugzilla.gnome.org/show_bug.cgi?id=662475
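As a rough sketch of the idea (all names here are invented, not the
actual oggdemux code), the lookup-before-bisect logic amounts to:

    #include <glib.h>

    /* Hypothetical sketch: if the keyframe for the seek target was
     * already observed while scanning, look it up directly instead
     * of bisecting for it. */
    typedef struct {
      guint64 granule;          /* keyframe granule position */
      guint64 offset;           /* byte offset in the stream */
    } SeenKeyframe;

    static gboolean
    lookup_seen_keyframe (GArray * seen, guint64 target, guint64 * offset)
    {
      guint i;

      for (i = 0; i < seen->len; i++) {
        SeenKeyframe *kf = &g_array_index (seen, SeenKeyframe, i);

        if (kf->granule == target) {
          *offset = kf->offset;
          return TRUE;          /* hit: no bisection needed */
        }
      }
      return FALSE;             /* miss: fall back to bisecting */
    }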
Opus streams outside of Ogg may not have headers, and oggstream
may be used by oggmux to mux an Opus stream which does not come
from Ogg, and thus has no headers.
Determining headerness by packet count would strip the first two
packets from such an Opus stream, clipping a very small amount
of audio at the beginning of the stream.
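A content-based check is possible because Opus-in-Ogg header packets
carry fixed magics; a sketch (the function name is illustrative, and
this is not necessarily the exact check the commit uses):

    #include <glib.h>
    #include <string.h>

    /* Header packets in Opus-in-Ogg start with the "OpusHead" or
     * "OpusTags" magic, so anything else is audio data even if it
     * happens to be one of the first two packets on the stream. */
    static gboolean
    opus_packet_is_header (const guint8 * data, gsize size)
    {
      return size >= 8 &&
          (memcmp (data, "OpusHead", 8) == 0 ||
           memcmp (data, "OpusTags", 8) == 0);
    }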
In push mode, we determine the duration by doing a seek to the end of
the stream. However, a skeleton stream with an index will cause the
duration to be known already, and we end up never setting the
push_time_duration variable which we use to know that the duration has
been determined.
https://bugzilla.gnome.org/show_bug.cgi?id=662049
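The shape of the fix, as a sketch (only push_time_duration comes from
this message; the surrounding struct and function are stand-ins):

    #include <gst/gst.h>

    typedef struct {
      GstClockTime push_time_duration;  /* GST_CLOCK_TIME_NONE = unknown */
    } OggDemuxSketch;

    /* When the skeleton index already yielded a duration, record it
     * as the push-mode duration too, so we never schedule the
     * seek-to-end normally used to probe it. */
    static void
    maybe_set_push_duration (OggDemuxSketch * ogg, GstClockTime duration)
    {
      if (!GST_CLOCK_TIME_IS_VALID (ogg->push_time_duration) &&
          GST_CLOCK_TIME_IS_VALID (duration))
        ogg->push_time_duration = duration;
    }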
If they go away when calling snd_config_update_free_global, they're
not really leaks caused by bugs, but more like intentional ones we
don't want to be told about.
https://bugzilla.gnome.org/show_bug.cgi?id=615342
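A minimal way to verify this under valgrind, assuming a working
"default" playback device:

    #include <alsa/asoundlib.h>

    int
    main (void)
    {
      snd_pcm_t *pcm;

      /* alsa-lib caches its global configuration on first use. */
      if (snd_pcm_open (&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) == 0)
        snd_pcm_close (pcm);

      /* Freeing the global config cache here makes the corresponding
       * valgrind "leaks" vanish, showing they are intentional caches
       * rather than real bugs. */
      snd_config_update_free_global ();

      return 0;
    }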
Now we can configure how much time to wait before deciding that a
discont has happened.
Also adds a getter and setter to allow derived implementations to set
this value upon construction.
Suggestions and several improvements by Havard Graff.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
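For derived implementations, the accessor pair could look roughly like
this (a minimal sketch; the type and function names are illustrative,
not the real API):

    #include <gst/gst.h>

    typedef struct {
      GstObject parent;
      GstClockTime discont_wait;  /* time to wait before declaring a
                                   * discont */
    } SketchAudioSink;

    static void
    sketch_audio_sink_set_discont_wait (SketchAudioSink * sink,
        GstClockTime discont_wait)
    {
      GST_OBJECT_LOCK (sink);
      sink->discont_wait = discont_wait;
      GST_OBJECT_UNLOCK (sink);
    }

    static GstClockTime
    sketch_audio_sink_get_discont_wait (SketchAudioSink * sink)
    {
      GstClockTime res;

      GST_OBJECT_LOCK (sink);
      res = sink->discont_wait;
      GST_OBJECT_UNLOCK (sink);

      return res;
    }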
A common problem for audio playback is that the timestamps might not
be completely linear. This is especially common when doing streaming
over a network, where you can have jittery and/or bursty packet
transmission, which again will often be reflected in the buffer
timestamps.
Now, the current implementation has a threshold that says how far the
buffer timestamp is allowed to drift from the ideal aligned time in the
ringbuffer. This was an instant reaction, and meant that if one buffer
arrived with a timestamp that breached the drift-tolerance, a resync
would take place, and the result would be an audible gap for the
listener.
The annoying thing would be that in the case of a "timestamp-outlier",
you would first resync one way, say +100ms, and then, if the next
timestamp was "back on track", you would end up resyncing the other way
(-100ms). So in fact, when you had only one buffer with slightly off
timestamping, you would end up with *two* audible gaps. This is the
problem this patch addresses.
The way to "fix" this problem with the previous implementation, would
have been to increase the "drift-tolerance" to a value that was greater
than the largest timestamp-outlier one would normally expect. The big
problem with this approach, however, is that it will allow normal
operations with a huge offset timestamp vs running-time, which is
detrimental to lip-sync. If the drift-tolerance is set to 200ms, it
basically means that lip-sync can easily end up being off by that much.
This patch will basically start a timer when the first breach of
drift-tolerance is detected. If any following timestamp for the next n
nanoseconds gets "back on track" within the threshold, it has basically
eliminated the effect of an outlier, and the timer is stopped. If,
however, all timestamps within this time-limit are breaching the
threshold, we are probably facing a more permanent offset in the
timestamps, and a resync is allowed to happen.
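Condensed into a sketch (illustrative names; the real sink code is
more involved), the resync decision looks roughly like this:

    #include <gst/gst.h>

    typedef struct {
      GstClockTime discont_wait;   /* how long a breach must persist */
      GstClockTime discont_start;  /* when the first breach was seen,
                                    * or GST_CLOCK_TIME_NONE */
    } DriftState;

    /* Returns TRUE when a resync should actually be performed. */
    static gboolean
    should_resync (DriftState * s, GstClockTime now,
        GstClockTimeDiff drift, GstClockTimeDiff tolerance)
    {
      if (ABS (drift) <= tolerance) {
        /* Back within tolerance: the breach was an outlier, so stop
         * the timer and do nothing. */
        s->discont_start = GST_CLOCK_TIME_NONE;
        return FALSE;
      }

      if (!GST_CLOCK_TIME_IS_VALID (s->discont_start)) {
        /* First breach: start the timer instead of resyncing
         * immediately. */
        s->discont_start = now;
        return FALSE;
      }

      /* Resync only if every timestamp has kept breaching the
       * threshold for discont_wait. */
      return now - s->discont_start >= s->discont_wait;
    }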
So basically this patch offers something as rare as both higher
accuracy, in terms of allowing smaller drift-tolerances, and much
smoother, less glitchy playback!
Commit message and improvements by Havard Graff.
Fixes bug #640859.
This allows us to easily get hold of all pads on a stream-topology
message, including pre-decoder ones, while "pad" only gives us access
to the raw pads (as used by discoverer).
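For reference, a sketch of how discoverer-style code reads the
existing "pad" field from a topology structure (assuming the field is
stored as GST_TYPE_PAD):

    #include <gst/gst.h>

    static GstPad *
    topology_get_raw_pad (const GstStructure * topology)
    {
      GstPad *pad = NULL;

      gst_structure_get (topology, "pad", GST_TYPE_PAD, &pad, NULL);

      return pad;  /* NULL if the field is absent; caller unrefs */
    }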