4aadf50012
In order to calculate the *actual* bitrate for downloading a fragment, we need to take into account the time since we requested the fragment. Without this, the bitrate calculations (previously reported by queue2) would be biased, since they would not take into account the request latency (that is, the time between the moment we request a specific URI and the moment we receive the first byte of that request).

An example where the calculation would be biased is a high-bandwidth but high-latency network. If you download 5 MB in 500 ms, but it takes 200 ms to get the first byte, queue2 would report 80 Mbit/s (5 MB in 500 ms), but taking the request latency into account it is only 57 Mbit/s (5 MB in 700 ms).

While this would not cause too many issues if the fragment represented a long duration (say 5 s of content), it does cause issues with short fragments (say 1 s, or keyframe-only requests, which are even shorter), where the code would expect to be able to download at up to 80 Mbit/s, whereas the achievable rate including the request time is much lower (and we would therefore end up making late requests).

Also calculate the request latency for debugging purposes and further usage (it could allow us to figure out the maximum request rate, for example).

https://bugzilla.gnome.org/show_bug.cgi?id=733959
https://bugzilla.gnome.org/show_bug.cgi?id=772330
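A minimal sketch of the calculation described above, assuming hypothetical names (this is not the actual adaptivedemux code): the effective bitrate is measured from the moment the fragment was requested, not from the first received byte, so the request latency is part of the denominator. It reproduces the 80 vs. 57 Mbit/s figures from the commit message.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: compute the effective download bitrate in
 * Mbit/s, counting the elapsed time from the moment the request was
 * issued (request latency + transfer time), not just the transfer
 * time itself. */
static double
effective_bitrate_mbps (uint64_t bytes, uint64_t request_latency_ms,
    uint64_t transfer_time_ms)
{
  uint64_t total_ms = request_latency_ms + transfer_time_ms;

  if (total_ms == 0)
    return 0.0;

  /* bytes -> bits, ms -> s: (bytes * 8) / (total_ms / 1000), in Mbit/s */
  return (double) bytes * 8.0 / ((double) total_ms / 1000.0) / 1e6;
}

int
main (void)
{
  uint64_t fragment_bytes = 5 * 1000 * 1000;    /* 5 MB fragment */

  /* Biased figure (queue2-style): transfer time only. */
  printf ("ignoring request latency:  %.0f Mbit/s\n",
      effective_bitrate_mbps (fragment_bytes, 0, 500));    /* ~80 */

  /* Figure including the 200 ms request latency. */
  printf ("including request latency: %.0f Mbit/s\n",
      effective_bitrate_mbps (fragment_bytes, 200, 500));  /* ~57 */

  return 0;
}
```

Measuring from the request rather than the first byte matters most for short fragments: the fixed latency is amortized over less transfer time, so the gap between the two figures grows as fragments shrink.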