Currently, if prepare() takes too much time, we skip the call to render().
The side effect of this is that we end up starving render(). The solution
in this patch is to always render frames that are on time before prepare()
is executed. This maximizes the number of frames we display and lets
rendering performance degrade gracefully.
https://bugzilla.gnome.org/show_bug.cgi?id=729335
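A minimal sketch of the new scheduling, assuming hypothetical helpers
(frame_is_on_time(), do_render() and do_prepare() are illustrative names,
not the element's actual API):

    #include <glib.h>

    typedef struct _Frame Frame;

    extern gboolean frame_is_on_time (const Frame * frame);
    extern void do_render (Frame * frame);
    extern Frame *do_prepare (void);

    static void
    schedule_one_cycle (Frame ** pending)
    {
      /* render the frame that is already due before running the possibly
       * slow prepare(), so prepare() can no longer starve render() */
      if (*pending != NULL && frame_is_on_time (*pending))
        do_render (*pending);

      *pending = do_prepare ();
    }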
Add more git repositories to check (so git-version.sh is consistent with
gst-uninstalled) and display the date of the last commit, which is more valuable
information than the last commit's hash.
When the first segment has a position != 0 and position > max-size-time,
it will immediately cause the multiqueue to signal overrun.
This can happen easily with adaptive streams when switching bitrates
and starting a new group. The segment for this new group will have
a position that is much greater than 0 and will lead to this issue.
This is particularly harmful when the adaptive stream uses mpegts,
which doesn't emit no-more-pads; it can then happen that only one
of the stream pads has been added when the multiqueue overruns and gets
the group ready for exposing, so the user ends up with only audio or
only video.
The solution is to fall back to the sink segment while the source pad
has no segment.
https://bugzilla.gnome.org/show_bug.cgi?id=729124
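A rough sketch of the fallback; select_segment() and the surrounding
structure are assumptions standing in for the real multiqueue internals,
only GstSegment and GST_FORMAT_UNDEFINED are actual API:

    #include <gst/gst.h>

    /* Sketch only: while the source pad has not seen a segment yet, use
     * the sink segment so the first buffer's running time is computed
     * correctly and does not immediately trigger overrun. */
    static const GstSegment *
    select_segment (const GstSegment * src_segment,
        const GstSegment * sink_segment)
    {
      if (src_segment->format == GST_FORMAT_UNDEFINED)
        return sink_segment;        /* fall back to the sink segment */

      return src_segment;
    }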
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
Currently we set TAG_MEMORY as soon as a resize changes the size of one
of the memories. This has the side effect that a buffer pool cannot know
whether the memory has simply been resized or whether it has been
replaced, which makes it hard to actually implement _reset(). Instead,
only set TAG_MEMORY if one or more memories have been replaced, and do a
light sanity check of the size.
https://bugzilla.gnome.org/show_bug.cgi?id=727109
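With that change, a pool's reset path can use the flag to tell the two
cases apart. A minimal sketch (can_recycle_buffer() is a hypothetical
helper, GST_BUFFER_FLAG_TAG_MEMORY is the real flag):

    #include <gst/gst.h>

    /* Sketch: TAG_MEMORY now only means "memory was replaced", so a pool
     * can distinguish that from a plain resize. */
    static gboolean
    can_recycle_buffer (GstBuffer * buffer)
    {
      if (GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_TAG_MEMORY))
        return FALSE;   /* memory was replaced, do not simply reset sizes */

      /* only resized: a _reset() implementation can restore the original
       * sizes and keep the same memories */
      return TRUE;
    }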
We might not have reached PAUSED yet because of an async error,
but nonetheless we want to make sure that the pads are always
deactivated in READY state.
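For illustration, the intent boils down to the following heavily
simplified sketch; real code would iterate pads safely with
gst_element_iterate_pads() and handle locking:

    #include <gst/gst.h>

    /* Sketch: deactivate all pads when reaching READY, regardless of
     * whether PAUSED was ever reached (e.g. after an async error). */
    static void
    deactivate_pads (GstElement * element)
    {
      GList *walk;

      for (walk = element->pads; walk != NULL; walk = walk->next)
        gst_pad_set_active (GST_PAD (walk->data), FALSE);
    }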
The step can end up being zero if the underlying value isn't a valid
range GValue.
In those cases, return FALSE.
We don't use g_return*_if_fail since it will already have been triggered
by the above-mentioned _get_step() functions.
CID #1037132
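A sketch of the guard, assuming an int-range value for illustration
(fixate_with_step() is a made-up name):

    #include <gst/gst.h>

    static gboolean
    fixate_with_step (const GValue * value)
    {
      gint step = gst_value_get_int_range_step (value);

      /* step is 0 when the value was not a valid range GValue; the getter
       * has already warned, so just bail out instead of dividing by 0 */
      if (step == 0)
        return FALSE;

      /* ... continue fixating using step ... */
      return TRUE;
    }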
This should allow for more meaningful errors. Dereferencing NULL
is more useful information than dereferencing a random address that
happened to be on the stack.
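For illustration only (the variable name is made up), the change amounts
to initialising locals up front:

    GstCaps *caps = NULL;   /* a missed assignment now crashes on NULL
                             * rather than on stack garbage */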
The qlock is released between popping a buffer from the queue
and pushing it. When this buffer causes the sink to wait in
preroll, this window lets a query see that the queue is empty,
push the query into the queue and then wait for it to be serviced.
However, that will not happen until after preroll, so the query
blocks. If upstream was waiting on buffering to reach 100% before
switching to PLAYING, a deadlock would ensue.
This had been fixed recently by failing queries while queue2 was
buffering, but that happens to break another case (playbin with a
matroska file on a local http server), while this patch works for
both.
See https://bugzilla.gnome.org/show_bug.cgi?id=728345
Keep it simple. Likely also makes things easier for bindings,
and efficiency clearly has not been a consideration given how
the existing code handled these lists.
In order to be deterministic, multiple waiting GstClockIDs need to be
released at the same time, or else one can get into the situation where
the ID released first adds itself back again before the next waiting
one is released.
Test added for the new API and old tests rewritten to comply.
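Assuming the new API referred to here is the GstTestClock id-list
helpers (an assumption on my part), releasing all waiters in one step
looks roughly like this:

    #include <gst/check/gsttestclock.h>

    /* Sketch: wait until 'expected' IDs are pending, then advance the
     * clock and process them all together, so an ID that re-registers
     * itself cannot jump ahead of the other waiters. */
    static void
    release_all_waiting (GstTestClock * test_clock, guint expected)
    {
      GList *pending = NULL;

      gst_test_clock_wait_for_multiple_pending_ids (test_clock, expected,
          &pending);
      gst_test_clock_set_time (test_clock,
          gst_test_clock_id_list_get_latest_time (pending));
      gst_test_clock_process_id_list (test_clock, pending);

      g_list_free_full (pending, (GDestroyNotify) gst_clock_id_unref);
    }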
We want to iterate over the items from idx to idx + length. We use the
len variable as the corrected number of memories to iterate over and
then properly go over all of them.
Fixes the issue where specifying any idx different from 0 had no effect.
Spotted by the clang static analyzer.
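The loop in question then looks roughly like this (visit_memory_range()
is a hypothetical stand-in for the function being fixed):

    #include <gst/gst.h>

    static void
    visit_memory_range (GstBuffer * buffer, guint idx, gint length)
    {
      guint i, len;

      /* length == -1 means "all remaining memories starting at idx" */
      len = (length == -1) ?
          gst_buffer_n_memory (buffer) - idx : (guint) length;

      /* iterate items idx .. idx + len - 1, not 0 .. len - 1 */
      for (i = idx; i < idx + len; i++) {
        GstMemory *mem = gst_buffer_peek_memory (buffer, i);
        /* ... operate on mem ... */
        (void) mem;
      }
    }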