baseparse would pass whatever is left in the adapter to the
subclass when draining, even if it's less than the minimum
frame size required. This is bogus; baseparse should just
discard that data instead. The original intention of that code
seems to have been that if we have more data available than
the minimum required, we should pass all of the available data
and not just the minimum required, which does make sense, so
we'll continue to do that when more data is available.
Fixes assertions in rawvideoparse on EOS after not-negotiated with
fakesrc sizetype=random ! queue ! rawvideoparse format=rgb ! appsink caps=video/x-raw,format=I420
https://bugzilla.gnome.org/show_bug.cgi?id=773666
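A rough sketch of the intended drain behaviour; the helper and variable names
here are illustrative, not the actual baseparse internals:

  #include <gst/gst.h>
  #include <gst/base/gstadapter.h>

  /* Drain on EOS: discard leftovers smaller than one frame, otherwise hand
   * everything that is available to the subclass. */
  static void
  drain_leftover (GstAdapter * adapter, gsize min_frame_size)
  {
    gsize avail = gst_adapter_available (adapter);

    if (avail < min_frame_size) {
      /* Less than one minimal frame left: just discard it. */
      gst_adapter_clear (adapter);
      return;
    }

    /* More than the minimum available: pass all of it, not just
     * min_frame_size bytes. */
    GstBuffer *buf = gst_adapter_take_buffer (adapter, avail);
    /* ... forward buf to the subclass's handle_frame() here ... */
    gst_buffer_unref (buf);
  }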
The durations of the buffers (usually) assume that no frames are being
dropped and are just the durations coming from the stream. However, when
doing trick modes, frames are dropped regularly, especially if only key
units are supposed to be played.
Fixes completely bogus QoS proportion values in the above case.
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Mathieu Duponchelle <mathieu.duponchelle@opencreed.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
If segment.stop was given, and the subclass provides a size that might be
smaller than segment.stop and also smaller than the actual size, we would
stop reading there already.
Instead, try reading up to segment.stop; the goal is to ignore the (possibly
inaccurate) size the subclass gives and keep reading until segment.stop or
until the subclass tells us to stop.
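A small sketch of the intended clamping; variable and helper names are
illustrative, not the actual basesrc code:

  #include <gst/gst.h>

  /* Pick the read limit, preferring segment.stop over the (possibly
   * underestimated) size reported by the subclass. */
  static guint64
  choose_read_limit (const GstSegment * segment, guint64 reported_size)
  {
    if (segment->stop != (guint64) -1)
      return segment->stop;     /* read until segment.stop (or subclass EOS) */

    return reported_size;       /* no stop configured: fall back to the size */
  }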
Waiting before posting calculated bitrates seems to be the
intent of the code, so avoid adding them to the tag list
pushed with the first frame.
When the threshold is reached, gst_base_parse_update_bitrates
sets tags_changed, so the calculated bitrates get posted at
that moment.
This prevents an insane average calculated from just the
first (key) frame from getting posted.
https://bugzilla.gnome.org/show_bug.cgi?id=768439
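A hedged sketch of that flow; the threshold constant and parameter names are
illustrative, not the actual gstbaseparse.c symbols:

  #include <gst/gst.h>

  /* Only expose calculated bitrates once enough frames have been seen, so a
   * single (key) frame cannot produce a wildly wrong average. */
  #define MIN_FRAMES_TO_POST_BITRATE 10   /* illustrative threshold */

  static void
  update_bitrates (guint framecount, guint avg_bitrate, GstTagList * tags,
      gboolean * tags_changed)
  {
    if (framecount < MIN_FRAMES_TO_POST_BITRATE)
      return;                   /* too early, don't tag the first frame(s) */

    gst_tag_list_add (tags, GST_TAG_MERGE_REPLACE,
        GST_TAG_BITRATE, avg_bitrate, NULL);
    *tags_changed = TRUE;       /* gets posted with the next pushed frame */
  }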
There must be a SEGMENT event before the GAP event, and SEGMENT events must
come after any CAPS event. However, we did not produce any CAPS yet, so we
need to ensure that the CAPS event is inserted before the SEGMENT event in
the pending events list.
https://bugzilla.gnome.org/show_bug.cgi?id=766970
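A small sketch of the required ordering when building such a pending-events
list (illustrative; assumes a GList of events that is later sent head-first):

  #include <gst/gst.h>

  /* Events must go out as STREAM_START -> CAPS -> SEGMENT before any GAP
   * event or buffer, so queue CAPS ahead of SEGMENT. */
  static GList *
  queue_pending_events (GList * pending, GstCaps * caps,
      const GstSegment * segment)
  {
    pending = g_list_append (pending, gst_event_new_caps (caps));
    pending = g_list_append (pending, gst_event_new_segment (segment));
    /* a GAP event appended later then follows CAPS and SEGMENT */
    return pending;
  }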
If we were in PAUSED, the current clock time and base time don't have much to
do with the running time anymore, as the clock might have advanced while we
were PAUSED. The system clock does that, for example, while audio clocks often
don't.
Updating the start time in PAUSED will cause a) the wrong position to be
reported, and b) step events to step not just the requested amount but also
the amount of time we spent in PAUSED. The start time should only ever be
updated when going from PLAYING to PAUSED, to remember the current running
time (so we can compensate later, when going back to PLAYING, for the clock
time advancing while PAUSED), not when we are already in PAUSED.
Based on a patch by Kishore Arepalli <kishore.arepalli@gmail.com>
The updating of the start time when the state is lost was added in commit
ba943a82c0 to fix the position reporting when
the state is lost. This still works correctly after this change.
https://bugzilla.gnome.org/show_bug.cgi?id=739289
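A sketch of the bookkeeping described above; the logic is simplified and the
helper name is illustrative (the element API calls themselves are real):

  #include <gst/gst.h>

  /* Capture the start time (current running time) only when going from
   * PLAYING to PAUSED; when already PAUSED, leave it alone, because the
   * clock may have advanced while we were paused. */
  static void
  maybe_update_start_time (GstElement * element, GstState old_state,
      GstState new_state)
  {
    GstClock *clock;

    if (old_state != GST_STATE_PLAYING || new_state != GST_STATE_PAUSED)
      return;

    clock = gst_element_get_clock (element);
    if (clock) {
      GstClockTime now = gst_clock_get_time (clock);

      gst_element_set_start_time (element,
          now - gst_element_get_base_time (element));
      gst_object_unref (clock);
    }
  }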
We don't do calculations with different units (buffer offsets and bytes)
anymore but have functions for:
1) getting the number of bytes since the last discont
2) getting the offset (and pts/dts) at the last discont
and the previously added function to get the last offset and its distance from
the current adapter position.
https://bugzilla.gnome.org/show_bug.cgi?id=766647
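A usage sketch of the adapter API described above; the exact function names
(gst_adapter_offset_at_discont(), gst_adapter_distance_from_discont()) are
assumed here:

  #include <gst/gst.h>
  #include <gst/base/gstadapter.h>

  /* Where was the last DISCONT in the stream, and how far have we read
   * since?  Together this gives the current stream offset. */
  static guint64
  current_stream_offset (GstAdapter * adapter)
  {
    guint64 offset_at_discont = gst_adapter_offset_at_discont (adapter);
    guint64 bytes_since_discont = gst_adapter_distance_from_discont (adapter);

    if (offset_at_discont == GST_BUFFER_OFFSET_NONE)
      return GST_BUFFER_OFFSET_NONE;    /* no offset known yet */

    return offset_at_discont + bytes_since_discont;
  }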
API: gst_adapter_prev_offset
API: gst_adapter_get_offset_from_discont
The gst_adapter_get_offset_from_discont() method allows retrieving the current
offset based on the GST_BUFFER_OFFSET of the buffers that were pushed in.
The offset will be set initially by the GST_BUFFER_OFFSET of
DISCONT buffers, and then incremented by the sizes of the following
buffers.
The gst_adapter_prev_offset() method allows retrieving the previous
GST_BUFFER_OFFSET regardless of flags. It works in the same way as
the other gst_adapter_prev_*() methods.
https://bugzilla.gnome.org/show_bug.cgi?id=766647
The subclass should do that already, but just in case do it ourselves too as a
fallback. Without this, e.g. playbin will just wait forever if this fails
because it is triggered as part of an ASYNC state change.
The POSIX standard requires strsignal() to return a pointer to char,
not a const pointer to char. [1] On uClibc, and possibly other
libcs that do not HAVE_DECL_STRSIGNAL, libcompat.h declares
const char *strsignal (int sig), which causes a type error.
[1] man 3 strsignal
https://bugzilla.gnome.org/show_bug.cgi?id=763567
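For reference, a minimal sketch of the POSIX-conforming prototype that the
fallback declaration should match:

  /* POSIX declares strsignal() as returning a modifiable string: */
  char *strsignal (int signum);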
To allow the GstTestClock to be used as a GstSystemClock, it is
useful to implement the clock-type property that GstSystemClock
provides. This allows GstTestClock to be used as the system clock
with code that expects a GstSystemClock.
https://bugzilla.gnome.org/show_bug.cgi?id=762147
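A usage sketch, assuming GstTestClock now exposes the clock-type property in
the same way GstSystemClock does:

  #include <gst/gst.h>
  #include <gst/check/gsttestclock.h>

  /* Create a test clock that reports itself as a monotonic clock and
   * install it as the clock returned by gst_system_clock_obtain(). */
  static void
  install_test_clock (void)
  {
    GstClock *clock = g_object_new (GST_TYPE_TEST_CLOCK,
        "clock-type", GST_CLOCK_TYPE_MONOTONIC, NULL);

    gst_system_clock_set_default (clock);
    gst_object_unref (clock);
  }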
Otherwise PTS and DTS will get out of sync if upstream continues to provide
PTS but not DTS, and we have to skip some data from the stream, or the PTS
does not increase exactly by the duration of each packet.
https://bugzilla.gnome.org/show_bug.cgi?id=765260
gsttypefindhelper.c:485: Warning: GstBase: invalid "transfer" annotation for gsize: only valid for array, struct, union, boxed, object and interface types
If we don't store the value in prev_dts, we will initialize the DTS from the
last known upstream PTS over and over again. If upstream only provides PTS
every now and then, this causes the DTS to be rather static.
For example, in adaptive streaming scenarios this means that all buffers in a
fragment will have exactly the same DTS while the PTS is properly updated. As
our queues now prefer to do buffer fill level calculations on DTS, this is
causing huge problems there.
See https://bugzilla.gnome.org/show_bug.cgi?id=691481#c27 where this part of
the code was introduced.
https://bugzilla.gnome.org/show_bug.cgi?id=765096
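A simplified sketch of the bookkeeping; field and helper names are
illustrative, not the actual baseparse code:

  #include <gst/gst.h>

  /* When a buffer has no DTS, derive one and, crucially, remember it so the
   * next buffer continues from the derived value instead of falling back to
   * the last known upstream PTS again. */
  static GstClockTime
  next_dts (GstClockTime buffer_dts, GstClockTime duration,
      GstClockTime * prev_dts)
  {
    GstClockTime dts = buffer_dts;

    if (!GST_CLOCK_TIME_IS_VALID (dts) && GST_CLOCK_TIME_IS_VALID (*prev_dts))
      dts = *prev_dts + duration;       /* interpolate from the previous DTS */

    if (GST_CLOCK_TIME_IS_VALID (dts))
      *prev_dts = dts;                  /* store it: this is the actual fix */

    return dts;
  }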
It is calling do_sync(), which requires the STREAM_LOCK and PREROLL_LOCK to be
taken. The STREAM_LOCK is already taken in all callers, but the PREROLL_LOCK
is not.
https://bugzilla.gnome.org/show_bug.cgi?id=764939
This is the best guess we can make if such a buffer reaches the collect
pad. This is uncommon; we expect parsers to have tried to fix this already
where possible (or needed).
https://bugzilla.gnome.org/show_bug.cgi?id=762207
Many parsers store tags only in pre_push_frame(); if we didn't check
afterwards, we would push buffers before those tags, and a lot of code assumes
that tags are available before preroll.
https://bugzilla.gnome.org/show_bug.cgi?id=763553
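A rough sketch of the resulting order; flag and parameter names are
illustrative:

  #include <gst/gst.h>

  /* After pre_push_frame() the subclass may have added tags: push a tag
   * event before the buffer, so tags are available before preroll. */
  static GstFlowReturn
  push_frame_with_tags (GstPad * srcpad, GstBuffer * buffer,
      GstTagList ** tags, gboolean * tags_changed)
  {
    if (*tags_changed && *tags != NULL) {
      gst_pad_push_event (srcpad,
          gst_event_new_tag (gst_tag_list_ref (*tags)));
      *tags_changed = FALSE;
    }

    return gst_pad_push (srcpad, buffer);
  }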