Arrange for upstream as well as downstream flushing when seeking.
Also determine upstream size as well as seekability. Adjust some comments
to reality and use debug statements in the proper order.
This reverts commit b5a3d60363.
Reverting this for now, since no one really seems to remember why this
property exists or what it could possibly be good for. It seems to have
been in the original mp3parse since the beginning of time and was back-
ported from there.
Seekability, like duration etc., is unlikely to change (frequently), and
the default assumption covers most cases, so let the subclass set it when needed.
At the same time, allow the subclass to indicate whether it has seek metadata
(a table) available, and optionally have it provide an average bitrate.
This allows a subclass to chain its event handler up to
GstBaseParse, so that subclasses don't have to duplicate all the default
event handling logic.
https://bugzilla.gnome.org/show_bug.cgi?id=622276
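A minimal sketch of the chaining pattern this enables, assuming a
hypothetical MyParse subclass; the my_parse_* names, the sink_event vfunc
name and the parent_class setup are illustrative and may differ from the
actual baseparse API of this version:

  /* parent_class is assumed to be set up by the usual type boilerplate */
  static gboolean
  my_parse_sink_event (GstBaseParse * parse, GstEvent * event)
  {
    switch (GST_EVENT_TYPE (event)) {
      case GST_EVENT_TAG:
        /* subclass-specific handling of interesting events goes here */
        break;
      default:
        break;
    }
    /* chain up so the GstBaseParse default event handling still runs */
    return GST_BASE_PARSE_CLASS (parent_class)->sink_event (parse, event);
  }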
We wait to parse a minimum number of frames (10, arbitrarily) before
emitting bitrate tags, so that our early estimates are not wildly
inaccurate for streams that start with silence. If the stream ends
before that, we just emit the tags anyway.
While it _would_ be nicer to specify the threshold for starting to push
the tags in terms of duration, this would introduce more complexity than
it merits.
https://bugzilla.gnome.org/show_bug.cgi?id=614991
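A rough sketch of that thresholding, with made-up field names and an
illustrative MIN_FRAMES_FOR_TAGS constant (at_eos stands for whatever
end-of-stream condition applies); this is not the literal baseparse code:

  #define MIN_FRAMES_FOR_TAGS 10

  parse->framecount++;
  if (!parse->posted_bitrate_tags &&
      (parse->framecount >= MIN_FRAMES_FOR_TAGS || at_eos)) {
    /* estimates are considered stable enough (or the stream is over),
       so post the bitrate tags downstream at this point */
    parse->posted_bitrate_tags = TRUE;
  }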
This makes baseparse keep a running average of the stream bitrate, as
well as the minimum and maximum bitrates. Subclasses can override a
vfunc to make sure that per-frame overhead from the container is not
accounted for in the bitrate calculation.
We take care not to override the bitrate, minimum-bitrate, and
maximum-bitrate tags if they have been posted upstream. We also
rate-limit the emission of the bitrate tag so that it is only triggered by a
change of more than 10 kbps.
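A sketch of the bookkeeping described above, again with illustrative
field names; frame_bitrate is assumed to be the per-frame bitrate after
the subclass has subtracted any container overhead via the vfunc:

  parse->data_bytes += frame_size;
  parse->acc_duration += frame_duration;
  if (parse->acc_duration > 0)
    parse->avg_bitrate = gst_util_uint64_scale (parse->data_bytes,
        8 * GST_SECOND, parse->acc_duration);
  parse->min_bitrate = MIN (parse->min_bitrate, frame_bitrate);
  parse->max_bitrate = MAX (parse->max_bitrate, frame_bitrate);

  /* only re-post the tags when the average moved by more than 10 kbps
     and upstream has not already posted its own bitrate tags */
  if (!parse->upstream_has_bitrate &&
      ABS ((gint64) parse->avg_bitrate - (gint64) parse->posted_avg) > 10000) {
    /* post/merge GST_TAG_BITRATE, GST_TAG_MINIMUM_BITRATE and
       GST_TAG_MAXIMUM_BITRATE here */
    parse->posted_avg = parse->avg_bitrate;
  }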
Perform a sanity check on the type of seek, and only perform one that is
appropriately supported. Adjust the downstream newsegment event
to the timestamp of the first buffer that is sent downstream.
In particular, consider DISCONT == !sync, and allow the subclass to query
the sync state, as it may want to perform additional checks depending
on whether sync was achieved earlier on.
Also arrange for the subclass to query whether leftover data is being drained.
In particular, (optionally) provide baseparse with a notion of frames per second
(and therefore also frame duration) and have it track frame and byte counts.
This way, a subclass can provide baseparse with an fps value and have it supply
default buffer time metadata and conversions, though the subclass can still
install callbacks to handle these itself.
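For illustration, the kind of defaults this enables (variable names are
made up; the fps hint itself would be passed to baseparse via a setter
such as gst_base_parse_set_frame_rate()):

  GstClockTime timestamp, duration;

  /* one frame lasts fps_den/fps_num seconds */
  duration = gst_util_uint64_scale (GST_SECOND, fps_den, fps_num);
  /* the running frame count yields a default timestamp for the buffer */
  timestamp = gst_util_uint64_scale (framecount, GST_SECOND * fps_den, fps_num);

  GST_BUFFER_TIMESTAMP (buffer) = timestamp;
  GST_BUFFER_DURATION (buffer) = duration;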
After all, the stream is as-is, and there is little molding to downstream's
taste that can be done. If a subclass can and wants to do so, it can
still override this.
Also gracefully handle the case where the subclass decides to drop
the first buffers and has no caps set yet. Valid caps are still required
by the time the first buffer is to be pushed downstream.
Sending the flush-start event forward before taking the stream lock actually
works, in contrast to deadlocking in downstream preroll_wait (hunk 1).
After that, the chain function gets stuck in a busy loop. This is fixed
by updating the minimum frame size inside the synchronization loop, since
that is how the subclass asks for more data (hunk 2).
Finally, this leads to a very probable crash because the subclass can find a
valid frame with a size greater than the currently available data in the
adapter. This makes the subsequent gst_adapter_take_buffer call return NULL,
which is not expected (hunk 3).
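A sketch of the guard implied by hunks 2 and 3, with illustrative names;
the point is to never take more from the adapter than is available and to
request more input via the minimum frame size instead:

  static GstBuffer *
  take_frame_if_available (GstBaseParse * parse, GstAdapter * adapter,
      guint framesize)
  {
    if (gst_adapter_available (adapter) < framesize) {
      /* not enough data yet: raise the minimum frame size so the chain
         function accumulates more input instead of busy-looping */
      gst_base_parse_set_min_frame_size (parse, framesize);
      return NULL;
    }
    /* with the check above, gst_adapter_take_buffer () cannot return NULL */
    return gst_adapter_take_buffer (adapter, framesize);
  }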
Baseparse internally breaks the semantics of a _chain function by calling it
with buffer==NULL. The reason I believed it was okay to remove it was that there
is also an unchecked access to buffer later in _chain. Actually, that code is
wrong, as it most probably wants to set discont on the outgoing buffer.
This allows the socketpair to be created only when it is really required,
instead of always creating it and immediately destroying it again for child buses.
https://bugzilla.gnome.org/show_bug.cgi?id=647005
This is used by GstBin to create a child bus without
a socketpair, because child buses always work
synchronously. Otherwise too many sockets could be
created and the limit of file descriptors for the
process could be reached.
Fixes bug #646624.
If set->control_pending is set to 0 but we did not succeed in reading
the control socket, future calls to gst_poll_wait() will be woken up
by the control socket, which will not be released properly because
set->control_pending is already 0, causing an infinite loop.
This caused "re-declaration" problems.
./clutter-gst-video-sink.c: In function ‘clutter_gst_video_sink_init_interfaces’:
./clutter-gst-video-sink.c:231:1: warning: declaration of ‘ClutterGstVideoSink’ shadows a global declaration [-Wshadow]
./clutter-gst-video-sink.h:64:44: warning: shadowed declaration is here [-Wshadow]
https://bugzilla.gnome.org/show_bug.cgi?id=646531
Some applications request the same pad name multiple times; the behaviour
is undefined and differs from element to element, but we don't want to
break applications that currently work just fine.
In 0.11 this check should become an assertion again, although elements
will then have to manually check whether the pad already exists, because
that can't be done in a thread-safe way here.
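For illustration, the kind of application pattern being tolerated here
(element and pad names are examples only):

  GstElement *mux = gst_element_factory_make ("mpegtsmux", NULL);
  GstPad *pad1 = gst_element_get_request_pad (mux, "sink_1");
  GstPad *pad2 = gst_element_get_request_pad (mux, "sink_1");
  /* pad2 may be NULL, the same pad as pad1, or a new pad, depending on
     the element; applications doing this keep working for now */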