Don't unref the event if it hasn't been handled, because the caller
assumes it is still valid and might reuse it.
I ran into this problem when transcoding an AVI (with mp3 inside)
to 3GPP.
https://bugzilla.gnome.org/show_bug.cgi?id=639555
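A minimal sketch of the intended refcounting contract, in 0.10-style
pad event handler form (names are illustrative, not the actual
baseparse code):

    static gboolean
    my_parse_sink_event (GstPad * pad, GstEvent * event)
    {
      gboolean handled = FALSE;

      /* ... try to handle the event here ... */

      if (handled) {
        /* consumed: this handler owns the reference, so drop it */
        gst_event_unref (event);
      }
      /* not handled: leave the reference intact so the caller can
       * still forward or reuse the event and unref it itself */
      return handled;
    }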
That is, since such formats allow the subclass to extract a position
from a frame, it is possible to extract the duration (if not otherwise
provided) from the (near) last frame, and a seek can fairly accurately
target the required position.
Fixes #631389.
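A rough sketch of the idea, assuming 0.10-era pull-mode API;
parse_frame_position (), frame_duration and SCAN_CHUNK are
hypothetical names:

    #define SCAN_CHUNK (64 * 1024)  /* hypothetical window near EOS */

    static GstClockTime
    estimate_duration (MyParse * parse, gint64 upstream_size)
    {
      GstBuffer *buf = NULL;
      GstClockTime pos;

      /* pull a chunk from near the end of the stream */
      if (gst_pad_pull_range (parse->sinkpad,
              MAX (0, upstream_size - SCAN_CHUNK), SCAN_CHUNK,
              &buf) != GST_FLOW_OK)
        return GST_CLOCK_TIME_NONE;

      /* the format embeds a position in each frame, so the subclass
       * can recover the timestamp of the (near) last frame */
      pos = parse_frame_position (parse, buf);
      gst_buffer_unref (buf);

      /* that position plus one frame duration approximates the
       * total duration */
      if (GST_CLOCK_TIME_IS_VALID (pos))
        pos += parse->frame_duration;
      return pos;
    }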
Arrange for upstream as well as downstream flushing when seeking.
Also determine upstream size as well as seekability. Adjust some comments
to reality and put debug statements in their proper order.
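In 0.10-era terms, the flushing side of such a seek looks roughly like
this (flush-stop gained a reset flag in later GStreamer):

    /* flush downstream and upstream, then take the stream lock */
    gst_pad_push_event (parse->srcpad, gst_event_new_flush_start ());
    gst_pad_push_event (parse->sinkpad, gst_event_new_flush_start ());

    GST_PAD_STREAM_LOCK (parse->sinkpad);
    /* ... reset parser state, reposition upstream ... */
    gst_pad_push_event (parse->srcpad, gst_event_new_flush_stop ());
    gst_pad_push_event (parse->sinkpad, gst_event_new_flush_stop ());
    GST_PAD_STREAM_UNLOCK (parse->sinkpad);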
This reverts commit b5a3d60363.
Reverting this for now, since no one really seems to remember why this
property exists or what it could possibly be good for. It seems to have
been in the original mp3parse since the beginning of time and was
backported from there.
Seekability, like duration etc., is unlikely to change (frequently), and
the default assumption covers most cases, so let the subclass set it when
needed. At the same time, allow the subclass to indicate whether it has
seek metadata (a table) available, and possibly have it provide an
average bitrate.
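A sketch of what the subclass side could look like;
gst_base_parse_set_average_bitrate () matches the later public API,
while the seek-metadata setter name here is only assumed:

    /* once the subclass has parsed the headers / seek table */
    gst_base_parse_set_average_bitrate (parse, avg_bitrate);

    /* hypothetical setter: tell baseparse an index is available,
     * so it prefers table-based seeking over bitrate estimation */
    gst_base_parse_set_has_seek_table (parse, TRUE);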
This allows a subclass to chain its event handler up to
GstBaseParse, so that subclasses don't have to duplicate all the default
event handling logic.
https://bugzilla.gnome.org/show_bug.cgi?id=622276
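For example (0.10-era vfunc name; in later GStreamer this is the
sink_event vfunc):

    static gboolean
    my_parse_event (GstBaseParse * parse, GstEvent * event)
    {
      switch (GST_EVENT_TYPE (event)) {
        case GST_EVENT_NEWSEGMENT:
          /* subclass-specific handling first ... */
          break;
        default:
          break;
      }
      /* then chain up so baseparse still applies its default logic */
      return GST_BASE_PARSE_CLASS (parent_class)->event (parse, event);
    }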
We wait to parse a minimum number of frames (10, arbitrarily) before
emitting bitrate tags so that our early estimates are not wildly
inaccurate for streams that start with silence. If the stream ends
before that, we just emit the tags anyway.
While it _would_ be nicer to specify the threshold for starting to push
the tags in terms of duration, this would introduce more complexity than
it merits.
https://bugzilla.gnome.org/show_bug.cgi?id=614991
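In pseudo-C, with MIN_FRAMES_FOR_TAGS, at_eos and post_bitrate_tags ()
as hypothetical names:

    #define MIN_FRAMES_FOR_TAGS 10  /* arbitrary, see above */

    /* per parsed frame */
    parse->framecount++;
    if (parse->framecount >= MIN_FRAMES_FOR_TAGS || at_eos) {
      /* estimates are stable enough (or the stream is over) */
      post_bitrate_tags (parse);
    }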
This makes baseparse keep a running average of the stream bitrate, as
well as the minimum and maximum bitrates. Subclasses can override a
vfunc to make sure that per-frame overhead from the container is not
accounted for in the bitrate calculation.
We take care not to override the bitrate, minimum-bitrate, and
maximum-bitrate tags if they have been posted upstream. We also
rate-limit the emission of the bitrate tag so that it is only triggered
by a
change of >10 kbps.
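A condensed sketch of the bookkeeping; the field names, frame_duration,
and the get_frame_overhead vfunc name are assumptions (0.10-era
GST_BUFFER_SIZE):

    /* subtract per-frame container overhead before counting payload */
    gint overhead = klass->get_frame_overhead ?
        klass->get_frame_overhead (parse, buffer) : 0;
    guint size = GST_BUFFER_SIZE (buffer) - overhead;
    guint64 frame_bitrate = gst_util_uint64_scale (8 * (guint64) size,
        GST_SECOND, frame_duration);

    parse->data_bytecount += size;
    parse->acc_duration += frame_duration;

    /* running average plus min/max, in bits per second */
    parse->avg_bitrate = gst_util_uint64_scale (
        8 * parse->data_bytecount, GST_SECOND, parse->acc_duration);
    parse->min_bitrate = MIN (parse->min_bitrate, frame_bitrate);
    parse->max_bitrate = MAX (parse->max_bitrate, frame_bitrate);

    /* rate-limit tag posting to changes of more than 10 kbps, and
     * never override tags that upstream already posted */
    if (!parse->upstream_has_bitrate_tag &&
        ABS ((gint64) parse->avg_bitrate -
            (gint64) parse->posted_avg) > 10000) {
      parse->posted_avg = parse->avg_bitrate;
      /* ... post GST_TAG_BITRATE / _MINIMUM_ / _MAXIMUM_ here ... */
    }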
Perform a sanity check on the type of seek, and only carry out one that
is appropriately supported. Adjust the downstream newsegment event
to the first buffer timestamp that is sent downstream.
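For instance (hedged; the exact set of accepted combinations is up to
the element):

    gst_event_parse_seek (event, &rate, &format, &flags,
        &start_type, &start, &stop_type, &stop);

    /* only accept seeks we can actually service */
    if (format != GST_FORMAT_TIME)
      return FALSE;
    if (start_type != GST_SEEK_TYPE_SET ||
        stop_type != GST_SEEK_TYPE_NONE)
      return FALSE;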
In particular, consider DISCONT == !sync, and allow subclass to query
sync state, as it may want to perform additional checks depending
on whether sync was achieved earlier on.
Also arrange for subclass to query whether leftover data is being drained.
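Subclasses can then use macros along these lines (the names follow the
later GstBaseParse API):

    if (GST_BASE_PARSE_LOST_SYNC (parse)) {
      /* DISCONT == !sync: require a stronger sync pattern before
       * declaring a valid frame again */
    }
    if (GST_BASE_PARSE_DRAINING (parse)) {
      /* leftover data at EOS: accept a frame even though it cannot
       * be verified against a following frame's sync word */
    }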
In particular, (optionally) provide baseparse with a notion of frames per second
(and therefore also frame duration) and have it track frame and byte counts.
This way, subclass can provide baseparse with fps and have it provide default
buffer time metadata and conversions, though the subclass can still
install callbacks to handle these itself.
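For example, using the later public name of the setter (a 25 fps stream
with no lead-in/lead-out frames):

    /* baseparse can now derive frame duration, timestamp output
     * buffers, and answer default TIME <-> BYTES conversions */
    gst_base_parse_set_frame_rate (parse, 25, 1, 0, 0);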
After all, the stream is as-is, and there is little molding to
downstream's taste that can be done. If the subclass can and wants to do
so, it can still override this.
Also handle the case gracefully where the subclass decides to drop
the first buffers and has no caps set yet. It's still required to
have valid caps set when the first buffer should be passed downstream.
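Roughly, before pushing the first buffer (0.10-era macro):

    /* the subclass may legitimately have dropped earlier buffers,
     * but by now valid src caps must be in place */
    if (G_UNLIKELY (GST_PAD_CAPS (parse->srcpad) == NULL))
      goto no_caps;  /* post an error and bail out */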
Sending the flush-start event forward before taking the stream lock actually
works, in contrast to deadlocking in downstream preroll_wait (hunk 1).
After that, the chain function gets stuck in a busy loop. This is fixed
by updating the minimum frame size inside the synchronization loop because the
subclass asks for more data in this way (hunk 2).
Finally, this leads to a very probable crash because the subclass can find a
valid frame with a size greater than the currently available data in the
adapter. This makes the subsequent gst_adapter_take_buffer call return NULL,
which is not expected (hunk 3).
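The guard implied by hunk 3 looks roughly like this:

    /* a detected frame may be larger than what the adapter holds;
     * wait for more data instead of letting _take_buffer return NULL */
    if (fsize > gst_adapter_available (parse->adapter)) {
      gst_base_parse_set_min_frame_size (parse, fsize);
      return GST_FLOW_OK;  /* come back when enough data arrived */
    }
    outbuf = gst_adapter_take_buffer (parse->adapter, fsize);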
Baseparse internally breaks the semantics of a _chain function by calling
it with buffer==NULL. The reason I believed it was okay to remove it was
that there is also an unchecked access to buffer later in _chain. Actually
that code is wrong, as it most probably wants to set discont on the
outgoing buffer.
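The NULL-safe variant hinted at here would look something like this
(priv->discont is an assumed field name):

    /* buffer may be NULL when baseparse drains internally */
    if (buffer != NULL && GST_BUFFER_IS_DISCONT (buffer)) {
      /* remember the discontinuity and flag the *outgoing*
       * buffer instead of the incoming one */
      parse->priv->discont = TRUE;
    }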
This allows the socketpair to be created only when it is really required,
instead of always creating it and immediately destroying it again for
child buses.
https://bugzilla.gnome.org/show_bug.cgi?id=647005
This is used by GstBin to create a child bus without
a socketpair, because child buses always work
synchronously. Otherwise too many sockets could be
created and the process's limit of file descriptors
could be reached.
Fixes bug #646624.
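Since "enable-async" is a construct-only property, the pattern for a
child bus is simply:

    /* a child bus only works synchronously, so skip the socketpair */
    bus = g_object_new (GST_TYPE_BUS, "enable-async", FALSE, NULL);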