This is done by adding, after every parser/converter, a capsfilter that
contains all possible caps supported by downstream elements. A capsfilter
is necessary here because the decoder is only selected after the parser
has selected a format, and the parser otherwise can't know what downstream
would support.
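A minimal sketch of the idea, with illustrative names (not the actual
decodebin2 code), assuming the parser is already in the bin and the caps
supported downstream have been collected:

  #include <gst/gst.h>

  /* Plug a capsfilter behind a parser, restricted to what the downstream
   * elements can actually handle. */
  static GstElement *
  plug_capsfilter_after_parser (GstBin * bin, GstElement * parser,
      GstCaps * downstream_caps)
  {
    GstElement *filter = gst_element_factory_make ("capsfilter", NULL);

    /* the parser will now only negotiate a format downstream supports */
    g_object_set (filter, "caps", downstream_caps, NULL);

    gst_bin_add (bin, filter);
    if (!gst_element_link (parser, filter))
      GST_WARNING ("could not link parser and capsfilter");

    return filter;
  }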
This reverts commit 105814e2c7.
The general consensus seems to be that we should revert this for
now. If such behaviour is desired, we should probably enable it
via a flag. And maybe use the scaletempo plugin instead.
Instead of just assuming all pads are created at the same time,
remember which ones are actually new (via ->pending_blocked_pads).
This allows the following use-case to properly work:
* Upstream starts with audio-only
* Only that pad gets data, blocks and a real audio sink is created
* Upstream later adds a video stream
* A new pad is requested, blocks and reconfiguration kicks in in
order to add a new real video sink
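A rough sketch of the bookkeeping this relies on; the struct, callback and
reconfigure() names are illustrative, not the actual playbin2 code:

  #include <gst/gst.h>

  typedef struct
  {
    GList *pending_blocked_pads;  /* pads requested but not yet blocked */
  } SourceGroup;

  static void reconfigure (SourceGroup * group);  /* hypothetical, elsewhere */

  static void
  remember_new_pad (SourceGroup * group, GstPad * pad)
  {
    /* only the pads that are actually new end up in the pending list */
    group->pending_blocked_pads =
        g_list_prepend (group->pending_blocked_pads, gst_object_ref (pad));
  }

  static void
  pad_blocked_cb (GstPad * pad, gboolean blocked, gpointer user_data)
  {
    SourceGroup *group = user_data;

    group->pending_blocked_pads =
        g_list_remove (group->pending_blocked_pads, pad);

    /* reconfigure once every pending pad has blocked, instead of assuming
     * all pads were created at the same time */
    if (group->pending_blocked_pads == NULL)
      reconfigure (group);

    gst_object_unref (pad);
  }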
When we don't have specific {audio|video|text}-sink properties, don't
set them on playsink when reconfiguring.
If we do that, we end up setting the previously configured sink to
GST_STATE_NULL resulting in any potentially pending push being returned
with GST_FLOW_WRONG_STATE which will cause the upstream elements to
silently stop.
https://bugzilla.gnome.org/show_bug.cgi?id=655279
When we have a multi-stream (i.e. audio and video) input and the demuxer
adds/removes pads for a new stream (common in a mpeg-ts stream when the
program stream mapping is updated), the algorithm for EOS handling was
previously wrong (it would only drop the EOS of the *last* pad but would
let the EOS on the other pads go through).
The EOS handling logic has only been changed a tiny bit, resulting in the
following (sketched below):
* If there is no next group, let the EOS go through
* If there is a next group, but not all pads are drained in the active
group, drop the EOS event
* If there is a next group and all pads are drained, then the ghostpads
will be removed and the EOS event will be dropped automatically.
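A rough sketch of that decision; the two helpers are hypothetical stand-ins
for decodebin2's internal bookkeeping:

  #include <glib.h>

  /* Hypothetical helpers standing in for decodebin2's internal state. */
  static gboolean next_group_exists (gpointer group);
  static gboolean all_group_pads_drained (gpointer group);

  /* Returns TRUE if the EOS event on a pad of the active group should
   * be dropped. */
  static gboolean
  should_drop_eos (gpointer group)
  {
    if (!next_group_exists (group))
      return FALSE;             /* no next group: let the EOS go through */

    if (!all_group_pads_drained (group))
      return TRUE;              /* next group pending, not drained: drop */

    /* all pads drained: the ghostpads get removed and the EOS is dropped
     * automatically along with them */
    return TRUE;
  }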
This allows us to make parsers accept both parsed and unparsed input
without decodebin plugging them in a loop until things blow up, i.e.
without affecting applications that still use the old playbin or the
old decodebin.
(Making parsers accept parsed input is useful for later when we want
to use parsers to convert the stream-format into something the decoder
can handle. It's also much more convenient for application authors
who can plug parsers unconditionally in transcoding pipelines, for
example).
This is especially needed when switching between a non-sparse and sparse
video stream, see bug #537382. It also lowers the time needed for switching
between streams a bit.
In particular, in audio-only cases where the (estimated) metadata provides
bitrate information, a buffer-size based on that bitrate (and buffer-duration)
will be much more reasonable than queue2's default buffer-size.
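A hedged sketch of that sizing, with illustrative names (not the actual
uridecodebin/queue2 code): derive the byte limit from the estimated bitrate
and the configured buffer-duration.

  #include <gst/gst.h>

  /* bitrate is in bits per second, buffer_duration in nanoseconds */
  static guint
  buffer_size_from_bitrate (guint bitrate, GstClockTime buffer_duration)
  {
    guint64 bytes;

    /* bits/s -> bytes over the buffering window */
    bytes = gst_util_uint64_scale (bitrate / 8, buffer_duration, GST_SECOND);

    /* e.g. 128 kbit/s with a 2 second buffer-duration gives 32000 bytes,
     * far below queue2's default byte limit */
    return (guint) bytes;
  }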
For streams at low bitrates we need to set a limit in time, because the limit
in bytes might not be reached until much later, sometimes after more than 30
seconds.
This limit can only be set if upstream is seekable (see #584104)
Closes #647769
These reconfigure based on the caps and plugin in converters if
necessary. This also makes switching between compressed and raw
streams work flawlessly without losing the state of any element
somewhere or having running-time problems.
Before playbin2 would use different selectors for raw audio and
compressed audio (and the same for video) and used different
pads from playsink. This made the involved logic much more
complex and was not implemented completely in playsink, which
made it impossible to support files with both a compressed and an
uncompressed stream that are supported by the sink.
playbin2 handles raw/non-raw streams the same now and the
decision is left to playsink, which now can also handle
caps changes from raw to non-raw and the other way around.
Fixes bug #632788.
Remove the android/ top dir
Fix the Makefile.am to be androgenized.
To build GStreamer for Android we are now using androgenizer, which generates
the needed Android.mk files.
Androgenizer can be found here:
http://git.collabora.co.uk/?p=user/derek/androgenizer.git
In addition to ensuring that an element we want to select in
autoplug-select can enter the READY state, we also now check if it can
accept the caps we wish to plug it for. This is handy for sinks that
need to perform a probe to figure out whether they can actually handle a
given format.
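An illustrative sketch of that extra check, assuming the candidate is a sink
element with a static "sink" pad and using the 0.10 gst_pad_accept_caps()
call (not the actual decodebin2 code):

  #include <gst/gst.h>

  /* Bring the candidate sink to READY (so it can probe the hardware) and
   * then ask its sink pad whether it accepts the caps we want to plug it
   * for. */
  static gboolean
  sink_can_handle_caps (GstElement * sink, GstCaps * caps)
  {
    GstPad *sinkpad;
    gboolean accepted = FALSE;

    if (gst_element_set_state (sink,
            GST_STATE_READY) == GST_STATE_CHANGE_FAILURE)
      return FALSE;

    sinkpad = gst_element_get_static_pad (sink, "sink");
    if (sinkpad != NULL) {
      accepted = gst_pad_accept_caps (sinkpad, caps);
      gst_object_unref (sinkpad);
    }

    return accepted;
  }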
Post better error messages in case typefind/decodebin2 are missing or
could not be loaded for some reason (e.g. because they inadvertently
got blacklisted).
https://bugzilla.gnome.org/show_bug.cgi?id=644892
Parsers are the only element class that does not change the data, so plugging
them repeatedly could lead to an infinite loop. Other element classes, like
demuxers (e.g. id3demux), can be used multiple times in a row and sometimes
are.
Previously we only checked against the raw caps but we should also
check against the return value of autoplug-continue. Additionally fix
a thread-safety issue with accessing the raw caps.
Add "source-setup" signal for convenience and discoverability. No need
to figure out "notify::source", look up the notify callback signature,
then do an g_object_get() to get the source element..
https://bugzilla.gnome.org/show_bug.cgi?id=626152
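A minimal usage sketch, assuming "playbin" is a playbin2 (or uridecodebin)
instance that provides the new signal; the user-agent tweak is just an
illustration:

  #include <gst/gst.h>

  static void
  on_source_setup (GstElement * bin, GstElement * source, gpointer user_data)
  {
    /* configure the freshly created source element, e.g. souphttpsrc */
    if (g_object_class_find_property (G_OBJECT_GET_CLASS (source),
            "user-agent"))
      g_object_set (source, "user-agent", "MyPlayer/1.0", NULL);
  }

  /* ... when building the pipeline: */
  g_signal_connect (playbin, "source-setup",
      G_CALLBACK (on_source_setup), NULL);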
...instead of copying the array. Returning NULL will result
in the original factories array being used and prevents a useless
array copy in most use cases.
...instead of copying the array. Returning NULL will result
in the original factories array being used and prevents a useless
array copy in most use cases.
Add notes about the behaviour if multiple signal handlers are connected.
For most autoplug-* signals only the first signal handler will ever
be invoked.
Also add to the autoplug-sort docs that the signal handler can return NULL
to specify that the order should not change, so that other handlers get the
chance to sort the array.
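For illustration, a handler that doesn't want to reorder anything could look
like this (a sketch, assuming the GValueArray-based autoplug-sort signature):

  #include <gst/gst.h>

  static GValueArray *
  on_autoplug_sort (GstElement * bin, GstPad * pad, GstCaps * caps,
      GValueArray * factories, gpointer user_data)
  {
    /* returning NULL keeps the original order and lets other handlers
     * (or the default) do the sorting */
    return NULL;
  }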
This lock is taken when activating a group, which could result in
calling the autoplug-continue callback, which also needs this lock
to access the sinks.
See bug #642174.
Don't merge the caps of all sinks, but check them one by one
until one supports the caps. Also get reffed caps from the sinkpads
instead of a writable copy, and add debug output if a sink claims to
support ANY caps.
Some things aren't quite right yet and cause problems (0-sized buffers
with PREROLL flag set cause crashes in elements that don't expect those;
getting the pipeline back to preroll/playing again when audio/video streams
have different lengths and a seek past the end of one of the streams
happens doesn't always work, etc.). Needs further investigation in the
next cycle.
https://bugzilla.gnome.org/show_bug.cgi?id=633700
https://bugzilla.gnome.org/show_bug.cgi?id=634699
Fix a bug when reconfiguring the playsink where the subpicture
stream is broken by attempting to connect it through
the streamsynchronizer a second time.
Advance stop times too when they are getting higher than the
stop time of segments, avoiding assertions.
The stop time has to be advanced too so that the running time keeps in sync
for gapless mode.
https://bugzilla.gnome.org/show_bug.cgi?id=631312
Where it was previously located, we would get async-done for the first
unknown-type, even if other valid streams would appear afterwards.
decode_bin_expose() will take care of posting async-done when the group
is exposed.
But we still want to post it in case the typefinding returned an unknown
type, in which case we will post it after posting an error.
These two changes ensure we do as much as possible before posting async-done.