A decoder being incompatible with the fixed sink currently happens
when there are hardware accelerated video decoders on the system
(especially vaapi elements that actually get plugged) and the user
provides a sink that doesn't support the surface.
A simple example that shows how it used to crash on a system where gstreamer-vaapi
is installed:
gst-launch playbin2 video-sink=xvimagesink uri=/codec/supported/by/vaapi
What we now do in this case is avoid the accelerated decoder and
plug a "normal" decoder instead (if available).
This commit doesn't handle the case where we have hardware accelerated
demuxing.
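A minimal sketch of how an application could force a similar skip itself
via decodebin2's autoplug-select signal (the enum is redeclared because
decodebin2's header is not installed; the "vaapi" prefix check is an
illustrative heuristic, not this commit's actual logic):

    #include <gst/gst.h>

    /* Mirrors decodebin2's GstAutoplugSelectResult; applications have
     * to redeclare it since the header is not public. */
    typedef enum {
      GST_AUTOPLUG_SELECT_TRY,
      GST_AUTOPLUG_SELECT_EXPOSE,
      GST_AUTOPLUG_SELECT_SKIP
    } GstAutoplugSelectResult;

    static GstAutoplugSelectResult
    on_autoplug_select (GstElement * bin, GstPad * pad, GstCaps * caps,
        GstElementFactory * factory, gpointer user_data)
    {
      const gchar *name =
          gst_plugin_feature_get_name (GST_PLUGIN_FEATURE (factory));

      /* skip accelerated decoders whose output our sink can't handle */
      if (g_str_has_prefix (name, "vaapi"))
        return GST_AUTOPLUG_SELECT_SKIP;

      return GST_AUTOPLUG_SELECT_TRY;
    }

Connect it with g_signal_connect (decodebin, "autoplug-select",
G_CALLBACK (on_autoplug_select), NULL);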
gstsubtitleoverlay.c: In function 'gst_subtitle_overlay_video_sink_event':
gstsubtitleoverlay.c:1736:22: error: 'target' may be used uninitialized in this function
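The usual fix for this class of warning, sketched on a made-up helper
(not the actual gstsubtitleoverlay.c code): initialize the pointer so
every code path sees a defined value:

    #include <gst/gst.h>

    static void
    log_peer_if_linked (GstPad * pad)
    {
      GstPad *target = NULL;    /* defined even on paths that never set it */

      if (gst_pad_is_linked (pad))
        target = gst_pad_get_peer (pad);

      if (target != NULL) {
        GST_DEBUG_OBJECT (pad, "peer is %s", GST_OBJECT_NAME (target));
        gst_object_unref (target);
      }
    }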
With unfixed caps we can't reliably decide if the final caps
are going to be "raw" (i.e. supported by a sink) or not.
We will get here again later when the caps are fixed.
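A minimal sketch of the check, assuming 'raw_caps' describes what the
sink supports:

    #include <gst/gst.h>

    /* Only classify caps once they are fixed; unfixed caps may still
     * narrow down to something entirely different. */
    static gboolean
    is_final_and_raw (GstCaps * caps, GstCaps * raw_caps)
    {
      if (!gst_caps_is_fixed (caps))
        return FALSE;           /* undecided: we get called again later */

      return gst_caps_can_intersect (caps, raw_caps);
    }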
If subdrained isn't initialized to FALSE then a chain might think
that its group is drained when in fact it isn't, which can cause
too early a switch or even a deadlock.
This reverts commit b0b4e286c8.
We agreed that the previous (pre-0.10.35) behaviour was broken and a bug,
and that the current behaviour is correct and deterministic: it allows the
application to handle things properly, whereas the old behaviour couldn't
be handled properly by applications and only worked in some of them by luck.
The solution to the problem that was solved by relying on the old, broken
behaviour would be to make decodebin2/playbin2 more aware of decoders and to
improve decoder autoplugging by considering the caps supported by the sink
instead of just picking whatever has the highest rank.
See bug #656923.
Fixes a regression since 0.10.33 where sinks that can cope with non-raw
or custom caps were not autoplugged if a sink configured via the
video-sink or audio-sink property could not handle the stream. This
change first checks the configured sink for compatibility and uses it
on success; otherwise it falls back to the found factories.
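A minimal sketch of that selection order (names are illustrative, not
playbin2's actual code):

    #include <gst/gst.h>

    static GstElement *
    pick_sink (GstElement * configured_sink, GstCaps * caps, GList * factories)
    {
      GList *l;

      /* prefer the sink the application configured, if it takes the caps */
      if (configured_sink != NULL) {
        GstPad *sinkpad = gst_element_get_static_pad (configured_sink, "sink");

        if (sinkpad != NULL) {
          gboolean ok = gst_pad_accept_caps (sinkpad, caps);

          gst_object_unref (sinkpad);
          if (ok)
            return GST_ELEMENT (gst_object_ref (configured_sink));
        }
      }

      /* otherwise try the factories found during autoplugging */
      for (l = factories; l != NULL; l = l->next) {
        GstElementFactory *f = GST_ELEMENT_FACTORY (l->data);

        if (gst_element_factory_can_sink_caps (f, caps))
          return gst_element_factory_create (f, NULL);
      }

      return NULL;
    }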
This reverts commit a22faad18a. Instead
of disabling subtitles completely when the video stream has custom caps,
just let subtitleoverlay cope with them, as it is now able to.
Implement handling of non-raw video streams by avoiding colorspace
elements and autoplugging a compatible renderer if available. Fall back
to passthrough if no compatible renderer is found.
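A minimal sketch of the raw-vs-non-raw decision, assuming 0.10-style raw
video caps names:

    #include <gst/gst.h>

    /* Raw video in 0.10 uses video/x-raw-yuv, video/x-raw-rgb and
     * video/x-raw-gray; everything else needs a renderer or passthrough. */
    static gboolean
    caps_are_raw_video (GstCaps * caps)
    {
      GstStructure *s;

      if (gst_caps_get_size (caps) == 0)
        return FALSE;

      s = gst_caps_get_structure (caps, 0);
      return g_str_has_prefix (gst_structure_get_name (s), "video/x-raw-");
    }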
Only log to the debug log for now, since the check is a bit
half-hearted; its purpose is mostly to make sure people
use gst_filename_to_uri() or g_filename_to_uri().
https://bugzilla.gnome.org/show_bug.cgi?id=654673
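Typical use of gst_filename_to_uri() instead of a hand-rolled "file://"
prefix (a minimal sketch):

    #include <gst/gst.h>

    /* Converts a (possibly relative) local path into a proper,
     * percent-escaped file:// URI. */
    static gchar *
    path_to_uri (const gchar * filename)
    {
      GError *err = NULL;
      gchar *uri = gst_filename_to_uri (filename, &err);

      if (uri == NULL) {
        g_printerr ("cannot convert '%s': %s\n", filename, err->message);
        g_error_free (err);
      }

      return uri;               /* free with g_free() */
    }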
g_value_get_object() does not give us our own ref.
Fixes "Trying to dispose object "flacparse", but it still has a parent "registry0".
You need to let the parent manage the object instead of unreffing the object directly."
and similar warnings.
https://bugzilla.gnome.org/show_bug.cgi?id=658416
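A minimal sketch of the correct pattern: g_value_get_object() returns a
borrowed (transfer-none) reference, so take a ref if you keep the object
and never unref the borrowed one directly:

    #include <gst/gst.h>

    static GstElement *
    element_from_value (const GValue * value)
    {
      /* borrowed reference: owned by the GValue / the object's parent */
      GstElement *elem = GST_ELEMENT (g_value_get_object (value));

      if (elem == NULL)
        return NULL;

      /* take our own ref before keeping it around; caller must unref */
      return GST_ELEMENT (gst_object_ref (elem));
    }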
This is done by adding a capsfilter after every parser/converter that contains
all possible caps supported by downstream elements. A capsfilter is necessary
here because the decoder is only selected after the parser has chosen a
format, and the parser can't otherwise know what downstream would support.
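A minimal sketch of that plugging step, assuming 'allowed' already holds
the union of the caps supported downstream:

    #include <gst/gst.h>

    /* The capsfilter constrains the parser's output to formats that
     * a downstream element actually exists for. */
    static GstElement *
    make_downstream_filter (GstCaps * allowed)
    {
      GstElement *filter = gst_element_factory_make ("capsfilter", NULL);

      g_object_set (filter, "caps", allowed, NULL);
      return filter;
    }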
This reverts commit 105814e2c7.
The general consensus seems to be that we should revert this for
now. If such behaviour is desired, we should probably enable it
via a flag. And maybe use the scaletempo plugin instead.
Make enums for the chroma siting for easier use in the videoinfo.
Make enums for the color range, color matrix, transfer function and the
color primaries. Add these values to the video info structure in a Colorimetry
structure. These values define the exact colors and are needed to perform
correct colorspace conversion. Use a couple of predefined colorimetry specs
because in practice only a few combinations are in use.
Add view_id to the video frames to identify the view this frame represents in
multiview video.
Remove old gst_video_parse_caps_framerate, use the videoinfo for this.
Port elements to new colorimetry info.
Remove deprecated colorspace property from videotestsrc.
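A minimal sketch against the reworked API (signatures as in the 0.11
series; the field names are assumed from the new Colorimetry structure
described above):

    #include <gst/video/video.h>

    static void
    dump_colorimetry (const GstCaps * caps)
    {
      GstVideoInfo info;

      if (!gst_video_info_from_caps (&info, caps))
        return;

      /* range, matrix, transfer and primaries fully define the colors
       * and drive correct colorspace conversion */
      g_print ("range=%d matrix=%d transfer=%d primaries=%d\n",
          info.colorimetry.range, info.colorimetry.matrix,
          info.colorimetry.transfer, info.colorimetry.primaries);
    }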
Rework the audio caps similar to the video caps. Remove
width/depth/endianness/signed fields and replace with a simple string
format and media type audio/x-raw.
Create a GstAudioInfo and some helper methods to parse caps.
Remove duplicate code from the ringbuffer and replace with audio info.
Use AudioInfo in the base audio filter class.
Port elements to new API.
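A minimal sketch of parsing the new audio/x-raw caps with the helper
(signatures as in the 0.11 series):

    #include <gst/audio/audio.h>

    static void
    dump_audio_info (const GstCaps * caps)
    {
      GstAudioInfo info;

      /* replaces hand-parsing width/depth/endianness/signed fields */
      if (!gst_audio_info_from_caps (&info, caps))
        return;

      g_print ("rate=%d channels=%d bytes-per-frame=%d\n",
          info.rate, info.channels, info.bpf);
    }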
Instead of just assuming all pads are created at the same time,
remember which ones are actually new (via ->pending_blocked_pads).
This allows the following use-case to properly work:
* Upstream starts with audio-only
* Only that pad gets data, blocks and a real audio sink is created
* Upstream later adds a video stream
* A new pad is requested, blocks and reconfiguration kicks in to
add a new real video sink
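A minimal sketch of that bookkeeping (illustrative names, not playbin2's
actual code):

    #include <gst/gst.h>

    typedef struct {
      GList *pending_blocked_pads;  /* only pads from the latest request */
    } Group;

    static void
    pad_requested (Group * group, GstPad * pad)
    {
      group->pending_blocked_pads =
          g_list_prepend (group->pending_blocked_pads, gst_object_ref (pad));
    }

    /* returns TRUE once every newly added pad has blocked and
     * reconfiguration may start */
    static gboolean
    pad_blocked (Group * group, GstPad * pad)
    {
      group->pending_blocked_pads =
          g_list_remove (group->pending_blocked_pads, pad);
      gst_object_unref (pad);

      return group->pending_blocked_pads == NULL;
    }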
When we don't have specific {audio|video|text}-sink properties, don't
set them on playsink when reconfiguring.
If we do that, we end up setting the previous configured sink to
GST_STATE_NULL resulting in any potentially pending push being returned
with GST_FLOW_WRONG_STATE which will cause the upstream elements to
silently stop.
https://bugzilla.gnome.org/show_bug.cgi?id=655279
When we have a multi-stream (i.e. audio and video) input and the demuxer
adds/removes pads for a new stream (common in a mpeg-ts stream when the
program stream mapping is updated), the algorithm for EOS handling was
previously wrong (it would only drop the EOS of the *last* pad but would
let the EOS on the other pads go through).
The EOS handling logic has only been changed a tiny bit, resulting in:
* If there is no next group, let the EOS go through
* If there is a next group, but not all pads are drained in the active
group, drop the EOS event
* If there is a next group and all pads are drained, then the ghostpads
will be removed and the EOS event will be dropped automatically.
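A minimal sketch of that decision (have_next_group and all_pads_drained
are illustrative helpers, not decodebin2's actual code):

    #include <glib.h>

    static gboolean
    should_drop_eos (gboolean have_next_group, gboolean all_pads_drained)
    {
      if (!have_next_group)
        return FALSE;           /* no next group: let EOS pass through */

      if (!all_pads_drained)
        return TRUE;            /* drop until the whole group is drained */

      /* all drained: the ghostpads get removed, which already drops
       * the EOS, so nothing extra to forward here either */
      return TRUE;
    }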
This allows us to make parsers accept both parsed and unparsed input
without decodebin plugging them in a loop until things blow up, i.e.
without affecting applications that still use the old playbin or the
old decodebin.
(Making parsers accept parsed input is useful for later when we want
to use parsers to convert the stream-format into something the decoder
can handle. It's also much more convenient for application authors
who can plug parsers unconditionally in transcoding pipelines, for
example).
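A minimal sketch of what such a sink template looks like: no 'parsed'
field in the caps, so both parsed and unparsed input is accepted (the
caps themselves are illustrative):

    #include <gst/gst.h>

    /* Accepts audio/mpeg regardless of whether it was already parsed;
     * decodebin now takes care of not plugging the same parser twice. */
    static GstStaticPadTemplate sink_template =
        GST_STATIC_PAD_TEMPLATE ("sink",
        GST_PAD_SINK, GST_PAD_ALWAYS,
        GST_STATIC_CAPS ("audio/mpeg, mpegversion=(int)1"));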
This is especially needed when switching between a non-sparse and sparse
video stream, see bug #537382. It also lowers the time needed for switching
between streams a bit.
In particular, in audio-only cases where the (estimated) metadata provides
bitrate information, a buffer-size based on that bitrate (and
buffer-duration) will be much more reasonable than queue2's default
buffer-size.
For streams at low bitrates we need to set a limit in time, because the
limit in bytes might not be reached until much too late, sometimes after
more than 30 seconds.
This limit can only be set if upstream is seekable (see #584104)
Closes #647769
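A minimal sketch of the sizing idea from the two notes above (the element
and values are illustrative; applications reach this via
playbin2/uridecodebin's buffer-size and buffer-duration properties):

    #include <gst/gst.h>

    /* derive the byte limit from the bitrate and wanted duration, and
     * additionally cap by time for low-bitrate streams */
    static void
    configure_buffering (GstElement * queue2, guint64 bitrate_bps,
        GstClockTime duration)
    {
      /* bytes = bits/second / 8 * seconds */
      guint64 bytes = gst_util_uint64_scale (bitrate_bps / 8,
          duration, GST_SECOND);

      g_object_set (queue2,
          "max-size-bytes", (guint) bytes,
          "max-size-time", (guint64) duration, NULL);
    }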