The number of expected pads was:
* Defaulting to 1
* Or being overridden by GST_MESSAGE_STREAMS_SELECTED
This fails if upstream isn't a selectable source and has multiple streams, and
would therefore cause failures with multi-stream gapless playback
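For illustration, the previous logic amounts to roughly the following (a minimal sketch with placeholder names, assuming the count comes from gst_message_streams_selected_get_size()):
```c
#include <gst/gst.h>

/* Illustrative sketch of the logic described above: default to 1 expected
 * pad, overridden by the stream count from a streams-selected message. */
static guint
expected_pads_for_message (GstMessage * msg)
{
  guint expected = 1;           /* previous default */

  if (msg != NULL && GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAMS_SELECTED)
    expected = gst_message_streams_selected_get_size (msg);

  return expected;
}
```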
Fixes #1672
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3658>
It is quite possible to have the blocking probe called from different streaming
threads when all expected pads are present.
* Notify all waiters by using g_cond_broadcast instead of g_cond_signal
* Properly remove the probe after waiting (sketched below)
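A minimal sketch of the pattern, with placeholder names (the real element keeps its own lock/cond around its pad bookkeeping):
```c
#include <gst/gst.h>

typedef struct {
  GMutex lock;
  GCond cond;
  gboolean all_pads_present;
} ExampleState;

/* Several streaming threads can block here at the same time. */
static GstPadProbeReturn
block_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  ExampleState *state = user_data;

  g_mutex_lock (&state->lock);
  while (!state->all_pads_present)
    g_cond_wait (&state->cond, &state->lock);
  g_mutex_unlock (&state->lock);

  /* drop the probe once the wait is over */
  return GST_PAD_PROBE_REMOVE;
}

static void
mark_all_pads_present (ExampleState * state)
{
  g_mutex_lock (&state->lock);
  state->all_pads_present = TRUE;
  /* g_cond_signal() would wake only one of the blocked streaming threads;
   * broadcast so every waiter gets released. */
  g_cond_broadcast (&state->cond);
  g_mutex_unlock (&state->lock);
}
```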
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3658>
gst_element_add_pad() is supposed to activate the pad if the element
state is >= PAUSED and the pad is not already active.
Unfortunately, before this patch, the activation was performed while the
element lock was still taken, which ended up causing a deadlock in
gst_pad_start_task() as it attempted to post `stream-status` message in
the element, which also requires the element lock.
Elements could work around this bug by activating the pad manually
before adding it to the element.
This patch fixes the problem by performing pad activation only after the
element lock has been released.
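For elements that need to run against GStreamer versions without this fix, the workaround mentioned above looks roughly like this (sketch only; the function name is illustrative):
```c
#include <gst/gst.h>

/* Workaround sketch: activate the pad before handing it to the element so
 * gst_element_add_pad() never has to activate it while the element lock
 * is held. */
static void
add_pad_preactivated (GstElement * element, GstPad * pad)
{
  if (GST_STATE (element) >= GST_STATE_PAUSED)
    gst_pad_set_active (pad, TRUE);

  gst_element_add_pad (element, pad);
}
```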
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3635>
Commit d3a66f9851 introduced a potential deadlock with two parallel release_pad
calls, where one could release the main multiqueue lock (qlock) while still
holding the reconf_lock and then calling other routines which in some conditions
may try to acquire qlock again. The second release_pad could have already acquired
the qlock and then start waiting on reconf_lock, which might never become available
because the first one isn't releasing it until it can acquire qlock.
Fix it by holding reconf_lock for the whole duration that qlock is held, making this
particular deadlock impossible.
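A rough sketch of the resulting ordering (struct and function names are placeholders, not the actual multiqueue code):
```c
#include <glib.h>

typedef struct {
  GMutex reconf_lock;           /* reconfiguration lock */
  GMutex qlock;                 /* main multiqueue lock */
} MultiQueueSketch;

/* reconf_lock now brackets the whole qlock critical section, so a thread
 * can never sit on qlock while waiting for a reconf_lock that is held by
 * a thread needing qlock back. */
static void
release_pad_sketch (MultiQueueSketch * mq)
{
  g_mutex_lock (&mq->reconf_lock);
  g_mutex_lock (&mq->qlock);

  /* ... tear down the single queue ... */

  g_mutex_unlock (&mq->qlock);
  g_mutex_unlock (&mq->reconf_lock);
}
```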
Fixes #1642
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3571>
Using N (the framerate numerator) as the timescale is recommended by various
specifications for such framerates, while for integer framerates we continue
using centiframes to allow for some more accuracy.
Using N means that no rounding error accumulates which would eventually lead to
outputting a packet with a different duration.
Some tools such as MediaInfo determine that a stream is variable
framerate if any packet has a different duration than the others, and
there is no reason I can see for not using the full 4 bytes of
resolution that the mp4 timescale offers.
Example problematic pipeline:
```
videotestsrc num-buffers=5001 ! video/x-raw,framerate=60000/1001,width=320,height=240 ! \
videoconvert ! x264enc bitrate=80000 speed-preset=1 tune=zerolatency ! h264parse ! \
video/x-h264,profile=high-10 ! mp4mux ! filesink location="result2.mp4"
```
This results in a media file that MediaInfo detects as variable
framerate because the 5000th packet has duration 99 instead of 100.
With this patch, the timescale is 60000 and all packets have duration
1001.
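The drift can be reproduced outside of GStreamer with a few lines of arithmetic. The sketch below assumes, for illustration, that the previous behaviour amounted to a rounded centiframe timescale (5994 for 60000/1001 fps); it reports any frame whose rounded duration differs from the first one, while a timescale of 60000 reports nothing:
```c
/* Toy demonstration of the rounding drift described above. */
#include <stdio.h>
#include <inttypes.h>
#include <math.h>

static void
check_durations (int64_t timescale, int fps_n, int fps_d, int nframes)
{
  int64_t prev_ts = 0, first_dur = 0;

  for (int i = 1; i <= nframes; i++) {
    /* presentation timestamp of frame i, rounded to the nearest tick */
    int64_t ts = llround ((double) i * fps_d * timescale / fps_n);
    int64_t dur = ts - prev_ts;

    if (i == 1)
      first_dur = dur;
    else if (dur != first_dur)
      printf ("timescale %" PRId64 ": frame %d has duration %" PRId64
          " instead of %" PRId64 "\n", timescale, i, dur, first_dur);
    prev_ts = ts;
  }
}

int
main (void)
{
  check_durations (5994, 60000, 1001, 5001);    /* reports a 99-tick frame near 5000 */
  check_durations (60000, 60000, 1001, 5001);   /* every duration is exactly 1001 */
  return 0;
}
```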
Related issue for context: https://bugzilla.gnome.org/show_bug.cgi?id=769041
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3049>
The VAAPI vaQueryVideoProcPipelineCaps() requires a context as one of its
parameters. So far we have always passed VA_INVALID_ID and it happened to
succeed, but the API does not guarantee that and in theory a valid context is
required.
A new platform now really does need a valid context, so we have to delay that
query until the context is created.
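For reference, a sketch of the intended call once a context exists (error handling elided; the wrapper function name is illustrative):
```c
#include <va/va.h>
#include <va/va_vpp.h>

/* Only query the postprocessing pipeline caps with a real context instead
 * of VA_INVALID_ID. */
static VAStatus
query_pipeline_caps (VADisplay dpy, VAContextID context,
    VABufferID * filters, unsigned int num_filters, VAProcPipelineCaps * caps)
{
  if (context == VA_INVALID_ID)
    return VA_STATUS_ERROR_INVALID_CONTEXT;   /* not created yet: defer */

  return vaQueryVideoProcPipelineCaps (dpy, context, filters, num_filters, caps);
}
```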
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3613>
NVDEC launches a CUDA kernel function (ConvertNV12BLtoNV12 or similar)
when CuvidMapVideoFrame() is called. This seems to be NVDEC's internal
post-processing kernel, maybe converting tiled YUV to linear YUV format or
something similar.
The problem with not passing a CUDA stream to the CuvidMapVideoFrame() call is
that NVDEC's internal kernel function will then use the default CUDA stream,
and lots of other CUDA API calls get blocked/serialized behind it.
To avoid this unnecessary blocking, pass our own CUDA stream object to the
CuvidMapVideoFrame() call.
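A sketch of what that amounts to at the NVDEC API level (assuming a Video Codec SDK where CUVIDPROCPARAMS has an output_stream field; the picture parameters here are illustrative, not taken from a real decode):
```c
#include <nvcuvid.h>

/* Pass our own CUstream through CUVIDPROCPARAMS so NVDEC's internal
 * post-processing kernel does not run on the default stream. */
static CUresult
map_frame_on_stream (CUvideodecoder decoder, int pic_idx, CUstream stream,
    unsigned long long * dev_ptr, unsigned int * pitch)
{
  CUVIDPROCPARAMS params = { 0, };

  params.progressive_frame = 1;     /* illustrative; set from the real picture */
  params.output_stream = stream;    /* our stream instead of the default one */

  return cuvidMapVideoFrame64 (decoder, pic_idx, dev_ptr, pitch, &params);
}
```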
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3605>
Using the "GstBin" flags to check if an adaptive demuxer is streams-aware isn't
a good idea since it prevents using elements which aren't bins.
Instead we see if a collection was posted by the demuxer by the time a pad is
added.
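Conceptually, the check moves from the bin flag to "did we see a collection", roughly like this (illustrative sketch, not the actual code):
```c
#include <gst/gst.h>

/* Sync bus handler remembering whether the demuxer posted a stream
 * collection before its pads showed up. */
static GstBusSyncReply
bus_sync_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  gboolean *saw_collection = user_data;

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAM_COLLECTION)
    *saw_collection = TRUE;

  return GST_BUS_PASS;
}

/* Old check: demuxer had to be a GstBin with the streams-aware flag set.
 * New check: a posted collection is enough, whatever the element type. */
static gboolean
demuxer_is_streams_aware (GstElement * demuxer, gboolean saw_collection)
{
  return saw_collection ||
      (GST_IS_BIN (demuxer) &&
       GST_OBJECT_FLAG_IS_SET (demuxer, GST_BIN_FLAG_STREAMS_AWARE));
}
```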
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3601>
If a discontinuity is detected in push mode, we need to clear the cached section
observations since they might have changed.
This was only done properly when operating with TIME segments (dvb, udp,
adaptive demuxers, ...) but not with BYTE segments (such as with custom app/fd
sources).
We still don't want to flush out the PCR observations, since this might be
needed for seeking in push-based BYTE sources.
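Roughly, the intent is the following (types and fields here are hypothetical placeholders for illustration, not the actual mpegtsbase/packetizer structures):
```c
#include <glib.h>

/* Hypothetical placeholders: cached PSI sections vs. PCR observations. */
typedef struct {
  GHashTable *section_observations;     /* cached section data per PID */
  GArray *pcr_observations;             /* PCR/offset pairs used for seeking */
  gboolean push_mode;
} TsDemuxSketch;

static void
on_discont (TsDemuxSketch * demux)
{
  if (!demux->push_mode)
    return;

  /* Clear cached sections for BYTE segments too, not only TIME segments. */
  g_hash_table_remove_all (demux->section_observations);

  /* demux->pcr_observations are deliberately kept: they may be needed for
   * seeking in push-based BYTE sources. */
}
```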
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1650
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3584>