In the case where the queue shrinks due to a property change and becomes full,
we would set the waiting_del flag, which would prevent posting the 100%
buffering message on the bus. Since the pipeline is not aware of the new
buffering value, in the common case where the pipeline is paused during
buffering, it would never resume.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6165>
Remove the percent_changed check used to determine whether a buffering message
should be posted. The check on the last posted buffering value is sufficient,
and the removal doesn't introduce additional complexity.
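As a rough sketch of the resulting logic (the struct and field names here are
hypothetical, not the actual queue code), a buffering message is posted only
when the value differs from the last one posted:

#include <gst/gst.h>

typedef struct {
  GstElement *element;         /* the queue element posting the message */
  gint last_posted_buffering;  /* last percentage posted on the bus, -1 if none */
} BufferingState;

static void
maybe_post_buffering (BufferingState * state, gint new_percent)
{
  /* Comparing against the last *posted* value is enough: no separate
   * percent_changed flag is needed, so a jump straight to 100% after a
   * property change (queue shrinking while full) is not swallowed. */
  if (new_percent == state->last_posted_buffering)
    return;

  state->last_posted_buffering = new_percent;
  gst_element_post_message (state->element,
      gst_message_new_buffering (GST_OBJECT (state->element), new_percent));
}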
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6165>
If the input height and the parsed one are identical, do not consider the
stream interlaced.
This fixes the pipeline below:
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420,width=640,height=10 \
! jpegenc ! jpegparse ! jpegdec ! videoconvert ! autovideosink
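A minimal sketch of the intended check, assuming hypothetical variable names
(`caps_height` from the negotiated caps, `parsed_height` from the JPEG frame
header):

#include <glib.h>

static gboolean
frame_is_interlaced (gint caps_height, gint parsed_height)
{
  /* Identical heights mean a full progressive frame was parsed. */
  if (caps_height == parsed_height)
    return FALSE;

  /* Otherwise the parsed frame only covers a field (roughly half of the
   * negotiated height), so treat the stream as interlaced. */
  return parsed_height < caps_height;
}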
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6181>
When dealing with demuxers which aren't streams-aware, we need to handle the
old-school "stream replacement" dance from `parsebin` and hide it in such a
way that output pads are re-used (if compatible).
By analyzing the collection posted by parsebin, we can:
* Identify whether some output slots are no longer used (because the stream they
currently handle is not present in the collection)
* Decide if some upcoming streams could re-use an existing slot
This supports both buffering and non-buffering modes.
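A rough sketch of the first step (the OutputSlot type and its fields are
hypothetical, not the actual decodebin3 structures):

#include <gst/gst.h>

typedef struct {
  GstPad *output_pad;        /* exposed source pad */
  GstStream *active_stream;  /* stream currently routed to this pad */
  gboolean unused;           /* not present in the new collection */
} OutputSlot;

static void
mark_unused_slots (GList * slots, GstStreamCollection * collection)
{
  GList *l;
  guint i, size = gst_stream_collection_get_size (collection);

  for (l = slots; l != NULL; l = l->next) {
    OutputSlot *slot = l->data;
    const gchar *sid = gst_stream_get_stream_id (slot->active_stream);
    gboolean found = FALSE;

    for (i = 0; i < size; i++) {
      GstStream *stream = gst_stream_collection_get_stream (collection, i);

      if (g_strcmp0 (sid, gst_stream_get_stream_id (stream)) == 0) {
        found = TRUE;
        break;
      }
    }

    /* Slots whose stream disappeared become candidates for re-use by an
     * upcoming compatible stream instead of being removed right away. */
    slot->unused = !found;
  }
}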
Fixes #1651
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6201>
When the conversion only changes the caps feature from memory:VAMemory to
system memory, it's possible to optimize by doing a pseudo pass-through, since
the VA-backed buffers can also be used as system memory buffers.
This change also mitigates #2940
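A hedged sketch of detecting such a caps-feature-only conversion (the function
name is hypothetical and the real element keeps more state than this):

#include <gst/gst.h>
#include <gst/video/video.h>

static gboolean
is_feature_only_conversion (GstCaps * incaps, GstCaps * outcaps)
{
  GstVideoInfo in_info, out_info;
  GstCapsFeatures *in_f, *out_f;

  if (!gst_video_info_from_caps (&in_info, incaps)
      || !gst_video_info_from_caps (&out_info, outcaps))
    return FALSE;

  in_f = gst_caps_get_features (incaps, 0);
  out_f = gst_caps_get_features (outcaps, 0);

  /* Same format/size, input is VA memory, output is plain system memory. */
  return gst_video_info_is_equal (&in_info, &out_info)
      && gst_caps_features_contains (in_f, "memory:VAMemory")
      && gst_caps_features_is_equal (out_f,
      GST_CAPS_FEATURES_MEMORY_SYSTEM_MEMORY);
}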
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6174>
If the allocation query received from downstream doesn't handle GstVideoMeta
but requests the memory:DMABuf caps feature, it's incomplete, so we reject the
negotiation instead.
This applies to the base decoder, base transform and compositor.
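A minimal sketch of the sanity check, with a hypothetical function name:

#include <gst/gst.h>
#include <gst/video/video.h>

static gboolean
allocation_query_is_usable (GstQuery * query, GstCaps * negotiated_caps)
{
  GstCapsFeatures *features = gst_caps_get_features (negotiated_caps, 0);
  gboolean wants_dmabuf =
      gst_caps_features_contains (features, "memory:DMABuf");
  gboolean has_video_meta =
      gst_query_find_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);

  /* DMABuf without GstVideoMeta cannot describe a (possibly non-trivial)
   * memory layout, so treat the query as incomplete and fail negotiation. */
  if (wants_dmabuf && !has_video_meta)
    return FALSE;

  return TRUE;
}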
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6155>
When switching urisourcebin, ensure that we first unlink *all* pads from
decodebin3 before linking them again.
This is to ensure that decodebin3 completely knows that all previous pads are
no longer needed and can prepare itself to be re-used.
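A rough sketch of the intended ordering (not the actual urisourcebin switching
code; the pad lists and the release strategy are assumptions):

#include <gst/gst.h>

static void
relink_source_pads (GList * old_pads, GList * new_pads, GstElement * decodebin3)
{
  GList *l;

  /* Step 1: release *all* previous links before creating any new one, so
   * decodebin3 sees a clean "all inputs removed" state in between. */
  for (l = old_pads; l != NULL; l = l->next) {
    GstPad *srcpad = l->data;
    GstPad *sinkpad = gst_pad_get_peer (srcpad);

    if (sinkpad != NULL) {
      gst_pad_unlink (srcpad, sinkpad);
      gst_element_release_request_pad (decodebin3, sinkpad);
      gst_object_unref (sinkpad);
    }
  }

  /* Step 2: request fresh sink pads and link the new source pads. */
  for (l = new_pads; l != NULL; l = l->next) {
    GstPad *srcpad = l->data;
    GstPad *sinkpad = gst_element_request_pad_simple (decodebin3, "sink_%u");

    gst_pad_link (srcpad, sinkpad);
    gst_object_unref (sinkpad);
  }
}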
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6179>
The value is stored as an 8-bit integer, with 0 meaning that there is no data
for this extension. That means that the maximum length is 255 bytes and not
256 bytes.
On the other hand, the one-byte RTP header extensions store the length as a
4-bit integer with an offset of 1 (i.e. 0 means 1 byte of extension data), so
here 16 is the correct maximum length.
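The two limits, spelled out (constant names are hypothetical; see RFC 8285 for
the encodings):

/* Two-byte header extensions: the length is an 8-bit integer and 0 means
 * "no data", so the payload can be at most 255 bytes, not 256. */
#define TWO_BYTE_EXT_MAX_LEN 255  /* 2^8 - 1 */

/* One-byte header extensions: the length is a 4-bit integer with an offset
 * of 1 (encoded 0 means 1 byte), so the maximum payload is 16 bytes. */
#define ONE_BYTE_EXT_MAX_LEN 16   /* (2^4 - 1) + 1 */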
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6180>
This is a simplification of the venerable
gst_va_base_dec_get_preferred_format_and_caps_features() function, which dates
back to gstreamer-vaapi. It's used to select the format and the caps feature
to use when setting the output state. It was complex and hard to follow. This
refactor simplifies the algorithm a lot.
The first thing removed is _downstream_has_video_meta(), since most of the
time it will be called before caps negotiation, and allocation queries only
make sense after caps negotiation. It might work during renegotiation, but in
that case a caps feature change is uncommon. A simple and common approach is
better.
Also, for performance, instead of dealing with caps features as strings, GQuarks
are used.
The refactor works like this:
1. If the peer pad returns ANY caps, the caps feature to use is system memory,
   and a proper format is looked up in the allowed caps.
2. Otherwise, the allowed caps are traversed at most 3 times: once per valid
   caps feature; first VAMemory, then DMABuf, and finally system memory. The
   first feature to match in the allowed caps is picked, and the first format
   matching the chroma is picked too.
Notice that, right now, when using playbin, videoconvert never returns ANY
caps.
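A hedged sketch of the traversal order in step 2 (the real code works on
GQuarks rather than strings, and the per-chroma format selection is omitted
here):

#include <gst/gst.h>

static GstCaps *
pick_caps_feature (GstCaps * allowed_caps)
{
  /* Preference order: VA memory first, then DMABuf, finally system memory. */
  static const gchar *features[] = {
    "memory:VAMemory",
    "memory:DMABuf",
    GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY,
  };
  guint i, j;

  for (i = 0; i < G_N_ELEMENTS (features); i++) {
    for (j = 0; j < gst_caps_get_size (allowed_caps); j++) {
      GstCapsFeatures *f = gst_caps_get_features (allowed_caps, j);

      /* The first matching caps feature wins; the format would then be the
       * first entry in this structure matching the decoder's chroma. */
      if (gst_caps_features_contains (f, features[i]))
        return gst_caps_copy_nth (allowed_caps, j);
    }
  }

  return NULL;
}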
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6154>
Some subtitle "decoders" had a wrong category of "Parser", which `parsebin`
relies on to identify elements which do not *decode* streams but *parse* them.
This would cause such subtitle decoders to be plugged in within parsebin,
preventing the original stream from being properly used by (more efficient)
downstream decoders or subtitle renderers.
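For illustration (the element here is made up), parsebin looks at the klass
metadata, so a subtitle decoder should advertise itself as a decoder rather
than a parser:

#include <gst/gst.h>

static void
my_subtitle_dec_class_init (GstElementClass * element_class)
{
  gst_element_class_set_static_metadata (element_class,
      "My subtitle decoder",
      /* "Codec/Decoder/Subtitle", not "Codec/Parser/Subtitle" */
      "Codec/Decoder/Subtitle",
      "Decodes subtitle streams",
      "Example Author <author@example.org>");
}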
Fixes #1757
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6153>
If we drop all messages with the same clock id as ours, we will also drop all
messages coming from a PTP clock on our host, since both clock ids are built
from the same MAC address.
At least on Linux we do not see our own messages anyway, since the network
stack can distinguish between multicast sent from our socket and from another
socket on the same machine. To make sure that this works on all supported
platforms, just drop delay requests, since this is the only message type sent
by the GStreamer PTP clock.
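A hedged sketch of the filtering rule (names are hypothetical; the message
type value comes from IEEE 1588):

#include <glib.h>

#define PTP_MESSAGE_TYPE_DELAY_REQ 0x1  /* per IEEE 1588-2008 */

static gboolean
is_own_sent_message (guint8 message_type)
{
  /* Messages from another PTP clock on the same host share our MAC-derived
   * clock identity, so filtering on the identity would drop them too.
   * DELAY_REQ is the only message the GStreamer PTP clock sends itself. */
  return message_type == PTP_MESSAGE_TYPE_DELAY_REQ;
}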
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6172>
If we don't specify a path for loading, the runtime linker will search
for the library instead, which will use the usual mechanisms: RPATHs,
LD_LIBRARY_PATH, PATH (on Windows), etc.
Also try harder to load a non-devel libpython using INSTSONAME, if
available.
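A rough sketch of the loading strategy (not the actual plugin loader code; the
soname string is just an example and would be derived from Python's
configuration, e.g. the INSTSONAME sysconfig variable):

#include <gmodule.h>

static GModule *
load_libpython (const gchar * soname /* e.g. "libpython3.12.so.1.0" */ )
{
  /* No directory component: GModule/dlopen searches via the usual runtime
   * linker mechanisms (RPATH, LD_LIBRARY_PATH, PATH on Windows, ...). */
  return g_module_open (soname, G_MODULE_BIND_LAZY | G_MODULE_BIND_GLOBAL);
}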
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6159>