In trick mode, the driver may queue a valid buffer followed by an
empty buffer (one with no valid data) to indicate EOS. If the empty
buffer's memory is multi-plane, it needs to be resized before being
unreffed.
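A minimal sketch of the idea (the helper name is illustrative, not the
actual gstv4l2bufferpool code):

  #include <gst/gst.h>

  static void
  release_empty_eos_buffer (GstBuffer * buffer)
  {
    guint i, n = gst_buffer_n_memory (buffer);

    for (i = 0; i < n; i++) {
      GstMemory *mem = gst_buffer_peek_memory (buffer, i);
      gsize offset, maxsize;

      gst_memory_get_sizes (mem, &offset, &maxsize);
      /* restore the full plane size before the memory goes back to the pool */
      gst_memory_resize (mem, -((gssize) offset), maxsize);
    }

    gst_buffer_unref (buffer);
  }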
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2731>
Depending on the device feature level, the d3d11 runtime can support
ID3D11Fence, which is equivalent to ID3D12Fence.
Waiting on a fence performs better than polling the ID3D11Query
status. If ID3D11Fence is not supported by the device, ID3D11Query
will be used instead.
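For reference, a rough sketch of the fence-based wait (assuming the
ID3D11Device5/ID3D11DeviceContext4 interfaces were already queried;
error handling omitted):

  #define COBJMACROS
  #include <windows.h>
  #include <d3d11_4.h>

  static void
  wait_for_gpu (ID3D11Device5 * device5, ID3D11DeviceContext4 * context4)
  {
    ID3D11Fence *fence = NULL;
    HANDLE event_handle;

    ID3D11Device5_CreateFence (device5, 0, D3D11_FENCE_FLAG_NONE,
        &IID_ID3D11Fence, (void **) &fence);
    event_handle = CreateEvent (NULL, FALSE, FALSE, NULL);

    /* the GPU signals the fence when preceding work is done; the CPU
     * blocks on the event instead of polling a query */
    ID3D11DeviceContext4_Signal (context4, fence, 1);
    ID3D11Fence_SetEventOnCompletion (fence, 1, event_handle);
    WaitForSingleObject (event_handle, INFINITE);

    CloseHandle (event_handle);
    ID3D11Fence_Release (fence);
  }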
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2790>
4x downscaling of chroma with co-sited chroma seems to have never
worked.
Fixes incorrect videotestsrc output and videoconvert conversions
to Y41B, YUV9, YVU9 and IYU9 with co-sited chroma.
e.g.
gst-launch-1.0 videotestsrc ! video/x-raw,format=Y41B,width=1280,height=720 ! \
videoconvert ! autovideosink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2789>
It may happen that the bitstream doesn't provide the SPS in decoding
order (as in the VPSSPSPPS_A_MainConcept_1 conformance test file).
To be sure that the decoder gets the correct SPS parameters, process
the SPS just before starting to decode the frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2575>
Where possible, defer computation of the PPS and SPS fields until
slice parsing, since bitstreams may not encode them in the expected
order.
An example of such an oddly ordered bitstream is the
VPSSPSPPS_A_MainConcept_1 conformance test.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2575>
The function g_array_sized_new() leaves len at 0, but the slice
implementation assumes it would be set to 4. Sending multiple slices is
not yet supported for H.264 as no driver has needed it yet, but if that
code were used it would have overflowed, as the array would never grow:
multiplying 0 by 2 always results in 0.
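An illustration of the pitfall (the element type below is just a
placeholder):

  #include <glib.h>

  static void
  example (void)
  {
    /* pre-allocates room for 4 elements, but len stays 0 ... */
    GArray *slices = g_array_sized_new (FALSE, TRUE, sizeof (guint32), 4);

    g_assert (slices->len == 0);

    /* ... so growing by doubling the length gets nowhere (0 * 2 == 0);
     * the length has to be set explicitly instead */
    g_array_set_size (slices, 4);
    g_array_unref (slices);
  }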
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1079>
And also don't assert that there are no buffers queued up when handling
an EOS event. The pad's streaming thread might've already received a new
stream-start event and queued up a buffer in the meantime.
This still leaves a race condition where the srcpad task sees all pads
in EOS state and finishes the stream, while shortly afterwards a pad
might receive a stream-start event again, but this doesn't seem to be
solvable with the current aggregator design.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2769>
SMPTE 170M and 240M use the same RGB and white point coordinates
and therefore both primaries can be considered functionally
equivalent.
Also, some transfer functions have different names but equal
gamma functions. Add another colorimetry comparison function
to deal with those cases at once.
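A sketch of the primaries half of such a comparison, built on the
existing gst_video_color_primaries_get_info(); the helper name is
illustrative and not necessarily the new API:

  #include <gst/video/video.h>

  static gboolean
  primaries_functionally_equal (GstVideoColorPrimaries a,
      GstVideoColorPrimaries b)
  {
    const GstVideoColorPrimariesInfo *ia = gst_video_color_primaries_get_info (a);
    const GstVideoColorPrimariesInfo *ib = gst_video_color_primaries_get_info (b);

    /* e.g. SMPTE 170M vs. 240M: different enum values, same coordinates */
    return ia->Wx == ib->Wx && ia->Wy == ib->Wy &&
        ia->Rx == ib->Rx && ia->Ry == ib->Ry &&
        ia->Gx == ib->Gx && ia->Gy == ib->Gy &&
        ia->Bx == ib->Bx && ia->By == ib->By;
  }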
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2765>
In file included from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/gstreamer-1.0/gst/gl/gstglfuncs.h:87,
from ../gst-plugins-good-1.20.3/ext/qt/qtglrenderer.cc:14:
../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/gstreamer-1.0/gst/gl/glprototypes/gstgl_compat.h:40:18: error: conflicting declaration 'typedef void* GLsync'
40 | typedef gpointer GLsync;
| ^~~~~~
In file included from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtGui/qopengl.h:127,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qsggeometry.h:44,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qsgnode.h:43,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qsgrendererinterface.h:43,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qquickwindow.h:44,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/QQuickWindow:1,
from ../gst-plugins-good-1.20.3/ext/qt/qtglrenderer.cc:6:
../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtGui/qopengles2ext.h:24:26: note: previous declaration as 'typedef struct __GLsync* GLsync'
24 | typedef struct __GLsync *GLsync;
| ^~~~~~
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2763>
These patches are taken from upstream, and they fix compile failures
with latest clang. These can be dropped when upgrading these wraps.
This is currently causing a warning because we do not require the
version of meson that ships with this feature: 0.63.0. The version has
not been bumped because older Meson versions gracefully ignore the
wrap field, this fix is optional and only needed on macOS, and 0.63.0
is a very new release with a bug that partially breaks this feature:
https://github.com/mesonbuild/meson/pull/10602
We can consider bumping the requirement once 0.63.1 is released.
Also switch from git to tarballs; there's no reason to use git here anymore.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2761>
We should move this functionality to gst-libs so that GstD3D11Converter
can be moved to gst-libs.
Another advantage is that applications can call our
HLSL compiler wrapper method without worrying about the
OS-version-dependent, system-installed HLSL library.
Note that there are multiple HLSL compiler library versions
on Windows, and the system-installed one is OS-version dependent.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2760>
We need GStreamer elements to do the bandwidth estimation, as this way
they can also control the pacing of the transmission flow, as specified
in the [GCC] algorithm for example.
Bandwidth estimator elements are placed right before the "RTPSession" as
an "rtp-aux-sender" element. This way they can use the "Transport-wide
Congestion Control" RTCP feedback messages through the "RTPTwcc" custom
events that are sent by the rtpsession.
Applications are responsible for reacting to the bandwidth estimator
element and setting the encoder target bitrate etc., which means that we
cannot pass an estimator as an element factory, so a signal has been
chosen instead.
[GCC]: https://datatracker.ietf.org/doc/html/draft-ietf-rmcat-gcc-02
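For illustration, the aux-sender hook point works like rtpbin's
long-standing "request-aux-sender" signal; the "identity" element below
is only a placeholder, a real estimator needs sink_%u/src_%u pads
matching the session:

  #include <gst/gst.h>

  static GstElement *
  on_request_aux_sender (GstElement * rtpbin, guint session_id,
      gpointer user_data)
  {
    /* return the bandwidth-estimator element that should sit right
     * before the RTP session for this session id */
    return gst_element_factory_make ("identity", NULL);
  }

  static void
  setup_bwe (GstElement * rtpbin)
  {
    g_signal_connect (rtpbin, "request-aux-sender",
        G_CALLBACK (on_request_aux_sender), NULL);
  }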
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2562>
Raw memory upload should always be the least preferred input
caps, only added by the raw memory uploader as the last thing
in the caps.
Caps negotiation should still choose raw data when it needs to,
and other upload methods that can accept raw data buffers will still do so.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2725>
The AVClass name of Animated PNG in FFmpeg 5.x is "(A)PNG"
and it will be converted to "-a-png" through
g_ascii_strdown() and g_strcanon(). But GLib disallows a leading '-'
character in a GType name. Strip the leading '-' to work around it.
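Roughly what happens (the exact set of valid characters used by
gst-libav may differ):

  #include <string.h>
  #include <glib.h>

  static gchar *
  make_type_name (const gchar * avclass_name)   /* "(A)PNG" */
  {
    gchar *down = g_ascii_strdown (avclass_name, -1);         /* "(a)png" */
    gchar *name;

    g_strcanon (down, G_CSET_a_2_z G_CSET_DIGITS "-_", '-');  /* "-a-png" */
    /* a GType name must not start with '-', so skip leading dashes */
    name = g_strdup (down + strspn (down, "-"));              /* "a-png"  */
    g_free (down);

    return name;
  }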
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2724>
There might be a sequence of event and buffer flow:
- Got stream-start/caps/segment events
- Got flush events
- And then buffers with a new segment event
In the above case, the stream-start and caps events might not reach the
peer proxysrc if it is not ready to receive them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1552>
gst_video_convert_scale_get_fixed_format() receives 'othercaps' from
basetransform's fixate_caps() vmethod, which explicitly mentions that
'`othercaps` may not be writable'.
The gst_caps_intersect() call just before may or may not produce new
caps. Particularly in cases like EMPTY or ANY caps on either of the
inputs, only a ref is taken and returned to the caller.
As a result, gst_video_convert_scale_fixate_format() may have attempted
to modify a non-writable caps structure.
Fix by adding a gst_caps_make_writable().
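The pattern, roughly (names simplified from the actual element code):

  #include <gst/gst.h>

  static GstCaps *
  fixate_format (GstCaps * othercaps, GstCaps * tmpl)
  {
    /* may simply return a ref to one of the inputs (e.g. EMPTY/ANY) ... */
    GstCaps *result = gst_caps_intersect (othercaps, tmpl);

    if (gst_caps_is_empty (result))
      return result;

    /* ... so make it writable before touching its structures */
    result = gst_caps_make_writable (result);
    gst_structure_set (gst_caps_get_structure (result, 0),
        "format", G_TYPE_STRING, "I420", NULL);

    return result;
  }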
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2709>
- ssl module requires an explicit TLS_SERVER role
- asyncio throws a deprecation warning when using
asyncio.get_event_loop(). Remove custom event loop handling entirely
- No need to keep the websocket server in a member variable, can use
a future to signal exit case along with the async with context manager
of websockets.serve()
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2698>
... and don't use GstD3D11VideoProcessor. Now GstD3D11Converter is
able to convert using the video processor, and texture upload is also
supported by GstD3D11Converter, so all the noisy code can be removed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2697>
* Add videoprocessor feature to d3d11converter, in order to unify the
conversion flow.
* Add convert_buffer() method to support automatic shader/videoprocessor
selection. The method also supports texture upload if the input memory
cannot be used for conversion (e.g., system memory).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2697>
There's no need to re-assign the return value of
g_string_append_*() functions and such to the variable
holding the GString. These return values are just for
convenience so function calls can be chained. The actual
GString pointer won't change; it's not a GList after all.
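For example:

  #include <glib.h>

  static void
  example (void)
  {
    GString *s = g_string_new ("foo");

    s = g_string_append (s, "bar");   /* redundant re-assignment */
    g_string_append (s, "baz");       /* equivalent; s never changes */

    g_string_free (s, TRUE);
  }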
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2685>
This reverts commit 6f9ae5d758.
The _transform_caps() function can't tell the difference
between the caller wanting to know the output caps
for the current method, or all possible output caps. If
it includes caps for all possible methods, glupload can
end up negotiating and sending the wrong output caps
downstream.
Partially reverts !2687
Fixes #1310
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2699>
Add nvautogpu{h264,h265}enc classes which will accept an upstream
logical GPU device object (GstCudaContext or GstD3D11Device) instead of
using a pre-assigned GPU instance.
If the upstream logical GPU device object is not NVENC compatible
(e.g., a D3D11 device of a non-NVIDIA GPU) or it's system memory,
then the user-specified "cuda-device-id" or "adapter-luid" property
will be used for GPU device selection.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2666>
GstCudaMemory already supports CPU access via CUDA pinned host memory,
which gives faster memory transfer performance between GPU and CPU
than copying from/to normal system memory.
If downstream supports video meta, we can pass CUDA memory through.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2690>
If no filter caps are provided with a caps query, always
generate a full set of all caps from all upload methods,
not just the configured one. This is needed to handle
renegotiation when dealing with raw sysmem caps - as the upload
method might accept raw sysmem caps, but only the raw data
uploader adds those to the caps query.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
This reverts commit f3292dc156.
Only the raw data uploader should add sysmem caps to the
actual caps query, because we want them to be at the
lowest priority. If upstream does choose to send raw
caps, then the correct upload method will still
be chosen because the accept_caps implementation
will accept them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
When checking if we need to reconfigure when uploading, check
specifically whether the output caps of the current method will
result in compatible/incompatible caps, not the full set
of output caps from all upload methods.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
Performs crop, scale, and color space conversion all in
a single render pipeline. Note that no cropping-related property is
added to this element (that would make negotiation very complicated),
but the user can configure the videocrop element so that crop meta is
attached to each buffer.
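For instance, the crop rectangle can be attached as meta rather than
negotiated through caps (the values below are arbitrary):

  #include <gst/video/gstvideometa.h>

  static void
  attach_crop (GstBuffer * buffer)
  {
    GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (buffer);

    crop->x = 16;
    crop->y = 16;
    crop->width = 640;
    crop->height = 360;
  }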
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2678>
Fixes warnings like:
Received a structure string that contains '="0.5"'. Reading as a gdouble value, rather than a string value. This is undesired behaviour, and with GStreamer 1.22 onward, this will be interpreted as a string value instead because it is wrapped in '"' quotes. If you want to guarantee this value is read as a string, before this change, use '=(string)"0.5"' instead. If you want to read in a gdouble value, leave its value unquoted.
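For example, setting the field with its proper type keeps the value
unquoted when serialized, so it round-trips as a gdouble:

  #include <gst/gst.h>

  static void
  example (void)
  {
    GstStructure *s = gst_structure_new ("example",
        "alpha", G_TYPE_DOUBLE, 0.5, NULL);
    gchar *str = gst_structure_to_string (s);   /* "example, alpha=(double)0.5;" */

    g_free (str);
    gst_structure_free (s);
  }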
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2621>
* When dealing with rendition streams, we attempt to synchronize the media
playlist against the variant stream. This helps speed up the correct
initial fragment search and avoids issues when streams are activated at a
much later time.
* Also add checks for variant stream existence before attempting to use them
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
When updating playlists, there is a possibility that the playlists don't
perfectly align, but the last entry of the previous playlist is *just* before
the first entry of the new playlist.
In those cases, we still can transfer the timing information from one playlist
to another, but we do not want to return that segment as being the matching one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
When matching playlists, there is a possibility that rendition streams will not
have been updated in time (for example because that stream started later, or
playback was paused). This would cause several playback failures and seeking
failures.
In order to still fall back on our feet, attempt to synchronize that rendition
playlist against the current variant playlist. This will attempt to match the
stream time using SN/DNS/PDT/...
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>