Also don't assert that there are no buffers queued up when handling
an EOS event. The pad's streaming thread might've already received a new
stream-start event and queued up a buffer in the meantime.
This still leaves a race condition where the srcpad task sees all pads
in EOS state and finishes the stream, while shortly afterwards a pad
might receive a stream-start event again, but this doesn't seem to be
solvable with the current aggregator design.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2769>
SMPTE 170M and 240M use the same RGB and white point coordinates
and therefore both primaries can be considered functionally
equivalent.
Also, some transfer functions have different names but equal
gamma functions. Add another colorimetry compare function
to deal with those cases at once.
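For illustration only, a minimal sketch of what such an equivalence check could
look like (the helper name is made up and the actual new API may differ),
comparing the coordinates from gst_video_color_primaries_get_info():

    #include <gst/video/video.h>

    /* Hypothetical helper: two primaries are functionally equivalent if their
     * white point and RGB chromaticity coordinates are identical, e.g.
     * GST_VIDEO_COLOR_PRIMARIES_SMPTE170M vs. GST_VIDEO_COLOR_PRIMARIES_SMPTE240M */
    static gboolean
    primaries_functionally_equal (GstVideoColorPrimaries a, GstVideoColorPrimaries b)
    {
      const GstVideoColorPrimariesInfo *ia = gst_video_color_primaries_get_info (a);
      const GstVideoColorPrimariesInfo *ib = gst_video_color_primaries_get_info (b);

      return ia->Wx == ib->Wx && ia->Wy == ib->Wy &&
          ia->Rx == ib->Rx && ia->Ry == ib->Ry &&
          ia->Gx == ib->Gx && ia->Gy == ib->Gy &&
          ia->Bx == ib->Bx && ia->By == ib->By;
    }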
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2765>
In file included from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/gstreamer-1.0/gst/gl/gstglfuncs.h:87,
from ../gst-plugins-good-1.20.3/ext/qt/qtglrenderer.cc:14:
../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/gstreamer-1.0/gst/gl/glprototypes/gstgl_compat.h:40:18: error: conflicting declaration 'typedef void* GLsync'
40 | typedef gpointer GLsync;
| ^~~~~~
In file included from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtGui/qopengl.h:127,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qsggeometry.h:44,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qsgnode.h:43,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qsgrendererinterface.h:43,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qquickwindow.h:44,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/QQuickWindow:1,
from ../gst-plugins-good-1.20.3/ext/qt/qtglrenderer.cc:6:
../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtGui/qopengles2ext.h:24:26: note: previous declaration as 'typedef struct __GLsync* GLsync'
24 | typedef struct __GLsync *GLsync;
| ^~~~~~
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2763>
These patches are taken from upstream and fix compile failures
with the latest clang. They can be dropped when upgrading these wraps.
This is currently causing a warning because we do not require the
version of meson that ships with this feature: 0.63.0. The version has
not been bumped because older Meson versions gracefully ignore the
wrap field, this fix is optional and only needed on macOS, and 0.63.0
is a very new release with a bug that partially breaks this feature:
https://github.com/mesonbuild/meson/pull/10602
We can consider bumping the requirement once 0.63.1 is released.
Also switch from git to tarballs; there is no reason to use git here anymore.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2761>
We should move this functionality to gst-libs so that GstD3D11Converter
can be moved to gst-libs as well.
Another advantage is that applications can call our
HLSL compiler wrapper without having to worry about which
system-installed HLSL library is available.
Note that there are multiple HLSL compiler library versions
on Windows, and the system-installed one depends on the OS version.
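As an illustration of the idea (not the actual gst-libs API added here), such a
wrapper could resolve D3DCompile from an explicitly chosen d3dcompiler DLL at
runtime, so callers do not depend on whichever version the OS ships:

    #include <windows.h>
    #include <string.h>
    #include <d3dcompiler.h>

    typedef HRESULT (WINAPI * D3DCompileFunc) (LPCVOID src, SIZE_T src_size,
        LPCSTR src_name, const D3D_SHADER_MACRO * defines, ID3DInclude * include,
        LPCSTR entry_point, LPCSTR target, UINT flags1, UINT flags2,
        ID3DBlob ** code, ID3DBlob ** error_msgs);

    /* Illustrative wrapper: load a known d3dcompiler DLL and resolve D3DCompile
     * from it instead of linking against the system-installed library */
    static HRESULT
    compile_hlsl (const char * source, const char * entry, const char * target,
        ID3DBlob ** code, ID3DBlob ** errors)
    {
      static HMODULE module = NULL;
      static D3DCompileFunc compile_func = NULL;

      if (!module) {
        module = LoadLibraryA ("d3dcompiler_47.dll");
        if (module)
          compile_func = (D3DCompileFunc) GetProcAddress (module, "D3DCompile");
      }

      if (!compile_func)
        return E_FAIL;

      return compile_func (source, strlen (source), NULL, NULL, NULL,
          entry, target, 0, 0, code, errors);
    }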
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2760>
We need GStreamer elements to do the bandwidth estimation, as this way
they can also control the pacing of the transmission flow as specified
in the [GCC] algorithm, for example.
Bandwidth estimator elements are placed right before the "RTPSession" as
an "rtp-aux-sender" element. This way they can use the "Transport-wide
Congestion Control" RTCP feedback messages through the "RTPTwcc" custom
events that are sent by the rtpsession.
Applications are responsible for reacting to the bandwidth estimator element
and setting the encoder target bitrate etc., which means that we can not
pass an estimator as an element factory, so a signal has been chosen
instead.
[GCC]: https://datatracker.ietf.org/doc/html/draft-ietf-rmcat-gcc-02
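As a rough sketch of how an application could hook this up, here using rtpbin's
existing "request-aux-sender" signal for illustration; the "bwestimator" element
name and "estimated-bitrate" property are made up and the actual signal added by
this change may differ:

    #include <gst/gst.h>

    static void
    on_estimated_bitrate_changed (GObject * estimator, GParamSpec * pspec,
        gpointer user_data)
    {
      GstElement *encoder = GST_ELEMENT (user_data);
      guint bitrate = 0;

      g_object_get (estimator, "estimated-bitrate", &bitrate, NULL);
      /* The application decides how to react, e.g. adjust the encoder */
      g_object_set (encoder, "bitrate", bitrate, NULL);
    }

    static GstElement *
    on_request_aux_sender (GstElement * rtpbin, guint session_id,
        gpointer user_data)
    {
      /* The returned element is placed right before the RTP session */
      GstElement *estimator = gst_element_factory_make ("bwestimator", NULL);

      g_signal_connect (estimator, "notify::estimated-bitrate",
          G_CALLBACK (on_estimated_bitrate_changed), user_data);

      return estimator;
    }

The callback would then be attached with
g_signal_connect (rtpbin, "request-aux-sender",
G_CALLBACK (on_request_aux_sender), encoder).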
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2562>
Raw memory upload should always be the least preferred input
caps, only added by the raw memory uploader as the last thing
in the caps.
Caps negotiation should still choose raw data when it needs to,
and other upload methods that can accept raw data buffers will still do so.
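Roughly, the intended ordering looks like this (a hypothetical helper, not
glupload's actual code); merging the raw caps in last keeps every other
method's caps at a higher priority:

    #include <gst/gst.h>

    static GstCaps *
    build_upload_caps (GstCaps * method_caps)
    {
      GstCaps *result = gst_caps_copy (method_caps);

      /* gst_caps_merge() appends, so sysmem caps end up least preferred */
      result = gst_caps_merge (result, gst_caps_from_string ("video/x-raw"));

      return result;
    }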
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2725>
The AVClass name of Animated PNG in FFmpeg 5.x is "(A)PNG"
and it will be converted to "-a-png" through
g_ascii_strdown() and g_strcanon(). But GLib disallows a leading '-'
character in a GType name. Strip the leading '-' to work around it.
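A sketch of the sanitization (not the exact gst-libav code):

    #include <string.h>
    #include <glib.h>

    /* "(A)PNG" -> "-a-png" -> "a-png" */
    static gchar *
    make_gtype_name (const gchar * avclass_name)
    {
      gchar *name = g_ascii_strdown (avclass_name, -1);

      /* Replace anything that is not valid in a GType name with '-' */
      g_strcanon (name, G_CSET_A_2_Z G_CSET_a_2_z G_CSET_DIGITS "-+", '-');

      /* A GType name must not start with '-': strip leading dashes */
      while (*name == '-')
        memmove (name, name + 1, strlen (name));

      return name;
    }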
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2724>
There might be a sequence of event and buffer flow:
- Got stream-start/caps/segment events
- Got flush events
- And then buffers with a new segment event
In the above case, the stream-start and caps events might not reach the
peer proxysrc if it is not ready to receive them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1552>
gst_video_convert_scale_get_fixed_format() receives 'othercaps' from
basetransform's fixate_caps() vmethod, which explicitly mentions that
'`othercaps` may not be writable'.
The gst_caps_intersect() call just before may or may not produce new
caps. Particularly in cases like EMPTY or ANY caps on either of the
inputs, only a ref is taken and returned to the caller.
As a result, gst_video_convert_scale_fixate_format() may have attempted
to modify a non-writable caps structure.
Fix by adding a gst_caps_make_writable().
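In essence (a simplified sketch, not the element's exact code):

    #include <gst/gst.h>

    static GstCaps *
    fixate_format (GstCaps * caps, GstCaps * templ, const gchar * preferred)
    {
      GstCaps *othercaps = gst_caps_intersect (caps, templ);
      guint i;

      /* gst_caps_intersect() may just return a ref (e.g. for EMPTY/ANY input),
       * so make sure the caps are writable before touching their structures */
      othercaps = gst_caps_make_writable (othercaps);

      for (i = 0; i < gst_caps_get_size (othercaps); i++) {
        GstStructure *s = gst_caps_get_structure (othercaps, i);

        gst_structure_fixate_field_string (s, "format", preferred);
      }

      return othercaps;
    }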
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2709>
- ssl module requires an explicit TLS_SERVER role
- asyncio throws a deprecation warning when using
asyncio.get_event_loop(). Remove custom event loop handling entirely
- No need to keep the websocket server in a member variable; a future can
be used to signal the exit case, together with the async with context
manager of websockets.serve()
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2698>
... and don't use GstD3D11VideoProcessor. GstD3D11Converter is now
able to convert using the video processor, and texture upload is also
supported by GstD3D11Converter, so all the noisy code can be removed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2697>
* Add videoprocessor feature to d3d11converter, in order to unify
conversion flow.
* Add convert_buffer() method to support automatic shader/videoprocessor
selection. The method also supports texture upload if the input memory
cannot be used for conversion (e.g., system memory)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2697>
There's no need to re-assign the return value of
g_string_append_*() functions and such to the variable
holding the GString. These return values are just for
convenience so function calls can be chained. The actual
GString pointer won't change; it's not a GList after all.
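For example:

    #include <glib.h>

    static gchar *
    build_string (void)
    {
      GString *s = g_string_new ("foo");

      /* No re-assignment needed: g_string_append() and friends return the
       * same GString pointer, it only exists to allow chaining calls */
      g_string_append (s, "bar");
      g_string_append_printf (s, " %d", 42);

      return g_string_free (s, FALSE);
    }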
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2685>
This reverts commit 6f9ae5d758.
The _transform_caps() function can't tell the difference
between the caller wanting to know the output caps
for the current method, or all possible output caps. If
it includes caps for all possible methods, glupload can
end up negotiating and sending the wrong output caps
downstream.
Partially reverts !2687
Fixes #1310
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2699>
Add nvautogpu{h264,h265}enc classes which accept an upstream logical
GPU device object (GstCudaContext or GstD3D11Device) instead of
using a pre-assigned GPU instance.
If the upstream logical GPU device object is not NVENC compatible
(e.g., a D3D11 device of a non-NVIDIA GPU) or the input is system memory,
then the user-specified "cuda-device-id" or "adapter-luid" property
will be used for GPU device selection.
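For illustration, selecting the fallback device from an application could look
like this (the property types are assumed):

    #include <gst/gst.h>

    static GstElement *
    make_encoder (guint cuda_device_id)
    {
      GstElement *enc = gst_element_factory_make ("nvautogpuh264enc", NULL);

      /* Only used when the upstream device is not NVENC compatible or the
       * input is system memory; otherwise the upstream device is reused */
      g_object_set (enc, "cuda-device-id", cuda_device_id, NULL);
      /* or, for D3D11: g_object_set (enc, "adapter-luid", luid, NULL); */

      return enc;
    }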
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2666>
GstCudaMemory already supports CPU access via CUDA pinned host memory,
which gives faster memory transfers between
GPU and CPU than copying from/to normal system memory.
If downstream supports video meta, we can passthrough CUDA memory.
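The decision roughly boils down to a check like this on the allocation query
(a sketch, not the exact element code):

    #include <gst/video/video.h>

    static gboolean
    downstream_supports_video_meta (GstQuery * allocation_query)
    {
      /* Only passthrough CUDA memory if downstream can handle GstVideoMeta */
      return gst_query_find_allocation_meta (allocation_query,
          GST_VIDEO_META_API_TYPE, NULL);
    }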
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2690>
If no filter caps are provided with a caps query, always
generate a full set of all caps from all upload methods,
not just the configured one. This is needed to handle
renegotiation when dealing with raw sysmem caps - as the upload
method might accept raw sysmem caps, but only the raw data
uploader adds those to the caps query.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
This reverts commit f3292dc156.
Only the raw data uploader should add sysmem caps to the
actual caps query, because we want them to be at the
lowest priority. If upstream does decide to send raw
caps, the correct upload method will still
be chosen because the accept_caps implementation
will accept them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
When checking if we need to reconfigure when uploading, check
specifically whether the output caps of the current method
result in compatible/incompatible caps, not the full set
of output caps from all upload methods.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
Performs crop, scale, and color space conversion all in
a single render pipeline. Note that no cropping-related property is
added to this element (it would make negotiation very complicated),
but users can configure a videocrop element so that crop meta is attached
to each buffer.
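For illustration, attaching crop meta to a buffer (roughly what videocrop does
when downstream supports the meta) looks like this:

    #include <gst/video/video.h>

    static void
    attach_crop_meta (GstBuffer * buffer, guint x, guint y, guint w, guint h)
    {
      GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (buffer);

      crop->x = x;
      crop->y = y;
      crop->width = w;
      crop->height = h;
    }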
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2678>
Fixes warnings like:
Received a structure string that contains '="0.5"'. Reading as a gdouble value, rather than a string value. This is undesired behaviour, and with GStreamer 1.22 onward, this will be interpreted as a string value instead because it is wrapped in '"' quotes. If you want to guarantee this value is read as a string, before this change, use '=(string)"0.5"' instead. If you want to read in a gdouble value, leave its value unquoted.
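Concretely, the three variants mentioned in the warning behave like this:

    #include <gst/gst.h>

    static void
    structure_quoting_examples (void)
    {
      /* Unquoted: read as a gdouble, no warning */
      GstStructure *d = gst_structure_from_string ("test, value=0.5", NULL);

      /* Quoted but untyped: currently read as a gdouble with the warning above;
       * from GStreamer 1.22 on it will be read as a string instead */
      GstStructure *ambiguous =
          gst_structure_from_string ("test, value=\"0.5\"", NULL);

      /* Explicitly typed: always read as a string */
      GstStructure *s =
          gst_structure_from_string ("test, value=(string)\"0.5\"", NULL);

      gst_structure_free (d);
      gst_structure_free (ambiguous);
      gst_structure_free (s);
    }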
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2621>
* When dealing with rendition streams, we attempt to synchronize the media
playlist against the variant stream. This helps speed up the search for the
correct initial fragment and avoids issues when streams are activated at a much
later time.
* Also add checks for variant stream existence before attempting to use them
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
When updating playlists, there is a possibility that the playlists don't
perfectly align, but the last entry of the previous playlist is *just* before
the first entry of the new playlist.
In those cases, we can still transfer the timing information from one playlist
to another, but we do not want to return that segment as being the matching one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
When matching playlists, there is a possibility that rendition streams will not
have been updated in time (for example because that stream started later, or
playback was paused). This would cause several playback failures and seeking
failures.
In order to still land on our feet, attempt to synchronize that rendition
playlist against the current variant playlist. This will attempt to match the
stream time using SN/DSN/PDT/...
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
If we have been updating too slowly and have gone out of the current live
window, inform the baseclass accordingly.
This is different from the case where we have been updating quicker than what
the server provides.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
* Since only flushing seeks are allowed, the "current" position is always the
global output position (and not "some" stream current position).
* In terms of figuring out which stream to "snap" to, we can send it to any
selected stream. This removes the function's dependency on a specific output
pad.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
Remove the "pending advance" hack and instead rely on the base stream current
position to track our position (instead of a potentially NULL "current
segment").
Also ensure the media playlists are always refreshed with valid stream time,
even if there is no current segment.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
The stream start and current position would be properly set when seeking or
activating a stream after playback started. But it would never be properly
initialized.
Set it to NONE initially to indicate to subclasses that no position has been
tracked yet. This will allow them to detect initial stream usage.
Furthermore, once the initial stream setup is done, make sure that it is set to
a valid initial value:
* The minimum stream time for live streams
* Or else the period start
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
If the driver does not support the VIDIOC_CREATE_BUFS ioctl, the pool
configuration may get changed, which requires validation. Without that,
activating the pool would fail in a case where it shouldn't normally fail
unless we are out of memory.
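The validation mentioned above can be done with
gst_buffer_pool_config_validate_params(); a hedged sketch of the idea
(the actual v4l2 fix may differ):

    #include <gst/gst.h>

    static gboolean
    updated_config_is_acceptable (GstStructure * updated_config,
        GstCaps * requested_caps, guint requested_size, guint requested_min,
        guint requested_max)
    {
      /* TRUE if the driver-adjusted configuration still satisfies what was
       * originally requested, so activation does not have to fail */
      return gst_buffer_pool_config_validate_params (updated_config,
          requested_caps, requested_size, requested_min, requested_max);
    }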
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2456>
This example code demonstrates D3D11 device sharing between an
application and GStreamer. The application can access the texture
using appsink and render it in its own window without
any copy operation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2646>
Our Direct3D11 abstraction layer has been improved and
is now in good shape from an API point of view.
Also, on Windows, GstD3D11 has various advantages over GstGL
in terms of compatibility/stability/features/performance.
Note that the WGL implementation is known to be buggy for some
drivers/vendors/scenarios (which is one reason why Google implemented ANGLE).
Moreover, GstGL is unfortunately not fully optimized for Windows.
It is time to open this interface to application developers
for various optimized processing using our Direct3D11
infrastructure.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2646>