Before sending EOS, update the period's has_next_period
flag and/or create the next period. This closes a race
where the output loop might receive the EOS event
and either push it downstream (causing premature EOS),
or receive it and try and switch to the next period
before that period is completely set up.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2838>
When combining stream flows, ignore streams that
are not selected, instead of checking whether
the stream state has changed yet.
Fixes another issue with dashdemux2 where it fails to
change to the next period when playing content with
several video, audio and text streams, as with
Manifest_MultiPeriod_1080p.mpd when seeking to 730,
just before the end of the first period.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2838>
The object_is_gst_mini_object() check would sometimes detect a proper
GObject as a mini object, and then bad things happen.
We know whether a pointer is a proper GObject or a MiniObject here
though, so just pass that information to the right code paths and
avoid the heuristics altogether.
Eliminates all remaining uses of object_is_gst_mini_object().
Fixes #1334
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2832>
The object_is_gst_mini_object() check would sometimes detect a proper
GObject as a mini object, and then bad things happen.
We know whether a pointer is a proper GObject or a MiniObject here
though, so just pass that information to the right code paths and
avoid the heuristics altogether.
There are probably more cases where the check should be eliminated.
Fixes #1334, maybe
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2832>
This is based on gtksink but, similar to waylandsink, uses Wayland APIs
directly instead of rendering with Gtk/Cairo primitives.
Note that the long term plan is to move this into the existing extension
in `-good`, which requires the Wayland library to move there as well.
For this reason several files like `gstgtkutils.*` and `gtkgstbasewidget.*`
are straight copies and should be kept in sync.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1515>
This can be important for instance when a container holds multiple
tracks with the same media type, with no indication (e.g. tags) of
which track is the default one.
In that case, players usually pick the first track by default.
This is especially useful when using smart editing with GES, as
it will result in the same ordering as the input file that was
used as a template.
For reference, this yields the same order as ffprobe.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1549>
The previous code stored container children in reverse addition order.
This was mitigated by the fact that track elements were also stored in
reverse order, thus restoring the original order, but it seems more
consistent to preserve order throughout; the extra cost of append
operations is negligible.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1549>
when creating a profile from a discoverer info.
There is no justification for the existing code, and Thibault, when asked,
could not remember why the sort was in place.
On the other hand, this means GES users no longer have to implement a
handler for the select-tracks-for-object signal when using GES to trim a
single clip from which the output profile was built: track elements will
be placed in the appropriate track by default, that is, the one that will
be connected to the matching profile.
For multi-clip timelines the situation doesn't change: users will
still have to implement a callback and do the legwork of placing
track elements (if any) in a matching track (if any).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1549>
chroma-format, bit-depth-chroma and bit-depth-luma are all informative
fields set by the H264 and H265 parsers upon receiving an SPS.
They shouldn't be constrained downstream of the parser; instead, if a
user wants those to ultimately match certain values, they should do so
by constraining a profile.
In this case, however, we also always remove the profile constraint
in order to let encoders pick a suitable one as a function of the
raw input video format and their own capabilities.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1549>
With the 2.72 release, glib-networking developers have decided that
TLS certificate validation cannot be implemented correctly by them, so
they've deprecated it.
In a nutshell: a cert can have several validation errors, but there
are no guarantees that the TLS backend will return all those errors,
and things are made even more complicated by the fact that the list of
errors might refer to certs that are added for backwards-compat and
won't actually be used by the TLS library.
Our best option is to ignore the deprecation and pass the warning on to
users so they can make an appropriate security decision regarding
this.
We can't deprecate the tls-validation-flags property because it is
very useful when connecting to RTSP cameras that will never get
updates to fix certificate errors.
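For illustration, this is roughly how an application could relax
validation for such a camera while keeping the other checks (property
name as on rtspsrc; how far to relax remains the application's own
security decision):

```c
#include <gst/gst.h>
#include <gio/gio.h>

static void
relax_camera_tls_checks (GstElement * rtspsrc)
{
  /* keep all checks except the unknown-CA one; adjust to taste */
  GTlsCertificateFlags flags =
      G_TLS_CERTIFICATE_VALIDATE_ALL & ~G_TLS_CERTIFICATE_UNKNOWN_CA;

  g_object_set (rtspsrc, "tls-validation-flags", flags, NULL);
}
```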
Relevant upstream merge requests / issues:
https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2214
https://gitlab.gnome.org/GNOME/glib-networking/-/issues/179
https://gitlab.gnome.org/GNOME/glib-networking/-/merge_requests/193
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2494>
Forgot to change the wrap type in e0014ef4fe, which broke the
subproject. This wasn't noticed by CI because the subproject cache
wasn't regenerated.
The accompanying patch was included in 2.8.2, so it is no longer needed;
it was originally needed with 2.8.1.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2812>
For formats for which we don't have a fast-path implementation,
compositor will convert them to the common unpack formats (AYUV, ARGB,
AYUV64 and ARGB64), blend using the intermediate format, and finally
convert the blended image back to the selected output format if
required.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1486>
It is entirely possible for the cancellable to be cancelled (and freed)
in gst_rtsp_connection_flush() while there may be an ongoing read/write
operation.
Nothing makes gst_rtsp_connection_flush() wait for the outstanding
reads/writes to finish first.
This could lead to a crash like the following (where the cancellable has
been freed within gst_rtsp_connection_flush()):
#0 0x00007ffff4351096 in g_output_stream_writev (stream=stream@entry=0x7fff30002950, vectors=vectors@entry=0x7ffe2c6afa80, n_vectors=n_vectors@entry=3, bytes_written=bytes_written@entry=0x7ffe2c6af950, cancellable=cancellable@entry=0x7fff300288a0, error=error@entry=0x7ffe2c6af958) at ../subprojects/glib/gio/goutputstream.c:377
#1 0x00007ffff44b2c38 in writev_bytes (stream=0x7fff30002950, vectors=vectors@entry=0x7ffe2c6afa80, n_vectors=n_vectors@entry=3, bytes_written=bytes_written@entry=0x7ffe2c6afb90, block=block@entry=1, cancellable=0x7fff300288a0) at ../subprojects/gst-plugins-base/gst-libs/gst/rtsp/gstrtspconnection.c:1320
#2 0x00007ffff44b583e in gst_rtsp_connection_send_messages_usec (conn=0x7fff30001370, messages=messages@entry=0x7ffe2c6afcc0, n_messages=n_messages@entry=1, timeout=timeout@entry=3000000) at ../subprojects/gst-plugins-base/gst-libs/gst/rtsp/gstrtspconnection.c:2056
#3 0x00007ffff44d2669 in gst_rtsp_client_sink_connection_send_messages (sink=0x7fffac0192c0, timeout=3000000, n_messages=1, messages=0x7ffe2c6afcc0, conninfo=0x7fffac019610) at ../subprojects/gst-rtsp-server/gst/rtsp-sink/gstrtspclientsink.c:1929
#4 gst_rtsp_client_sink_try_send (sink=sink@entry=0x7fffac0192c0, conninfo=conninfo@entry=0x7fffac019610, requests=requests@entry=0x7ffe2c6afcc0, n_requests=n_requests@entry=1, response=response@entry=0x0, code=code@entry=0x0) at ../subprojects/gst-rtsp-server/gst/rtsp-sink/gstrtspclientsink.c:2845
#5 0x00007ffff44d3077 in do_send_data (buffer=0x7fff38075c60, channel=<optimized out>, context=0x7fffac042640) at ../subprojects/gst-rtsp-server/gst/rtsp-sink/gstrtspclientsink.c:3896
#6 0x00007ffff4281cc6 in gst_rtsp_stream_transport_send_rtp (trans=trans@entry=0x7fff20061f80, buffer=<optimized out>) at ../subprojects/gst-rtsp-server/gst/rtsp-server/rtsp-stream-transport.c:632
#7 0x00007ffff4278e9b in push_data (stream=0x7fff40019bf0, is_rtp=<optimized out>, buffer_list=0x0, buffer=<optimized out>, trans=0x7fff20061f80) at ../subprojects/gst-rtsp-server/gst/rtsp-server/rtsp-stream.c:2586
#8 check_transport_backlog (stream=0x7fff40019bf0, trans=0x7fff20061f80) at ../subprojects/gst-rtsp-server/gst/rtsp-server/rtsp-stream.c:2645
#9 0x00007ffff42793b3 in send_tcp_message (idx=<optimized out>, stream=0x7fff40019bf0) at ../subprojects/gst-rtsp-server/gst/rtsp-server/rtsp-stream.c:2741
#10 send_func (stream=0x7fff40019bf0) at ../subprojects/gst-rtsp-server/gst/rtsp-server/rtsp-stream.c:2776
#11 0x00007ffff7d59fad in g_thread_proxy (data=0x7fffbc062920) at ../subprojects/glib/glib/gthread.c:827
#12 0x00007ffff7a8ce2d in start_thread () from /lib64/libc.so.6
#13 0x00007ffff7b12620 in clone3 () from /lib64/libc.so.6
Fix by adding a cancellable lock and taking an extra reference that is
used across each read/write operation. gst_rtsp_connection_flush() can
then free the in-use cancellable without affecting any in-progress
read/write.
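A minimal sketch of the scheme (illustrative names, not the actual
gstrtspconnection.c code):

```c
#include <gio/gio.h>

typedef struct
{
  GMutex cancellable_lock;
  GCancellable *cancellable;
} ConnState;

/* Called at the start of each read/write: take an extra ref under the
 * lock so a concurrent flush can replace/unref the cancellable without
 * it being freed while the I/O operation is still using it. */
static GCancellable *
conn_state_get_cancellable (ConnState * state)
{
  GCancellable *ret;

  g_mutex_lock (&state->cancellable_lock);
  ret = state->cancellable ? g_object_ref (state->cancellable) : NULL;
  g_mutex_unlock (&state->cancellable_lock);

  return ret;   /* the read/write path unrefs it when done */
}
```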
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2799>
1.21.0.1 should not satisfy a check for 1.22.0.
If someone needs more control they should do a feature check for
the symbol in the headers or lib.
Based on a similar patch by Tim-Philipp Müller for libnice.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2501>
The caps negotiation should respect the selected method so that the test pipeline below works properly:
gst-launch-1.0 videotestsrc ! video/x-raw,width=320,height=600 ! videoflip method=clockwise ! video/x-raw,width=600,height=320 ! fakesink
Signed-off-by: Adrian Fiergolski <adrian.fiergolski@fastree3d.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2803>
In trick mode, the driver may queue a valid buffer followed by an
empty buffer (with no valid data) to indicate EOS. For an empty
buffer whose memory is multi-plane, we need to resize it before
unreferencing it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2731>
Depending on the device feature level, the d3d11 runtime can support
ID3D11Fence, which is equivalent to ID3D12Fence.
Waiting on a fence is more performant than polling
ID3D11Query status. If ID3D11Fence is not supported by the device,
ID3D11Query will be used instead.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2790>
4x downscaling of chroma with co-sited chroma has never worked,
it seems.
Fixes incorrect videotestsrc output and videoconvert conversions
to Y41B, YUV9, YVU9 and IYU9 with co-sited chroma.
e.g.
gst-launch-1.0 videotestsrc ! video/x-raw,format=Y41B,width=1280,height=720 ! \
videoconvert ! autovideosink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2789>
It may happen that the bitstream doesn't provide the SPS in decoding
order (like in the VPSSPSPPS_A_MainConcept_1 conformance test file).
To be sure that the decoder gets the correct SPS parameters, process
the SPS just before starting to decode the frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2575>
Where possible, defer computation of PPS and SPS fields until
slice parsing, since it may happen that bitstreams don't encode
them in the expected order.
An example of such a weirdly ordered bitstream is the
VPSSPSPPS_A_MainConcept_1 conformance test.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2575>
The function g_array_sized_new() leaves len at 0, but the slice
implementation assumed it would be set to 4. Sending multiple slices is
not yet supported for H.264 as no driver has needed it yet, but if that
code were used it would have overflowed, as the array would never grow:
multiplying 0 by 2 always results in 0.
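For reference, a standalone illustration of the GArray behaviour in
question (plain GLib, not the decoder code itself):

```c
#include <glib.h>

int
main (void)
{
  /* reserves capacity for 4 elements, but len stays 0 */
  GArray *a = g_array_sized_new (FALSE, TRUE, sizeof (guint32), 4);

  g_assert_cmpuint (a->len, ==, 0);
  g_assert_cmpuint (a->len * 2, ==, 0);   /* doubling 0 never grows anything */

  g_array_set_size (a, 4);                /* this is what actually sets len */
  g_assert_cmpuint (a->len, ==, 4);

  g_array_unref (a);
  return 0;
}
```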
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1079>
And also don't assert that there are no buffers queued up when handling
an EOS event. The pad's streaming thread might've already received a new
stream-start event and queued up a buffer in the meantime.
This still leaves a race condition where the srcpad task sees all pads
in EOS state and finishes the stream, while shortly afterwards a pad
might receive a stream-start event again, but this doesn't seem to be
solvable with the current aggregator design.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2769>
SMPTE 170M and 240M use the same RGB and white point coordinates
and therefore both primaries can be considered functionally
equivalent.
Also, some transfer functions have different names but equal
gamma functions. Add another colorimetry compare function
to deal with those cases at once.
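Illustratively, the primaries part amounts to something like this (the
helper name here is made up, not necessarily the API that was added):

```c
#include <gst/video/video.h>

static gboolean
primaries_functionally_equal (GstVideoColorPrimaries a,
    GstVideoColorPrimaries b)
{
  if (a == b)
    return TRUE;

  /* SMPTE 170M and 240M share the same RGB and white point coordinates */
  if ((a == GST_VIDEO_COLOR_PRIMARIES_SMPTE170M &&
          b == GST_VIDEO_COLOR_PRIMARIES_SMPTE240M) ||
      (a == GST_VIDEO_COLOR_PRIMARIES_SMPTE240M &&
          b == GST_VIDEO_COLOR_PRIMARIES_SMPTE170M))
    return TRUE;

  return FALSE;
}
```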
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2765>
In file included from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/gstreamer-1.0/gst/gl/gstglfuncs.h:87,
from ../gst-plugins-good-1.20.3/ext/qt/qtglrenderer.cc:14:
../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/gstreamer-1.0/gst/gl/glprototypes/gstgl_compat.h:40:18: error: conflicting declaration 'typedef void* GLsync'
40 | typedef gpointer GLsync;
| ^~~~~~
In file included from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtGui/qopengl.h:127,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qsggeometry.h:44,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qsgnode.h:43,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qsgrendererinterface.h:43,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/qquickwindow.h:44,
from ../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtQuick/QQuickWindow:1,
from ../gst-plugins-good-1.20.3/ext/qt/qtglrenderer.cc:6:
../gstreamer1.0-plugins-good/1.20.3-r0/recipe-sysroot/usr/include/QtGui/qopengles2ext.h:24:26: note: previous declaration as 'typedef struct __GLsync* GLsync'
24 | typedef struct __GLsync *GLsync;
| ^~~~~~
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2763>
These patches are taken from upstream, and they fix compile failures
with the latest clang. They can be dropped when upgrading these wraps.
This is currently causing a warning because we do not require the
version of meson that ships with this feature: 0.63.0. The version has
not been bumped because older Meson versions gracefully ignore the
wrap field, this fix is optional and only needed on macOS, and 0.63.0
is a very new release with a bug that partially breaks this feature:
https://github.com/mesonbuild/meson/pull/10602
We can consider bumping the requirement once 0.63.1 is released.
Also switch from git to tarballs, no reason to use git here anymore.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2761>
We should move this functionality to gst-libs so that GstD3D11Converter
can be moved to gst-libs.
Another advantage is that applications can call our
HLSL compiler wrapper method without worrying about the
OS-version-dependent, system-installed HLSL library.
Note that there are multiple HLSL compiler library versions
on Windows, and the system-installed one depends on the OS version.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2760>
We need GStreamer elements to do the bandwidth estimation as this way
they can also control the pacing of the transmission flow, as specified
for example in the [GCC] algorithm.
Bandwidth estimator elements are placed right before the "RTPSession" as
an "rtp-aux-sender" element. This way they can use the "Transport-wide
Congestion Control" RTCP feedback messages through the "RTPTwcc" custom
events that are sent by the rtpsession.
Applications are responsible for reacting to the bandwidth estimator
element and setting the encoder target bitrate etc., which means that we
cannot pass an estimator as an element factory, so a signal has been
chosen instead.
[GCC]: https://datatracker.ietf.org/doc/html/draft-ietf-rmcat-gcc-02
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2562>
Raw memory upload should always be the least preferred input
caps, only added by the raw memory uploader as the last thing
in the caps.
Caps negotiation should still choose raw data when it needs to,
and other upload methods that can accept raw data buffers will still do so.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2725>
The AVClass name of Animated PNG in FFmpeg 5.x is "(A)PNG"
and it will be converted to "-a-png" through
g_ascii_strdown() and g_strcanon(). But GLib disallows a leading '-'
character in a GType name, so strip the leading '-' to work around this.
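Roughly what happens, as a self-contained sketch (the real code lives in
gst-libav and may differ in detail):

```c
#include <glib.h>

static gchar *
sanitize_type_name (const gchar * avclass_name)
{
  /* "(A)PNG" -> "(a)png" -> "-a-png" */
  gchar *name = g_ascii_strdown (avclass_name, -1);

  g_strcanon (name, "abcdefghijklmnopqrstuvwxyz0123456789", '-');

  /* GType names may not start with '-', so strip it */
  if (name[0] == '-') {
    gchar *stripped = g_strdup (name + 1);
    g_free (name);
    name = stripped;
  }

  return name;
}
```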
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2724>
There might be an event and buffer flow sequence like this:
- Got stream-start/caps/segment events
- Got flush events
- And then buffers with a new segment event
In the above case, the stream-start and caps events might not reach the
peer proxysrc if it is not ready to receive them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1552>
gst_video_convert_scale_get_fixed_format() receives 'othercaps' from
basetransform's fixate_caps() vmethod, which explicitly mentions that
'`othercaps` may not be writable'.
The gst_caps_intersect() call just before may or may not produce new
caps. Particularly in cases like EMPTY or ANY caps on either of the
inputs, only a ref is taken and returned to the caller.
As a result, gst_video_convert_scale_fixate_format() may have attempted
to modify a non-writable caps structure.
Fix by adding a gst_caps_make_writable().
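The pattern, sketched (illustrative, not the exact element code):

```c
#include <gst/gst.h>

static GstCaps *
intersect_then_fixate (GstCaps * caps, GstCaps * othercaps)
{
  GstCaps *result = gst_caps_intersect (othercaps, caps);

  /* 'result' may be a mere ref of one of the inputs (e.g. for EMPTY/ANY),
   * so it has to be made writable before its structures are modified */
  result = gst_caps_make_writable (result);

  /* now modifying it is safe, e.g.: */
  gst_caps_set_simple (result, "format", G_TYPE_STRING, "I420", NULL);

  return result;
}
```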
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2709>
- ssl module requires an explicit TLS_SERVER role
- asyncio throws a deprecation warning when using
asyncio.get_event_loop(). Remove custom event loop handling entirely
- No need to keep the websocket server in a member variable, can use
a future to signal exit case along with the async with context manager
of websockets.serve()
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2698>
... and don't use GstD3D11VideoProcessor. Now GstD3D11Converter will
be able to convert using videoprocessor, and texture upload is also supported by
GstD3D11Converter. All the noisy code can therefore be removed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2697>
* Add videoprocessor feature to d3d11converter, in order to unify the
conversion flow.
* Add convert_buffer() method to support automatic shader/videoprocessor
selection. The method also supports texture upload if the input memory
cannot be used for conversion (e.g., system memory).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2697>
There's no need to re-assign the return value of
g_string_append_*() functions and such to the variable
holding the GString. These return values are just for
convenience so function calls can be chained. The actual
GString pointer won't change; it's not a GList after all.
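For comparison, a tiny standalone example:

```c
#include <glib.h>

int
main (void)
{
  GString *s = g_string_new ("foo");
  GList *l = NULL;

  g_string_append (s, "bar");      /* the GString pointer never changes */
  l = g_list_append (l, "baz");    /* a GList head *does* need reassignment */

  g_print ("%s\n", s->str);        /* prints "foobar" */

  g_string_free (s, TRUE);
  g_list_free (l);
  return 0;
}
```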
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2685>
This reverts commit 6f9ae5d758.
The _transform_caps() function can't tell whether the caller
wants the output caps for the current method or
all possible output caps. If
it includes caps for all possible methods, glupload can
end up negotiating and sending the wrong output caps
downstream.
Partially reverts !2687
Fixes #1310
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2699>
Adding nvautogpu{h264,h265}enc classes which will accept an upstream
logical GPU device object (GstCudaContext or GstD3D11Device) instead of
using a pre-assigned GPU instance.
If the upstream logical GPU device object is not NVENC compatible
(e.g., a D3D11 device of a non-NVIDIA GPU) or the upstream memory is
system memory, then the user-specified "cuda-device-id" or "adapter-luid"
property will be used for GPU device selection.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2666>
GstCudaMemory already supports CPU access via CUDA pinned host memory,
which gives faster memory transfers between
GPU and CPU than copying from/to normal system memory.
If downstream supports video meta, we can pass CUDA memory through.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2690>
If no filter caps are provided with a caps query, always
generate a full set of all caps from all upload methods,
not just the configured one. This is needed to handle
renegotiation when dealing with raw sysmem caps - as the upload
method might accept raw sysmem caps, but only the raw data
uploader adds those to the caps query.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
This reverts commit f3292dc156.
Only the raw data uploader should add sysmem caps to the
actual caps query, because we want them to be at the
lowest priority. If upstream does select to send raw
caps, then the correct upload method will still
be chosen because the accept_caps implementation
will accept them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
When checking if we need to reconfigure when uploading, check
specifically whether the output caps of the current method will
result in compatible/incompatible caps, not the full set
of output caps from all upload methods.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
Performs crop, scale, and color space conversion all in
a single render pipeline. Note that no cropping-related properties are
added to this element (they would make negotiation very complicated),
but the user can configure a videocrop element so that crop meta is
attached to each buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2678>
Fixes warnings like:
Received a structure string that contains '="0.5"'. Reading as a gdouble value, rather than a string value. This is undesired behaviour, and with GStreamer 1.22 onward, this will be interpreted as a string value instead because it is wrapped in '"' quotes. If you want to guarantee this value is read as a string, before this change, use '=(string)"0.5"' instead. If you want to read in a gdouble value, leave its value unquoted.
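For illustration (plain GstStructure parsing, unrelated to the specific
caller that was fixed), the two ways the value can be written:

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* explicitly typed: always read as a string, no warning */
  GstStructure *as_string =
      gst_structure_from_string ("test, alpha=(string)\"0.5\"", NULL);
  /* unquoted: read as a gdouble */
  GstStructure *as_double =
      gst_structure_from_string ("test, alpha=0.5", NULL);

  gst_structure_free (as_string);
  gst_structure_free (as_double);
  return 0;
}
```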
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2621>
* When dealing with rendition streams, we attempt to synchronize the media
playlist against the variant stream. This helps with speeding up the correct
initial fragment search and avoids issues when streams are activated at a
much later time.
* Also add checks for variant stream existence before attempting to use them
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
When updating playlists, there is a possibility that the playlists don't
perfectly align, but the last entry of the previous playlist is *just* before
the first entry of the new playlist.
In those cases, we can still transfer the timing information from one
playlist to another, but we do not want to return that segment as being
the matching one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
When matching playlists, there is a possibility that rendition streams will not
have been updated in time (for example because that stream started later, or
playback was paused). This would cause several playback failures and seeking
failures.
In order to still land on our feet, attempt to synchronize that rendition
playlist against the current variant playlist. This will attempt to match the
stream time using SN/DNS/PDT/...
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
If we have been updating too slowly and have gone out of the current live
window, inform the baseclass accordingly.
This is different from the case where we have been updating quicker than what
the server provides.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
* Since only flushing seeks are allowed, the "current" position is always the
global output position (and not "some" stream current position).
* In terms of figuring out which stream to "snap" to, we can send it to any
selected stream. This removes the need to tie this function to a specific
output pad.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
Remove the "pending advance" hack and instead rely on the base stream current
position to track our position (instead of a potentially NULL "current
segment").
Also ensure the media playlists are always refreshed with valid stream time,
even if there is no current segment.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
The stream start and current position would be properly set when seeking or
activating a stream after playback started. But it would never be properly
initialized.
Set it to NONE initially to indicate to subclasses that no position has been
tracked yet. This will allow them to detect initial stream usage.
Furthermore, once the initial stream setup is done, make sure that it is
set to a valid initial value:
* The minimum stream time in live
* Or else the period start
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
If the driver does not support the VIDIOC_CREATE_BUFS ioctl, the pool
configuration may get changed, which requires validation. This would
fail to activate the pool in a case where it shouldn't normally fail
unless we are out of memory.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2456>
This example code demonstrates D3D11 device sharing between an
application and GStreamer. The application can access the texture
using appsink, and it can be rendered on the application's window
without any copy operation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2646>
Our Direct3D11 abstraction layer has been improved and
is now in good shape from an API point of view.
Also, on Windows, GstD3D11 has various advantages over GstGL
in terms of compatibility/stability/features/performance.
Note that the WGL implementation is known to be buggy for some
drivers/vendors/scenarios (one reason why Google implemented ANGLE).
Moreover, GstGL is unfortunately not fully optimized for Windows.
It's time to open this interface to application developers
for various optimized processing using our Direct3D11
infrastructure.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2646>
This patch adds a general mechanism for handling driver-specific hacks,
in this case for the JPEG decoder in the i965 driver, which cannot
create surfaces with a fourcc specified.
From the JPEG decoder to the allocator, which creates the surfaces,
there's a non-trivial path: the basedec pseudo-class adds a hacks
guint32 which will be set by the actual elements (vajpegdec, in this
case), and basedec will always pass the hack to the allocator when the
allocator is instantiated.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
Given the supported rt formats in a profile/entrypoint config it's
possible to know the supported JPEG colorspace and subsampling. This
patch adds this information to the coded caps for safer autoplugging
after jpegparser.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
This base class is intended for hardware-accelerated decoders, but since
only VA uses it, it will be kept internal to the va plugin.
It follows the same logic as the other video decoders in the library but,
as JPEGs are independent images, there's no need to handle a DPB, so no
need for a picture object. Instead, a scan object is added with all the
structures required to decode the image (huffman and quant tables, MCUs,
etc.).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
Gallium drivers historically have reported strange dmabuf sizes, from always
zero to the whole frame (multiple fds). The simplest solution is to use lseek
SEEK_END to get the prime descriptor size.
Also, the allocator raises a warning if both values differ, in order to
report it to the driver.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2574>
If compiled with -Dgstreamer:gst_debug=false and with
GST_REMOVE_DISABLED defined, we will get the following linker error:
```
[...]/libgstreamer-1.0.so.0.2100.0.p/gst.c.o: in function `gst_deinit':
[...]/gst/gst.c:1258: undefined reference to `_priv_gst_debug_cleanup'
[...] hidden symbol `_priv_gst_debug_cleanup' isn't defined
```
Add the missing define guard to avoid this.
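For illustration, the guard amounts to something like this (a sketch
around the call in gst_deinit(), assuming the usual
GST_DISABLE_GST_DEBUG conditional):

```c
#ifndef GST_DISABLE_GST_DEBUG
  _priv_gst_debug_cleanup ();
#endif
```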
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2648>
The latency messages are non-deterministic and can arrive before/after
async-done or during state-changes as they are posted by e.g. sinks from
their streaming thread but bins are finishing asynchronous state changes
from a secondary helper thread.
To solve this, expect latency messages at any time and assert that we
receive one at some point during the test.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2643>
Commit b90d0274 introduced uninitialized width and height when we
consider changing the "pixel-aspect-ratio" for some interlaced streams.
We need to check the resolution in the src caps, and if no resolution
info is found, there is no need to consider the aspect ratio.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2630>
Previously it was only possible to request them with the exact template
name, e.g. 'src_%s', but not with "instantiated" names that would match
this template, e.g. 'src_foo_bar'.
This is now possible and a test was added for this, in addition to
fixing a previously invalid test.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2635>
Removing the glvideomixer-like design (it was used as the initial
reference) and rewriting the element, since that design is not optimal
at all from a performance point of view.
* Remove the wrapper bin (and internal conversion/upload/download
elements), which wastes CPU/GPU resources. Conversion/blending can be
done by the d3d11compositor element at once.
* Add support for YUV blending without RGB conversion.
The RGB <-> YUV conversion is completely unnecessary since YUV textures
support blending as well.
* Remove the complicated blending operation properties since they are
hard to use from an application point of view. Instead, add an
"operator" property like the one the compositor element has.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2631>
Similar to and inspired by glimagesink and gtkglsink.
Using the Wayland buffer transform API allows offloading
rotate operations to the Wayland compositor. This can have
several advantages:
- The Wayland compositor may be able to use hardware plane
capabilities to do the rotation.
- In case of pre-rotated content on rotated outputs the
rotations may cancel out, potentially allowing the
compositor to use hardware planes even if they don't
support rotate operations.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2543>