... and don't use GstD3D11VideoProcessor. GstD3D11Converter is now able to
convert using the video processor, and texture upload is also supported by
GstD3D11Converter. All the noisy code can therefore be removed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2697>
* Add videoprocessor feature to d3d11converter, in order to unify
conversion flow.
* Add convert_buffer() method to support automatic shader/videoprocessor
selection. The method also supports texture upload if input memory
cannot be used for conversion (e.g., system memory)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2697>
There's no need to re-assign the return value of
g_string_append_*() functions and such to the variable
holding the GString. These return values are just for
convenience so function calls can be chained. The actual
GString pointer won't change, it's not a GList after all.
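For illustration, a minimal sketch of the pattern being cleaned up
(the variable name is hypothetical):
```c
/* Before: the return value exists only so calls can be chained */
str = g_string_append (str, "foo");
str = g_string_append_c (str, '!');

/* After: the GString pointer itself never changes */
g_string_append (str, "foo");
g_string_append_c (str, '!');
```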
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2685>
This reverts commit 6f9ae5d758.
The _transform_caps() function can't tell the difference
between the caller wanting to know the output caps
for the current method, or all possible output caps. If
it includes caps for all possible methods, glupload can
end up negotiating and sending the wrong output caps
downstream.
Partially reverts !2687
Fixes #1310
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2699>
Adding nvautogpu{h264,h265}enc classes which will accept an upstream logical
GPU device object (GstCudaContext or GstD3D11Device) instead of
using a pre-assigned GPU instance.
If the upstream logical GPU device object is not NVENC-compatible
(e.g., a D3D11 device of a non-NVIDIA GPU) or the input is system memory,
then the user-specified "cuda-device-id" or "adapter-luid" property
will be used for GPU device selection.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2666>
GstCudaMemory already supports CPU access via CUDA pinned host memory,
which gives faster memory transfer performance between
GPU and CPU than copying from/to normal system memory.
If downstream supports video meta, we can passthrough CUDA memory.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2690>
If no filter caps are provided with a caps query, always
generate a full set of all caps from all upload methods,
not just the configured one. This is needed to handle
renegotiation when dealing with raw sysmem caps - as the upload
method might accept raw sysmem caps, but only the raw data
uploader adds those to the caps query.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
This reverts commit f3292dc156.
Only the raw data uploader should add sysmem caps to the
actual caps query, because we want them to be at the
lowest priority. If upstream does decide to send raw
caps, then the correct upload method will still
be chosen because the accept_caps implementation
will accept them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
When checking if we need to reconfigure when uploading, check
specifically whether the output caps of the current method will
result in compatible/incompatible caps, not the full set
of output caps from all upload methods.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2687>
Performs crop, scale, and color space conversion all in
a single render pipeline. Note that no cropping-related property is
added to this element (it would make negotiation very complicated);
instead, the user can configure the videocrop element so that crop meta
is attached to each buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2678>
Fixes warnings like:
Received a structure string that contains '="0.5"'. Reading as a gdouble value, rather than a string value. This is undesired behaviour, and with GStreamer 1.22 onward, this will be interpreted as a string value instead because it is wrapped in '"' quotes. If you want to guarantee this value is read as a string, before this change, use '=(string)"0.5"' instead. If you want to read in a gdouble value, leave its value unquoted.
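For illustration, a minimal sketch of the ambiguity being warned about
(the structure and field names here are made up):
```c
#include <gst/gst.h>

static void
quoting_example (void)
{
  /* Ambiguous: '="0.5"' is read as a gdouble today, but will be read as a
   * string from GStreamer 1.22 on because of the quotes */
  GstStructure *ambiguous =
      gst_structure_from_string ("props, ratio=\"0.5\"", NULL);

  /* Unambiguous: leave numeric values unquoted ... */
  GstStructure *as_double =
      gst_structure_from_string ("props, ratio=0.5", NULL);

  /* ... or type them explicitly when a string is really wanted */
  GstStructure *as_string =
      gst_structure_from_string ("props, ratio=(string)\"0.5\"", NULL);

  gst_structure_free (ambiguous);
  gst_structure_free (as_double);
  gst_structure_free (as_string);
}
```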
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2621>
* When dealing with rendition streams, we attempt to synchronize the media
playlist against the variant stream. This helps with speeding up the correct
initial fragment search and avoids issues when streams are activated at a much
later time.
* Also add checks for variant stream existence before attempting to use them
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
When updating playlists, there is a possibility that the playlists don't
perfectly align, but the last entry of the previous playlist is *just* before
the first entry of the new playlist.
In those cases, we still can transfer the timing information from one playlist
to another, but we do not want to return that segment as being the matching one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
When matching playlists, there is a possibility that rendition streams will not
have been updated in time (for example because that stream started later, or
playback was paused). This would cause several playback failures and seeking
failures.
In order to still land on our feet, attempt to synchronize that rendition
playlist against the current variant playlist. This will attempt to match the
stream time using SN/DNS/PDT/...
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
If we have been updating too slowly and have gone out of the current live
window, inform the baseclass accordingly.
This is different from the case where we have been updating quicker than what
the server provides.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
* Since only flushing seeks are allowed, the "current" position is always the
global output position (and not "some" stream current position).
* In terms of figuring out which stream to "snap" to, we can send it to any
selected stream. This removes the requirement for this function to be tied to
a specific output pad.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
Remove the "pending advance" hack and instead rely on the base stream current
position to track our position (instead of a potentially NULL "current
segment").
Also ensure the media playlists are always refreshed with valid stream time,
even if there is no current segment.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
The stream start and current position would be properly set when seeking or
activating a stream after playback started. But it would never be properly
initialized.
Set it to NONE initially to indicate to subclasses that no position has been
tracked yet. This will allow them to detect initial stream usage.
Furthermore, once the initial stream setup is done, make sure that it is set to
a valid initial value:
* The minimum stream time when live
* Or else the period start
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2679>
If the driver does not support the VIDIOC_CREATE_BUFS ioctl, the pool
configuration may get changed, which requires validation. This could
fail pool activation in a case where it shouldn't normally fail, unless
we are out of memory.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2456>
This example code demonstrates D3D11 device sharing between
an application and GStreamer. The application can access the texture
using appsink and it can be rendered on the application's window without
any copy operation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2646>
Our Direct3D11 abstraction layer has been improved and
has gained good shape from an API point of view.
Also, on Windows, GstD3D11 has various advantages over GstGL
in terms of compatibility/stability/features/performance.
Note that the WGL implementation is known to be buggy for some
drivers/vendors/scenarios (that's one reason why Google implemented ANGLE).
Moreover, GstGL is unfortunately not fully optimized for Windows.
It's time to open this interface to application developers
for various optimized processing using our Direct3D11
infrastructure.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2646>
This patch adds a general mechanism for handling specific hacks. In this
case for the JPEG decoder in the i965 driver, which cannot create surfaces
with the fourcc specified.
From the JPEG decoder to the allocator, which creates the surfaces,
there's a non-simple path: the basedec pseudo-class adds a hacks guint32
which will be set by the actual elements (vajpegdec, in this case) and
basedec will always pass the hack to the allocator when the allocator
is instantiated.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
Given the supported rt formats in a profile/entrypoint config it's
possible to know the supported JPEG colorspace and subsampling. This
patch adds this information to the coded caps for safer autoplugging
after jpegparser.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
This base class is intended for hardware accelerated decoders, but since
only VA uses it, it will be kept internally in the va plugin.
It follows the same logic as the other video decoders in the library, but
as JPEGs are independent images, there's no need to handle a DPB, so there's
no need for a picture object. Instead a scan object is added with all the structures
required to decode the image (huffman and quant tables, mcus, etc.).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1575>
Gallium drivers historically have reported strange dmabuf sizes, from always
zero to the whole frame (multiple fds). The simplest solution is to use lseek
SEEK_END to get the prime descriptor size.
Also the allocator raises a warning if both values differ, in order to report
it to the driver.
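A minimal sketch of the size query described above (the fd is assumed to be
a prime/dmabuf descriptor):
```c
#include <unistd.h>
#include <sys/types.h>

/* Sketch: query the dmabuf size directly from the prime descriptor */
static off_t
dmabuf_size (int fd)
{
  off_t size = lseek (fd, 0, SEEK_END);

  /* rewind so later users of the fd are not affected */
  lseek (fd, 0, SEEK_SET);
  return size;
}
```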
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2574>
If compiled with -Dgstreamer:gst_debug=false and we have
GST_REMOVE_DISABLED defined we will get the following compiler error:
```
[...]/libgstreamer-1.0.so.0.2100.0.p/gst.c.o: in function `gst_deinit':
[...]/gst/gst.c:1258: undefined reference to `_priv_gst_debug_cleanup'
[...] hidden symbol `_priv_gst_debug_cleanup' isn't defined
```
Add the missing define guard to avoid this.
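Roughly the shape of the fix (a sketch; the exact guard used in gst.c may differ):
```c
/* gst_deinit(): only reference the debug-system cleanup symbol when the
 * debug system is compiled in */
#ifndef GST_DISABLE_GST_DEBUG
  _priv_gst_debug_cleanup ();
#endif
```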
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2648>
The latency messages are non-deterministic and can arrive before/after
async-done or during state-changes as they are posted by e.g. sinks from
their streaming thread but bins are finishing asynchronous state changes
from a secondary helper thread.
To solve this, expect latency messages at any time and assert that we
receive one at some point during the test.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2643>
Commit b90d0274 introduced uninitialized width and height when we
consider changing the "pixel-aspect-ratio" for some interlaced streams.
We need to check the resolution in the src caps, and if no resolution
info is found, there is no need to consider the aspect ratio.
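A minimal sketch of the added check (variable names are illustrative):
```c
gint width, height;
GstStructure *s = gst_caps_get_structure (src_caps, 0);

/* Only consider tweaking the pixel-aspect-ratio when the src caps
 * actually carry a resolution */
if (gst_structure_get_int (s, "width", &width) &&
    gst_structure_get_int (s, "height", &height)) {
  /* ... recompute the pixel-aspect-ratio for the interlaced stream ... */
}
```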
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2630>
Previously it was only possible to request them with the exact template
name, e.g. 'src_%s', but not with "instantiated" names that would match
this template, e.g. 'src_foo_bar'.
This is now possible and a test was added for this, in addition to
fixing a previously invalid test.
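For illustration, assuming the request-by-name path
(gst_element_request_pad_simple()) is the one being exercised, both of
these should now work on an element exposing a "src_%s" template (the
element variable is hypothetical):
```c
/* Request by exact template name (worked before) ... */
GstPad *a = gst_element_request_pad_simple (element, "src_%s");

/* ... and by an instantiated name matching the template (works now) */
GstPad *b = gst_element_request_pad_simple (element, "src_foo_bar");
```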
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2635>
Removing glvideomixer-like nuance (it was initially referenced)
and rewriting the element since it's not an optimal design at all
from a performance point of view.
* Remove the wrapper bin (and internal conversion/upload/download elements)
which wastes CPU/GPU resources. Conversion/blending can be done by the
d3d11compositor element at once.
* Add support for YUV blending without RGB conversion.
The RGB <-> YUV conversion is completely unnecessary since YUV textures
support blending as well.
* Remove the complicated blending operation properties since they are hard
to use from an application point of view. Instead, add an "operator" property
like what the compositor element does.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2631>
Similar to and inspired by glimagesink and gtkglsink.
Using the Wayland buffer transform API allows offloading
rotate operations to the Wayland compositor. This can have
several advantages:
- The Wayland compositor may be able to use hardware plane
capabilities to do the rotation.
- In case of pre-rotated content on rotated outputs the
rotations may cancel out, potentially allowing the
compositor to use hardware planes even if they don't
support rotate operations.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2543>
AVC and HEVC define a crop rectangle, and the x/y coordinates might
not be zero. This commit addresses non-zero x/y offset coordinates
via GstVideoCropMeta if downstream supports the meta and D3D11 memory.
Otherwise the decoder will copy the decoded texture into the output frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2624>
Some mpeg-ts streams have extra data at the beginning. While it's not ideal, we
should be able to cope with it.
Therefore increase the initial search window for at least 4 consecutive
synchronization points to 1kB.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2626>
This allows the reception of streams that don't exactly match
the codec preferences. In particular, the ssrc in the codec preferences
is the local sender SSRC; the other side is expected to send a different SSRC.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2615>
Some encoders (e.g. Makito) have H265 field-based interlacing, but then
also specify a 1:2 pixel aspect ratio. That makes it kind-of work with
decoders that don't properly support field-based decoding, but makes us
end up with the wrong aspect ratio if we implement everything properly.
As a workaround, detect 1:2 pixel aspect ratio for field-based
interlacing, and check if making that 1:1 would make the new display
aspect ratio common. In that case, we override it with 1:1.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2577>
Various variables were of smaller types than needed and there were no
checks for any overflows when doing additions on the sizes. This is all
checked now.
In addition the size of the decompressed data is limited to 200MB now as
any larger sizes are likely pathological and we can avoid out of memory
situations in many cases like this.
Also fix a bug where the available output size on the next iteration in
the zlib decompression code was provided too large and could
potentially lead to out of bound writes.
Thanks to Adam Doupe for analyzing and reporting the issue.
CVE: tbd
https://gstreamer.freedesktop.org/security/sa-2022-0003.html
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1225
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2610>
Various variables were of smaller types than needed and there were no
checks for any overflows when doing additions on the sizes. This is all
checked now.
In addition the size of the decompressed data is limited to 120MB now as
any larger sizes are likely pathological and we can avoid out of memory
situations in many cases like this.
Also fix a bug where the available output size on the next iteration in
the zlib/bz2 decompression code was provided too large and could
potentially lead to out of bound writes.
Thanks to Adam Doupe for analyzing and reporting the issue.
CVE: CVE-2022-1922, CVE-2022-1923, CVE-2022-1924, CVE-2022-1925
https://gstreamer.freedesktop.org/security/sa-2022-0002.html
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1225
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2610>
Uses prelude header files with #defines to rename DASH and MSS
symbols duplicated in their old standalone versions.
Also redefines soup-related functions when building it for
adaptivedemux2 to prevent symbol conflicts there.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2534>
Currently, video format is decided with downstream caps intersection,
but that's not correct since chroma is not considered. The video
decoders have to decide the output format given the used chroma, not
by the downstream caps negotiation.
This patch changes that. Still, caps feature is selected by caps
negotiation, then, with the preferred caps feature, the output format
is search within that caps feature.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2569>
The mq we get out of the weak ref might be NULL if we're
shutting down, which could cause assertion failures or
crashes.
It might also cause miscompilations where the compiler just
optimises away the NULL check because it jumps to a code path
that then dereferences the pointer which clearly isn't going
to work. Seems like something like this happens with gcc 11.
Fixes #1262
Co-authored-by: Doug Nazar <nazard@nazar.ca>
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2599>
Fixes:
../plugins/elements/gstmultiqueue.c: In function ‘gst_multi_queue_loop’:
../plugins/elements/gstmultiqueue.c:2394:19: warning: ‘is_query’ may be used uninitialized in this function [-Wmaybe-uninitialized]
2394 | if (object && !is_query)
| ^~~~~~~~~
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2593>
Rewriting GstD3D11Converter (equivalent to GstVideoConverter)
to optimize some conversion path and clean up.
* Extract YUV <-> RGB conversion matrix building method to
utils. It will be used by other implementations
* Use calculated offset values for YCbCr <-> YPbPr conversion
instead of hardcoded values
* Handle color range adjustment
* Move transform matrix building helper function to utils.
The method will be used by other elements
* Use a single constant buffer. Multiple constant buffers for the
conversion pipeline are almost pointless
* Remove lots of duplicated HLSL code and split pixel shader
code path into sampling -> colorspace conversion ->
shader output packing
* Avoid floating point precision error around UV coordinates
* Optimize RGB -> YUV conversion path
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2581>
Copy the V4L2_PIX_FMT_P010 define from the linux header.
V4L2_PIX_FMT_P010 is the little endian definition of P010, so map
it to GST_VIDEO_FORMAT_P010_10LE.
Add it to the v4l2 default video formats to allow v4l2 decoders to
enumerate and use it.
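For reference, roughly the define being copied (as in linux/videodev2.h,
guarded so newer headers don't conflict):
```c
#ifndef V4L2_PIX_FMT_P010
/* Y/CbCr 4:2:0, 10 bits per component, little endian:
 * maps to GST_VIDEO_FORMAT_P010_10LE */
#define V4L2_PIX_FMT_P010 v4l2_fourcc ('P', '0', '1', '0')
#endif
```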
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2590>
In case the input is a D3D11 texture, QSV seems to work regardless
of the alignment. Actually, the alignment requirement only seems to
make sense for system memory.
Other Intel GPU dependent implementations (the new VA encoder, and MediaFoundation)
do not require such alignment, nor do other vendor specific ones (NVENC and AMF).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2540>
A few header files in -bad contain comments that start with the
/** gtk-doc pattern, but should not actually be parsed (and warned
about as such).
Previously, we were using far-reaching wildcard patterns to avoid
parsing those, but this had the unintended side effect of also
excluding legitimate files, and creating confusion when comments
were not parsed from those.
Switch to excluding specific files instead.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2576>
macOS features hidden devices. These are devices that will
not be shown in the macOS UIs and that cannot be retrieved
without having the specific UID of the hidden device. There
are cases when you might want to have a hidden device, for example
when having a virtual speaker that forwards the data to a virtual
hidden input device from which you can then grab the audio.
The blackhole project supports these hidden devices and
this patch provides a way so that, if the device id is a hidden
device, it will be used instead of checking the hardware list of devices
to determine whether the device is valid.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2251>
Msdkdec should use its own pool when the allocation from the downstream query
is not any msdk_allocator (i.e. msdk_video_allocator,
msdk_dmabuf_allocator and msdk_system_allocator). Otherwise, when using
pipeline "msdkh264dec ! vah264enc !" to transcode a not 16-bit-aligned
stream (i.e. 1920x1080), the transcoding will fail due to the size
mismatch issue between decoder pool and encoder pool.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2451>
gtk_gl_area_get_error() doesn't return a copy of the error, but just the
error. If initialising OpenGL fails, then GtkGstGLWidget will consume
the error, and cause GTK to try and display freed memory.
==50914== Invalid read of size 8
==50914== at 0x4C4CB8A: gtk_gl_area_draw_error_screen (gtkglarea.c:663)
==50914== by 0x4C4CB8A: gtk_gl_area_draw (gtkglarea.c:687)
==50914== by 0x4E061CA: gtk_widget_draw_internal (gtkwidget.c:7084)
==50914== by 0x4BAEFB1: gtk_container_propagate_draw (gtkcontainer.c:3854)
==50914== by 0x4D4B6BF: gtk_stack_render (gtkstack.c:2207)
==50914== by 0x4BB4B03: gtk_css_custom_gadget_draw (gtkcsscustomgadget.c:159)
==50914== by 0x4BBA4C4: gtk_css_gadget_draw (gtkcssgadget.c:885)
==50914== by 0x4D4D780: gtk_stack_draw (gtkstack.c:2119)
==50914== by 0x4E061CA: gtk_widget_draw_internal (gtkwidget.c:7084)
==50914== by 0x4BAEFB1: gtk_container_propagate_draw (gtkcontainer.c:3854)
==50914== by 0x4BAF0C3: gtk_container_draw (gtkcontainer.c:3674)
==50914== by 0x4E061CA: gtk_widget_draw_internal (gtkwidget.c:7084)
==50914== by 0x4BAEFB1: gtk_container_propagate_draw (gtkcontainer.c:3854)
==50914== Address 0x187a0818 is 8 bytes inside a block of size 16 free'd
==50914== at 0x48480E4: free (vg_replace_malloc.c:872)
==50914== by 0x49A5B8C: g_free (gmem.c:218)
==50914== by 0x49C1013: g_slice_free1 (gslice.c:1183)
==50914== by 0x4990DE4: g_error_free (gerror.c:870)
==50914== by 0x4990FE9: g_clear_error (gerror.c:1052)
==50914== by 0x1A489780: _get_gl_context (gtkgstglwidget.c:540)
==50914== by 0x1A4863CB: gst_gtk_invoke_func (gstgtkutils.c:39)
==50914== by 0x49A3834: g_main_context_invoke_full (gmain.c:6137)
==50914== by 0x1A486450: gst_gtk_invoke_on_main (gstgtkutils.c:59)
==50914== by 0x1A48A29E: gtk_gst_gl_widget_init_winsys (gtkgstglwidget.c:632)
==50914== by 0x1A4887E7: gst_gtk_gl_sink_start (gstgtkglsink.c:267)
==50914== by 0x6579810: gst_base_sink_change_state (gstbasesink.c:5662)
==50914== Block was alloc'd at
==50914== at 0x484586F: malloc (vg_replace_malloc.c:381)
==50914== by 0x49A9278: g_malloc (gmem.c:125)
==50914== by 0x49C1BA5: g_slice_alloc (gslice.c:1072)
==50914== by 0x49C3BCC: g_slice_alloc0 (gslice.c:1098)
==50914== by 0x499096B: g_error_allocate (gerror.c:708)
==50914== by 0x4990AF1: UnknownInlinedFun (gerror.c:722)
==50914== by 0x4990AF1: g_error_copy (gerror.c:892)
==50914== by 0x4C4B9F9: gtk_gl_area_set_error (gtkglarea.c:1036)
==50914== by 0x4C4BAF7: gtk_gl_area_real_create_context (gtkglarea.c:346)
==50914== by 0x4B21B28: _gtk_marshal_OBJECT__VOIDv (gtkmarshalers.c:2730)
==50914== by 0x4920B78: UnknownInlinedFun (gclosure.c:893)
==50914== by 0x4920B78: g_signal_emit_valist (gsignal.c:3406)
==50914== by 0x4920CB2: g_signal_emit (gsignal.c:3553)
==50914== by 0x4C4B927: gtk_gl_area_realize (gtkglarea.c:308)
Reproduced by running:
MESA_GL_VERSION_OVERRIDE=2.7 totem
See https://gitlab.gnome.org/GNOME/totem/-/issues/522
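A minimal sketch of the fix (variable names illustrative): since GtkGLArea
keeps ownership of its GError, take a copy instead of consuming it:
```c
const GError *gl_error = gtk_gl_area_get_error (GTK_GL_AREA (widget));

if (gl_error != NULL) {
  /* our own copy; GtkGLArea keeps (and later frees) the original */
  error = g_error_copy (gl_error);
}
```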
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2565>
This new signal allows data-channel consumers to configure signal handlers on a
newly created data-channel, before any data or state change has been notified.
The webrtcbin unit-tests were refactored to make use of this new signal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2427>
If our downstream caps didn't intersect, we attempted to convert between
raw and ADTS stream formats, if possible. If the caps still did not
intersect, we then used the modified `src_caps` but left the
`output_header_type` unmodified.
This caused a mismatch between caps and actual stream format.
Avoid this by first copying the `src_caps` to `convcaps` for the
additional intersection tests, replacing `src_caps` if we succeed.
While we're here, clean up the code a bit and remove the `codec_data`
field from outgoing ADTS caps.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2550>
In preparation for the new element `GstGtkWaylandSink`, move reusable
parts out of `GstWaylandSink` into the already existing but very
barebone library.
Notable changes include:
- the `GstWaylandVideo` interface was dropped
- support for `wl-shell` was dropped
- lots of renaming in order to match established naming patterns
- lots of code modernisations, reducing boilerplate
- members were made private wherever possible
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2479>
Adding a uri interface enables plugging in RFB/VNC sources to anything
that makes use of uridecodebin:
gst-play-1.0 rfb://:password@10.40.216.180:5903?shared=1
Use userinfo to pass the user (ignored) and password; other key/value pairs
can be encoded in the query part of the URI (see shared).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1963>
This is a workaround for pts because oneVPL cannot handle the pts
correctly when there are B-frames. We first cache the input frame pts in
a queue, then retrieve the smallest one for the output encoded frame, as
we always output the coded frame when this frame is displayable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2089>
gst_amf_encoder_try_output() pushes at most one output buffer downstream
although more may be ready. As a consequence, output samples will keep
queueing up in AMFComponent whenever QueryOutput() returns AMF_REPEAT
(and do_wait is FALSE). This has negative impact on latency when the
video being encoded is a live stream.
In order to avoid it, always retrieve and push all samples available in
AMFComponent's output queue at once.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2536>
In case of per-feature registration, such as with the
customizable gstreamer-full library, each
element should check that the soup library can be loaded to
facilitate the element registration.
Also initialize the debug categories properly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2348>
The licence in gstreamer/subprojects/gstreamer/gst/gstplugin.c
is currently defined to be one of:
LGPL GPL QPL GPL/QPL MPL BSD MIT/X11 0BSD Proprietary
The open source project for the kinesis plugin is using an
Apache 2.0 license. Because "Apache 2.0" is not one of the
supported licenses it automatically falls back to Proprietary.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2514>
https://bugzilla.gnome.org/show_bug.cgi?id=741398 changed
rtpptdemux in 2014 to not post a GST_ELEMENT_ERROR on the
bus when dropping an invalid (non-RTP) packet, but still
returned GST_FLOW_ERROR upstream - so the pipeline still
stops, but now without a useful bus error.
Return GST_FLOW_OK instead, so the pipeline keeps
running. Some old telephony equipment can send invalid
packets before the real RTP traffic starts.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2520>
The Intel DXVA driver sometimes crashes (from a GPU thread) if
ID3D11VideoDecoder is released while there are outstanding view objects.
To ensure the object life cycle, hold an ID3D11VideoDecoder refcount
in the GstD3D11Memory object.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2504>
We prefer black as the initial texture color, and the
Direct3D11 runtime will initialize the texture with zeros (except for alpha),
which is fine for RGB formats. But the UV components of a YUV texture
require a manual clear for black.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2502>
1) Check for the right macro name when checking for NICE_VERSION_CHECK.
2) If the libnice version is 0.1.18.1, this should not satisfy
a NICE_VERSION_CHECK(0,1,19).
Fixes build with libnice 0.1.18.1 subproject checkout.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2499>
get_colorspace() checks the input caps transfer when mapping V4L2_XFER_FUNC_709
back to V4L2_COLORSPACE_BT2020 and GST_VIDEO_TRANSFER_BT2020_12. After
receiving a source change event, the decoder will G_FMT and S_FMT again. So we
need to reset the transfer when acquiring the format to avoid using the old transfer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2475>
We should not reset the input/output_frame_count when some configuration
changes. For example, if the resolution changes, the current way just
resets the frame count and makes the PTS of the output buffer restart
from the original PTS of the first frame. That causes a lot of QoS
events and drops all the new frames.
We should only reset them in the encoder's start().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2489>
When the collection is updated, decodebin3 exposes the pad first and then
the streams-selected message is posted.
This can cause a situation where playbin3 links non-existing
combiner/playsink pads (since streams-selected has not been posted yet) with the
new decodebin output pad. This commit will re-check the selected/active
streams condition on pad-added and reconfigure the output if needed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2482>
In case of per-feature registration, such as with the
customizable gstreamer-full library, each
element should check that the soup library can be loaded to
facilitate the element registration.
Also initialize the debug category properly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2349>
d3d11screencapture can miss a cursor shape to draw or draw an outdated cursor shape.
- AcquireNextFrame only provides the cursor shape when there is an update
- the current d3d11screencapture skips the cursor shape when the mouse is not drawn
So, if a GStreamer application uses d3d11screencapture with the cursor initially not
drawn ("show-cursor"=false) and then switches this property to true, the cursor will
not be actually drawn until AcquireNextFrame provides a new cursor shape.
This commit makes d3d11screencapture always update the cursor shape information, even
if the mouse is not drawn. d3d11screencapture will always have the latest cursor shape
when requested to draw it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2485>
zlib is required, and if it isn't found it is checked several ways and
then forced via subproject(). This code was added in commit
b93e37592a, to account for systems where
zlib doesn't have pkg-config files installed.
But Meson already does dependency fallback, and also, since 0.54.0, does
the in-between checks for find_library('z') and has_header('zlib.h') via
the "system" type dependency. Simplify dependency lookup by marking it
as required, which also makes sure that the console log doesn't
confusingly list "not found".
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2484>
The pool process function may poll and get the resolution-change event
whenever it is not possible to share our buffers. This typically happens
when downstream does not support GstVideoMeta.
Not handling this would cause the decoder thread to exit silently and the
pipeline to stall.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2457>
When bundling, if multiple medias are used with the same media payload, then
each of the fec/rtx/red additions would add a distinct payload. This could
very easily overflow the available payload space.
Instead, track the relationship between the media payload value and
the relevant fec/rtx/red payload values and reuse them whenever
necessary, even when bundling.
e.g.
...
a=group:BUNDLE video0 video1
m=video 9 UDP/SAVPF 96 97
a=mid:video0
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
m=video 9 UDP/SAVPF 96 97
a=mid:video1
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2474>
* Enhance the debug log to print a human readable D3D11_FORMAT_SUPPORT flags
value, instead of the packed numeric flagset value.
* Only device-supported formats will be added to the format table.
Depending on the device feature level (i.e., D3D9 feature devices),
16-bit formats will not be supported. Although there might be formats
we defined but which are not supported, it will not be a major issue in practice
since our D3D11 implementation does not support legacy devices already
(known limitation) and also the old d3dvideosink will be promoted in that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2441>
Output may attempt to set the width and height to zero values if the
caps have no such information, which will cause the capture to get invalid
dimensions. Then the decoder reports a negotiation failure.
So we need to set a default resolution if the caps have no such information.
Real values can be set again once the source change event is signaled.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2400>
Now it uses the JPEG parser in libgstcodecparsers, while the whole
code is simplified by relying more on the baseparser class for tag
handling.
The element now signals chroma-format, and the default framerate is 0/1,
which is for still images.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1473>
While current and future LoongArch machines that are supposed to run
GStreamer all support unaligned accesses, there might be future
lower-end cores (e.g. the embedded product line) without such support,
and we may not want to penalize these use cases.
So, mark LoongArch as not supporting unaligned accesses for now, and
hope the compilers do a good job optimizing them. We can always flip the
switch later.
Suggested-by: CHEN Tao <redeast_cn@outlook.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2443>
It is valid to have the padding set to 1 on the first packet and it
happens very often from TWCC packets coming from libwebrtc. This means
that we were totally ignoring many TWCC packets.
Fix the test that checked that a first packet with padding was not valid, and
instead test a single TWCC packet with padding to check precisely what
this patch addresses.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2422>
When processing the first event after probing the
file and being activated, requeue sticky events
as there's no requirement that demuxers send tag
and other events again after a seek - that's
why they're sticky.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2432>
- Consistently unref the chained buffer at the end of the chain
function, if we're not handing it off to `gst_pad_push`. This avoids a
few buffer leaks in the error paths in `_chain` and `_push_history`.
- When mapping the video frame fails, return a flow error instead of
crashing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2428>
If we just break the loop, we might run into the `gop != NULL` assert
that follows it. Rather, exit immediately with flushing flow.
Also use this flushing mechanism when we release a pad. This avoids
having an extra flag.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1030>
Background:
Whenever a caps event is received by appsink, the caps are stored in the
same internal queue as buffers. Only when enough buffers have been
popped from the queue to reach the caps, `priv->sample` gets its caps
updated to match, so that they are correct for the following buffers.
Note that as far as upstream elements are concerned, the caps of appsink
are updated immediately when the CAPS event is sent. Samples pulled from
appsink retain the old caps until a later buffer -- one that was sent by
upstream elements after the new caps -- is pulled.
The race condition:
When a flush is received, appsink clears the entire internal queue. The
caps of `priv->sample` are not updated as part of this process, and
instead remain as those of the sample that was last pulled by the user.
This leaves open a race condition where:
1. Upstream sends a new caps event, and possibly some buffers for the
new caps.
2. Upstream sends a flush (possibly from a different thread).
3. Upstream sends a new buffer for the new caps. Since as far as
upstream is concerned, appsink caps are the new caps already, no new
CAPS event is sent.
4. The appsink user pulls a sample, having not pulled before enough
samples to reach the buffers sent in step 1.
Bug: the pulled sample has the old caps instead of the new caps.
Fixing the race condition:
To avoid this problem, when a buffer is received after a flush,
`priv->sample`'s caps should be updated with the current caps before the
buffer is added to the internal queue.
Interestingly, before this patch, appsink already had code for this, in
gst_app_sink_render_common():
    /* queue holding caps event might have been FLUSHed,
     * but caps state still present in pad caps */
    if (G_UNLIKELY (!priv->last_caps &&
            gst_pad_has_current_caps (GST_BASE_SINK_PAD (psink)))) {
      priv->last_caps = gst_pad_get_current_caps (GST_BASE_SINK_PAD (psink));
      gst_sample_set_caps (priv->sample, priv->last_caps);
      GST_DEBUG_OBJECT (appsink, "activating pad caps %" GST_PTR_FORMAT,
          priv->last_caps);
    }
This code assumes `priv->last_caps` is reset when a flush is received,
which makes sense, but unfortunately, there was no code in the flush
code path resetting it.
This patch adds such code, therefore fixing the race condition. A unit
test demonstrating the bug and testing its behavior with the fix has
also been added.
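The flush-path addition is essentially (a sketch of the idea, not the literal patch):
```c
/* on flush: drop the cached caps so the next buffer re-reads the pad's
 * current caps into priv->sample before being queued */
gst_caps_replace (&priv->last_caps, NULL);
```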
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2413>
This transition is meant to be very similar to crossfade, but
instead of fading out the background video at the same time as the
foreground fades in, the background video stays at 100% opacity
during the whole transition.
This essentially "restores" the old crossfade behaviour that was changed in:
eb48faf342
but using a new type enum, so that both behaviours are available,
letting applications choose.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2385>
The timestamp in the tfdt refers to the first trun box and if there are
multiple trun boxes then the distance between the first timestamps will
grow.
At some point this distance reaches a threshold and triggers the
first sample's timestamp of this trun box to be reset
to the tfdt.
This threshold is implemented for files where there is a jump in the
timeline between fragments and where this can be detected via a jump
between the end timestamp of the previous fragment and the tfdt of the
next. This behaviour is preserved.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2409>
NVIDIA GPUs have an undocumented limitation regarding minimum resolution
and it can be queried via an NVDEC API. However, since we don't want to
bring CUDA/NVDEC API into D3D11, use hardcoded values for now
until we find a nice way for capability check.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2406>
Various elements are assuming that the pointer matches a pad template
they know about, and also randomly created pad templates might be
missing some important information that is necessary to create a valid
pad.
For example, creating a new pad template for audiomixer's sinkpad
without providing the correct GType would cause audiomixer to create a
GstAggregatorPad. That will then later fail spectacularly because it
assumes that it got a GstAudioAggregatorPad.
Passing a pad template that does not belong to the element class in here
will easily lead to undefined behaviour.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2410>
If for some reason the encoder produces frames with a pts higher than
the input one, we were dropping all the video encoder frames and ended
up crashing when trying to access the pts of a NULL pointer returned by
gst_video_encoder_get_oldest_frame().
I hit this scenario by feeding decreasing timestamps to vp8enc, which
seems to confuse the encoder.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2405>
Until March 2022, the FFmpeg MXF muxer would write the various index table
segments with the same instance ID, which should only be used if it is a
duplicate/repeated table.
In order to cope with those, we first compare the other index table segment
properties (body/index SID, start position) before comparing the instance
ID. This will ensure that we don't consider them as duplicate, but can still
detect "real" duplicates (which would have the same other properties).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2407>
If the stream chroma doesn't match any video format in the source
caps template (generated from the va config surface formats), instead of
returning unknown, return the first available format in the template,
assuming that the driver is capable of doing color conversions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2404>
Use newly added gst_h265_parser_identify_and_split_nalu_hevc()
method to handle broken streams where packetized NAL unit
contain start code prefix in it.
It's obviously a broken stream, but we know how to work around it
and even need to support such broken streams since
stateless decoder implementations are becoming primary
decoder elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
Add gst_h265_parser_identify_and_split_nalu_hevc() method to
handle a case where packetized stream contains start-code prefix.
This new method behaves similarly to the existing gst_h265_parser_identify_nalu_hevc()
but it will scan for the start-code prefix to split the given data into
NAL units.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
Instead of using a hard-coded list of preferred formats according to the
chroma type, now, if no caps are pre-negotiated, the first format with the
same chroma type will be chosen from the template caps. If caps are
pre-negotiated, then it will choose the first format, with the same chroma
type, from the first caps structure.
Also all the decoders will check if GST_VIDEO_FORMAT_UNKNOWN is
returned, failing the negotiation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2351>
Hantro H1 and Rockchip VEPU2 drivers will pad the width/height to a
multiple of 16. In order to obtain the right JPEG size, the image needs
to be cropped using the S_SELECTION API. This support is added as best
effort since older drivers may emulate this by looking at the capture
queue width/height.
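A rough sketch of the crop request (the queue type and values are
illustrative; the real code derives them from the negotiated caps):
```c
#include <sys/ioctl.h>
#include <linux/videodev2.h>

struct v4l2_selection sel = {
  .type = V4L2_BUF_TYPE_VIDEO_OUTPUT,
  .target = V4L2_SEL_TGT_CROP,
  .r = { .left = 0, .top = 0, .width = 1920, .height = 1080 },
};

if (ioctl (fd, VIDIOC_S_SELECTION, &sel) < 0) {
  /* best effort: older drivers may not support this and are handled by
   * looking at the capture queue width/height instead */
}
```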
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2329>
gst_value_serialize() does more than what's needed for printf-ing,
especially when the given GValue is already a string. Just print the string
value as-is, without gst_value_serialize(), to avoid unreadable
string output, especially for multi-byte character encodings.
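A minimal sketch of the change (variable names illustrative):
```c
if (G_VALUE_HOLDS_STRING (value)) {
  /* print string values as-is to keep multi-byte text readable */
  g_print ("%s", g_value_get_string (value));
} else {
  gchar *str = gst_value_serialize (value);

  g_print ("%s", str);
  g_free (str);
}
```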
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2387>
The V4L spec now requires decode_params flags to be set in accordance with the
frame's type. In particular this is required by the H.264 decoder of the NVIDIA
Tegra SoC to operate properly. Set the flags based on type of parsed
slices.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1757>
* Remove fields no longer used, or that can be replaced by smaller code
* Rename "channels" to a more meaningful "input pads"
* Directly handle/use combiner pads in the combiners instead of on the playbin3
main structure
* Remove the corresponding combiner sinkpad whenever a uridecodebin3 source pad
goes away
* If used, store the corresponding combiner sink pad in the SourcePad helper
structure
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2384>
GstD3D11ScreenCapture object is a pipeline-independent global object
and the object can be shared by multiple src elements,
in order to overcome a limitation of the DXGI Desktop Duplication API.
Note that the API allows only a single capture session in a process for
a monitor.
Therefore the GstD3D11ScreenCapture object must be able to handle the case
where a src element holds a different GstD3D11Device object, which can
happen when the GstD3D11Device context is not shared by pipelines.
What's changed:
* Allocate the capture texture with D3D11_RESOURCE_MISC_SHARED so that the
texture can be copied into another device's texture
* Hold additional shader objects per src element and use them when drawing
the mouse cursor
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1197
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2366>
mp4mux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
mxfmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
mpegtsmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
flvmux can't negotiate caps with upstream/downstream and always outputs
specific caps based on the input streams. This will always happen before
it produces the first buffers.
By having the default aggregator negotiation enabled the same caps
would be pushed twice in the beginning, and again every time a
reconfigure event is received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
Otherwise setting the srcpad caps based on the sinkpad caps event will
already push a segment event downstream before the upstream segment is
known.
If the upstream segments are just forwarded when the upstream segment
event arrives this would result in two segment events being sent
downstream, of which the first one will usually be simply wrong.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2372>
Baseclass calls get_preferred_output_delay() in a chain of
sequence header parsing and then new_sequence() is called
with required DPB size (includes render-delay) information.
Thus the latency query should happen before the sequence header
parsing so that the subclass can report the required render-delay accordingly
via the get_preferred_output_delay() method.
(e.g., zero delay in case of live pipeline)
This commit is to fix wrong liveness signalling in case of
upstream packetized format.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2363>
Baseclass calls get_preferred_output_delay() in a chain of
sequence header parsing and then new_sequence() is called
with required DPB size (includes render-delay) information.
Thus the latency query should happen before the sequence header
parsing so that the subclass can report the required render-delay accordingly
via the get_preferred_output_delay() method.
(e.g., zero delay in case of live pipeline)
This commit is to fix wrong liveness signalling in case of
upstream packetized format.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2364>
In the case where not all streams have received any data, growing the interleave
by only 100ms is too restrictive and would cause some (valid) mpeg-ts streams to
hang.
Bump up the interleave growth rate for those use-cases to 500ms per input (still
up to the limit of 5s).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2370>
If there weren't any moved/dirty regions in the captured frame, the
viewport of the ID3D11DeviceContext would be left at whatever previous
value it had, which could lead to the cursor being drawn in a wrong
position and/or in an incorrect size.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2362>
Make all codecs consistent so that the subclass can know the additional DPB
size requirement depending on the render-delay configuration regardless
of the codec. Note that the render-delay feature is not implemented for AV1
yet but it's planned.
Also, consider new_sequence() a mandatory requirement, not optional.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2343>
As implemented, we only support the OpenGL 3 API from version 3.2 onward. Though there
is no issue enabling GLSL 1.30 even if we are going to restrict our API usage
to 2. This allows using texelFetch() on OpenGL 3.0 and 3.1 drivers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2190>
Since the addition of tiling format with subsampled tile size
(NV12_16L32S), getting the tile width/height shifts and tile
size have become more complex. Add a helper to extract and
scale this information for the selected plane and format.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2190>
Since both g_value_set_object() and g_weak_ref_get() take a reference,
there will be two new references to the GstWebRTCICE object when there
should be only one. g_value_take_object() has the same functionality as
g_value_set_object() but does not take a reference.
Without this change, the GstWebRTCICE object will be leaked.
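In short (the member name here is hypothetical):
```c
/* g_weak_ref_get() already returns a new reference; hand it to the
 * GValue instead of letting g_value_set_object() add another one */
g_value_take_object (value, g_weak_ref_get (&priv->ice_weak_ref));
```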
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2333>
Mixing C loops with switch statements is a bad idea as break has a
different meaning in both. Breaking inside the switch statements wrongly
caused further loop iterations.
Instead use goto to get out of the loop and continue to do another loop
iteration, and never ever use break except for the end of a case.
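The pattern, sketched with hypothetical names:
```c
for (i = 0; i < n_items; i++) {
  switch (items[i].type) {
    case ITEM_LAST:
      goto done;                /* actually leaves the loop */
    case ITEM_SKIP:
      continue;                 /* next loop iteration */
    default:
      break;                    /* only ends this case */
  }
  /* ... handle the item ... */
}

done:
  /* ... */;
```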
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2336>
Some streams have 2 PMT sections in a single TS packet. The first one is "valid"
but doesn't contain/define any streams. That causes an unrecoverable issue when
we try to activate the 2nd (valid) PMT.
Instead of doing that, pre-emptively refuse to process PMT without any streams
present within. We still do post that section on the bus to inform applications.
Fixes #1181
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2310>
In push mode (streaming), if the received chunk buffer size from _chain is bigger
than the output buffer size, the DISCONT flag from the first received chunk buffer
is propagated to all of the divided buffers. These buffers with unexpected DISCONT
flags are abnormally transformed when changing the sampling rate with the
audioresample element. So unset the unnecessary DISCONT flag before pad_push().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2305>
Previously, we only added it when actually performing synchronization
based on the NTP time.
The information can be useful downstream in other situations too, and
we can compute an NTP time as soon as we get a sender report with the
relevant information.
Co-authored-by: Mathieu Duponchelle <mathieu@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2252>
The decision to store the input buffer depends on whether extensions
are to be added to the output buffer, I assume as an optimization.
This creates an issue for subclasses that call negotiate(), where
header_exts is actually populated, from their handle_buffer()
implementation: at chain time, no header extension has been negotiated
yet, which means that we don't add extensions to the first batch of
buffers that comes out.
Keep track of whether negotiate has been called (this is different
from the negotiated field) and always store the input buffer until
then. This fixes the issue while largely preserving the optimization.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2304>
The previous iteration of the code was inferring the type of the
frame by looking at the overall size of the gst-payloaded packet.
It is more robust to actually parse the payload and look at the
actual data buffers it contains.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2303>
regardless of whether they are input as individual buffers or
buffer lists.
The ONVIF specification requires all packets to hold the extension,
it makes no sense to behave differently when handling buffer lists.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2303>
A pipeline such as:
gst-launch-1.0 -vf videotestsrc ! video/x-raw,format=NV12,colorimetry=\(string\)bt709 \
! videoscale ! video/x-raw,format=I420 ! fakesink
always triggers an error:
ERROR video-info video-info.c:556:gst_video_info_from_caps: no width property given
Because it is called before fixate_size(), the src caps' resolution
may be absent or not fixed. That means the src video info cannot
be created correctly and we cannot inherit the colorimetry and chroma-site
from the input caps.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2289>
Fixing this pipeline:
gst-launch-1.0 filesrc location=sample.png ! pngdec ! videorate ! fakesink
- videorate receives a single buffer with pts = 0, duration = invalid;
- then it receives eos triggering this buffer to be pushed downstream;
- the pushing code was assuming that a duration was set, which is
impossible as we received a single buffer and no output framerate was
set either. So the best we can do is to push the buffer without
duration.
Fixes #1177
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2296>
The va pool is used for GPU-side surfaces/images; their alignment should
not be changed arbitrarily by others. So we decided not to expose the
GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT flag anymore.
Instead, the user can call gst_buffer_pool_config_set_va_alignment() to
set the surface/image alignment.
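A usage sketch (the exact signature is assumed here; the pool variable and
padding values are illustrative):
```c
GstVideoAlignment align;
GstStructure *config = gst_buffer_pool_get_config (pool);

gst_video_alignment_reset (&align);
align.padding_right = 16;
align.padding_bottom = 16;

gst_buffer_pool_config_set_va_alignment (config, &align);
gst_buffer_pool_set_config (pool, config);
```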
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2282>
According to the spec:
color range equal to 0 shall be referred to as the studio swing
representation and color range equal to 1 shall be referred to as
the full swing representation.
The current status is just the opposite.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2288>
GAP events flagged with MISSING_DATA are transformed into GAP buffers
flagged with CORRUPTED.
In these cases, it is preferable to simply keep rendering the previous
buffer (if there was one) instead of flashing the pad in and out of
view.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/708>
When the GAP event was flagged with MISSING_DATA, subclasses
may want to adopt a different behaviour, for example by repeating
the last buffer.
As we turn these gap events into gap buffers, we need to flag
those; we do so with a new custom meta.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/708>
Returning TRUE from the `transform_meta` function tells
GstBaseTransform to copy the meta into the new buffer. If videoscale
has already transformed a meta by scaling it, it should always return
FALSE to avoid duplicating the meta.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1630>
Meson generates a gdbinit file that will automatically load the gstreamer
script. However, that script uses a helper Python module that needs
PYTHONPATH to point to the right location in the source
tree to be able to find gst_gdb.py.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1796>
When the sink is configured to create sockets with an explicit bind
address, then the created socket gets set to the udp_socket field
regardless of whether the bind address indicated that the socket
family should be IPv4 or IPv6. When binding to an IPv6 address, this
results in the following error:
gstmultiudpsink.c:1285:gst_multiudpsink_configure_client:<rtcpsink>
error: Invalid address family (got 10)
This patch adds a check of the address family being bound to and sets
the created socket to used_socket or used_socket_v6, accordingly.
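In essence (a sketch; bind_addr stands for the configured bind address):
```c
if (g_socket_address_get_family (bind_addr) == G_SOCKET_FAMILY_IPV6)
  sink->used_socket_v6 = socket;
else
  sink->used_socket = socket;
```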
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1551>
RFC 8216 6.3.3 "Playing the Media Playlist File" states that for live media
playlists "the client SHOULD NOT choose a segment that starts less than three
target durations from the end of the Playlist file"
This is an off-by-one error. Since we are looking for the "index" of the
segment, we need to subtract 1 from the searched position.
Ex: For a playlist with 12 entries, we want to start playback on the 9th segment
... which is at index 8.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2259>