The `add_candidate` vfunc of the GstWebRTCICE interface gained a GstPromise
argument, which is an ABI break. We're not aware of any external users of this
interface yet, so we think it's OK.
This change is useful in cases where the application needs to bubble up errors
from the underlying ICE agent, for instance when the agent was given an invalid
ICE candidate.
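For reference, a sketch of the new vfunc shape (argument names are illustrative;
the trailing GstPromise is the addition that breaks ABI):
```
void (*add_candidate) (GstWebRTCICE * ice,
                       GstWebRTCICEStream * stream,
                       guint mlineindex,
                       const gchar * candidate,
                       GstPromise * promise);
```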
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3960>
The signal triggers an asynchronous task on the PC thread, but in some cases it
can be useful for apps to be notified when the task has completed. The
corresponding method of the PeerConnection spec also returns a Promise, so the
interface is now more consistent with the spec.
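An illustrative usage sketch, assuming the promise-aware action signal is named
"add-ice-candidate-full"; the change callback fires once the task on the PC
thread has completed (e.g. with an error for an invalid candidate):
```
static void
on_candidate_added (GstPromise * promise, gpointer user_data)
{
  /* Inspect the reply for errors bubbled up from the ICE agent. */
  gst_promise_unref (promise);
}

static void
add_candidate (GstElement * webrtcbin, guint mline_index,
    const gchar * candidate_str)
{
  GstPromise *promise =
      gst_promise_new_with_change_func (on_candidate_added, NULL, NULL);

  g_signal_emit_by_name (webrtcbin, "add-ice-candidate-full",
      mline_index, candidate_str, promise);
}
```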
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3960>
The av1decoder class does not implement the ->parse() virtual function,
and we always need to add the av1parse element before it. So we should
set the decoder's needs_format to TRUE; then, if there is no parser in
front of it, it can fail early with a "not-negotiated" error rather than
continue and generate unexpected errors.
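A minimal sketch of the change (the type name is hypothetical; the call is made
in the decoder's init function, though the exact location may differ):
```
/* In the decoder's init function (type name hypothetical): refuse unparsed
 * input so negotiation fails early with a not-negotiated error. */
static void
gst_foo_av1_decoder_init (GstFooAv1Decoder * self)
{
  gst_video_decoder_set_needs_format (GST_VIDEO_DECODER (self), TRUE);
}
```
The same pattern applies to the vp9, vp8, mpeg2, h264 and h265 decoder
classes below.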
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The vp9decoder class does not implement the ->parse() virtual function,
and we always need to add the vp9parse element before it. So we should
set the decoder's needs_format to TRUE; then, if there is no parser in
front of it, it can fail early with a "not-negotiated" error rather than
continue and generate unexpected errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The vp8decoder class does not implement the ->parse() virtual function
and can only accept frame-aligned data. If some element such as filesrc
feeds it unaligned data, the behaviour is undefined. So we should set
the decoder's needs_format to TRUE; then it can fail early with a
"not-negotiated" error rather than continue and generate unexpected
errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The mpeg2decoder class does not implement the ->parse() virtual function,
and we always need to add the mpegvideoparse element before it. So we should
set the decoder's needs_format to TRUE; then, if there is no parser in
front of it, it can fail early with a "not-negotiated" error rather than
continue and generate unexpected errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The h264decoder class does not implement the ->parse() virtual function,
and we always need to add the h264parse element before it. So we should
set the decoder's needs_format to TRUE; then, if there is no parser in
front of it, it can fail early with a "not-negotiated" error rather than
continue and generate unexpected errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The h265decoder class does not implement the ->parse() virtual function,
and we always need to add the h265parse element before it. So we should
set the decoder's needs_format to TRUE; then, if there is no parser in
front of it, it can fail early with a "not-negotiated" error rather than
continue and generate unexpected errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
Raw 608 caps can now contain a "field" field. On the input side it
signifies that the raw 608 input is attached to either field 0 or 1;
on the output side it allows selecting whether to extract the raw 608
data for field 0 or 1 for field-aware formats.
In addition, ccconverter can now also be used to "convert" 608 field 0
to 608 field 1 (and conversely); this is passthrough, as the change only
needs to happen in the caps.
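An illustrative sketch of selecting field 1 downstream of ccconverter, assuming
the raw 608 caps use the closedcaption/x-cea-608 media type with format=raw and
an integer "field" field ("capsfilter" is a hypothetical element variable):
```
GstCaps *caps = gst_caps_from_string (
    "closedcaption/x-cea-608, format=(string)raw, field=(int)1");

g_object_set (capsfilter, "caps", caps, NULL);
gst_caps_unref (caps);
```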
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4031>
The dimensions of the overlay texture directly correspond to the size of the overlay **buffer**, which is given by its video meta.
The dimensions at which the overlay should be displayed correspond to the overlay's `render_width` and `render_height`.
This matches the behaviour of glimagesink.
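A sketch of how the two sizes are obtained from a GstVideoOverlayRectangle
(the `rect` variable is assumed to exist; error handling elided):
```
GstBuffer *pixels =
    gst_video_overlay_rectangle_get_pixels_unscaled_argb (rect,
    GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE);
GstVideoMeta *vmeta = gst_buffer_get_video_meta (pixels);
gint render_x, render_y;
guint render_width, render_height;

gst_video_overlay_rectangle_get_render_rectangle (rect,
    &render_x, &render_y, &render_width, &render_height);

/* The texture is allocated at vmeta->width x vmeta->height (the buffer
 * size), but drawn at render_width x render_height. */
```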
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4046>
Data in an APP segment is only malformed when its length is less than 6
chars, because it should contain at least an id string. Otherwise, if the
id string is not handled, no warning is raised; only a debug message
notes it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3943>
When the QoS stats are reset (e.g. when changing the source), the counters for
dropped and rendered frames are reset to zero, which results in a negative
value for their difference since the previous measurement. This results in
max-fps getting pegged at an extremely high value.
```
fpsdisplaysink.c:373:display_current_fps:<fpsdisplaysink0> Updated max-fps to 36840705952231460864.000000
```
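A minimal sketch of the kind of guard that avoids this (counter and snapshot
names are hypothetical):
```
/* Hypothetical names: a backwards-moving counter means the stats were
 * reset, so restart the measurement instead of computing a negative delta. */
guint64 frames = frames_rendered + frames_dropped;

if (frames < last_frames) {
  last_frames = frames;
  return;
}
```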
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3989>
Instead of creating a new decoder instance per new sequence,
re-use the configured decoder instance via the cuvidReconfigureDecoder()
API. This makes output surfaces reusable without re-allocation.
Also, in order for applications to be able to reserve higher-resolution
output surfaces, "init-max-width" and "init-max-height" properties are
added to each decoder.
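An illustrative sketch of reserving larger output surfaces up front (nvh264dec
is just an example of an nvcodec decoder element):
```
GstElement *dec = gst_element_factory_make ("nvh264dec", NULL);

/* Pre-allocate output surfaces large enough for 4K so a later resolution
 * change can reuse them via cuvidReconfigureDecoder(). */
g_object_set (dec, "init-max-width", 3840, "init-max-height", 2160, NULL);
```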
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3884>
Call the input resource map functions (i.e., nvEncRegisterResource,
nvEncUnregisterResource, nvEncMapInputResource, and
nvEncUnmapInputResource) only once and reuse the mapped resources,
instead of mapping/unmapping per input frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3884>
Wrap the mapped decoder output surface using GstCudaMemory and
output it without any copy operation. Also, so that applications can
control the number of zero-copyable output surfaces, a
"num-output-surfaces" property is added.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3884>
The encoder does not support reconfiguration, and the only alternative,
deinitializing it and then initializing it again, causes deadlocks.
Also, only reconfigure and drain the encoder if the video info has
actually changed.
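A minimal sketch of such a check, using gst_video_info_is_equal() for the
comparison (the state field names are hypothetical):
```
/* Hypothetical state fields: skip draining/reconfiguring when the
 * negotiated video info has not actually changed. */
if (self->configured && gst_video_info_is_equal (&self->info, &new_info))
  return TRUE;

/* ... otherwise drain pending frames and reconfigure with new_info ... */
```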
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3957>
Fixes #1358.
Passing ARGB64/RGBA64 to vtenc caused the encoding to fail
when running on M1 Pro/Max variants with macOS 12.x, so let's
remove these formats from caps when such a scenario is detected.
This issue appears to have been fixed OS-side in macOS 13.0.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3912>
This was causing incorrect output when seeking, especially
when used with a multithreaded source like `videotestsrc n-threads=2`.
It should now correctly wait for frames still being processed by VT
while vtdec is flushing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3922>
We are using std::isspace() with one parameter. That overload is declared
in the <cctype> header, which therefore needs to be included.
```
win32ipcutils.cpp(34): error C2672: 'std::isspace': no matching overloaded function found
win32ipcutils.cpp(34): error C2780: 'bool std::isspace(_Elem,const std::locale &)': expects 2 arguments - 1 provided
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3933>
This will be used for CUDA stream sharing.
* Add a GstCudaPoolAllocator object. The pool allocator controls
synchronization of the memory objects it allocates.
* Modify the gst_cuda_allocator_alloc() API so that the caller can specify/set
a GstCudaStream object for the newly allocated memory.
* A GST_CUDA_MEMORY_TRANSFER_NEED_SYNC flag is added in addition to the
existing GST_CUDA_MEMORY_TRANSFER_NEED_{UPLOAD,DOWNLOAD} flags.
The flag indicates that any GPU command queued in the CUDA stream
may not be finished yet, and the caller should take care of the
synchronization.
The flag is controlled by the GstCudaMemory object if the memory holds a
GstCudaStream (otherwise, GstCudaMemory performs synchronization
as before this commit). Specifically, the GstCudaMemory object sets
the new flag automatically when the memory is mapped with the
(GST_MAP_CUDA | GST_MAP_WRITE) flags. The caller needs to unset
the flag via GST_MEMORY_FLAG_UNSET() if the memory is already synchronized
by client code, as in the sketch after this list.
* A gst_cuda_memory_sync() helper function is added to perform the
synchronization.
* Why not use a CUevent object to keep track of the synchronization status?
CUDA already provides a fence-like interface via the CUevent object,
but the cuEventRecord/cuEventQuery APIs are not zero-cost operations.
Instead, in this version, the status is tracked using map and
object flags.
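A minimal sketch of the map/unset flow described above (the memory variable is
assumed to come from a CUDA allocator or pool; the stream synchronization
itself is elided):
```
GstMapInfo map;

/* Mapping with (GST_MAP_CUDA | GST_MAP_WRITE) makes GstCudaMemory set
 * GST_CUDA_MEMORY_TRANSFER_NEED_SYNC on memory that holds a GstCudaStream. */
if (gst_memory_map (mem, &map, GST_MAP_CUDA | GST_MAP_WRITE)) {
  /* ... queue GPU work on the memory's CUDA stream ... */
  gst_memory_unmap (mem, &map);
}

/* If client code has already synchronized the stream itself, it can clear
 * the flag so that later readers skip the extra synchronization. */
GST_MEMORY_FLAG_UNSET (mem, GST_CUDA_MEMORY_TRANSFER_NEED_SYNC);
```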
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3629>
Also keep the default encoder settings, but simply override them with
our own values for the settings we care about.
This mirrors the encoder configuration behaviour of ffmpeg.
Add AVTP Raw Video Format depayload support. The element supports only the
GRAY16_LE output format, so:
- active pixels (no vertical blanking),
- progressive mode,
- 8 and 16-bit pixel depth,
- mono pixel format,
- grayscale colorspace.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1335>
Add AVTP Raw Video Format payload support. The element supports only the
GRAY16_LE input format, so:
- active pixels (no vertical blanking),
- progressive mode,
- 8 and 16-bit pixel depth,
- mono pixel format,
- grayscale colorspace.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1335>
Due to a bug in the VT API, attempting to encode interlaced content
with ProRes results in an error, halting the pipeline instead of
gracefully falling back to software encoding.
This workaround should be removed in the future if Apple ever fixes the issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3222>
The VA API does not define scaling list entries for the U/V planes
for 4:4:4 streams. In fact, we have not encountered 4:4:4 format output
for H264 so far, and scaling lists are not used frequently, so we just
print a warning and ignore these scaling list values.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3749>
* Extend the protocol so that the client can notify the server when it releases shared memory
* The server holds the shared memory object until it's released by the client
* Add an allocator/buffer pool to reuse shared memory objects and buffers
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3765>
If we know there's only one stream we care about and we
don't have to synchronise audio and video, or send RRs,
we might just as well not hook up all the RTCP bits and
use fewer threads and sockets and simplify the pipeline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3531>
From the Vulkan spec, section 7.1.3:
If a memory object does not have the VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
property, then vkFlushMappedMemoryRanges must be called in order to guarantee
that writes to the memory object from the host are made available to the host
domain, where they can be further made available to the device domain via a
domain operation. Similarly, vkInvalidateMappedMemoryRanges must be called to
guarantee that writes which are available to the host domain are made visible to
host operations.
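A minimal sketch of the host-write side of that requirement (the device and
memory handles are assumed to already exist):
```
/* Flush a non-coherent mapping after the host wrote through the mapped
 * pointer, so the writes become available to the device domain. */
VkMappedMemoryRange range = {
  .sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
  .memory = memory,
  .offset = 0,
  .size = VK_WHOLE_SIZE,
};

vkFlushMappedMemoryRanges (device, 1, &range);
```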
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3723>
There is no byte-stream/au stream-format for AV1, only for H264, and the
encoder actually outputs obu-stream/tu rather than an annexb
stream-format similar to the H264 byte-stream format.
Without this, the encoder can't be used with elements that require a
specific AV1 stream-format, e.g. the MP4 or Matroska/WebM muxers.
It is really difficult for people to figure out why nvcodec has
0 features. Even the debug log is cryptic. Also make sure the errors
go to the ERROR log level, which is more likely to be enabled by
default.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3776>