Some problematic MJPEG streams may contain garbage or stray bits before an SOI.
For example, a network error may cause the EOI marker of the previous frame to
be lost; when the new frame's SOI arrives we would still be using the previous
frame's state, which generates errors.
For such frames without an EOI, if the frame already contains some data
(an SOS segment was detected), we still push it as a frame with the CORRUPTED
flag set. If not, we simply discard all the data before the new SOI.
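A minimal sketch of that decision, assuming hypothetical state arguments rather than the actual jpegparse internals:
```
#include <gst/gst.h>

/* Sketch only: decide what to do with a frame whose EOI never arrived.
 * `frame` is the partially assembled frame, NULL if no SOS was seen yet.
 * Returns the buffer to push (flagged as corrupted) or NULL to drop. */
static GstBuffer *
handle_missing_eoi (GstBuffer * frame, gboolean saw_sos)
{
  if (saw_sos && frame != NULL) {
    /* The frame already carries picture data: push it, but mark it so
     * downstream knows it may be broken. */
    GST_BUFFER_FLAG_SET (frame, GST_BUFFER_FLAG_CORRUPTED);
    return frame;
  }

  /* No usable data before the new SOI: discard it. */
  gst_clear_buffer (&frame);
  return NULL;
}
```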
Co-Authored-By: Víctor Jáquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4039>
The previous implementation was a bit primitive, assuming the subclass
had registered a template name starting with "sink_". Instead, make the
effort of parsing the actual template name and use that to generate
the final pad name.
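A hedged sketch of that idea (illustrative only, not the actual element code; it assumes the template uses a single %u conversion):
```
#include <string.h>
#include <gst/gst.h>

/* Derive the pad name from the template's own name_template (e.g. "sink_%u"
 * or "video_sink_%u") instead of hard-coding a "sink_" prefix. */
static gchar *
make_pad_name (GstPadTemplate * templ, guint index)
{
  const gchar *name_template = GST_PAD_TEMPLATE_NAME_TEMPLATE (templ);

  if (strstr (name_template, "%u") != NULL)
    return g_strdup_printf (name_template, index);

  return g_strdup (name_template);
}
```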
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4032>
These checks were introduced to prevent exposing ARGB64/RGBA64 in the caps
when running on M1 Pro/Max with macOS <13 because of a bug in VideoToolbox.
Unfortunately, the initial buffer size of 15 bytes is too small when running
in a VM - the CPU brand string there looks like "Apple M1 Pro (Virtual)",
whose length causes sysctlbyname to return -1, resulting in the broken
formats still showing up in the caps.
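For reference, the usual way to avoid a fixed-size buffer is to ask sysctlbyname for the required length first; a sketch (not the exact vtenc code):
```
#include <sys/sysctl.h>
#include <stdlib.h>

/* Query the CPU brand string with a dynamically sized buffer instead of a
 * fixed 15-byte one. Returns a newly allocated string or NULL. */
static char *
get_cpu_brand_string (void)
{
  size_t len = 0;
  char *brand;

  /* First call: pass NULL to learn how many bytes are needed. */
  if (sysctlbyname ("machdep.cpu.brand_string", NULL, &len, NULL, 0) != 0)
    return NULL;

  brand = malloc (len);
  if (brand == NULL)
    return NULL;

  if (sysctlbyname ("machdep.cpu.brand_string", brand, &len, NULL, 0) != 0) {
    free (brand);
    return NULL;
  }

  return brand;
}
```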
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4080>
We did several things to enable the new memory logic in msdkdec:
(1) Always use video memory for the decoder in the Linux path;
(2) Hand the negotiated pool to alloc_pool stored in GstMsdkContext, which
is used in the mfxFrameAllocator:Alloc callback to allocate surfaces as
MediaSDK needs; this pool is also available to the decoder itself;
(3) Modify the decide_allocation process so that pool negotiation happens
before gst_msdk_init_decoder, ensuring the pool is decided and ready for
use in the mfxFrameAllocator:Alloc callback; then handle the case where
we need to do a GPU to CPU copy.
(4) In gst_msdkdec_finish_task, adjust the copy path to follow the
logic in (3).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3439>
Add a pool creation function with a "2" suffix for later use; it creates a
VA pool for video memory on Linux and keeps the system pool on Windows.
gst_msdkdec_create_buffer_pool2 will replace gst_msdkdec_create_buffer_pool
once all the memory allocation modifications from the following commits are in place.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3439>
Rewrite gst_msdk_frame_alloc and give it a _2 suffix before applying it.
It uses the negotiated buffer pool stored in GstMsdkContext to allocate buffers
in the mfxFrameAllocator:Alloc callback, then extracts the VASurface from each
buffer, wraps it as an mfxMemID and passes these IDs to MediaSDK/oneVPL.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3439>
The `add_candidate` vfunc of the GstWebRTCICE interface gained a GstPromise
argument, which is an ABI break. We're not aware of any external user of this
interface yet so we think it's OK.
This change is useful in cases where the application needs to bubble up errors
from the underlying ICE agent, for instance when the agent was given an invalid
ICE candidate.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3960>
The signal triggers an asynchronous task on the PC thread, but in some cases it
can be useful for apps to be notified when the task has completed. The
corresponding method of the PeerConnection spec also returns a Promise, so the
interface is now more coherent with the spec.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3960>
The av1decoder class does not implement the ->parse() virtual function,
so an av1parse element always needs to be placed before it. We should
therefore set the decoder's needs_format to TRUE; then, if there is no
parser before it, it can fail early with a "not-negotiated" error rather
than continuing and generating unexpected errors.
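gst_video_decoder_set_needs_format() is the base-class knob being referred to; a minimal sketch of how a subclass typically uses it (names here are illustrative, not the actual av1decoder code):
```
#include <gst/video/gstvideodecoder.h>

/* Illustrative only: in the real decoders this call lives in the subclass'
 * instance init function. */
static void
configure_decoder (GstVideoDecoder * decoder)
{
  /* With needs_format set, the base class returns a "not-negotiated" error
   * early when no parser has provided proper caps, instead of decoding
   * unparsed data and failing in unexpected ways later. */
  gst_video_decoder_set_needs_format (decoder, TRUE);
}
```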
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The vp9decoder class does not implement the ->parse() virtual function,
so a vp9parse element always needs to be placed before it. We should
therefore set the decoder's needs_format to TRUE; then, if there is no
parser before it, it can fail early with a "not-negotiated" error rather
than continuing and generating unexpected errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The vp8decoder class does not implement the ->parse() virtual function
and can only accept frame-aligned data. If an element such as filesrc
feeds it unaligned data, the behaviour is undefined. We should therefore
set the decoder's needs_format to TRUE; then it can fail early with a
"not-negotiated" error rather than continuing and generating unexpected
errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The mpeg2decoder class does not implement the ->parse() virtual function,
so an mpegvideoparse element always needs to be placed before it. We should
therefore set the decoder's needs_format to TRUE; then, if there is no
parser before it, it can fail early with a "not-negotiated" error rather
than continuing and generating unexpected errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The h264decoder class does not implement the ->parse() virtual function,
so an h264parse element always needs to be placed before it. We should
therefore set the decoder's needs_format to TRUE; then, if there is no
parser before it, it can fail early with a "not-negotiated" error rather
than continuing and generating unexpected errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
The h265decoder class does not implement the ->parse() virtual function,
so an h265parse element always needs to be placed before it. We should
therefore set the decoder's needs_format to TRUE; then, if there is no
parser before it, it can fail early with a "not-negotiated" error rather
than continuing and generating unexpected errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4064>
Raw 608 caps can now contain a "field" field. On the input side it
signifies that the input raw 608 data is attached to either field 0 or
field 1; on the output side it allows selecting whether to extract the raw
608 data for field 0 or field 1 of field-aware formats.
In addition, ccconverter can now be used to "convert" 608 field 0 to 608
field 1 (and conversely); this is passthrough, as the change only needs to
happen in the caps.
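A hedged sketch of what such caps could look like when built programmatically (the exact field type for "field" is an assumption based on the existing closedcaption/x-cea-608 raw caps):
```
#include <gst/gst.h>

/* Assumption: raw 608 caps use the "closedcaption/x-cea-608" media type with
 * format=raw, and the new "field" field is an integer 0 or 1. */
static GstCaps *
make_raw_608_caps (gint field)
{
  return gst_caps_new_simple ("closedcaption/x-cea-608",
      "format", G_TYPE_STRING, "raw",
      "field", G_TYPE_INT, field, NULL);
}
```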
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4031>
The dimensions of the overlay texture directly correspond to the size of the overlay **buffer**, which is given by its video meta.
The dimensions at which the overlay should be displayed directly correspond to the overlay `render_width` and `render_height`.
This matches the behavior of glimagesink.
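A sketch of where those two sizes come from with the composition API, assuming a GstVideoOverlayRectangle obtained from a GstVideoOverlayCompositionMeta:
```
#include <gst/video/video.h>

/* Sketch: texture size comes from the overlay buffer's video meta, while the
 * on-screen size comes from the rectangle's render geometry. */
static void
get_overlay_sizes (GstVideoOverlayRectangle * rect,
    guint * tex_w, guint * tex_h, guint * render_w, guint * render_h)
{
  GstBuffer *buf;
  GstVideoMeta *vmeta;
  gint rx, ry;

  /* Overlay pixel buffer and its video meta: the texture dimensions. */
  buf = gst_video_overlay_rectangle_get_pixels_unscaled_argb (rect,
      GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE);
  vmeta = gst_buffer_get_video_meta (buf);
  *tex_w = vmeta->width;
  *tex_h = vmeta->height;

  /* Render rectangle: the dimensions at which to display the overlay. */
  gst_video_overlay_rectangle_get_render_rectangle (rect, &rx, &ry,
      render_w, render_h);
}
```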
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4046>
Data in an APP segment is only considered malformed when its length is less
than 6 bytes, because it should contain at least an id string. Otherwise, if
the id string is not handled, no warning is raised, only a debug message noting it.
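A minimal sketch of that rule (hypothetical helper, not the actual jpegparse code):
```
#include <gst/gst.h>

/* Sketch only: `len` is the APP segment payload length, `id_handled` says
 * whether the id string at the start of the segment is one we understand. */
static void
check_app_segment (guint len, gboolean id_handled)
{
  if (len < 6) {
    /* Too short to even contain an id string: malformed data. */
    GST_WARNING ("malformed APP segment: only %u bytes", len);
  } else if (!id_handled) {
    /* Valid but unknown id: no warning, just a debug notice. */
    GST_DEBUG ("unhandled APP segment id, ignoring");
  }
}
```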
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3943>
When the QoS stats are reset (e.g. when changing the source), the counters for
dropped + rendered frames are reset to zero, which results in negative values
for their difference. This causes max-fps to get pegged at an extremely
high value.
```
fpsdisplaysink.c:373:display_current_fps:<fpsdisplaysink0> Updated max-fps to 36840705952231460864.000000
```
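One way to guard against the reset, sketched with illustrative counter names rather than the actual fpsdisplaysink fields:
```
/* Sketch: frames_rendered / frames_dropped are the current QoS counters,
 * last_frames_rendered / last_frames_dropped the values from the previous
 * update. If the counters went backwards, the stats were reset: restart the
 * measurement interval instead of computing a huge bogus max-fps. */
if (frames_rendered < last_frames_rendered ||
    frames_dropped < last_frames_dropped) {
  last_frames_rendered = frames_rendered;
  last_frames_dropped = frames_dropped;
  return;   /* skip this update */
}
```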
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3989>
Instead of creating a new decoder instance for every new sequence,
re-use the configured decoder instance via the cuvidReconfigureDecoder()
API. This makes the output surfaces reusable without re-allocation.
Also, so that applications can reserve higher-resolution output surfaces,
"init-max-width" and "init-max-height" properties are added to each
decoder.
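A heavily hedged sketch of the reconfiguration call (struct field names from the CUVID headers; the surrounding logic is illustrative only):
```
#include <nvcuvid.h>

/* Sketch: reconfigure an existing decoder for a new coded size instead of
 * destroying and recreating it. `decoder` must have been created with
 * ulMaxWidth/ulMaxHeight large enough to cover the new resolution. */
static CUresult
reconfigure_for_new_sequence (CUvideodecoder decoder,
    unsigned int width, unsigned int height, unsigned int num_surfaces)
{
  CUVIDRECONFIGUREDECODERINFO info = { 0, };

  info.ulWidth = width;
  info.ulHeight = height;
  info.ulTargetWidth = width;
  info.ulTargetHeight = height;
  info.ulNumDecodeSurfaces = num_surfaces;

  return cuvidReconfigureDecoder (decoder, &info);
}
```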
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3884>
Call the input resource map functions (i.e., nvEncRegisterResource,
nvEncUnregisterResource, nvEncMapInputResource, and
nvEncUnmapInputResource) only once and reuse the mapped resources,
instead of mapping/unmapping per input frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3884>
Wrap the mapped decoder output surface in GstCudaMemory and output it
without any copy operation. Also, so that applications can control the
number of zero-copyable output surfaces, a "num-output-surfaces" property
is added.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3884>
The encoder does not support reconfiguration, and simply deinitializing it
and then initializing it again causes deadlocks.
Also, only reconfigure and drain the encoder if the video info has
actually changed.
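A sketch of that check using gst_video_info_is_equal() (illustrative; the real element keeps its own state):
```
#include <gst/video/video.h>

/* Sketch: only drain and reconfigure when the negotiated video info actually
 * differs from what the encoder was initialized with. */
static gboolean
needs_reconfigure (const GstVideoInfo * current, const GstVideoInfo * new_info)
{
  return !gst_video_info_is_equal (current, new_info);
}
```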
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3957>
Fixes #1358.
Passing ARGB64/RGBA64 to vtenc caused the encoding to fail
when running on M1 Pro/Max variants with macOS 12.x, so let's
remove these formats from the caps when such a scenario is detected.
This issue appears to have been fixed OS-side in macOS 13.0.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3912>
This was causing incorrect output when seeking, especially
when used with a multithreaded source like `videotestsrc n-threads=2`.
It should now correctly wait for frames still being processed by VT
while vtdec is flushing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3922>
We are using std::isspace() with one parameter; that overload is declared
in the <cctype> header.
```
win32ipcutils.cpp(34): error C2672: 'std::isspace': no matching overloaded function found
win32ipcutils.cpp(34): error C2780: 'bool std::isspace(_Elem,const std::locale &)': expects 2 arguments - 1 provided
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3933>
This will be used for CUDA stream sharing.
* Add a GstCudaPoolAllocator object. The pool allocator controls
synchronization of the allocated memory objects.
* Modify the gst_cuda_allocator_alloc() API so that callers can specify/set
a GstCudaStream object for the newly allocated memory.
* Add a GST_CUDA_MEMORY_TRANSFER_NEED_SYNC flag in addition to the
existing GST_CUDA_MEMORY_TRANSFER_NEED_{UPLOAD,DOWNLOAD}.
The flag indicates that GPU commands queued in the CUDA stream
may not have finished yet, and the caller should take care of the
synchronization.
The flag is controlled by the GstCudaMemory object if the memory holds a
GstCudaStream (otherwise GstCudaMemory performs synchronization as it did
before this commit). Specifically, the GstCudaMemory object sets
the new flag automatically when the memory is mapped with
(GST_MAP_CUDA | GST_MAP_WRITE) flags. The caller needs to unset
the flag via GST_MEMORY_FLAG_UNSET() if the memory is already synchronized
by client code (see the sketch after this list).
* Add a gst_cuda_memory_sync() helper function to perform synchronization.
* Why not use a CUevent object to keep track of the synchronization status?
CUDA already provides a fence-like interface via the CUevent object,
but the cuEventRecord/cuEventQuery APIs are not zero-cost operations.
Instead, in this version, the status is tracked using map and
object flags.
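A hedged sketch of the caller-side flow described above, using the names introduced by this commit (the exact gst_cuda_memory_sync() signature is assumed; error handling reduced to a map check):
```
#include <gst/cuda/gstcudamemory.h>

/* Sketch: write into CUDA memory, then decide who synchronizes. */
static void
write_and_sync (GstBuffer * buffer)
{
  GstMapInfo map;
  GstMemory *mem = gst_buffer_peek_memory (buffer, 0);

  /* Mapping with GST_MAP_CUDA | GST_MAP_WRITE makes GstCudaMemory raise the
   * GST_CUDA_MEMORY_TRANSFER_NEED_SYNC flag when the memory holds a stream. */
  if (!gst_buffer_map (buffer, &map, GST_MAP_CUDA | GST_MAP_WRITE))
    return;
  /* ... queue GPU work against the memory's CUDA stream here ... */
  gst_buffer_unmap (buffer, &map);

  /* Either let a later consumer call gst_cuda_memory_sync(), or, if client
   * code already synchronized the stream itself, clear the flag instead:
   * GST_MEMORY_FLAG_UNSET (mem, GST_CUDA_MEMORY_TRANSFER_NEED_SYNC); */
  gst_cuda_memory_sync ((GstCudaMemory *) mem);
}
```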
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3629>
Also keep the default encoder settings, simply overriding them with the
values we care about.
This mirrors the encoder configuration behaviour of ffmpeg.
Add AVTP Raw Video Format de-payload support. The element supports only the
GRAY16_LE output format, so:
- active pixels (no vertical blanking),
- progressive mode,
- 8 and 16-bit pixel depth,
- mono pixel format,
- grayscale colorspace.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1335>
Add AVTP Raw Video Format payload support. The element supports only the GRAY16_LE
input format, so:
- active pixels (no vertical blanking),
- progressive mode,
- 8 and 16-bit pixel depth,
- mono pixel format,
- grayscale colorspace.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1335>
Due to a bug in the VT API, attempting to encode interlaced content
with ProRes results in an error, halting the pipeline instead of
gracefully falling back to software encoding.
This workaround should be removed in the future if Apple ever fixes the issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3222>
The VA API does not define scaling list entries for the U/V planes of
4:4:4 streams. In practice we have not encountered 4:4:4 output for H264
so far, and scaling lists are not used frequently, so we just print a
warning and ignore these scaling list values.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3749>
* Extend the protocol so that the client can notify the server when it releases shared memory
* The server holds the shared memory object until the client releases it
* Add an allocator/buffer pool to reuse shared memory objects and buffers
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3765>
If we know there's only one stream we care about and we
don't have to synchronise audio and video, or send RRs,
we might just as well not hook up all the RTCP bits and
use fewer threads and sockets and simplify the pipeline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3531>
Spec 7.1.3:
If a memory object does not have the VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
property, then vkFlushMappedMemoryRanges must be called in order to guarantee
that writes to the memory object from the host are made available to the host
domain, where they can be further made available to the device domain via a
domain operation. Similarly, vkInvalidateMappedMemoryRanges must be called to
guarantee that writes which are available to the host domain are made visible to
host operations.
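For non-coherent memory, that flush looks roughly like this (plain Vulkan, not the GStreamer wrapper; offset/size must respect nonCoherentAtomSize alignment or use VK_WHOLE_SIZE):
```
#include <vulkan/vulkan.h>

/* Sketch: make host writes to a mapped, non-HOST_COHERENT memory object
 * available to the device by flushing the mapped range. */
static VkResult
flush_mapped_memory (VkDevice device, VkDeviceMemory memory,
    VkDeviceSize offset, VkDeviceSize size)
{
  VkMappedMemoryRange range = {
    .sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
    .pNext = NULL,
    .memory = memory,
    .offset = offset,
    .size = size,
  };

  return vkFlushMappedMemoryRanges (device, 1, &range);
}
```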
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3723>
There is no byte-stream/au format for AV1 (only for H264), and the
encoder actually outputs obu-stream/tu instead of an annexb
stream-format similar to the H264 byte-stream format.
Without this, the encoder can't be used with elements that require a
specific AV1 stream-format, e.g. the MP4 or Matroska/WebM muxers.
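In caps terms, the output would look something like this (a sketch following the obu-stream/tu convention mentioned above):
```
#include <gst/gst.h>

/* Sketch: AV1 src caps advertising obu-stream/tu rather than a non-existent
 * byte-stream/au combination. */
static GstCaps *
make_av1_output_caps (void)
{
  return gst_caps_new_simple ("video/x-av1",
      "stream-format", G_TYPE_STRING, "obu-stream",
      "alignment", G_TYPE_STRING, "tu", NULL);
}
```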
It is really difficult for people to figure out why nvcodec has
0 features. Even the debug log is cryptic. Also make sure the errors
go to the ERROR log level, which is more likely to be enabled by
default.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3776>
The _alloca CRT function is deprecated. Moreover, stack allocation
for strings is not a good idea. We could use the _malloca inline
function instead, but none of the _alloca uses in the d3d11
library/plugin are on a performance-critical path anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3652>
This allows handling input buffers with non-default strides, which the
element code already handled fine.
Without this, a potentially expensive conversion was needed.
The private data is not copied over for the SVT AV1 encoder, so this code
path would never have worked.
Instead of relying on the PTS, which is not required to be unique or even
present, we always take the oldest frame, as AV1 has no frame
reordering / B-frames.
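The base class has a helper for exactly that; a sketch of picking the frame to finish (assuming a GstVideoEncoder subclass):
```
#include <gst/video/gstvideoencoder.h>

/* Sketch: since AV1 has no B-frames/reordering, the frame matching the next
 * encoder output is simply the oldest pending one. */
static GstVideoCodecFrame *
get_frame_for_output (GstVideoEncoder * encoder)
{
  /* Returns a new reference to the oldest frame still pending, or NULL. */
  return gst_video_encoder_get_oldest_frame (encoder);
}
```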
It doesn't matter whether they're allocated via GSlice or malloc(): the
allocator is completely irrelevant, and all local tags need to be in the
primer so they can be handled.
This didn't have any effect in practice because all local tags that
appear in the muxer are allocated via GSlice; only in the demuxer might
they be allocated via malloc().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3699>
As the path to the gir file is passed to hotdoc.generate_doc() rather than
the build target itself, meson doesn't know about the dependency.
In turn, as the CI doesn't build everything before building the
documentation target, some gir files might not exist, for instance
in the case of gst-rtsp-server, causing the output documentation to
be empty.
The error occurred silently because hotdoc accepts wildcards for
*-sources arguments, so it doesn't warn about a missing gir file, as
it is legitimate for glob matching to resolve to nothing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3686>
The VAAPI vaQueryVideoProcPipelineCaps() requires the context as a
parameter. So far we have always passed VA_INVALID_ID and it happened to
succeed, but the API does not say that is allowed and, in theory, a valid
context is required. New platforms now really do need a valid context, so
we have to delay that query until the context is created.
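The call in question, sketched with a valid context instead of VA_INVALID_ID (plain libva; error handling reduced to the status code):
```
#include <va/va.h>
#include <va/va_vpp.h>

/* Sketch: query the video-processing pipeline caps only once a real context
 * exists, instead of passing VA_INVALID_ID. */
static VAStatus
query_pipeline_caps (VADisplay dpy, VAContextID context,
    VABufferID * filters, unsigned int num_filters, VAProcPipelineCaps * caps)
{
  return vaQueryVideoProcPipelineCaps (dpy, context, filters, num_filters,
      caps);
}
```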
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3613>
NVDEC launches a CUDA kernel function (ConvertNV12BLtoNV12 or similar)
when CuvidMapVideoFrame() is called. This appears to be NVDEC's internal
post-processing kernel, perhaps converting tiled YUV to linear YUV or
something similar.
The problem with not passing a CUDA stream to the CuvidMapVideoFrame()
call is that NVDEC's internal kernel will use the default CUDA stream,
causing lots of other CUDA API calls to be blocked/serialized.
To avoid this unnecessary blocking, we should pass our own
CUDA stream object to the CuvidMapVideoFrame() call.
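A sketch of passing the stream at the CUVID API level (in GStreamer the decoder and stream come from GstCudaContext/GstCudaStream, which is omitted here; the assumption is a progressive frame):
```
#include <nvcuvid.h>

/* Sketch: map a decoded picture while directing NVDEC's internal
 * post-processing kernel to our own CUDA stream instead of the default one. */
static CUresult
map_output_frame (CUvideodecoder decoder, int pic_idx, CUstream stream,
    unsigned long long *dev_ptr, unsigned int *pitch)
{
  CUVIDPROCPARAMS params = { 0, };

  params.progressive_frame = 1;
  params.output_stream = stream;  /* avoids serializing on the default stream */

  return cuvidMapVideoFrame (decoder, pic_idx, dev_ptr, pitch, &params);
}
```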
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3605>
If a discontinuity is detected in push mode, we need to clear the cached section
observations since they might have changed.
This was only done properly when operating with TIME segments (dvb, udp,
adaptive demuxers, ...) but not with BYTE segments (such as with custom app/fd
sources).
We still don't want to flush out the PCR observations, since those might be
needed for seeking in push-based BYTE sources.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1650
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3584>