Name of the caps feature for indicating the use of #GstCudaMemory
#G_TYPE_BOOLEAN Allows stream-ordered allocation. Default is %FALSE
Name of the CUDA memory type
A #GstAllocator subclass for CUDA memory
Controls the active state of @allocator. The default #GstCudaAllocator is
stateless and therefore the active state is ignored, but subclass implementations
(e.g., #GstCudaPoolAllocator) will require explicit active state control
for their internal resource management.
This method is conceptually identical to the gst_buffer_pool_set_active() method.
%TRUE if the active state of @allocator was successfully updated.
a #GstCudaAllocator
the new active state
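For pool-backed subclasses, the active state typically brackets the allocator's working lifetime. A minimal sketch, assuming a previously created #GstCudaPoolAllocator, might look like:

```c
#include <gst/cuda/gstcuda.h>

static void
use_pool_allocator (GstCudaPoolAllocator *pool_alloc)
{
  GstCudaAllocator *alloc = GST_CUDA_ALLOCATOR (pool_alloc);

  /* Activate before acquiring memory; a pool allocator requires this,
   * while the stateless base class simply ignores the active state. */
  if (!gst_cuda_allocator_set_active (alloc, TRUE))
    return;

  /* ... acquire and use memory here ... */

  /* Deactivate to release the allocator's pooled resources. */
  gst_cuda_allocator_set_active (alloc, FALSE);
}
```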
a newly allocated #GstCudaMemory
a #GstCudaAllocator
a #GstCudaContext
a #GstCudaStream
a #GstVideoInfo
Allocates a new memory that wraps the given CUDA device memory.
@info must represent the actual memory layout; in other words, the offset, stride
and size fields of @info must match the memory layout of @dev_ptr.
By default, the wrapped @dev_ptr will be freed when the #GstMemory
is freed, if @notify is %NULL. Otherwise, if the caller sets @notify,
freeing @dev_ptr is the caller's responsibility and the default #GstCudaAllocator
will not free it.
a new #GstMemory
a #GstCudaAllocator
a #GstCudaContext
a #GstCudaStream
a #GstVideoInfo
a `CUdeviceptr` of CUDA device memory
user data
Called with @user_data when the memory is freed
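The @notify behavior described above can be sketched as follows; this is a hedged example assuming the caller keeps ownership of the device pointer (the callback name and helper are hypothetical):

```c
#include <gst/cuda/gstcuda.h>

/* Hypothetical callback: invoked when the wrapping GstMemory is freed,
 * so the application can release dev_ptr itself. */
static void
on_memory_freed (gpointer user_data)
{
  /* Push a CUDA context and call cuMemFree() on the wrapped
   * device pointer here. */
}

static GstMemory *
wrap_device_memory (GstCudaAllocator *alloc, GstCudaContext *ctx,
    GstCudaStream *stream, const GstVideoInfo *info, CUdeviceptr dev_ptr)
{
  /* Passing a non-NULL notify means the allocator will NOT free
   * dev_ptr; with notify == NULL it would free dev_ptr for us. */
  return gst_cuda_allocator_alloc_wrapped (alloc, ctx, stream, info,
      dev_ptr, NULL, on_memory_freed);
}
```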
Controls the active state of @allocator. The default #GstCudaAllocator is
stateless and therefore the active state is ignored, but subclass implementations
(e.g., #GstCudaPoolAllocator) will require explicit active state control
for their internal resource management.
This method is conceptually identical to the gst_buffer_pool_set_active() method.
%TRUE if the active state of @allocator was successfully updated.
a #GstCudaAllocator
the new active state
Allocates a new #GstMemory object with CUDA virtual memory.
a newly allocated memory object or
%NULL if allocation is not supported
a #GstCudaAllocator
a #GstCudaContext
a #GstCudaStream
a #GstVideoInfo
allocation property
allocation flags
%TRUE if active state of @allocator was successfully updated.
a #GstCudaAllocator
the new active state
A newly created #GstCudaBufferPool
The #GstCudaContext to use for the new buffer pool
Creates a #GstCudaContext with the given device_id
a new #GstCudaContext or %NULL on
failure
device-id for creating the #GstCudaContext
Note: The caller is responsible for ensuring that the CUcontext and CUdevice
represented by @handle and @device stay alive while the returned
#GstCudaContext is active.
A newly created #GstCudaContext
A
[CUcontext](https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9)
to wrap
A
[CUdevice](https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9)
to wrap
Pops the current CUDA context from the current CPU thread
%TRUE if the context was popped without error.
Queries whether @ctx can directly access memory which belongs to @peer.
%TRUE if @ctx can access @peer directly
a #GstCudaContext
a #GstCudaContext
Gets the CUDA device context. The caller must not modify or destroy the
returned device context.
the `CUcontext` of @ctx
a #GstCudaContext
Gets the texture alignment required by the device
the required texture alignment of @ctx
a #GstCudaContext
Pushes the given @ctx onto the CPU thread's stack of current contexts.
The specified context becomes the CPU thread's current context,
so all CUDA functions that operate on the current context are affected.
%TRUE if @ctx was pushed without error.
a #GstCudaContext to push onto the current thread
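The push/pop pair described above typically brackets a run of CUDA driver API calls. A minimal sketch:

```c
#include <gst/cuda/gstcuda.h>

static void
run_with_context (GstCudaContext *ctx)
{
  CUcontext popped;

  /* Make @ctx current on this CPU thread. */
  if (!gst_cuda_context_push (ctx))
    return;

  /* ... CUDA driver API calls operating on the current context ... */

  /* Balance the push; the previously current context (if any)
   * becomes current again. */
  gst_cuda_context_pop (&popped);
}
```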
External resource interop API support
Whether OS handles are supported in virtual memory management
Whether virtual memory management is supported
Frees @resource
a #GstCudaGraphicsResource
Maps a previously registered resource with the given map flags
the `CUgraphicsResource` if successful, or %NULL on failure
a #GstCudaGraphicsResource
a `CUstream`
a `CUgraphicsMapResourceFlags`
a #GstCudaGraphicsResource
Registers @buffer for access by CUDA.
Must be called from the GL context thread with the current CUDA context
pushed on the current thread.
whether @buffer was registered or not
a GL buffer object
a `CUgraphicsRegisterFlags`
Unmaps a previously mapped resource
a #GstCudaGraphicsResource
a `CUstream`
Unregisters a previously registered resource.
For a GL resource, this method must be called from the GL context thread.
Also, the current CUDA context should be pushed on the current thread
before calling this method.
a #GstCudaGraphicsResource
Creates a new #GstCudaGraphicsResource with the given @context and @type
a new #GstCudaGraphicsResource.
Free with gst_cuda_graphics_resource_free()
a #GstCudaContext
a graphics API specific context object
a #GstCudaGraphicsResourceType of resource registration
Resource represents an EGL resource.
Exports the virtual memory handle to an OS specific handle.
On Windows, @os_handle should be a pointer to a HANDLE (i.e., void **), and
a pointer to a file descriptor (i.e., int *) on Linux.
The returned @os_handle is owned by @mem and therefore the caller shouldn't
close the handle.
%TRUE if successful
a #GstCudaMemory
a pointer to an OS handle
Queries the allocation method
a #GstCudaMemory
Gets the CUDA stream object associated with @mem
a #GstCudaStream, or %NULL if the default
CUDA stream is in use
A #GstCudaMemory
Creates a `CUtexObject` with the given parameters
%TRUE if successful
A #GstCudaMemory
the plane index
filter mode
a pointer to a `CUtexObject`
Gets back the user data pointer stored via gst_cuda_memory_set_token_data()
user data pointer or %NULL
a #GstCudaMemory
a user token
Gets the user data pointer stored via gst_cuda_allocator_alloc_wrapped()
the user data pointer
A #GstCudaMemory
Sets opaque user data on a #GstCudaMemory
a #GstCudaMemory
a user token
user data
a function to invoke with @data as argument when @data needs to be
freed
Performs synchronization if needed
A #GstCudaMemory
Ensures that the #GstCudaAllocator is initialized and ready to be used.
CUDA memory allocation method
Memory allocated via cuMemAlloc() or cuMemAllocPitch()
Memory allocated via cuMemCreate() and cuMemMap()
Called to request a CUDA memory pool object. If the callee returns a memory pool,
@allocator will allocate memory via cuMemAllocFromPoolAsync().
Otherwise the device default memory pool will be used with the cuMemAllocAsync() method
the configured #GstCudaMemoryPool object
a #GstCudaAllocator
a #GstCudaContext
the user data
Creates a new #GstCudaMemoryPool with @props. If @props is %NULL,
non-exportable pool properties will be used.
a new #GstCudaMemoryPool or %NULL on
failure
a #GstCudaContext
a `CUmemPoolProps`
Gets the CUDA memory pool handle
a `CUmemoryPool` handle
a #GstCudaMemoryPool
Increases the reference count of @pool.
@pool
a #GstCudaMemoryPool
Decreases the reference count of @pool.
a #GstCudaMemoryPool
CUDA memory transfer flags
the device memory needs downloading to the staging memory
the staging memory needs uploading to the device memory
the device memory needs synchronization
A #GstCudaAllocator subclass for pooled CUDA memory
Creates a new #GstCudaPoolAllocator instance.
a new #GstCudaPoolAllocator instance
a #GstCudaContext
a #GstCudaStream
a #GstVideoInfo
Creates a new #GstCudaPoolAllocator instance for virtual memory allocation.
a new #GstCudaPoolAllocator instance
a #GstCudaContext
a #GstCudaStream
a #GstVideoInfo
Creates a new #GstCudaPoolAllocator instance with the given @config
a new #GstCudaPoolAllocator instance
a #GstCudaContext
a #GstCudaStream
a #GstVideoInfo
a #GstStructure with configuration options
Acquires a #GstMemory from @allocator. @memory should point to a memory
location that can hold a pointer to the new #GstMemory.
a #GstFlowReturn such as %GST_FLOW_FLUSHING when the allocator is
inactive.
a #GstCudaPoolAllocator
a #GstMemory
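Acquiring from a pool allocator can be sketched as follows; this is a minimal example assuming the allocator was created beforehand, and it activates the allocator first since an inactive one returns %GST_FLOW_FLUSHING:

```c
#include <gst/cuda/gstcuda.h>

static GstMemory *
acquire_from_pool (GstCudaPoolAllocator *alloc)
{
  GstMemory *mem = NULL;
  GstFlowReturn ret;

  /* The allocator must be active, otherwise acquire returns
   * GST_FLOW_FLUSHING. */
  if (!gst_cuda_allocator_set_active (GST_CUDA_ALLOCATOR (alloc), TRUE))
    return NULL;

  ret = gst_cuda_pool_allocator_acquire_memory (alloc, &mem);
  if (ret != GST_FLOW_OK)
    return NULL;

  return mem;
}
```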
Creates a new #GstCudaStream
a new #GstCudaStream or %NULL on
failure
a #GstCudaContext
Gets the CUDA stream handle
a `CUstream` handle of @stream or %NULL if @stream is %NULL
a #GstCudaStream
Increases the reference count of @stream.
@stream
a #GstCudaStream
Decreases the reference count of @stream.
a #GstCudaStream
Flag indicating that we should map the CUDA device memory
instead of system memory.
Combining #GST_MAP_CUDA with #GST_MAP_WRITE has the same semantics as though
you are writing to CUDA device/host memory.
Conversely, combining #GST_MAP_CUDA with
#GST_MAP_READ has the same semantics as though you are reading from
CUDA device/host memory.
Gets the configured allocation method
a buffer pool config
the currently configured #GstCudaStream
on @config, or %NULL if @config doesn't hold a #GstCudaStream
a buffer pool config
%TRUE if the stream-ordered allocation option was specified
a buffer pool config
whether stream-ordered allocation was requested or not
Sets the allocation method
a buffer pool config
Sets @stream on @config
a buffer pool config
a #GstCudaStream
Sets the stream-ordered allocation option
a buffer pool config
whether stream-ordered allocation is allowed
Clears a reference to a #GstCudaMemoryPool.
a pointer to a #GstCudaMemoryPool reference
Clears a reference to a #GstCudaStream.
a pointer to a #GstCudaStream reference
a new #GstContext embedding the @cuda_ctx
a #GstCudaContext
Creates a new user token value
user token value
Performs the steps necessary for retrieving a #GstCudaContext from the
surrounding elements or from the application using the #GstContext mechanism.
If the content of @cuda_ctx is not %NULL, then no #GstContext query is
necessary for a #GstCudaContext.
whether a #GstCudaContext exists in @cuda_ctx
the #GstElement running the query
preferred device-id; pass device_id >= 0 when
a specific device_id is explicitly required. Otherwise, set -1.
the resulting #GstCudaContext
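A typical use is from an element's state change handling. A minimal sketch, assuming a hypothetical element that accepts any CUDA device:

```c
#include <gst/cuda/gstcuda.h>

/* Hypothetical helper: ensure the element has a GstCudaContext,
 * querying surrounding elements or the application if needed. */
static gboolean
my_element_ensure_context (GstElement *element, GstCudaContext **context)
{
  /* A device_id of -1 means any device is acceptable. */
  if (!gst_cuda_ensure_element_context (element, -1, context)) {
    GST_ERROR_OBJECT (element, "Failed to obtain a GstCudaContext");
    return FALSE;
  }

  return TRUE;
}
```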
Creates a new #GstCudaGraphicsResource with the given @context and @type
a new #GstCudaGraphicsResource.
Free with gst_cuda_graphics_resource_free()
a #GstCudaContext
a graphics API specific context object
a #GstCudaGraphicsResourceType of resource registration
Whether the @query was successfully responded to from the passed
@context.
a #GstElement
a #GstQuery of type %GST_QUERY_CONTEXT
a #GstCudaContext
Helper function for implementing #GstElementClass.set_context() in
CUDA-capable elements.
Retrieves the #GstCudaContext in @context and places the result in @cuda_ctx.
whether the @cuda_ctx could be set successfully
a #GstElement
a #GstContext
preferred device-id; pass device_id >= 0 when
a specific device_id is explicitly required. Otherwise, set -1.
location of a #GstCudaContext
Loads the CUDA library
%TRUE if libcuda could be loaded, %FALSE otherwise
Ensures that the #GstCudaAllocator is initialized and ready to be used.
Source code to compile
Compiled CUDA assembly code if successful,
otherwise %NULL
Source code to compile
CUDA device
Loads the nvrtc library.
%TRUE if the library could be loaded, %FALSE otherwise
the GQuark for the given @id, or 0 if @id is an unknown value
a #GstCudaQuarkId
Sets the global need-pool callback function
the callbacks
a user_data argument for the callback
a destroy notify function
CUDA device API return code `CUresult`
Checks if @mem is a CUDA memory
A #GstMemory