Daniel Morin 8169863f01 analytics: Make GstTensor more suitable for inline allocation
GstTensor contained two fields (data, dims) that were dynamically allocated. For
data it is a GstBuffer, and we have a pool for efficient memory management. For
dims it is a small array storing the dimensions of the tensor. The dims field
can be allocated in place by moving it to the end of the structure. This allows
better memory management when GstTensor is stored in an analytics meta,
which will take advantage of the _clear interface for re-use.

- New API to allocate and free GstTensor
To continue supporting use cases where GstTensor is not stored in an
analytics meta, we provide gst_tensor_alloc, gst_tensor_alloc_n and
gst_tensor_free to facilitate memory management.
- Make GstTensor a boxed type
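The inline-allocation pattern the commit describes can be sketched with a C flexible array member. This is an illustrative sketch only: the struct and function names below are hypothetical stand-ins, not the actual GstAnalytics API.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical illustration of the pattern: placing the dims array at the
 * end of the structure as a flexible array member lets a single allocation
 * hold both the fixed fields and the dimensions, instead of keeping a
 * separately malloc'd dims pointer. */
typedef struct {
  size_t num_dims;
  /* ... other fixed fields (id, data type, GstBuffer *data, ...) ... */
  size_t dims[];               /* allocated in place with the struct */
} InlineTensor;

static InlineTensor *
inline_tensor_alloc (size_t num_dims)
{
  InlineTensor *t = malloc (sizeof (InlineTensor) + num_dims * sizeof (size_t));
  if (t != NULL) {
    t->num_dims = num_dims;
    memset (t->dims, 0, num_dims * sizeof (size_t));
  }
  return t;
}
```

A single free() then releases the whole tensor, which is also what makes the struct friendlier to in-place (re)initialization via a _clear-style interface.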

Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000>
2024-11-08 14:58:49 +00:00
| File | Last commit | Date |
| --- | --- | --- |
| gstml.h | onnx: add gstonnxinference element | 2023-10-20 00:33:29 +00:00 |
| gstonnx.c | tensordecoders: Move decoder out of the ONNX plugin | 2024-11-08 14:58:49 +00:00 |
| gstonnxclient.cpp | analytics: Make GstTensor more suitable for inline allocation | 2024-11-08 14:58:49 +00:00 |
| gstonnxclient.h | analytics: Move tensor meta to the analytics library | 2024-11-08 14:58:49 +00:00 |
| gstonnxinference.cpp | analytics: Move batch to GstTensor | 2024-11-08 14:58:49 +00:00 |
| gstonnxinference.h | onnx: Remove enums file | 2023-10-20 00:33:29 +00:00 |
| meson.build | tensordecoders: Move decoder out of the ONNX plugin | 2024-11-08 14:58:49 +00:00 |
| README.md | onnx: Update build instructions to use onnx-runtime 1.16.3 | 2023-12-22 14:43:23 -05:00 |

# ONNX Build Instructions

## Build

  1. Do a recursive checkout of onnxruntime tag 1.16.3.
  2. `$SRC_DIR` and `$BUILD_DIR` are local source and build directories.
  3. To run with CUDA, both the CUDA and cuDNN libraries must be installed.

```shell
$ cd $SRC_DIR
$ git clone --recursive https://github.com/microsoft/onnxruntime.git && cd onnxruntime && git checkout -b v1.16.3 refs/tags/v1.16.3
$ mkdir $BUILD_DIR/onnxruntime && cd $BUILD_DIR/onnxruntime
```
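The `$SRC_DIR` and `$BUILD_DIR` placeholders can point at any writable locations; for example (the paths below are illustrative, not required by onnxruntime):

```shell
# Example layout for the source and build trees (illustrative paths):
SRC_DIR="$HOME/src"
BUILD_DIR="$HOME/build"
mkdir -p "$SRC_DIR" "$BUILD_DIR"
echo "sources: $SRC_DIR"
echo "builds:  $BUILD_DIR"
```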

### CPU

```shell
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
```

### CUDA

```shell
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_CUDA=ON -Donnxruntime_CUDA_HOME=/usr/local/cuda -Donnxruntime_CUDNN_HOME=/usr/local/cuda -DCMAKE_CUDA_ARCHITECTURES=native -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
```
### Intel oneDNN

  1. Install Intel oneDNN.
  2. Clone, build and install the Khronos OpenCL SDK. On Fedora, the build dependencies are:

```shell
$ sudo dnf install libudev-devel libXrandr-devel mesa-libGLU-devel mesa-libGL-devel libX11-devel intel-opencl
```

  3. Build and install onnxruntime:

```shell
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_DNNL=ON -Donnxruntime_DNNL_GPU_RUNTIME=ocl -Donnxruntime_DNNL_OPENCL_ROOT=$SRC_DIR/OpenCL-SDK/install $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
```