ONNX Build Instructions

Build

  1. Do a recursive checkout of the onnxruntime tag v1.16.3.
  2. $SRC_DIR and $BUILD_DIR are local source and build directories.
  3. To run with CUDA, both the CUDA and cuDNN libraries must be installed.
$ cd $SRC_DIR
$ git clone --recursive https://github.com/microsoft/onnxruntime.git && cd onnxruntime && git checkout -b v1.16.3 refs/tags/v1.16.3
$ mkdir -p $BUILD_DIR/onnxruntime && cd $BUILD_DIR/onnxruntime
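
Before configuring, it can be worth confirming that the checkout actually landed on the intended release; `git describe` against the tags is one quick way to do that:

```shell
$ cd $SRC_DIR/onnxruntime
$ git describe --tags    # should report v1.16.3
```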

  1. CPU
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
  2. CUDA
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_CUDA=ON -Donnxruntime_CUDA_HOME=/usr/local/cuda -Donnxruntime_CUDNN_HOME=/usr/local/cuda -DCMAKE_CUDA_ARCHITECTURES=native -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
  3. Intel oneDNN

3.1 Install Intel oneDNN.

3.2 Clone, build, and install the Khronos OpenCL SDK. The build dependencies for Fedora are:

$ sudo dnf install libudev-devel libXrandr-devel mesa-libGLU-devel mesa-libGL-devel libX11-devel intel-opencl

3.3 Build and install onnxruntime:

$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_DNNL=ON -Donnxruntime_DNNL_GPU_RUNTIME=ocl -Donnxruntime_DNNL_OPENCL_ROOT=$SRC_DIR/OpenCL-SDK/install $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
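
After any of the builds above, a sanity check of the install and of the resulting GStreamer element can save debugging time later. The sketch below assumes the default `sudo make install` prefix of /usr/local and gst-plugins-bad's `onnx` meson option; library paths and header layout vary between onnxruntime versions and distributions, so adjust accordingly:

```shell
# Confirm the shared library was installed and refresh the linker cache
$ ls /usr/local/lib/libonnxruntime.so*
$ sudo ldconfig

# From a gst-plugins-bad checkout, build with the ONNX plugin enabled,
# then confirm the element registered correctly
$ meson setup builddir -Donnx=enabled
$ ninja -C builddir
$ gst-inspect-1.0 onnxinference
```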