diff --git a/subprojects/gst-plugins-bad/ext/onnx/README.md b/subprojects/gst-plugins-bad/ext/onnx/README.md
index c00ab625a1..191bc21ca9 100644
--- a/subprojects/gst-plugins-bad/ext/onnx/README.md
+++ b/subprojects/gst-plugins-bad/ext/onnx/README.md
@@ -3,13 +3,13 @@ ONNX Build Instructions
 
 ### Build
 
-1. do a recursive checkout of [onnxruntime tag 1.15.1](https://github.com/microsoft/onnxruntime)
+1. do a recursive checkout of [onnxruntime tag 1.16.3](https://github.com/microsoft/onnxruntime)
 1. `$SRC_DIR` and `$BUILD_DIR` are local source and build directories
 1. To run with CUDA, both [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn_762/cudnn-install/index.html) libraries must be installed.
 
 ```
 $ cd $SRC_DIR
-$ git clone --recursive https://github.com/microsoft/onnxruntime.git && cd onnxruntime && git checkout -b v1.15.1 refs/tags/v1.15.1
+$ git clone --recursive https://github.com/microsoft/onnxruntime.git && cd onnxruntime && git checkout -b v1.16.3 refs/tags/v1.16.3
 $ mkdir $BUILD_DIR/onnxruntime && cd $BUILD_DIR/onnxruntime
 ```
 
@@ -34,5 +34,3 @@ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_
 ```
 cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_DNNL=ON -Donnxruntime_DNNL_GPU_RUNTIME=ocl -Donnxruntime_DNNL_OPENCL_ROOT=$SRC_DIR/OpenCL-SDK/install $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
 ```
-
-Note: for Fedora build, you must add the following to `CMAKE_CXX_FLAGS` : `-Wno-stringop-overflow -Wno-array-bounds`
diff --git a/subprojects/gst-plugins-bad/ext/onnx/gstonnxinference.cpp b/subprojects/gst-plugins-bad/ext/onnx/gstonnxinference.cpp
index 3be3f40fec..349b3108f0 100644
--- a/subprojects/gst-plugins-bad/ext/onnx/gstonnxinference.cpp
+++ b/subprojects/gst-plugins-bad/ext/onnx/gstonnxinference.cpp
@@ -27,32 +27,8 @@
  * This element can apply an ONNX model to video buffers. It attaches
  * the tensor output to the buffer as a @ref GstTensorMeta.
  *
- * To install ONNX on your system, recursively clone the repository
- * https://github.com/microsoft/onnxruntime.git, check out tag 1.15.1
- * and build and install with cmake:
- *
- * CPU:
- *
- * cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF \
- * -Donnxruntime_BUILD_UNIT_TESTS=OFF $SRC_DIR/onnxruntime/cmake \
- * && make -j$(nproc) && sudo make install
- *
- *
- * CUDA :
- *
- * cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF \
- * -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_CUDA=ON \
- * -Donnxruntime_CUDA_HOME=/usr/local/cuda \
- * -Donnxruntime_CUDNN_HOME=/usr/local/cuda -DCMAKE_CUDA_ARCHITECTURES=native \
- * -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
- * $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
- *
- *
- * where :
- *
- * 1. $SRC_DIR and $BUILD_DIR are local source and build directories
- * 2. To run with CUDA, both CUDA and cuDNN libraries must be installed.
- *
+ * To install ONNX on your system, follow the instructions in the
+ * README.md included with this plugin.
  *
  * ## Example launch command:
  *
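For reference, the clone-and-checkout step that this patch updates in the README can be sketched as a small shell helper. This is a sketch, not part of the patch: `clone_cmd`, `ORT_REPO`, and `ORT_TAG` are hypothetical names, and the helper only composes the command string (actually running it requires network access):

```shell
#!/bin/sh
# Sketch: compose the onnxruntime checkout command for a given tag,
# mirroring the README's clone step. clone_cmd/ORT_REPO/ORT_TAG are
# hypothetical placeholders, not names from the patch.
ORT_REPO="https://github.com/microsoft/onnxruntime.git"
ORT_TAG="v1.16.3"

clone_cmd() {
  # $1: tag name, e.g. v1.16.3
  printf 'git clone --recursive %s && cd onnxruntime && git checkout -b %s refs/tags/%s\n' \
    "$ORT_REPO" "$1" "$1"
}

clone_cmd "$ORT_TAG"
```

Parameterizing the tag this way keeps the README's single source of truth for the onnxruntime version in one variable when scripting the build.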