Always getting "Failed to create CUDAExecutionProvider" #11092
Comments
Hi, your code works for me on local dev box. |
@ytaous Yes:
fijipants@FPSERVER:~$ nvidia-smi
Sat Apr 2 11:20:28 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.82.01 Driver Version: 470.82.01 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 0% 28C P8 18W / 370W | 10MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... Off | 00000000:05:00.0 Off | N/A |
| 30% 25C P8 9W / 350W | 10MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1347 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 2133 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 1347 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 2133 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+ |
I was able to get it working inside of a Docker container. Then I tried it again locally, and it worked! The only thing I did locally was install nvidia-container-toolkit. Should that be added as a requirement? |
Not sure nvidia-container-toolkit should be added as a requirement given we don't need to install that in our CIs and it's focused on building in a docker container (i.e. specific to that setup and not building with CUDA in general). Often the CUDA EP not loading is an issue with cudnn not being installed correctly, leading to missing required libraries. These instructions need to be followed exactly to get everything in the right places: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html |
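A quick way to check this (a generic sanity check, not from the thread) is to ask ORT which providers the installed package was built with and which providers a session actually ends up using; "model.onnx" below is a placeholder path:

import onnxruntime as ort

print(ort.get_available_providers())   # onnxruntime-gpu should list CUDAExecutionProvider here
sess = ort.InferenceSession("model.onnx",
                            providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
print(sess.get_providers())            # drops to ['CPUExecutionProvider'] when the CUDA EP fails to load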
@fijipants can you share the final workable Dockerfile? I am struggling on this problem for 2 days. |
I have been facing the same problem for 3 days. And I couldn't find any solution. People have asked many questions and opened threads about similar issues. unfortunately no answer from Microsoft Team. Am I wrong? |
I hate that this fixed it, but yes, importing torch before onnxruntime fixed it for me. |
I ran into this as well and had to look through a few different files before I found the imports at fault. Is the import order documented anywhere? I'm now importing ORT through a module that also imports torch:

# this file exists to make sure torch is always imported before onnxruntime
# to work around /~https://github.com/microsoft/onnxruntime/issues/11092
import torch  # NOQA
from onnxruntime import *  # NOQA |
Got the same issue on a GTX A6000; my onnxruntime version is 1.14.1 and CUDA version is 12.0. Any updates? |
Having the same issue also, GPU is A100, onnxruntime-gpu 1.14.1, CUDA 12.0. Importing torch before onnxruntime does not solve the issue. I don't know how to solve this and it is quite annoying. If someone has an idea... |
Please CC @microsoft/onnxruntime @microsoft/onnxruntime-admin |
Please make sure both CUDA and CUDNN are fully installed. The reason it works if the GPU torch package is loaded first is because torch includes the CUDA binaries in the python package. There is no actual dependency between ORT and torch, so this workaround is covering for an invalid system wide install of CUDA + CUDNN. I started with a clean Ubuntu 22.04 WSL install. I saw the error from the attempted usage of the CUDA EP until CUDA and CUDNN were fully installed. I installed CUDA and CUDNN using the NVIDIA packages. I used this script to see which libraries were loaded (adjust the model path to something valid):

import ctypes
import onnxruntime as ort

def get_loaded_libraries():
    from ctypes.util import find_library

    class _dl_phdr_info(ctypes.Structure):
        _fields_ = [
            ("dlpi_addr", ctypes.c_uint64),
            ("dlpi_name", ctypes.c_char_p),
            ("dlpi_phdr", ctypes.c_void_p),
            ("dlpi_phnum", ctypes.c_uint32),
        ]

    def match_library_callback(info, size, data):
        # Get the path of the current library
        filepath = info.contents.dlpi_name
        if filepath:
            filepath = filepath.decode("utf-8")
            print(filepath)
        return 0

    c_func_signature = ctypes.CFUNCTYPE(ctypes.c_int,  # Return type
                                        ctypes.POINTER(_dl_phdr_info),
                                        ctypes.c_size_t,
                                        ctypes.c_char_p,)
    c_match_library_callback = c_func_signature(match_library_callback)

    data = ctypes.c_char_p(b"")
    dl_iterate_phdr = ctypes.CDLL('libc.so.6').dl_iterate_phdr
    dl_iterate_phdr(c_match_library_callback, data)

get_loaded_libraries()
ort_session = ort.InferenceSession("mnist.onnx", providers=["CUDAExecutionProvider"])

Once CUDA and CUDNN were installed fully I see the following in the output from the script. List trimmed to relevant libraries.
The loaded libraries match the CUDA dependencies the ORT library has, along with the CUDNN requirement for libz to be installed. If it is using CUDA binaries from the torch install you'll see the python site-packages directory in the path, e.g. from a user-local install of torch.
|
I worked with the NVIDIA NGC docker container nvcr.io/nvidia/pytorch:23.01-py3, which has the whole environment installed out of the box, but it still failed. |
I'd suggest running the script to see which libraries are not being loaded as expected. The location of those libraries should be listed in the output. There's also a known issue listed at the bottom of this page that mentions having to manually set LD_LIBRARY_PATH to resolve a problem with cudnn. Not sure if that is potentially a factor. ORT doesn't officially support CUDA 12 yet either, so it may be better to try the 22.12 version of the container. |
Thanks, but here is the output. I have now fallen back to the 22.06 container and it works for me ... maybe ORT doesn't officially support CUDA 12 even though they announced it in the latest release notes 😆:

linux-vdso.so.1
/lib/x86_64-linux-gnu/libc.so.6
/lib/x86_64-linux-gnu/libpthread.so.0
/lib/x86_64-linux-gnu/libdl.so.2
/lib/x86_64-linux-gnu/libutil.so.1
/lib/x86_64-linux-gnu/libm.so.6
/lib/x86_64-linux-gnu/libexpat.so.1
/lib/x86_64-linux-gnu/libz.so.1
/lib64/ld-linux-x86-64.so.2
/usr/lib/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so
/lib/x86_64-linux-gnu/libffi.so.7
/usr/local/cuda-11/targets/x86_64-linux/lib/libcublasLt.so.11
/lib/x86_64-linux-gnu/librt.so.1
/lib/x86_64-linux-gnu/libgcc_s.so.1
/usr/local/cuda/compat/lib/libcuda.so.1
/usr/local/cuda-11/targets/x86_64-linux/lib/libnvrtc.11
/usr/local/cuda-11/targets/x86_64-linux/lib/libcublas.so.11
/lib/x86_64-linux-gnu/libcudnn.so.8
/lib/x86_64-linux-gnu/libstdc++.so.6
/usr/local/cuda/targets/x86_64-linux/lib/libcurand.so.10
/usr/local/cuda-11/targets/x86_64-linux/lib/libcudart.so.11.0
/lib/x86_64-linux-gnu/libnvinfer.so.8
/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8
/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_pybind11_state.cpython-38-x86_64-linux-gnu.so
/usr/lib/python3.8/lib-dynload/_json.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/core/../../numpy.libs/libopenblas64_p-r0-15028c96.3.21.so
/usr/local/lib/python3.8/dist-packages/numpy/core/../../numpy.libs/libgfortran-040039e1.so.5.0.0
/usr/local/lib/python3.8/dist-packages/numpy/core/../../numpy.libs/libquadmath-96973f99.so.0.0.0
/usr/lib/python3.8/lib-dynload/_contextvars.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/core/_multiarray_tests.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/linalg/_umath_linalg.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/fft/_pocketfft_internal.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/random/mtrand.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/random/bit_generator.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/random/_common.cpython-38-x86_64-linux-gnu.so
/usr/lib/python3.8/lib-dynload/_hashlib.cpython-38-x86_64-linux-gnu.so
/lib/x86_64-linux-gnu/libcrypto.so.1.1
/usr/local/lib/python3.8/dist-packages/numpy/random/_bounded_integers.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/random/_mt19937.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/random/_philox.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/random/_pcg64.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/random/_sfc64.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/numpy/random/_generator.cpython-38-x86_64-linux-gnu.so
/usr/lib/python3.8/lib-dynload/_opcode.cpython-38-x86_64-linux-gnu.so
/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/libonnxruntime_providers_shared.so
/usr/lib/python3.8/lib-dynload/_bz2.cpython-38-x86_64-linux-gnu.so
/lib/x86_64-linux-gnu/libbz2.so.1.0
/usr/lib/python3.8/lib-dynload/_lzma.cpython-38-x86_64-linux-gnu.so
/lib/x86_64-linux-gnu/liblzma.so.5
2023-03-28 00:26:00.089302146 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:541 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met. |
CUDA 12 is not fully supported. We noted build compatibility in the 1.14 release notes, however it was partial (only Linux) as we found issues with Windows builds later. |
FWIW printing out the loaded libraries can be done with this function on multiple platforms if you install psutil. Note that on Windows the CUDA libraries do not get loaded at import time, only when they are actually used.

def get_loaded_libraries():
    import os
    import psutil

    p = psutil.Process(os.getpid())
    for lib in p.memory_maps():
        print(lib.path) |
First, you should check your onnxruntime-gpu version, CUDA version and cuDNN version against this link: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html |
Have you resolved your issue? |
@ajinkya-algo, you need to install CUDA 11.* if you want to use onnxruntime-gpu 1.14.1; another solution is to build onnxruntime-gpu from source for CUDA 12.0. |
@BotScutters thank you very much. Importing torch before onnxruntime helped. |
@kopyl, onnxruntime does not depend on torch. You can just install onnxruntime-gpu with a compatible version of CUDA and cuDNN in a docker image. For Ubuntu, it could be like the following (CUDA 11.6 to 11.8 and cuDNN 8.5 to 8.7 shall be fine):
|
@tianleiwu thank you. Can I run it on Cuda 12? |
@kopyl, no, the official package does not support cuda 12. However, you can build from source to run in cuda 12. |
For anyone who may run into the same problem, here is everything I did over a week to bring onnxruntime-gpu up in a container on a Debian system:
1. Install the NVIDIA driver on Debian from NVIDIA, following the instructions at https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#network-repo-installation-for-debian
2. Check the driver status with nvidia-smi; the driver version and CUDA version should both be non-empty.
3. Install Docker and the NVIDIA container toolkit; the GPU configuration is from https://docs.docker.com/compose/gpu-support/. Inside the container, the command nvidia-smi should output the same info as on the base system.
4. Export LD_LIBRARY_PATH so the CUDA and cuDNN libraries can be found.
And it works fine by now. For TensorRT, just install the matching TensorRT packages as well.
|
Doesn't work |
Once I remove patchelf from setup.py and resolve issue #9754, it will be much easier for you to know which shared library was missing. |
yesss this worked for me as well. Any idea why this happens and how to solve the issue without installing torch in the environment? |
Importing torch is NOT necessary UNLESS you have not installed the required CUDA libraries and set up the environment correctly. The python torch GPU module directly includes all the CUDA libraries, which are very large - hence the size of that package is over 2.5GB. When you import torch it updates the environment the script is running in to point to the CUDA libraries directly included in that package.
(Screenshot from Windows showing some of the CUDA-related libraries in the torch package's lib directory.)
The ORT python package does not include the CUDA libraries, which is more efficient and flexible, but requires the user to install the required CUDA libraries and set up the environment correctly so that they can be found. Note that you need CUDA and cuDNN to be installed. |
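A small sketch (added here, not part of the original comment) of how to see this for yourself: list the CUDA libraries bundled inside the torch package and check what ORT reports, assuming the GPU build of torch is installed.

import os
import torch              # assumes the GPU build of torch is installed
import onnxruntime as ort

torch_lib_dir = os.path.join(os.path.dirname(torch.__file__), "lib")
print("torch bundles CUDA libraries in:", torch_lib_dir)
print(sorted(f for f in os.listdir(torch_lib_dir) if "cudnn" in f or "cublas" in f))
print("providers available to ORT:", ort.get_available_providers())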
Can you please tell me whose CUDA and cuDNN are utilized by onnxruntime?
|
@avinash-218, you can try running a python script like the following on your machine:
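The script itself was not preserved in this thread; a minimal sketch along the lines described (importing torch and onnxruntime, creating a CUDA session, then listing which cuda/cudnn libraries the process loaded via psutil) might look like this, with "model.onnx" as a placeholder path:

import torch               # the "first two lines" - swap these two imports to test whether order matters
import onnxruntime as ort
import os
import psutil

sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
for m in psutil.Process(os.getpid()).memory_maps():
    if any(s in m.path.lower() for s in ("cuda", "cudnn", "cublas", "onnxruntime")):
        print(m.path)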
Then change the order of the first two lines, and run again. On Windows, I tested torch 2.0.1 and ORT 1.16.1, and it seems that import order does not matter. In both cases, the cuda and cudnn from torch will be used. It is likely torch loads those DLLs during import, while ORT delays loading cuda and cudnn until they are used. On Linux, the result might be different so you will need to try it yourself. |
Any update on official CUDA 12 build support? |
@vadimkantorov, cuda 12 is supported. See https://onnxruntime.ai/docs/install/ for installation. |
This docker image works for me:

# Stage 1: Builder/Compiler
FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04 AS builder
ARG DEBIAN_FRONTEND=noninteractive
# Install dependencies
# Update and install dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
python3 \
python3-pip \
python3-setuptools \
python3-wheel \
&& rm -rf /var/lib/apt/lists/*
# Install Python virtual environment
RUN python3 -m pip install --upgrade pip setuptools
COPY requirements.txt /.
RUN pip3 install --no-cache-dir -r requirements.txt
RUN pip3 install onnxruntime-gpu==1.17.1 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
COPY ./nltk_data /root/nltk_data
EXPOSE 8080
# CMD ["sh","-c", "jupyter lab --notebook-dir=/home/jovyan --ip=0.0.0.0 --no-browser --allow-root --port=8888 --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]
[requirements.txt](/~https://github.com/microsoft/onnxruntime/files/14786985/requirements.txt) |
Read this if you want to know why this issue happens: the problem is that ONNX Runtime doesn't know how to search for CUDA. PyTorch knows how to search for it, and adds it to Python's internal path, so that ONNX Runtime can later find it. The bug/issue is with the ONNX Runtime library. I have coded a workaround here:
@skottmckay provided the response on how to use onnxruntime on CUDA. I have written Python code to import the CUDA binaries; use it before creating the onnxruntime session. |
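The workaround code itself was not preserved in this thread. A hedged sketch of the general idea on Linux - preloading the CUDA/cuDNN shared libraries that ship in pip wheels (the nvidia-* packages or the GPU build of torch) before importing onnxruntime, so the dynamic loader can resolve them - could look like this; the glob patterns and package layout are assumptions:

import ctypes
import glob
import os
import site

def preload_cuda_libraries():
    # Assumption: the cuDNN/cuBLAS shared libraries were installed via pip wheels
    # and therefore live under site-packages (nvidia/*/lib or torch/lib).
    for base in site.getsitepackages():
        for pattern in ("nvidia/*/lib/lib*.so*", "torch/lib/libcudnn*.so*", "torch/lib/libcublas*.so*"):
            for path in sorted(glob.glob(os.path.join(base, pattern))):
                try:
                    # RTLD_GLOBAL makes the symbols visible to the CUDA EP when it is dlopen'ed later
                    ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)
                except OSError:
                    pass   # skip libraries whose own dependencies could not be resolved

preload_cuda_libraries()
import onnxruntime as ort
print(ort.get_available_providers())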
Describe the bug
When I try to create an InferenceSession in Python with providers=['CUDAExecutionProvider'], I get the "Failed to create CUDAExecutionProvider" warning, and my CPU usage shoots up while my GPU usage stays at 0.
Urgency
Not urgent.
System information
To Reproduce
Download and run any model, e.g. here's one from PyTorch's ONNX example:
super_resolution.zip
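(Not part of the original report: a minimal repro of the kind described, assuming the zip has been extracted to super_resolution.onnx. The session silently falls back to the CPU EP, which is why CPU usage rises while the GPU stays idle.)

import onnxruntime as ort

sess = ort.InferenceSession("super_resolution.onnx", providers=["CUDAExecutionProvider"])
print(sess.get_providers())   # reports ['CPUExecutionProvider'] when the CUDA EP could not be created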
Expected behavior
It should work, or at least print out a more informative warning.
Screenshots
Additional context