#StackBounty: #ubuntu #python #aws #amazon-ec2 #cuda CUDA not available on Deep Learning AMI (DLAMI) running on an Amazon EC2 P2 instance

Bounty: 50

I am running the Ubuntu 18.04 Deep Learning AMI (DLAMI) on a p2.xlarge EC2 instance on AWS, but CUDA is not available in my Python interpreter. I assumed CUDA would work out of the box, since this AMI is supposedly designed to work with PyTorch and CUDA.

I am trying to run my code within the pytorch_latest_p37 conda environment that comes pre-installed with the DLAMI. It is supposed to use Python 3.7 and come with PyTorch 1.7.1 built against CUDA 11.0:

ubuntu@ip-111-21-33-212:~$ source activate pytorch_latest_p37
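
As a sanity check, here is a quick way to confirm what the activated environment actually resolves to (nothing here is specific to the DLAMI; it just prints the interpreter and the installed torch build):

import sys
import torch

print(sys.executable)     # which interpreter the env resolves to
print(sys.version)        # an unmodified pytorch_latest_p37 env should report 3.7.x
print(torch.__version__)  # should report 1.7.1; pip CPU-only wheels are often tagged "+cpu"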

The outputs of nvidia-smi and nvcc both seem to indicate that CUDA is installed:

(pytorch_latest_p37) ubuntu@ip-111-21-33-212:~$ nvidia-smi
Sun Jul 18 07:51:09 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.03   Driver Version: 450.119.03   CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   32C    P8    30W / 149W |      0MiB / 11441MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+


(pytorch_latest_p37) ubuntu@ip-111-21-33-212:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0
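
As I understand it, though, the CUDA version shown by nvidia-smi is the driver's and the one from nvcc is the system toolkit's; both are separate from the CUDA support compiled into the installed torch wheel itself, which is what actually matters here. A minimal way to check the wheel, using only the standard torch API:

import torch

print(torch.version.cuda)              # CUDA version the torch wheel was built with; None for CPU-only builds
print(torch.backends.cudnn.version())  # bundled cuDNN version, or None if torch has no CUDA support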

But torch.cuda.is_available() returns False in IPython, and I get an error saying that Torch was not compiled with CUDA support:

(pytorch_latest_p37) ubuntu@ip-111-21-33-212:~$ ipython
Python 3.9.5 (default, Jun  4 2021, 12:28:51)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.22.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import torch

In [2]: torch.cuda.is_available()
Out[2]: False

In [3]: torch.zeros(1).cuda()
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-2-0904fac96cba> in <module>
----> 1 torch.zeros(1).cuda()

~/anaconda3/envs/pytorch_latest_p37/lib/python3.9/site-packages/torch/cuda/__init__.py in _lazy_init()
    164                 "Cannot re-initialize CUDA in forked subprocess. " + msg)
    165         if not hasattr(torch._C, '_cuda_getDeviceCount'):
--> 166             raise AssertionError("Torch not compiled with CUDA enabled")
    167         if _cudart is None:
    168             raise AssertionError(

AssertionError: Torch not compiled with CUDA enabled
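
One detail I noticed: the traceback path goes through lib/python3.9/site-packages even though the environment is named pytorch_latest_p37, so the environment may have been upgraded and torch swapped for a CPU-only build at some point. As a stopgap I am guarding the device selection so the script at least runs on CPU (a standard fallback pattern, not a fix):

import torch

# Fall back to CPU instead of asserting, so the script keeps running
# while the CUDA build issue is sorted out.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.zeros(1).to(device)
print(x.device)  # prints "cpu" until a CUDA-enabled torch build is installed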

What is going on here? What do I need to do to get CUDA running on P2/P3 instances?

Thanks!

