Although many users know Nvidia graphics cards as hardware for running games, these cards are used in many other areas. In particular, graphics cards are used in artificial intelligence to complete heavy workloads quickly. The Tesla graphics cards designed by Nvidia are especially popular in this field, but users can also carry out artificial intelligence work with almost any Nvidia graphics card in their own computer.

Running artificial intelligence workloads on the CPU has some problems. The most important of these is that the amount of data the CPU can process per operation is limited, so the total processing time stretches over a very long period. The GPU on the graphics card, by contrast, has a much greater data-processing capacity, and it works on the RAM located on the graphics card instead of the system RAM, which speeds up artificial intelligence work considerably. To summarize: an artificial intelligence task that might keep you waiting for days on the CPU can be completed in just a few hours on the GPU.

Nvidia graphics cards are indispensable for AI-supported computer vision, image processing, object tracking, face recognition, natural language processing and autonomous vehicle technologies. To help Pardus users make more active use of these cards and to contribute to their artificial intelligence work, I will explain how to install CUDA and cuDNN.

First of all, let us briefly look at what CUDA and cuDNN are. CUDA (Compute Unified Device Architecture) is Nvidia's parallel computing platform; through an interface based on the C language, it lets you manage the memory and processing units of the GPU. cuDNN is a GPU-accelerated library for deep learning operations, created to speed up the Python frameworks (TensorFlow, PyTorch, etc.) used in this field. With these two pieces of software we will accelerate artificial intelligence work on Pardus.

Important

To avoid some installation errors, you can first remove the existing Nvidia drivers from your system with the commands below.

sudo apt-get purge nvidia*

sudo apt-get autoremove nvidia*
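
To confirm that no driver packages are left behind, you can list any remaining Nvidia packages; if the removal was clean, the command below should print little or nothing.

dpkg -l | grep -i nvidia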

CUDA Installation

First, go to the CUDA Toolkit page at the following link: https://developer.nvidia.com/cuda-toolkit

On this page, click the Download Now button.

In the next step, select the options shown below in order: the operating system type, processor architecture, distribution and distribution version. Since Pardus is developed on a Debian base, we will follow the steps for Debian. In this step you can choose between the local, network and runfile installer types; the steps below continue with the network (deb) installation.

We will then see a screen like the one below; when we run the commands it lists, the CUDA installation will be completed successfully.

wget https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo add-apt-repository contrib
sudo apt-get update
sudo apt-get -y install cuda

In some cases, the third command (sudo add-apt-repository contrib) may produce an error. If so, perform the following steps.

How to Fix the “sudo add-apt-repository contrib” Error

sudo apt-get install software-properties-common
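
After this package is installed, run the failing command again and then continue with the remaining steps.

sudo add-apt-repository contrib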

In some cases, you also need to confirm some steps manually during the installation. For example, during the CUDA installation you may be asked for permission to act on a conflicting package; confirm this action.

After these steps, the screen below will appear in your terminal, and you then need to restart your system.
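
You can restart directly from the terminal with the following command.

sudo reboot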

After your system has rebooted, run the following command in the terminal to verify that the driver is working and to see the CUDA version reported for your graphics card.

nvidia-smi

We have successfully installed CUDA. Pay attention to the CUDA version shown here, especially for the cuDNN installation; we will install cuDNN for CUDA 12.1.
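
If you also want to use the CUDA compiler (nvcc) from the terminal, the toolkit directories may need to be added to your environment. The following is a minimal sketch that assumes the toolkit was installed under /usr/local/cuda-12.1; adjust the path if your version differs.

# assumes CUDA 12.1 under /usr/local/cuda-12.1; change the path for your version
export PATH=/usr/local/cuda-12.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
nvcc --version

To make these settings permanent, you can add the two export lines to the end of your ~/.bashrc file.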

cuDNN Installation

After successfully installing CUDA, open the cuDNN page at the following link: https://developer.nvidia.com/cudnn. You will see a screen like the one below; click the Download cuDNN button on this screen.

After this step you need an Nvidia Developer account: either create one or sign in with your existing developer account.

After logging in, the following page will appear; accept the terms of use specified by Nvidia. Then select the version that matches the CUDA version we obtained above with the nvidia-smi command.

Since the CUDA version noted above is 12.1, expand the Download cuDNN v8.8.1 (March 8th, 2023), for CUDA 12.x section and download the Local Installer for Debian 11 (Deb) package. Install this package with the following command.

sudo dpkg -i cudnn-local-repo-debian11-8.8.1.3_1.0-1_amd64.deb

Then run the following command to install the GPG key of the local repository.

sudo cp /var/cudnn-local-repo-debian11-8.8.1.3/cudnn-local-313BFFCD-keyring.gpg /usr/share/keyrings/

Update the repositories.

sudo apt-get update

Then run the following commands in order to install the runtime library and the developer library.

sudo apt-get install libcudnn8=8.9.0.131-1+cuda12.1
sudo apt-get install libcudnn8-dev=8.9.0.131-1+cuda12.1
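
The exact version numbers depend on the packages provided by the local repository you just installed; if apt cannot find the versions above, you can list the versions that are actually available and use those in the commands instead.

apt-cache madison libcudnn8
apt-cache madison libcudnn8-dev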

After these steps, we have successfully installed cuDNN.
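
You can confirm that both packages are installed by checking their status with dpkg.

dpkg -l libcudnn8 libcudnn8-dev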

Warning

For the CUDA and cuDNN installation, pay attention to the graphics card you have and to the artificial intelligence library you want to use. For example, the CUDA and cuDNN versions supported by PyTorch and TensorFlow, according to their official documentation, may differ. Choose the versions suitable for the artificial intelligence projects you want to develop.
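
As a final check, you can ask the deep learning libraries themselves whether they see the GPU. This is a minimal sketch that assumes PyTorch and/or TensorFlow are already installed in your Python environment; run only the line for the library you use.

python3 -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If the first command prints True, or the second lists at least one GPU device, the library can use your graphics card.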

Resources

https://tr.wikipedia.org/wiki/CUDA

https://developer.nvidia.com/cuda-toolkit

https://developer.nvidia.com/cudnn

https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html