Installing TensorFlow 2.19 with GPU support on Fedora 42

Categories: python, tensorflow, cuda

Author: Hygor X. Araújo

Published: May 3, 2025

This post aims to be a simple guide on how to install TensorFlow 2.19 with GPU support on Fedora 42.

If you only want to use TensorFlow with GPU support without the hassle of installing the NVIDIA CUDA Toolkit (NVCC), it is probably easier to run it with Docker (see the Docker install guide). This guide is for those who want to install TensorFlow with GPU support directly on the host system.
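For context, the Docker route mentioned above boils down to a single command once the NVIDIA Container Toolkit is set up; a sketch (the exact image tag is an assumption based on TensorFlow's published naming scheme):

```shell
# Hypothetical Docker alternative: run TensorFlow with GPU support without
# installing the CUDA toolkit on the host. Requires the NVIDIA Container
# Toolkit; the 2.19.0-gpu tag follows TensorFlow's image naming convention.
docker run --gpus all -it --rm tensorflow/tensorflow:2.19.0-gpu \
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```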

Requirements

  • Python 3.9-3.12
  • GCC 13.3
  • CUDA-enabled GPU
  • NVIDIA CUDA Toolkit 12.5
  • NVIDIA cuDNN 9.3
  • NVIDIA driver version 525.60.13 or newer
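Before starting, it can help to confirm what the system already provides. A minimal pre-flight sketch (assumes python3 is on PATH; the nvidia-smi line only reports something once the driver from step 2 is installed):

```shell
# Report currently installed versions of the main prerequisites
python3 --version
gcc --version 2>/dev/null | head -n 1 || echo "gcc not found"

# Driver version; only works after the NVIDIA driver is installed (step 2)
nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null || true
```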

Steps

1. Set up GCC 13.3 (based on the guide linked in the references)

NVCC 12.5 requires GCC 13.3, which is not available in the Fedora 42 repositories, so we need to build it from source. The steps are as follows:

  1. Install the build requirements
sudo dnf group install development-tools

sudo dnf install mpfr-devel gmp-devel libmpc-devel \
zlib-devel glibc-devel.i686 glibc-devel isl-devel \
g++ gcc-gnat gcc-gdc libgphobos-static
  2. Get the source code
wget https://ftp.gwdg.de/pub/misc/gcc/releases/gcc-13.3.0/gcc-13.3.0.tar.xz
  3. Extract the source code
tar xvf gcc-13.3.0.tar.xz
  4. Configure GCC
# Make a build directory
cd gcc-13.3.0
mkdir build
cd build

# Configure GCC for the build
../configure --enable-bootstrap \
--enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto \
--prefix=/usr/local --program-suffix=-13.3 --mandir=/usr/share/man \
--infodir=/usr/share/info --enable-shared --enable-threads=posix \
--enable-checking=release --enable-multilib --with-system-zlib \
--enable-__cxa_atexit --disable-libunwind-exceptions \
--enable-gnu-unique-object --enable-linker-build-id \
--with-gcc-major-version-only --enable-libstdcxx-backtrace \
--with-libstdcxx-zoneinfo=/usr/share/zoneinfo --with-linker-hash-style=gnu \
--enable-plugin --enable-initfini-array --with-isl \
--enable-offload-targets=nvptx-none --enable-offload-defaulted \
--enable-gnu-indirect-function --enable-cet --with-tune=generic \
--with-arch_32=i686 --build=x86_64-redhat-linux \
--with-build-config=bootstrap-lto --enable-link-serialization=1 \
--with-default-libstdcxx-abi=new
  5. Build GCC (this will probably take a long time)
make -j<number_of_cores_to_use>
  6. Install GCC
sudo make install
  7. Verify the installation
gcc-13.3 -v

2. Install the NVIDIA driver

  1. Install the driver: you can follow the RPM Fusion guide linked in the references. Basically, you need the RPM Fusion repositories enabled in dnf; then install the packages below:
sudo dnf install akmod-nvidia xorg-x11-drv-nvidia-cuda
  2. Reboot your system
  3. Check the modules with:
# loaded modules
lsmod | grep nvidia

# module information
modinfo nvidia

# NVIDIA system management interface
nvidia-smi

Note: you will need to disable Secure Boot for the akmod-nvidia module to load.

3. Install NVIDIA CUDA Toolkit 12.5

  1. Get the installation script from the NVIDIA website. The command will look like the one below:
# Get the script
wget https://developer.download.nvidia.com/compute/cuda/12.5.0/local_installers/cuda_12.5.0_555.42.02_linux.run
  2. Set up the alternative gcc for CUDA
# Create the directory structure that CUDA will use after installation
sudo mkdir -p /usr/local/cuda-12.5/bin

# Create a link for the gcc 13.3 version
sudo ln -s /usr/local/bin/gcc-13.3 /usr/local/cuda-12.5/bin/gcc
  3. Start the installation. Since we are not using the default system GCC from Fedora 42, we need to make sure the installation script finds the correct version. We will do that by overriding two environment variables during execution: CC and PATH.
sudo CC="/usr/local/cuda-12.5/bin/gcc" PATH="/usr/local/cuda-12.5/bin:$PATH" sh cuda_12.5.0_555.42.02_linux.run --silent --toolkit --no-opengl-libs

This installs the toolkit with the default options, accepts the End User License Agreement (EULA), and skips reinstalling the NVIDIA driver.

  4. Update the environment variables in your .bashrc or .zshrc:
export PATH=/usr/local/cuda-12.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.5/lib64:$LD_LIBRARY_PATH
  5. Check the installation with nvcc --version

4. Install and use TensorFlow 2.19

Using the uv package manager (you can also install it with pip), we can install TensorFlow 2.19 with GPU support. The steps are as follows:
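As noted, pip works too; a sketch of the equivalent setup with plain venv and pip (versions copied from the steps in this guide; an alternative path, not the guide's exact method):

```shell
# Alternative to uv: standard venv + pip with the same packages and versions
python3.12 -m venv .venv
source .venv/bin/activate
pip install nvidia-cudnn-cu12==9.3.0.75 "tensorflow[and-cuda]==2.19"
```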

  1. Create a virtual environment with uv venv --python 3.12
  2. Install cuDNN with uv add nvidia-cudnn-cu12==9.3.0.75
  3. Install TensorFlow with uv add "tensorflow[and-cuda]==2.19"
  4. Manually link the required libraries into the TensorFlow package directory
# Navigate into the main TensorFlow package directory
pushd $(dirname $(python -c 'import tensorflow as tf; print(tf.__file__)'))

# Create symbolic links to the NVIDIA shared libraries (.so files)
# This links all *.so* files from ../nvidia/*/lib/ into the current directory
ln -svf ../nvidia/*/lib/*.so* .

# Return to your previous directory
popd

# Create a symbolic link for ptxas (CUDA assembler)
# Find ptxas within the pip-installed nvidia_cuda_nvcc package and link it to the venv's bin
ln -sf $(find $(dirname $(dirname $(python -c "import nvidia.cuda_nvcc; print(nvidia.cuda_nvcc.__file__)"))/bin/) -name ptxas -print -quit) $VIRTUAL_ENV/bin/ptxas
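The pushd line above uses Python itself to locate the TensorFlow package directory. The same lookup can be written as a small helper; a sketch (demonstrated with the stdlib json package so it runs even without TensorFlow installed):

```python
import os
import importlib.util

def package_dir(name: str) -> str:
    """Return the directory containing a package, equivalent to
    dirname(python -c 'import X; print(X.__file__)')."""
    spec = importlib.util.find_spec(name)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(name)
    return os.path.dirname(spec.origin)

# For TensorFlow you would call package_dir("tensorflow"); shown here
# with the stdlib 'json' package so the sketch runs anywhere.
print(package_dir("json"))
```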
  5. Check the installation with the code below. It should list your GPUs, printing something like [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))
# [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
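Beyond listing devices, a quick numerical check confirms TensorFlow actually executes ops (note this also runs on CPU, so by itself it does not prove GPU use):

```python
import tensorflow as tf

# Small matmul as a smoke test; runs on the GPU if one is available,
# otherwise TensorFlow silently falls back to the CPU.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
c = tf.matmul(a, b)
print(c.numpy().tolist())  # [[19.0, 22.0], [43.0, 50.0]]
```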

References