
Plex GPU transcoding in Docker on LXC on Proxmox

I recently had to get GPU transcoding in Plex to work. The setup involved running Plex inside a Docker container, inside an LXC container, running on top of Proxmox. I found some general guidelines online, but none that covered all aspects (especially the dual layer of virtualization). I ran into a few challenges getting this to work properly, so I’ll attempt to give a complete guide here.

I’ll assume you’ve got Proxmox and LXC set up, ready to go, running Debian 11 (Bullseye). In my example I’ll be running an LXC container named docker1 (ID 101) on my Proxmox host. Everything will be headless (i.e. no X involved). The LXC container will be privileged, with fuse=1,nesting=1 set as features. I’ll use an Nvidia RTX A2000 as the GPU. All commands will be run as root.

Proxmox host

The first step is to install the drivers on the host. Nvidia has an official Debian repo that we could use. However, that introduces a potential problem: we need to install the drivers in the LXC container later without the kernel modules. I could not find a way to do this using the packages from the official Debian repo, and therefore had to install the driver manually within the LXC container. The other aspect is that both the host and the LXC container need to run the exact same driver version (or else it won’t work). If we install from the official Debian repo on the host, and do a manual driver install in the LXC container, we could easily end up with different versions (e.g. whenever you do an apt upgrade on the host). To keep this as consistent as possible, we’ll install the driver manually on both the host and within the LXC container.
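Later in the guide it can be handy to verify that the host and the LXC container actually agree on the driver version. A minimal check (assuming nvidia-smi is installed and working in both places) is to query the version explicitly on both sides and compare;

# run on both the Proxmox host and inside the LXC container; the two values must match
nvidia-smi --query-gpu=driver_version --format=csv,noheader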

# install pve headers matching your current kernel
apt install pve-headers-$(uname -r)

# download + install nvidia driver
# 510.47.03 was the latest at the time of this writing
wget -O NVIDIA-Linux-x86_64-510.47.03.run https://us.download.nvidia.com/XFree86/Linux-x86_64/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run

chmod +x NVIDIA-Linux-x86_64-510.47.03.run
./NVIDIA-Linux-x86_64-510.47.03.run --check
# answer "no" when it asks if it should update X config
./NVIDIA-Linux-x86_64-510.47.03.run

With the drivers installed, we need to add some udev rules. This is to make sure the proper kernel modules are loaded, and that all the relevant device files are created upon boot.

# add kernel modules
echo -e '\n# load nvidia modules\nnvidia-drm\nnvidia-uvm' >> /etc/modules-load.d/modules.conf

# add the following to /etc/udev/rules.d/70-nvidia.rules
# will create relevant device files within /dev/ during boot
KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"
SUBSYSTEM=="module", ACTION=="add", DEVPATH=="/module/nvidia", RUN+="/usr/bin/nvidia-modprobe -m"

To avoid the driver/kernel module being unloaded whenever the GPU is not in use, we should run the Nvidia-provided persistence service. It’s made available to us after the driver install.

# copy and extract
cp /usr/share/doc/NVIDIA_GLX-1.0/samples/nvidia-persistenced-init.tar.bz2 .
bunzip2 nvidia-persistenced-init.tar.bz2
tar -xf nvidia-persistenced-init.tar

# remove old, if any (to avoid masked service)
rm /etc/systemd/system/nvidia-persistenced.service

# install
chmod +x nvidia-persistenced-init/install.sh
./nvidia-persistenced-init/install.sh

# check that it's ok
systemctl status nvidia-persistenced.service
rm -rf nvidia-persistenced-init*

If you’ve come this far without any errors, you’re ready to reboot the Proxmox host. After the reboot, you should see output similar to the following (GPU type/info will of course differ depending on your GPU);

root@foobar:~# nvidia-smi
Wed Feb 23 01:34:17 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A2000    On   | 00000000:82:00.0 Off |                  Off |
| 30%   36C    P2    4W /  70W |       1MiB /  6138MiB |     0%       Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

root@foobar:~# systemctl status nvidia-persistenced.service
● nvidia-persistenced.service - NVIDIA Persistence Daemon
     Loaded: loaded (/lib/systemd/system/nvidia-persistenced.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-02-23 00:18:04 CET; 1h 16min ago
    Process: 9300 ExecStart=/usr/bin/nvidia-persistenced --user nvidia-persistenced (code=exited, status=0/SUCCESS)
   Main PID: 9306 (nvidia-persiste)
      Tasks: 1 (limit: 154511)
     Memory: 512.0K
        CPU: 1.309s
     CGroup: /system.slice/nvidia-persistenced.service
             └─9306 /usr/bin/nvidia-persistenced --user nvidia-persistenced

Feb 23 00:18:03 foobar systemd[1]: Starting NVIDIA Persistence Daemon...
Feb 23 00:18:03 foobar nvidia-persistenced[9306]: Started (9306)
Feb 23 00:18:04 foobar systemd[1]: Started NVIDIA Persistence Daemon.

root@foobar:~# ls -alh /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Feb 23 00:17 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Feb 23 00:17 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Feb 23 00:17 /dev/nvidia-modeset
crw-rw-rw- 1 root root 511,   0 Feb 23 00:17 /dev/nvidia-uvm
crw-rw-rw- 1 root root 511,   1 Feb 23 00:17 /dev/nvidia-uvm-tools

If the correct GPU shows up in nvidia-smi, the persistence service runs fine, and all five device files are present, we’re ready to proceed to the LXC container.

LXC container

We need to add relevant LXC configuration to our container. Shut down the LXC container, and make the following changes to the LXC configuration file;

# edit /etc/pve/lxc/101.conf and add the following
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.cgroup2.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file

The numbers on the cgroup2 lines are the major device numbers from the fifth column in the device list above (via ls -alh /dev/nvidia*). For me, the two nvidia-uvm files alternate randomly between 509 and 511, while the other three stay fixed at 195. I don’t know why they alternate between the two values (if you know how to make them static, please let me know), but LXC does not complain if you configure numbers that don’t exist (i.e. we can add all three of them to make sure it works).
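If you’d rather check which major numbers are actually registered this boot (instead of reading them off the ls output), the kernel lists them in /proc/devices;

# show the major device numbers the nvidia modules registered on the host
grep -i nvidia /proc/devices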

We can now start the LXC container, and we’re ready to install the Nvidia driver. This time we’re going to install it without the kernel modules, and there is no need to install the kernel headers.

wget -O NVIDIA-Linux-x86_64-510.47.03.run https://us.download.nvidia.com/XFree86/Linux-x86_64/510.47.03/NVIDIA-Linux-x86_64-510.47.03.run
chmod +x NVIDIA-Linux-x86_64-510.47.03.run
./NVIDIA-Linux-x86_64-510.47.03.run --check
# answer "no" when it asks if it should update X config
./NVIDIA-Linux-x86_64-510.47.03.run --no-kernel-module

At this point you should be able to reboot your LXC container. Verify that the device files and the driver work as expected before moving on to the Docker setup.

root@docker1:~# ls -alh /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Feb 23 00:17 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Feb 23 00:17 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Feb 23 00:17 /dev/nvidia-modeset
crw-rw-rw- 1 root root 511,   0 Feb 23 00:17 /dev/nvidia-uvm
crw-rw-rw- 1 root root 511,   1 Feb 23 00:17 /dev/nvidia-uvm-tools

root@docker1:~# nvidia-smi
Wed Feb 23 01:50:15 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A2000    Off  | 00000000:82:00.0 Off |                  Off |
| 30%   34C    P8    10W /  70W |      3MiB /  6138MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Docker container

Now we can move on to getting Docker working. We’ll be using docker-compose, and we’ll make sure to have the latest version by first removing the Debian-provided docker and docker-compose. We’ll also install the Nvidia-provided Docker runtime. Both of these are relevant for making the GPU available within Docker.

# remove debian-provided packages
apt remove docker-compose docker docker.io containerd runc

# install docker from official repository
apt update
apt install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

apt update
apt install docker-ce docker-ce-cli containerd.io

# install docker-compose
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# install docker-compose bash completion
curl \
    -L https://raw.githubusercontent.com/docker/compose/1.29.2/contrib/completion/bash/docker-compose \
    -o /etc/bash_completion.d/docker-compose

# install nvidia-docker2
apt install -y curl
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list
curl -s -L https://nvidia.github.io/nvidia-container-runtime/experimental/$distribution/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list

apt update
apt install nvidia-docker2

# restart systemd + docker (if you don't reload systemd, it might not work)
systemctl daemon-reload
systemctl restart docker
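Before testing with a container, it’s worth a quick sanity check that Docker actually picked up the Nvidia runtime (the exact output format differs between Docker versions);

# the list of runtimes should now include "nvidia"
docker info | grep -i runtimes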

We should now be able to run Docker containers with GPU support. Let’s test it.

root@docker1:~# docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
Tue Feb 22 22:15:14 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A2000    Off  | 00000000:82:00.0 Off |                  Off |
| 30%   29C    P8     4W /  70W |      1MiB /  6138MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

root@docker1:~# cat docker-compose.yml
version: '3.7'
services:
  test:
    image: tensorflow/tensorflow:latest-gpu
    command: python -c "import tensorflow as tf;tf.test.gpu_device_name()"
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]

root@docker1:~# docker-compose up
Starting test_test_1 ... done
Attaching to test_test_1
test_1  | 2022-02-22 22:49:00.691229: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
test_1  | To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
test_1  | 2022-02-22 22:49:02.119628: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /device:GPU:0 with 4141 MB memory:  -> device: 0, name: NVIDIA RTX A2000, pci bus id: 0000:82:00.0, compute capability: 8.6
test_test_1 exited with code 0

Yay! It’s working! Let’s put the final pieces together for a fully working Plex docker-compose.yml.

version: '3.7'

services:
  plex:
    container_name: plex
    hostname: plex
    image: linuxserver/plex:latest
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    environment:
      TZ: Europe/Paris
      PUID: 0
      PGID: 0
      VERSION: latest
      NVIDIA_VISIBLE_DEVICES: all
      NVIDIA_DRIVER_CAPABILITIES: compute,video,utility
    network_mode: host
    volumes:
      - /srv/config/plex:/config
      - /storage/media:/data/media
      - /storage/temp/plex/transcode:/transcode
      - /storage/temp/plex/tmp:/tmp

And it’s working! Woho!
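To double-check that Plex really uses the GPU, start playback of something that forces a transcode and look at the GPU from inside the container (the Nvidia runtime injects nvidia-smi for us since we request the utility capability). The Plex transcoder should show up in the process list;

# run while a transcode is active; the Plex transcoder should be listed as a GPU process
docker exec -it plex nvidia-smi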

Problems encountered

When trying to get everything working, I had a few challenges. The solutions have all been incorporated in the above guide, but I’ll briefly mention them for reference here.

1. nvidia-smi not working in Docker container

I got the error message Failed to initialize NVML: Unknown Error when running nvidia-smi within the Docker container. This turned out to be caused by cgroup2 superseding cgroup on the host.

My initial workaround was to disable cgroup2 and revert back to cgroup. This can be done by adding a kernel boot parameter, like this;

# assuming EFI/UEFI
# other commands for legacy BIOS
echo "$(cat /etc/kernel/cmdline) systemd.unified_cgroup_hierarchy=false" > /etc/kernel/cmdline
proxmox-boot-tool refresh
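For reference, on a legacy BIOS install (where Proxmox boots via GRUB rather than systemd-boot), the equivalent would roughly be to add the same parameter to /etc/default/grub; a sketch I haven’t tested myself;

# legacy BIOS (GRUB): append systemd.unified_cgroup_hierarchy=false to the
# GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, then regenerate the config
update-grub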

However, the proper fix is to change the lxc.cgroup.devices.allow lines in the LXC config file to lxc.cgroup2.devices.allow, which permanently resolves the issue.
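In other words, the device lines in /etc/pve/lxc/101.conf need to use the cgroup2 key, as in the configuration shown earlier;

# old (cgroup v1) key, no longer works once the host uses cgroup2
lxc.cgroup.devices.allow: c 195:* rwm
# correct (cgroup v2) key
lxc.cgroup2.devices.allow: c 195:* rwm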

2. docker-compose GPU config

The official documentation for both docker-compose and Plex states that GPU support is added via the runtime parameter. With the latest docker and docker-compose from the stable Debian 11 repository, I could not get the runtime: nvidia parameter to work.

The newer method for consuming a GPU in docker-compose, the deploy parameter, is only supported in docker-compose v1.28.0 and later, which is newer than what’s included in the stable Debian 11 repository. We therefore need to install the latest versions (as done above) and use the new deploy parameter.
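A quick way to confirm that you’re running a new enough docker-compose (and not the Debian-packaged one) is;

# should report 1.28.0 or newer (1.29.2 if you followed the install above)
docker-compose --version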

3. docker-compose GPU environment variables

GPU transcoding in Plex did not work with just the deploy parameter. It also needs the two environment variables below in order to work. This was not clearly documented, and caused some frustration when trying to get everything working.

NVIDIA_VISIBLE_DEVICES: all
NVIDIA_DRIVER_CAPABILITIES: compute,video,utility

4. High CPU usage from fuse-overlayfs

I also observed high CPU usage from fuse-overlayfs (the storage driver I’m using for Docker), caused by the Plex container. It turned out to be the “Detecting intros” background task, which transcodes the audio (to find the intros). It used /tmp as its transcode directory, which was part of the / filesystem mounted on fuse-overlayfs. This happened despite the transcode path being set to /transcode (Settings -> Transcoder temporary directory). Normal transcoding seems to use /transcode, so it appears that only the “Detecting intros” task has this problem. Bind mounting /tmp from outside the overlay (as done in the docker-compose.yml above) made the issue go away.
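If you want to verify that the container really writes these paths outside the overlay, you can check what is mounted at them from within the container. A small check, assuming the container is named plex as above;

# both paths should show up as their own bind mounts, not the fuse-overlayfs root
docker exec plex df -h /transcode /tmp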
