
Proxmox - Unprivileged Jellyfin Container

At the end of last year I bought an Intel NUC to play around with Proxmox VE and set up some containers as well as virtual machines. One of the applications that I wanted to run on this system was Jellyfin, an open-source media center. The main reason for this was that I wanted to stream my series and film collection from my NAS to my TV without having to worry about whether the TV supports a specific video codec or not. The NUC that I bought has a 13th-generation Intel processor whose Quick Sync Video feature allows hardware-accelerated transcoding of both H.264 and H.265 video streams.

The NUC that I bought

Interestingly, the official Jellyfin documentation has a section on how to install it as an LXC container on Proxmox. However, I found that section rather short and not very beginner-friendly, especially if you want to get hardware acceleration working. In this blog post I therefore want to show how to set up Jellyfin in an unprivileged LXC container with hardware acceleration for video transcoding. Additionally, I’m going to mount an SMB share from my NAS to access the video files. I hope this proves helpful to other people out there in the same situation, or to myself in case I have to set up the system again in the future πŸ˜„.

Before we begin, please be aware that the commands used in the following sections are tailored to my NUC and its integrated Intel graphics. If you use a different system, e.g. one with an NVIDIA graphics card, you need to adjust the commands to your hardware.

Initial Setup #

I assume that at least Proxmox 8 is already installed and running on the host hardware. Throughout this blog post, several commands need to be executed either on the Proxmox host or in the Jellyfin LXC container. To differentiate between these two cases, I use Proxmox in the title of the code blocks to refer to the Proxmox host itself and Container for the LXC container in which Jellyfin runs.

In the first step, install some utilities on the Proxmox host that we need throughout this tutorial. This includes tools to test the Intel graphics card of the NUC as well as the cifs-utils package, which is required for mounting SMB shares.

Proxmox: Install Dependencies
$ apt install vim vainfo intel-gpu-tools cifs-utils

After installing the packages, execute the vainfo command to check if the graphics card is correctly detected. The output of the command should look similar to this and display several profiles that the graphics card supports:

Proxmox: Test Graphics Card
$ vainfo
error: can't connect to X server!
libva info: VA-API version 1.17.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_17
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.17 (libva 2.12.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 23.1.1 ()
vainfo: Supported profile and entrypoints
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileNone                   : VAEntrypointStats
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
[...]

Jellyfin Installation #

Afterwards, create the LXC container for Jellyfin. This can be done through the Proxmox web UI in the usual way, which I’m not going to show here. Make sure to assign enough resources to the container and mark it as unprivileged, as privileged containers are considered unsafe. Please be aware that I use Debian 12 as the template for my container. If you use another Linux distribution, you may need to adjust the following commands accordingly.
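If you prefer the command line over the web UI, the container can also be created with the pct tool. The following is only a rough sketch: the template file name, the storage names (local, local-lvm) and the resource sizes are assumptions that you have to adapt to your environment.

Proxmox: Create Container (CLI Alternative)
# Sketch only -- adjust template name, storages and resources to your setup
$ pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname jellyfin \
    --unprivileged 1 \
    --cores 4 --memory 4096 \
    --rootfs local-lvm:16 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp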

Once the container is set up, update it and install the necessary dependencies such as the graphics driver as well as Jellyfin itself. The following commands are therefore executed inside the newly created container:

Container: Install Software
$ apt update && apt upgrade
# Install necessary requirements
$ apt install curl vim vainfo software-properties-common
# Download the Jellyfin install script from their website
# This command was taken from the Jellyfin documentation and should produce no output
$ diff <( curl -s https://repo.jellyfin.org/install-debuntu.sh -o install-debuntu.sh; \
                  sha256sum install-debuntu.sh ) <( curl -s https://repo.jellyfin.org/install-debuntu.sh.sha256sum )
# Start the Jellyfin installation process and follow the instructions on screen
$ bash install-debuntu.sh
# Add the non-free repositories to get access to the proprietary Intel graphics driver
$ add-apt-repository -y non-free non-free-firmware
$ apt update
# Install the graphics card driver
$ apt install intel-media-va-driver-non-free

If you execute the vainfo command inside the container at this point, no profiles will show up, in contrast to the output on the Proxmox host. This behavior is expected, as we haven’t passed the graphics card to the container yet. We will do this in the next step.

GPU Passthrough #

During its installation, Jellyfin creates its own jellyfin user and adds it to the render group. Before proceeding, verify this and note the group ID of the render group, as we need it later on:

Container: Identify Group ID
# The render group has ID 106 and the jellyfin user is part of it
$ getent group render
render:x:106:jellyfin
# Shutdown the container
$ shutdown -h now

Once the container is shut down, add a device passthrough via the Proxmox web UI. Select your container, go to the Resources menu, click on the Add button and choose Device Passthrough as seen in the following screenshot:

Add a device passthrough to the Jellyfin container

In the following popup, two important things must be specified. First, the Device Path, where we need to enter the absolute path to the device node responsible for rendering. Assuming the underlying system has a single graphics card, this should be /dev/dri/renderD128. Second, we need to define the GID of the group inside the container to which the device should be mapped. Here, enter the GID of the render group that we identified before.

Specify Device Path and GID in container
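For completeness: on current Proxmox 8 releases the same passthrough can also be configured on the CLI with a pct device entry. Treat this as a sketch and consult the pct man page of your version; 101 is the ID of my container and 106 the GID of the render group identified above.

Proxmox: Device Passthrough (CLI Alternative)
# Pass the render node to container 101 and map it to GID 106 inside
$ pct set 101 -dev0 /dev/dri/renderD128,gid=106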

Once the changes have been saved, the container can be restarted. To check if the passthrough was successful, take a look at the contents of the /dev/dri folder inside the container. A renderD128 device node should appear here, and it should belong to the render group:

Container: Check Device
$ ls -lh /dev/dri/
total 0
crw-rw---- 1 nobody render 226, 128 Sep  5 16:52 renderD128

Additionally, run the vainfo command inside the container as the jellyfin user. Its output should now be similar to the one received on the Proxmox host at the start of this post. Additional profiles may show up here because of the installed proprietary driver:

Container: Check Graphics Card
$ sudo -u jellyfin vainfo
error: XDG_RUNTIME_DIR is invalid or not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.17.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_17
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.17 (libva 2.12.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 23.1.1 ()
vainfo: Supported profile and entrypoints
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileNone                   : VAEntrypointStats
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
[...]

If several profiles show up then the device passthrough was successful and hardware acceleration should be possible. Before setting up Jellyfin we mount an SMB share in the next section.

SMB Share #

As unprivileged containers can’t mount remote SMB shares directly, a little workaround is necessary. We will mount the share directly on the Proxmox host and then pass the mounted folder to the container.

To start, create a folder inside the container where the share should appear later on. In my case I created one inside /mnt/:

Container: Create Folder
$ mkdir /mnt/satellizer_anime

Additionally, identify the user ID of the jellyfin user as we need it later on.

Container: Identify User ID
# User ID of jellyfin is 103
$ id jellyfin
uid=103(jellyfin) gid=112(jellyfin) groups=112(jellyfin),44(video),106(render)

Afterwards switch to the CLI of the Proxmox host and create a folder where you want to mount the SMB share. In my case I chose /mnt/ again and created a new subfolder:

Proxmox: Create Folder
$ mkdir /mnt/satellizer_mount/

Next create a .smbcredentials file and add the username and password that are required to mount the share from the NAS:

Proxmox: Credential File
$ cat /root/.smbcredentials
username=<USERNAME>
password=<PASSWORD>
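Since this file contains the password in plain text, it should only be readable by root:

Proxmox: Restrict Permissions
$ chmod 600 /root/.smbcredentials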

Then add a new line to the /etc/fstab file on the Proxmox host, specifying the mount and additional options:

Proxmox: /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
//satellizer/Anime/ /mnt/satellizer_mount/ cifs credentials=/root/.smbcredentials,x-systemd.automount,iocharset=utf8,rw,uid=100103,vers=3,_netdev 0 0

The different parts of the line have the following meaning:

  • //satellizer/Anime: The UNC path to the remote folder on my NAS
  • /mnt/satellizer_mount: The folder on the Proxmox host where the SMB share should be mounted
  • cifs: Specifies that this is an SMB share
  • credentials=/root/.smbcredentials: Path to the previously created file with the necessary credentials
  • x-systemd.automount: Mount the share on demand via a systemd automount unit, which also re-establishes the mount if the share becomes unavailable
  • iocharset=utf8: Use UTF-8 for file and folder names
  • rw: Mount the share read-write
  • uid=100103: Owner of the share on the host. User ID 100103 is the ID that the jellyfin user gets mapped to on the Proxmox host
  • vers=3: Use SMB version 3
  • _netdev: This is a network share, so it shouldn’t be mounted before the network is online

Note

When using an unprivileged container the user IDs of users inside the container are mapped to user IDs on the host system according to the following scheme:

\(ID\_HOST = ID\_CONTAINER + 100000\)

Since the user ID of jellyfin is 103, we have to enter 100103 as uid.
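The offset of 100000 stems from the subordinate ID ranges assigned to root on the Proxmox host. On a default installation they should look like this:

Proxmox: Check ID Mapping
$ grep root /etc/subuid /etc/subgid
/etc/subuid:root:100000:65536
/etc/subgid:root:100000:65536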

With the line added, reload the systemd configuration and try to mount the share. To verify whether the mount was successful, enter the mount folder and check if the remote files are present.

Proxmox: Mount Share
$ systemctl daemon-reload
$ mount -a
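In addition to listing the folder contents, findmnt can report whether the mount is active:

Proxmox: Verify Mount
# Shows source, type and options of the mount if it exists
$ findmnt /mnt/satellizer_mount/
$ ls -lh /mnt/satellizer_mount/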

In the last step pass the mounted folder to the LXC container with the help of the pct set command:

Proxmox: Pass Share to Container
$ pct set 101 -mp0 /mnt/satellizer_mount/,mp=/mnt/satellizer_anime/

In this case 101 is the ID of my Jellyfin LXC container. The first path is the local path on the Proxmox host where the SMB share is mounted. The second path is the folder inside the container from which the content should be accessible. Once the command has been executed, go to the CLI of the container and verify whether you can see the files inside the given folder.
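This check can also be performed directly from the Proxmox host with pct exec, which runs a command inside the container:

Proxmox: Verify Access From Host
# Runs ls inside container 101; the remote files should show up
$ pct exec 101 -- ls -lh /mnt/satellizer_anime/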

If that’s the case: congratulations, both hardware acceleration and the SMB share have been successfully set up. Now it’s time to perform a final test.

Final Test #

Connect to the web UI of Jellyfin and set up the application by following the instructions in the wizard. During this process, add the share, which appears as a normal folder inside the container. After the initial setup is done, we need to enable hardware acceleration in Jellyfin itself. This can be done by navigating to the Admin Dashboard, selecting the Playback tab and then the Transcoding section. Set the correct hardware acceleration method there, in my case Intel QuickSync (QSV).
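Optionally, you can check beforehand whether Jellyfin’s bundled ffmpeg is able to initialize a QSV device at all. The path below is where the jellyfin-ffmpeg package usually ends up; adjust it if your installation differs. The nullsrc pipeline merely gives ffmpeg something trivial to process; if the device cannot be created, the command aborts with an error:

Container: Optional QSV Smoke Test
$ sudo -u jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg -v verbose \
      -init_hw_device qsv=hw -f lavfi -i nullsrc -frames:v 1 -f null -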

Now play a video and adjust the resolution such that the system is forced to transcode the video stream. At the same time go to the CLI of the Proxmox host and execute the intel_gpu_top command. If the graphics card is used for transcoding (which means hardware acceleration is working as expected) activity should be shown here like this:

Proxmox: Verify Usage
$ intel_gpu_top
intel-gpu-top: Intel Alderlake_p (Gen12) @ /dev/dri/card0 - 1298/1498 MHz;   0% RC6;  3.40/14.14 W
       1776 irqs/s

         ENGINES     BUSY                                                                MI_SEMA  MI_WAIT
       Render/3D   49.24% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                 |      0%      0%
         Blitter    0.00% |                                                             |      0%      0%
           Video   38.73% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                      |      0%      0%
    VideoEnhance   33.60% |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ                                         |      0%      0%

   PID              NAME      Render/3D            Blitter              Video            VideoEnhance
707236            ffmpeg |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž        ||                  ||β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž          ||β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž           |

Conclusion #

We have reached the end of this post and set up Jellyfin in an unprivileged LXC container with hardware acceleration. We also passed an SMB share to the container so that we can access our video files on the NAS. I hope this post was helpful and that you learned a thing or two. Thanks for reading and until next time πŸ‘‹.