
N100 - PCIe Passthrough - iGPU doesn't get assigned an IOMMU Group

I have been trying to pass through the iGPU of an N100 to get hardware acceleration for Frigate and Plex. My Frigate is currently running as a Home Assistant add-on - I might move it over in the future, but for now that's how it sits. On my old computer I was able to pass through the iGPU, but I am struggling a lot this time around.


I have followed various guides without success. My problem seems to be that the iGPU does not get assigned an IOMMU group - as you can see here.


Has anybody had any success with passing through the iGPU of an N100?


My GRUB looks like this:

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt intel_iommu=igfx_off i915.enable_guc=3 i915.max_vfs=7 initcall_blacklist=sysfb_init pcie_aspm=off"
GRUB_CMDLINE_LINUX="intel_iommu=on intel_iommu=igfx_off iommu=pt vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1"
GRUB_TERMINAL=console


/etc/modules looks like this:

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# Generated by sensors-detect on Fri Mar  1 21:10:48 2024
# Chip drivers
coretemp

# Generated by sensors-detect on Fri Mar  1 21:18:45 2024
# Chip drivers
coretemp


My /etc/kernel/cmdline looks like this:

 intel_iommu=on


The GPU is definitely recognized.

00:00.0 Host bridge: Intel Corporation Device 461c
00:02.0 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
00:02.1 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
00:02.2 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
00:02.3 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
00:02.4 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
00:02.5 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
00:02.6 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
00:02.7 VGA compatible controller: Intel Corporation Alder Lake-N [UHD Graphics]
00:0d.0 USB controller: Intel Corporation Device 464e
00:14.0 USB controller: Intel Corporation Alder Lake-N PCH USB 3.2 xHCI Host Controller
00:14.2 RAM memory: Intel Corporation Alder Lake-N PCH Shared SRAM
00:16.0 Communication controller: Intel Corporation Alder Lake-N PCH HECI Controller
00:17.0 SATA controller: Intel Corporation Device 54d3
00:1c.0 PCI bridge: Intel Corporation Device 54b8
00:1c.1 PCI bridge: Intel Corporation Device 54b9
00:1c.2 PCI bridge: Intel Corporation Device 54ba
00:1c.3 PCI bridge: Intel Corporation Device 54bb
00:1c.6 PCI bridge: Intel Corporation Device 54be
00:1f.0 ISA bridge: Intel Corporation Alder Lake-N PCH eSPI Controller
00:1f.3 Multimedia audio controller: Intel Corporation Alder Lake-N PCH High Definition Audio Controller
00:1f.4 SMBus: Intel Corporation Device 54a3
00:1f.5 Serial bus controller: Intel Corporation Device 54a4
01:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
03:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
04:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
05:00.0 SATA controller: JMicron Technology Corp. JMB58x AHCI SATA controller


IOMMU is enabled too:

root@proxmox:~# dmesg | grep -e DMAR -e IOMMU
[    0.009090] ACPI: DMAR 0x0000000075490000 000088 (v02 INTEL  EDK2     00000002      01000013)
[    0.009123] ACPI: Reserving DMAR table memory at [mem 0x75490000-0x75490087]
[    0.023831] DMAR: IOMMU enabled
[    0.023843] DMAR: Disable GFX device mapping
[    0.023904] DMAR: IOMMU enabled
[    0.023925] DMAR: Disable GFX device mapping
[    0.062253] DMAR: Host address width 39
[    0.062254] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.062261] DMAR: dmar0: reg_base_addr fed90000 ver 4:0 cap 1c0000c40660462 ecap 29a00f0505e
[    0.062263] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.062268] DMAR: dmar1: reg_base_addr fed91000 ver 5:0 cap d2008c40660462 ecap f050da
[    0.062270] DMAR: RMRR base: 0x0000007c000000 end: 0x000000803fffff
[    0.062272] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.062274] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.062275] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.064045] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.286432] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[    0.369787] DMAR: No ATSR found
[    0.369788] DMAR: No SATC found
[    0.369790] DMAR: dmar1: Using Queued invalidation
[    0.370145] DMAR: Intel(R) Virtualization Technology for Directed I/O
[   11.015097] pci 0000:00:02.1: DMAR: Skip IOMMU disabling for graphics
[   11.020373] pci 0000:00:02.2: DMAR: Skip IOMMU disabling for graphics
[   11.025930] pci 0000:00:02.3: DMAR: Skip IOMMU disabling for graphics
[   11.030346] pci 0000:00:02.4: DMAR: Skip IOMMU disabling for graphics
[   11.035703] pci 0000:00:02.5: DMAR: Skip IOMMU disabling for graphics
[   11.039326] pci 0000:00:02.6: DMAR: Skip IOMMU disabling for graphics
[   11.042549] pci 0000:00:02.7: DMAR: Skip IOMMU disabling for graphics

    I use a cluster of N100 mini PCs with integrated GPU passthrough without issue. You seem to be applying a couple of different configurations and have left out some information, so I will try to explain some of the process in detail.

    It used to be that you could pass through a primary GPU to a VM, and in doing so you had to make sure the host didn't load the GPU drivers and take control of the GPU before the VM. That is when you configure IOMMU passthrough and blacklist the GPU drivers on the host. It also means the GPU can be passed to just one VM, and the host no longer has access to it.

    With newer chipsets like the Alder Lake N100 it is possible to use SR-IOV to create multiple virtual GPUs (up to seven) and pass them to multiple VMs. This also lets the host keep access to the GPU.

    You seem to be doing both configurations, which isn't necessary. My suggestion would be to stick with the SR-IOV configuration.

    This configuration should only require that the SR-IOV i915 driver be built for the host and that the following GRUB configuration be used:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7"
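
    For reference, building that driver usually looks roughly like this (illustrative only - package names and the exact DKMS invocation depend on your Proxmox/kernel version, so follow the i915-sriov-dkms README or the guide linked below):

    # install build tooling and the headers for the running PVE kernel
    apt update && apt install -y build-essential dkms git pve-headers-$(uname -r)
    git clone https://github.com/strongtz/i915-sriov-dkms.git
    cd i915-sriov-dkms
    # register the source with DKMS, then build/install using the version from dkms.conf
    dkms add .
    dkms install -m i915-sriov-dkms -v "$(sed -n 's/^PACKAGE_VERSION=//p' dkms.conf | tr -d '"')"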

    You also need to configure the number of SR-IOV vGPUs in the sysfs.conf file, for example as below.
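
    Assuming the sysfsutils package is installed and the iGPU is at 00:02.0 (as in your lspci output), that would be something like:

    apt install -y sysfsutils
    echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" >> /etc/sysfs.conf

    After a reboot the virtual functions should then show up as 00:02.1 through 00:02.7 in lspci.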

    There is a great tutorial here https://www.derekseaman.com/2023/11/proxmox-ve-8-1-windows-11-vgpu-vt-d-passthrough-with-intel-alder-lake.html

    Now, as for the VM getting the passed-through GPU, there are some considerations. First, it has to be a q35 machine and it must use the OVMF BIOS. Furthermore, when you pass the GPU it has to be set as the primary GPU and the VM's display set to none. The device you pass also has to be one of the vGPU virtual functions (0000:00:02.1-7) and not the primary 0000:00:02.0, otherwise the host (Proxmox) driver will crash and need to be restarted. In the VM config that ends up looking something like the example below.
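
    As a rough sketch (the VF address here is just an example; these options live in /etc/pve/qemu-server/<vmid>.conf, or can be set from the GUI):

    machine: q35
    bios: ovmf
    vga: none
    hostpci0: 0000:00:02.1,pcie=1,x-vga=1

    If I remember right, x-vga=1 is what the GUI's "Primary GPU" checkbox sets, and vga: none is the "Display: none" setting.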

    That being said, what isn't very clear is that the VM guest has to be aware of this configuration as well. If you are running a Windows VM and install the Arc drivers, it should be fine. If you are running a Linux VM, it will need pretty much the same setup as the host. That means installing the SR-IOV drivers and putting the following in GRUB:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.enable_guc=3"

    It would not need the IOMMU configuration or the sysfs PCI settings, as it's not going to be passing the card on to anything else.

    Finally, I have not tested using vGPUs in LXC. I assume it should work, but it's a lot more nuanced than just throwing numbers at it as suggested in this thread. Here is a better explanation of how to set up LXC passthrough and how to find the correct cgroup references and map them: https://www.youtube.com/watch?v=0ZDr5h52OOE

    I hope this helps.


    I had no idea that not only can you do passthrough, but that multiple devices can have access to it. Thanks a lot for sharing this.


    I'm not sure about Frigate, but I have a privileged Plex LXC running that shares the iGPU and handles hardware-accelerated transcoding without touching GRUB, messing with IOMMU, etc.

    /etc/pve/lxc/###.conf

    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.cgroup2.devices.allow: c 29:0 rwm
    lxc.autodev: 1
    lxc.hook.autodev: /var/lib/lxc/###/mount_hook.sh

    /var/lib/lxc/###/mount_hook.sh

    #!/bin/bash
    # create the iGPU device nodes inside the container's /dev at start-up
    mkdir -p ${LXC_ROOTFS_MOUNT}/dev/dri
    mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/card0 c 226 0
    mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/dri/renderD128 c 226 128
    mknod -m 666 ${LXC_ROOTFS_MOUNT}/dev/fb0 c 29 0
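
    The hook script presumably also needs to be made executable, e.g.:

    chmod +x /var/lib/lxc/###/mount_hook.sh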

    I believe I also needed to install the Intel iGPU drivers in both the host and the LXC.
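
    For what it's worth, a quick way to check that VA-API works inside the container (Debian/Ubuntu package names; intel-media-va-driver-non-free is the driver that covers newer iGPUs like the N100):

    apt install -y vainfo intel-media-va-driver-non-free
    vainfo

    vainfo should list the supported VAProfile entries if the container can actually use the iGPU.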


    So I could move Frigate to an LXC (instead of the Home Assistant add-on), and potentially use this for both Frigate and Plex, without having to faff with passthrough?
