Although Linux is not "officially" a gaming platform and is rarely supported by AAA game developers, there are many different ways to play games on it. The most common method is using a compatibility layer such as Wine/Proton/DXVK; however, most anti-cheat solutions don't work through such a layer.
This is where VFIO passthrough comes in. Provided you have compatible hardware, it is possible to run a VM of any OS with a dedicated GPU at near-native performance. Luckily, Lenovo's Legion 7 (2021) has the perfect hardware setup for such a project.
In this post, I will give detailed instructions on how to create a Windows gaming virtual machine on the Legion 7 (2021) which hijacks the system's dedicated GPU when it boots and passes it back to the host once it shuts down.
Note that this setup only works when the laptop is in Hybrid Graphics mode because of its architecture. You could technically adapt it to work while in Discrete Graphics mode as well, but you would have to give up on using your Linux environment while Windows is running.
Before doing anything else, you will need to change a couple of settings in the UEFI menu. While the computer is booting, either spam or hold the F2 key. Once you are in the menu, make sure that AMD virtualization is turned on and that the laptop is in Hybrid Graphics mode.
You can check if virtualization is working by running sudo dmesg | grep IOMMU and sudo dmesg | grep AMD-Vi:
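If IOMMU is working, you should see output along these lines — the addresses and capability values below are illustrative and will differ on your machine:

```sh
sudo dmesg | grep -i -e IOMMU -e AMD-Vi

# Illustrative output -- exact lines vary per machine and kernel:
#   AMD-Vi: Interrupt remapping enabled
#   pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
```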
At this point, you will probably be ready to proceed, but if something went wrong you can try adding amd_iommu=on to your kernel parameters.
After booting your laptop, you're going to have to install the required packages (note that most commands in this post are meant to be used on Arch Linux):
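The exact package list varies between guides; a typical set for this kind of setup on Arch looks like this (double-check the names against the current repos):

```sh
sudo pacman -S qemu libvirt edk2-ovmf virt-manager dnsmasq
```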
If you get a warning about incompatibilities, you should be able to just uninstall the older package. After installation is complete, you will also need to add your user to the libvirt group: sudo usermod -aG libvirt user (replacing user with your username).
Now you can start the necessary services:
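At minimum, that means the libvirt daemon (enabling it also pulls in its dependencies):

```sh
sudo systemctl enable --now libvirtd.service
```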
Finally, restart your laptop and you should be able to run virt-manager to manage your virtual machines.
Creating a Windows VM
Before doing any complicated hardware passthrough, it's a good idea to create a normal virtual machine to have as a working reference point. You could use Windows 11, but it's been known to have some performance issues that impact Ryzen CPUs, so I recommend sticking with Windows 10 for now.
Since I had a dual-boot setup before trying VFIO, I had already installed Windows on a second drive. This meant that I could simply pass through the entire drive to the VM; I just needed to install the VirtIO drivers beforehand.
After installing the VirtIO drivers, boot back into Linux and create a new VM in virt-manager:
Step 1: Select "Manual Install" and make sure the architecture is set to "x86_64".
Step 2: Set the Operating System to "Microsoft Windows 10".
Step 3: Allocate as much memory as you need but make sure to leave some for the host as well. You can see how much memory the host is currently using by running free -m. Changing the CPU options doesn't really matter since we'll manually change them later.
Step 4: Do not enable storage for this virtual machine for now. We'll do that during the customization.
Step 5: Select "Customize configuration before install" and make sure that "Virtual network 'default': NAT" is chosen.
A new window should pop up. This is where you can customize your virtual machine. Before booting it, you should change a couple of options:
Under "Overview", set the Chipset to "Q35" and the Firmware to "UEFI x86_64: /usr/share/edk2-ovmf/x64/OVMF_CODE.fd". Make sure to click "Apply" before switching pages.
Under "CPUs", deselect "Copy host CPU configuration" and pick "host-passthrough". You should also select "Manually set CPU topology" and set 1 Socket, 6 Cores, and 2 Threads (12 virtual CPUs in total), leaving 4 threads for the host. If you have a different CPU, make sure to adjust these options to fit your configuration.
Click "Add Hardware > Storage" and add the drive that contains Windows. Set "Bus type" to "VirtIO" and "Cache mode" to "none" for better performance.
Make sure that the drive contains the Windows Bootloader. If it doesn't, you can resize the Windows partition, boot from a rescue USB, and create a Bootloader using BCDBOOT.
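For reference, recreating the bootloader from a recovery prompt usually boils down to something like this — the drive letters are examples, with C: being the Windows partition and S: a letter you assigned to the EFI partition using diskpart:

```
bcdboot C:\Windows /s S: /f UEFI
```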
Set the Network Device and Video Device models to "VirtIO".
Finally, click "Begin Installation". You might have to manually add a Boot Entry to the UEFI by pressing the Escape key while the VM is booting, going to "Boot Maintenance Manager > Boot Options > Add Boot Option > Windows Disk > EFI > Microsoft > Boot > bootmgfw.efi".
If you get a "Boot Device Not Found" Blue Screen, make sure that you have the VirtIO drivers mounted, or try booting with a "SATA" drive instead of "VirtIO".
After that, if everything is working fine, you can go ahead and shut down the VM.
Next, we need to figure out the IOMMU groups of the NVIDIA GPU. IOMMU refers to the chipset device that maps virtual addresses to physical addresses of your I/O devices (i.e. GPU, disk, etc.). When passing through a device to a VM, you normally need to pass along all other devices in its IOMMU group.
To check your IOMMU groups, create an iommu.sh script with the following content:
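A commonly used version of this script (adapted from the Arch Wiki's PCI passthrough page) is:

```sh
#!/bin/bash
# List every IOMMU group and the devices it contains.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```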
When you run it, you should see output similar to this:
What we mainly care about are the GPU groups, specifically:
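On this laptop, the dGPU's video and audio functions typically show up at PCI addresses 01:00.0 and 01:00.1; the group numbers and device names below are purely illustrative:

```
IOMMU Group 12:
	01:00.0 VGA compatible controller: NVIDIA Corporation GA104M
IOMMU Group 13:
	01:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller
```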
If you only want to pass through the GPU, you can go ahead and skip the following section. However, if you want to pass through more devices that are in less than ideal IOMMU groups, ACS patching can help.
By using the ACS override patch, we can basically force the kernel to falsely expose isolation capabilities for our components and add them to separate IOMMU groups. Luckily for us, we don't need to apply the patch ourselves since there is already an AUR package with the patch pre-applied: linux-vfio.
Before building the package, make sure to edit your makepkg settings so that you use all your CPU's cores. You can do that by editing /etc/makepkg.conf and setting the MAKEFLAGS line (under "Architecture, Compile Flags") to -j$(nproc).
Afterward, install the needed packages using your favorite AUR helper. In my case, the command is:
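With paru, for example, that would look like this (your helper and the exact package names may differ):

```sh
paru -S linux-vfio linux-vfio-headers
```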
Note: If you get an error about invalid keys, try running:
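The usual cause is missing kernel signing keys; importing the keys listed in the PKGBUILD's validpgpkeys array generally fixes it (the key ID below is a placeholder):

```sh
gpg --recv-keys <KEYID>
```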
You can now go ahead and make a cup of coffee or grab a snack as the building process takes about 20 minutes.
You should also be aware that the amdgpu module is not automatically loaded in the linux-vfio kernel, which will leave you with a blank screen. This can be easily fixed by creating a /etc/modules-load.d/display.conf file:
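The file only needs to contain the module name:

```
amdgpu
```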
Finally, make sure to add pcie_acs_override=downstream,multifunction to your kernel's command line parameters. If you are using GRUB, this can be done by editing /etc/default/grub and then running sudo grub-mkconfig -o /boot/grub/grub.cfg.
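For example, the relevant line in /etc/default/grub would end up looking something like this (keep whatever parameters you already have):

```sh
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on pcie_acs_override=downstream,multifunction"
```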
After rebooting, you can check which kernel is running using the command uname -a. If everything went well, you should see that you are using the linux-vfio kernel and that most devices are in different IOMMU groups:
Creating the Hook Scripts
Before doing any passthrough, we need to create the scripts that will allocate the necessary resources to the VM before it boots and de-allocate them after it shuts down. To do that, we are going to be using libvirt hooks and The Passthrough Post's hook helper. You can find all the needed scripts in this project's git repository.
Warning: If you already have hooks set up, the next step will overwrite them.
Go ahead and run the following commands:
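Assuming the hook helper is the one from the PassthroughPOST/VFIO-Tools repository, the installation amounts to something like:

```sh
sudo mkdir -p /etc/libvirt/hooks
sudo wget 'https://raw.githubusercontent.com/PassthroughPOST/VFIO-Tools/master/libvirt_hooks/qemu' \
    -O /etc/libvirt/hooks/qemu
sudo chmod +x /etc/libvirt/hooks/qemu
sudo systemctl restart libvirtd
```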
Next, you need to set up the directory structure, like so:
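Assuming the VM is named win10 (the directory name must match your VM's name exactly):

```sh
sudo mkdir -p /etc/libvirt/hooks/qemu.d/win10/prepare/begin
sudo mkdir -p /etc/libvirt/hooks/qemu.d/win10/started/begin
sudo mkdir -p /etc/libvirt/hooks/qemu.d/win10/release/end
```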
Scripts in the prepare/begin directory will be executed before the VM starts, scripts in the started/begin directory will be executed once the VM starts, and scripts in the release/end directory will be executed once the VM shuts down.
The first file we are going to create will contain all of our environment variables, specifically the addresses of the GPU we are going to pass through. Create a kvm.conf file in /etc/libvirt/hooks:
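A minimal kvm.conf might look like this — the PCI addresses (01:00.0 for video and 01:00.1 for audio, written in virsh's pci_ notation) and the partition path are examples, so substitute your own:

```sh
## Virsh devices
VIRSH_GPU_VIDEO=pci_0000_01_00_0
VIRSH_GPU_AUDIO=pci_0000_01_00_1

## Windows partition (optional, see below)
WIN=/dev/nvme0n1p3
```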
The addresses should be the same if you are using a Legion 7, but you can double-check by re-running iommu.sh.
Set WIN to the Windows partition if you want to unmount it automatically once the VM starts.
Now we are going to create the script that prepares the host for passthrough. Create a start.sh file in /etc/libvirt/hooks/qemu.d/win10/prepare/begin with the following contents:
Make sure to change the highlighted lines to match your system:
Line 12 should stop whatever display manager you are using.
Lines 40-43 should unload all available Nvidia drivers and their dependencies. You can see them by running lsmod | grep -i nvidia.
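Since the full script lives in the repository, here is only a condensed sketch of what it does, assuming LightDM and the variables from kvm.conf (the Nvidia module list must match your lsmod output):

```sh
#!/bin/bash
set -x
source "/etc/libvirt/hooks/kvm.conf"

# Stop the display manager (use your own: sddm, gdm, ...)
systemctl stop lightdm.service

# Unbind the VT consoles and the EFI framebuffer
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

# Unload the Nvidia driver stack
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia

# Detach the GPU from the host and load vfio
virsh nodedev-detach $VIRSH_GPU_VIDEO
virsh nodedev-detach $VIRSH_GPU_AUDIO
modprobe vfio-pci
```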
Next, we are going to create the script that re-launches our display manager using only integrated graphics. Create a lightdm.sh file in /etc/libvirt/hooks/qemu.d/win10/started/begin:
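A sketch of that script, again assuming LightDM:

```sh
#!/bin/bash
# Relaunch the display manager on the integrated GPU
systemctl start lightdm.service
```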
Finally, we are going to create the script that reverts the host once the VM shuts down. This is basically going to be the inverse of the previous script with a couple of small differences. Create a revert.sh file in /etc/libvirt/hooks/qemu.d/win10/release/end:
Line 31 wakes up the GPU by querying its config; this step might be redundant.
Lines 34-37 load all drivers that were unloaded in the start script.
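A condensed sketch mirroring the start script (same assumptions as before; the PCI address is an example):

```sh
#!/bin/bash
set -x
source "/etc/libvirt/hooks/kvm.conf"

# Give the GPU back to the host
modprobe -r vfio-pci
virsh nodedev-reattach $VIRSH_GPU_VIDEO
virsh nodedev-reattach $VIRSH_GPU_AUDIO

# Wake the GPU up by querying its config space (possibly redundant)
cat /sys/bus/pci/devices/0000:01:00.0/config > /dev/null

# Reload the Nvidia driver stack
modprobe nvidia nvidia_uvm nvidia_modeset nvidia_drm
```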
You should also make sure that the scripts are owned by root and have execute permissions before moving on to the next section.
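For example:

```sh
sudo chown -R root:root /etc/libvirt/hooks
sudo find /etc/libvirt/hooks -name '*.sh' -exec chmod +x {} \;
```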
Passing Through Devices
Now it's finally time to pass through the GPU. Open virt-manager, select your VM, and click the "Add Hardware > PCI Host Device" button. You should see a large list containing the same devices as shown when running the iommu.sh script.
Select your GPU, as well as any other devices that were in the same IOMMU group, and add them to the VM.
You can also add any other devices you like such as network cards or a USB controller, but make sure to also add their IOMMU neighbors.
When using mobile graphics cards, the Nvidia driver wants to check the status of the power supply. Since we are using a VM, no battery is present, and the driver shows the infamous "Error 43".
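The usual workaround on laptops is to present a fake battery to the guest through a custom ACPI table (most VFIO guides distribute a prebuilt SSDT1.dat for this purpose). Assuming you have such a file, it can be attached via QEMU command-line arguments in the domain XML — note that the xmlns:qemu attribute on the root element is required:

```xml
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
  ...
  <qemu:commandline>
    <qemu:arg value="-acpitable"/>
    <qemu:arg value="file=/path/to/SSDT1.dat"/>
  </qemu:commandline>
</domain>
```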
You could technically now boot the VM, install Looking Glass, and be good to go. However, there are a couple of things you can do to greatly improve performance.
CPU Pinning is the assignment of a process or task to a specific CPU core. This has the advantage of significantly increasing cache utilization, and therefore performance.
To see which cores you need to pin, you can use the lstopo utility:
Next up, edit your VM XML, and add the following parameters (make sure to pin neighboring cores):
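As a sketch for an 8-core/16-thread Ryzen like the one in this laptop — where thread siblings are adjacent pairs such as 4-5 and 6-7, which you should verify with lstopo — pinning the 12 vCPUs to host threads 4-15 keeps threads 0-3 free for the host:

```xml
<cputune>
  <vcpupin vcpu="0" cpuset="4"/>
  <vcpupin vcpu="1" cpuset="5"/>
  <vcpupin vcpu="2" cpuset="6"/>
  <vcpupin vcpu="3" cpuset="7"/>
  <vcpupin vcpu="4" cpuset="8"/>
  <vcpupin vcpu="5" cpuset="9"/>
  <vcpupin vcpu="6" cpuset="10"/>
  <vcpupin vcpu="7" cpuset="11"/>
  <vcpupin vcpu="8" cpuset="12"/>
  <vcpupin vcpu="9" cpuset="13"/>
  <vcpupin vcpu="10" cpuset="14"/>
  <vcpupin vcpu="11" cpuset="15"/>
</cputune>
```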
If you are using an AMD CPU, you will also want to enable SMT:
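That is done by requiring the topoext CPU feature; a sketch of the resulting <cpu> element:

```xml
<cpu mode="host-passthrough" check="none">
  <topology sockets="1" dies="1" cores="6" threads="2"/>
  <feature policy="require" name="topoext"/>
</cpu>
```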
After pinning your CPU cores, you will also want to enable huge pages to reduce memory latency. The hook scripts should handle memory allocation automatically, but you still need to edit your XML:
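The XML addition itself is small:

```xml
<memoryBacking>
  <hugepages/>
</memoryBacking>
```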
You should also enable some Hyper-V enlightenments — paravirtualized interfaces that help the Windows guest cooperate with the hypervisor instead of fighting it. You can find more information on what each option does in the libvirt documentation.
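A commonly recommended baseline looks like this (it goes inside the <features> element; treat it as a starting point rather than a definitive list):

```xml
<hyperv>
  <relaxed state="on"/>
  <vapic state="on"/>
  <spinlocks state="on" retries="8191"/>
  <vpindex state="on"/>
  <synic state="on"/>
  <stimer state="on"/>
</hyperv>
```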
The VirtIO memballoon device allows the host to reclaim memory from a running VM. However, this functionality comes at a performance cost, so you can disable it by editing the <memballoon> tag in your XML like so:
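After the change, the tag should look like this:

```xml
<memballoon model="none"/>
```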
Setting up Looking Glass
If you were to boot the VM now, you would notice that even though the GPU is passed through correctly, it is unusable. This happens because the Nvidia drivers expect a display to be connected, but we don't actually have one.
This is where a Dummy HDMI plug can come in handy. This little device can be used to trick the GPU into thinking a display is connected. The resolution of the plug doesn't really matter since we can set a custom one from the Nvidia control panel anyway.
We can then use Looking Glass to hijack the display signal and pass it back to the host with minimal latency. Download the host application and install it on your Windows VM. Once you're done, shut the VM down and follow the instructions below to finish the configuration.
First, you need to create the shared memory config. Create a new /etc/tmpfiles.d/10-looking-glass.conf file with the following contents:
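Per the Looking Glass documentation, the file needs a single line (replace user with your username):

```
f /dev/shm/looking-glass 0660 user kvm -
```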
You should also make sure that your user is in the libvirt group. After that, edit your XML and add the following in the <devices> section:
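The ivshmem device for Looking Glass looks like the following sketch. The size depends on your resolution — 32 MB is enough for 1080p, while higher resolutions such as this laptop's 1600p panel need 64 MB; see the Looking Glass docs for the exact formula:

```xml
<shmem name="looking-glass">
  <model type="ivshmem-plain"/>
  <size unit="M">64</size>
</shmem>
```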
Once that's done, you should be able to remove any other graphics devices from your VM, plug in the Dummy HDMI, and open the Looking Glass client on your host. If everything was set up correctly, you should see the Windows lock screen.
If you also want to pass through mouse and keyboard input using Looking Glass, you can simply add a new "Display Spice" device and set its model type to "none". You should also remove any "Tablet" devices you might have.
Finally, to enable clipboard sharing, edit your spicevmc channel like so:
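Clipboard data travels over the spice agent channel; a typical channel definition looks like this (a sketch against a stock virt-manager configuration):

```xml
<channel type="spicevmc">
  <target type="virtio" name="com.redhat.spice.0"/>
</channel>
```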
Now that video passthrough is configured, there's only one step left: audio passthrough. First, edit your QEMU configuration and add your user id. You can find it using the id command.
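In /etc/libvirt/qemu.conf, that means uncommenting and setting the user option (1000 is an example uid):

```
user = "1000"
```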
Next, edit your XML once again, and add the following in the devices section:
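One way to route guest audio to the host's PulseAudio (or PipeWire's PulseAudio server) with a recent libvirt looks like this — the uid in the socket path is an example and must match the one from the previous step:

```xml
<sound model="ich9">
  <audio id="1"/>
</sound>
<audio id="1" type="pulseaudio" serverName="/run/user/1000/pulse/native"/>
```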
If everything went well, you should now be able to hear your Windows VM's audio outputs on your host.
It is not often that you find a laptop with hardware capable of VFIO passthrough, so seeing this one work so flawlessly was a treat:
After some not-so-thorough benchmarking, I can say that I get around 75% of the performance compared to bare-metal Windows. This is not that big of a problem for normal usage, however, if I ever need that extra boost, I can simply boot Windows from GRUB and be good to go.
Was there a point in wasting all this time on such a fiddly setup? Maybe. Was it fun? Definitely.
A very special thanks to the authors and contributors of the following guides and blog posts: