FOR PEOPLE HAVING THIS ERROR: bdsDxe: failed to load Boot0002 "UEFI QEMU QEMU HARDDISK" - uncheck the "Pre-Enroll keys" option and it will boot via UEFI! Please vote this up; I googled for 5 hours to find the source of the problem. System: Asus Z590-P, 11900K, 64GB Kingston 2666.
@oddholstensson212 a year ago
Excellent guide. Do not forget to deselect Device Manager -> Secure Boot Configuration -> Attempt Secure Boot in the VM UEFI BIOS when installing TrueNAS. Access it by pressing the Esc key during the boot sequence. Otherwise you will get "access denied" on the virtual installation disk.
@wirikidor 7 months ago
5 months later, this comment just saved me some headache.
@maconly34 5 months ago
@@wirikidor THANK YOU!!!
@alexmoore4926 4 months ago
I literally just disabled secure boot and it worked (as now it's just UEFI and no disk space is needed) hopefully that doesn't screw me down the road
@elikirkwood4580 3 months ago
an hour of headache could've been solved by scrolling down. fml
@tormaid42 a year ago
Wish after so many years there was a simple GUI option for this. Appreciate the guide!
@jttech44 a year ago
"Don't virtualize truenas" *Chuckles in 4 virtualized truenas servers in production*
@CraftComputing a year ago
STOP SAYING TH.... Wait.... nevermind :-D
@sarahjrandomnumbers 6 months ago
Just like Stockton Rush always said. REAL Men ALWAYS test in production.
@jttech44 6 months ago
@@sarahjrandomnumbers Lmao rip
@shinythings7 4 months ago
I had been on the fence about whether to run TrueNAS on bare metal or virtualize it, and this sentence, plus Jeff's quick explanation of why, made me feel a lot better about doing it.
@jttech44 4 months ago
@@shinythings7 you really don't lose much juice virtualizing anything nowadays.
@marc3793 a year ago
Proxmox really should just make these options available in the UI.
@cjmoss51 11 months ago
Truly. I just don't think these things occur to them when they're processing feature adds and the like. They can be slow to adopt, like Debian, which is what it's based on.
@TwiggehTV 10 months ago
Right? They have MOST of the UI; they just need the initialization bit to be UI-driven as well. A full-featured product like Proxmox should have all of its functions available through its UI. "Popping under the hood" with a terminal is an ugly solution, no matter how powerful it might be.
@Solkre82 9 months ago
It's stupid easy in ESXi, too bad Broadcom killed it.
@manekdubash5022 8 months ago
@@Solkre82 That's where I'm coming from too. Moving from ESXi to Proxmox - if my passthrough setup can be replicated in PVE...
@Solkre82 8 months ago
@@manekdubash5022 I'm sure it can, just not as simple. I archived my ESXi 8 ISOs and Keys so I'm not worried about moving for a few years. Who knows, Broadcom might decide to do good.. HAHAHAHA my sides hurt!
@mistakek a year ago
I've been waiting for this. I already have 2 Erying systems as my Proxmox cluster, after your first video on this, and they've been working perfectly for me, but when you originally said you couldn't get HBA passthrough to work properly, I held off buying a 3rd, as I wanted the 3rd for exactly what you've done in this video, and to have a 3rd node for ceph. Now that I can see you figured it out using a sata card, I'm off to order all the bits for the 3rd node. Thank You, and after I order everything, I'll pop into your store to buy some glassware to show some appreciation.
@iamthesentinel584 5 months ago
I just have to say, I spent hours trying to get my GPU to passthrough correctly, and your one comment on Memory Ballooning just fixed it! Thank you so much! I didn't even see anything about that mentioned in any of the official documentation!
@harry4516 5 months ago
Thank you for sharing your experience! It was incredibly helpful in getting GPU passthrough to work. However, I needed to make a few adjustments: in Proxmox 8, /etc/kernel/cmdline does not exist. Instead, I entered the settings in /etc/default/grub as follows:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nouveau.modeset=0 intel_iommu=on iommu=pt video=efifb:off pci=realloc vfio-pci.ids=10de:1d01"
It's important to note the parameters video=efifb:off and pci=realloc, which were not mentioned elsewhere. These are crucial because many motherboards use shadow RAM for PCIe slot 1, which can hinder GPU passthrough if not configured properly. With this setup, I believe all your GPUs should function correctly. Additionally, I had to blacklist the NVIDIA drivers.
@w33dp0w3r 5 months ago
Hey, nice addition indeed! What about the audio card? That's my pain... can you give me some hints about that? Thanks in advance.
@mattp3437 3 months ago
"It's important to note the parameters video=efifb:off and pci=realloc, which were not mentioned elsewhere." So where do these parameters get added/edited?
@61212323 a month ago
@@w33dp0w3r If you have GPU passthrough you can use the monitor (HDMI/DP) for audio, or pass through a USB card (like I did). Some monitors have an audio-out port on them, but it only works with HDMI or DP.
@airwolf_hd 17 days ago
For anyone who was confused like me there are 2 bootloaders, GRUB and Systemd-boot. /etc/kernel/cmdline only exists with Systemd-boot and this bootloader is used when Proxmox is installed on ZFS. Therefore, anyone with UEFI and not booting from ZFS should follow the GRUB instructions.
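A quick way to check which case applies to you (assuming a stock Proxmox install, where proxmox-boot-tool ships with PVE):
```bash
# Shows whether the ESPs are configured for grub or systemd-boot
proxmox-boot-tool status

# Rough rule of thumb: systemd-boot installs have /etc/kernel/cmdline,
# GRUB installs use /etc/default/grub instead
[ -f /etc/kernel/cmdline ] && echo "likely systemd-boot" || echo "likely GRUB"
```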
@scuzzy2142 10 months ago
These tutorials are so much more useful than Network Chuck's, and you don't seem like a shill trying to sell me something constantly.
@sirdewd2197 10 months ago
Network Chuck is only good for ideas not how-to guides. He’s more of a cyber influencer to me.
@JamesMowery 9 months ago
This is actually such a good point. I barely/rarely watch Network Chuck anymore. He just feels fake to me now. Almost unwatchable. I haven't seen one of his videos in months.
@johndroyson7921 9 months ago
Seems like a good starting point for newbies or kids. I won't knock him for making the stuff sound exciting, but I definitely grew out of his style.
@citypavement 5 months ago
I can't fucking stand that guy. "Look at my beard! Look, I'm drinking coffee! Buy my sponsored bullshit!"
@Oschar157 3 months ago
@@johndroyson7921 He's what got me into networking/homelabbing. He made it fun and entertaining, but now that I'm getting more knowledgeable about this stuff, I watch him less and less
@thatonetimeatbandcamp a year ago
As always, you're Jeff.. Is there a situation where you aren't Jeff? Like maybe Mike? Or Chris?
@CraftComputing a year ago
I kind of like being Jeff.
@SP-ny1fk a year ago
@@CraftComputing Yeah it would be weird if you woke up as Patrick from STH.
@CraftComputing a year ago
That would be weird. I'd be a whole foot shorter.
@jonathanzj620 a year ago
@@CraftComputing Depends if you're cosplaying as an admin that day or not
@JeffGeerling a year ago
@@CraftComputing me too
@DJCarlido a year ago
Another little addition to this: it seems that you still need to add GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on" to the /etc/default/grub boot config file if using the legacy GRUB boot menu. The legacy GRUB boot menu is still the default if installing ext4 onto a single drive.
@DrNoCDN a year ago
Jeff - Just wanted to give an extreme thank you for the quality and content of your videos. I just finished up my TrueNAS Scale build using your guidance and it worked like a charm. I did use an Audheid as well, but the K7 8-bay model. I went with an LSI 9240-8i HBA (flashed P20 9211-8i IT Mode) and the instructions on Proxmox 8 you provided were flawless and easily had my array of 4TB Toshiba N300's available via the HBA in my TrueNAS Scale VM. Lastly, a shout out to your top-notch beer-swillery as I am an avid IPA consumer as well! (cheers)
@Tterragyello 2 months ago
6:45 -- For systems on PVE 8.2, you'll want to modify the GRUB boot settings at /etc/default/grub: append the same IOMMU text to the string value assigned to GRUB_CMDLINE_LINUX_DEFAULT, then execute update-grub.
@TheFrantic5 a year ago
Can we just take a step back and marvel at how not only is this all possible, but it also won't cost a dime in software?
@TheDimanoid999 22 days ago
It's possible, but at a cost. You'll sacrifice quite a lot in performance: the GPU will be working at maybe 50%, and NVMe drives connected through M.2 slots at 1/4 of full speed.
@RealVercas a year ago
SR-IOV and IOMMU are completely orthogonal features and enabling one will not magically make the other work. SR-IOV simply lets the kernel use a standard way of telling PCI-E devices to split themselves into virtual functions. SR-IOV does not require an IOMMU, and IOMMU does not require SR-IOV.
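As a concrete illustration of the SR-IOV half of that, here is a sketch using the standard sysfs interface (the NIC name enp1s0f0 and the VF count are assumptions, not from the video):
```bash
# How many virtual functions does the device support?
cat /sys/class/net/enp1s0f0/device/sriov_totalvfs

# Create 4 VFs; each appears as its own PCI function, usable for passthrough
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs
lspci -nn | grep -i "virtual function"
```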
@livtown a year ago
Hey Jeff, quick tip: you can use the YouTube chapters feature in the timeline to add timings so people can easily skip to where they need help.
@TomiWebPro 10 months ago
The SponsorBlock extension lets you skip ads and see where you should start; try it
@LAWRENCESYSTEMS a year ago
This was helpful, as I don't run Proxmox, and many people have commented on my XCP-NG videos saying how much easier Proxmox handles this vs XCP-NG, but in reality they are very similar. Both need you to find the devices and make changes via the command line just to get it working.
@ryanmoore1016 7 months ago
Thank you! Every time I'm stuck on a project in my home lab, you tend to have just the video I need, and you explain it very well!
@fanaticdavid a year ago
This tutorial series is top notch. Thank you so much, Jeff!
@brentirwin10 11 months ago
Thank you for this. I couldn't get hardware transcoding working properly. I turned off ballooning on the VM and BAM! It works. HUZZAH!
@henderstech 7 months ago
I had to reinstall proxmox for the first time in over a year. This guide was very much needed today. Thanks
@lilsammywasapunkrock a year ago
Been waiting for this. All the PCIe passthrough write-ups are old and outdated, and the only one that worked for me on Prox 7.4 was yours.
@CraftComputing a year ago
Tutorials: update-grub Proxmox 8.0: "What's a grub?"
@lilsammywasapunkrock a year ago
@@CraftComputing Exactly! Quickly, for clarification's sake: q35 means UEFI, and i440fx or whatever is BIOS boot? Half the tutorials say to do one or the other, and this is the first time I have heard it mentioned otherwise, unless I just forgot 😅.
@danilfun a year ago
@@lilsammywasapunkrock Both machine types support bios and uefi. The primary difference between q35 and i440fx is that q35 uses PCI-e while i440fx uses the old PCI. If I remember correctly, I was able to use PCI-e passthrough with i440fx but only for one device at a time. I personally don't see any point in using i440fx in modern systems with modern host operating systems.
@CraftComputing a year ago
^^^ Bingo
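For reference, the machine type the thread is debating is just a per-VM setting; a hedged example with a hypothetical VM ID 100:
```bash
# Switch an existing VM to the PCIe-native q35 machine type
qm set 100 --machine q35
```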
@johnwhitney1344 a year ago
I really like this series on Proxmox
@shawnhaywood4199 4 months ago
Wahoo!! Your directions worked! Thanks. I'm installing the Ollama LLM on a VM and wanted to pass through the GPU, which worked thanks to you! I'm using an Intel-based i7 Dell 3891, a GTX 1650, and current Proxmox.
@snakeychantey8521 a year ago
Been searching for this for the past week or so. Love your work Jeff. Cheers
@18leines a year ago
Me too, since the upgrade failed on my HP Z440 with a Xeon 2690 and a Tesla M40 24G. Cheers
@AlexJoneses 3 months ago
Sierra Nevada is one of the best beers out there; Hazy Little Thing is amazing
@SytheZN a year ago
For your next tutorial I'd love to see you get some VMs running with their storage hosted on the truenas VM!
@mikequinn8780 a year ago
Are you planning a video on USB and/or PCI passthrough to LXC containers? Something about cgroups and permissions meant I never could get it to work.
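For comparison, sharing a GPU into an LXC container is usually just a cgroup rule plus a bind mount in the container config; a minimal sketch assuming a GPU exposed under /dev/dri (the container ID 101 is hypothetical):
```
# /etc/pve/lxc/101.conf -- 226 is the DRM device major number
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```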
@thecameratherapychannel 10 months ago
Thank you sir! Just by adding a new physical NIC for TrueNAS, my write speed increased 3x on my ZFS pool! I had saturated the single onboard NIC with a lot of LXCs and VMs
@jafizzle95 29 days ago
I've moved all of my hypervisor duties from Unraid to Proxmox, but I gotta give kudos to Unraid for how easy they make hardware passthrough. A single checkbox to prepare the device for passthrough, reboot, then pass that bish through. Echoing the wishes from other commenters that Proxmox adds the passthrough prep steps to the GUI. There are a thousand different guides for passthrough on Proxmox and a thousand different ways to do it; it's hard to know which is correct or best.
@Glitch-Vids a year ago
Hey Jeff, I had issues passing through a GPU with the exact same hardware until I pulled the EFI ROM off the GPU and loaded it within the VM config PCI line. Adding the flag bootrom="" to the line in the VM config, pointed at the ROM, should do it. I think this is because the GPU gets ignored during the motherboard EFI bootup, so the vROM gets set to legacy mode. When trying to pass it into an EFI VM it won't boot, since the vROM doesn't boot as EFI
@boredprince 10 months ago
Could you explain a little more how you got that working? I still can't get GPU passthrough working on my 11900H ES Erying mobo. Also, did you mean "romfile="?
@rakhanreturns 10 months ago
After looking at his documentation, I think you're onto something here.
@mattp3437 3 months ago
@@boredprince bootrom="" seemed to be the wrong parameter and removed the GPU from the hardware. romfile seemed to be accepted but the VM failed to startup. So not sure this is the fix (for me).
@jtracy54 7 days ago
I had to do this too for my system. I think I used a WinPE image + GPU-Z to pull the ROM off the card, and then in the config for my VM I used the following: hostpci0: 09:00,pcie=1,x-vga=1,romfile=GP104_Fixed.rom
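If you'd rather dump the ROM from Linux than boot WinPE with GPU-Z, a sketch of the sysfs route (the PCI address 0000:09:00.0 and filename match this commenter's example; yours will differ, and the card must be idle for the read to succeed):
```bash
cd /sys/bus/pci/devices/0000:09:00.0
echo 1 > rom                               # enable ROM reads
cat rom > /usr/share/kvm/GP104_Fixed.rom   # dump the vBIOS to a file
echo 0 > rom                               # disable again
```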
@renhoeknl a year ago
I'd like to see more: * Sharing an nvidia card between multiple VMs using MIG * On a system running ProxMox, using a VM as a gaming desktop on the machine itself
@cldpt a year ago
A particular reason not to pass through disks before installing is to make it easier not to mess up the installation drive, so it's good advice indeed
@iriolavagno4060 a year ago
Thanks Jeff, you saved me a LOT of frustrating research :-) I just managed to pass through a couple of network interfaces to a microVM within my NixOS server, and it only took me a couple of hours; I expected to spend all night on it :-D
@timdenis6788 a year ago
You definitely CAN pass through your primary GPU to a VM... I've been running a setup like this for a few years now. The 'disadvantage' is that a monitor for Proxmox is no longer available, and until the VM boots, the screen says 'loading initramfs'.
@m.l.9385 a year ago
Yes, definitely - and the Proxmox UI is accessed from another device anyway, as it usually isn't a thing to run the UI on the Proxmox server's own GPU. It can be handy though to have another means of connecting a GPU to the system if the SSH interface is messed up - I use a Thunderbolt eGPU in such circumstances...
@chromerims a year ago
Thank you for the write-up, especially addressing upfront EFI vs legacy boot config for IOMMU (intel_iommu=on). Great video 👍 Kindest regards, neighbours and friends.
@SomeDudeUK a year ago
Just getting into my own homelab after watching for a while. Got an old ThinkCentre that I'm going to tinker with before fully migrating a Windows 11 PC with Plex etc. This video series is great
@subrezon a year ago
Great video! Waiting for the one about SR-IOV; I tried using virtual functions on my Intel I350-T4 NIC and got nowhere with it
@MatthewHill a year ago
FYI, the instructions don't work if you're using GRUB. These instructions appear to be specific to systemd-boot. You'll need to look in /etc/default/grub rather than /etc/kernel/cmdline to make the kernel command line changes.
@OwO-ek6yd a year ago
You're a damn wizard! :v Thanks, Mr. Magical Pants!
@Man0fSteell 10 months ago
It took a good amount of hours to figure things out - but in the end it was worth it! I'm using GPU passthrough to run some language models locally
@MaxVoltageMiningCrypto a year ago
Darn it. I should have done this video. I got it working about a month ago. Great information!! So many people discouraged me from doing it as they said it wouldn't work. It works great for me.
@TheSolidSnakeOil 6 months ago
This has been a lifesaver. I was finally able to pass through my 6700 XT for Jellyfin hardware encoding.
@zr0dfx a year ago
Did you ever get PCIe passthrough working for the x16 slot? Looking forward to part 4 😊
@brycedavey1252 a year ago
Great video, I enjoy your server content a lot when it's this kind of setup.
@LetsChess1 6 months ago
So I know this is 7 months old. However, I spent the last month trying to figure out why I couldn't pass through my GPU to my VMs, and I finally figured it out; this might be why you weren't able to pass through yours. I have no idea what this does, but a random Reddit post gave me the answer. I had to run this command in my Proxmox shell: qm set <vmid> -args '-global q35-pcihost.pci-hole64-size=512G' No idea what it does, but it fixed everything.
@ProjectInitiative a year ago
Great video! I wrote a hookscript a while ago to aid in PCIe passthrough. I found it useful specifically on a Ryzen system with no iGPU. It dynamically loads and unloads the kernel VFIO drivers, so when, say, a Windows gaming VM is not in use, the Proxmox console re-attaches when the VM stops. Could be useful for other devices too! If anyone is interested let me know, and I'll try to point you to the GitHub gist. I don't think YouTube likes my comment with an actual link. :)
@jowdyboy a year ago
What's the name of the repo? We'll just search for it.
@ccoder4953 a year ago
@@jowdyboy Yes, seconded - sounds useful. Any idea if it works with NVIDIA?
@ProjectInitiative a year ago
I use it with Nvidia, I've tried to post several comments, but I'm assuming they keep getting flagged.
@98f5 3 months ago
What's the repo name?
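Until the gist turns up, here is a rough skeleton of what such a hookscript can look like (entirely a sketch, not the commenter's code; Proxmox invokes hookscripts with the VM ID and a phase argument):
```bash
#!/bin/bash
# /var/lib/vz/snippets/gpu-hook.sh
# Attach with: qm set <vmid> --hookscript local:snippets/gpu-hook.sh
vmid="$1" phase="$2"
case "$phase" in
  pre-start)
    echo 0 > /sys/class/vtconsole/vtcon0/bind   # release the host console
    modprobe vfio-pci                           # hand the GPU to vfio
    ;;
  post-stop)
    modprobe -r vfio-pci                        # unload vfio once the VM is down
    echo 1 > /sys/class/vtconsole/vtcon0/bind   # re-attach the host console
    ;;
esac
exit 0
```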
@DavidAshwell a year ago
On the "Proxmox isn't the best tool for ZFS file server duties" argument: that's mostly right. However, your friends at 45Drives have the Houston UI (running in Cockpit), which does a solid job at all the missing responsibilities you listed that TrueNAS typically handles. I personally still prefer TrueNAS myself, but you can run the Houston web GUI and the standard Proxmox web GUI on the same box.
@igordasunddas3377 10 months ago
I prefer separation of concerns and staying as close to default settings and usage as possible, in order to be able to update much more easily. So if I needed or wanted to use ZFS (which I currently don't), I'd have gone for TrueNAS, possibly in a VM. I don't feel as comfortable with Proxmox (I am currently managing VMs and containers by hand or through Cockpit on my Ubuntu setup); while it works, it's not that robust depending on what you do, and it also requires a ton of manual work.
@SaifBinAdhed a year ago
I was able to pass through an RTX A2000 with my Erying i9 12900H motherboard. I populated 2 of the 3 NVMe ports though.
@darkenaxe 9 months ago
Impressively to-the-point, yet detail-packed, tutorial!
@Riyazatron 8 months ago
I love your videos. They educate me a lot. What I've also learnt is that for Plex and Jellyfin you don't need to run a VM just for that; it's simpler to run them in an LXC container, and it's more efficient for my use case. Correct me if my understanding of Proxmox is wrong (after all, I'm a noob here), but LXC containers have full access to the hardware that Proxmox has. So, for example, where you have to blacklist hardware in Proxmox for VM passthrough, you don't for containers? The only thing I'm struggling with is WiFi card passthrough on my silly setup. I don't think it'll work in an LXC container, but I'm struggling in a VM too. I had planned to use my Proxmox setup as follows: the hardware connects to my internet; OPNsense runs as my router/firewall etc., with the second NIC going to the switch (it is also bridged in Proxmox); then a second LXC or VM runs OpenWrt to provide WiFi, using a card that is confirmed compatible with OpenWrt in AP mode. I'm struggling with that part. The box also runs Jellyfin and Plex in 2 different containers. I mainly use Plex but have been playing around with Jellyfin recently. I also have another container for Pi-hole. I'm looking at AdGuard too, but I think they're both DNS sinkholes. All these units use about 6 to 12W depending on demand, with a peak of 28W when I was doing silly stuff. The TrueNAS Proxmox server is a separate machine, and I have a Proxmox Backup Server running too. This is all because of your simple tutorials. Really appreciate the work you put in
@Grid21 a year ago
I'm beginning to feel like Proxmox just outperforms everything, even VMware ESXi, which I have used. I think at some point I'm gonna build a "virtualization" server and move my TrueNAS from bare metal to virtual metal. But since I need a software server more urgently, Proxmox is gonna have to take a back burner; I'll still watch for the education though.
@bradmorri a year ago
1. Make sure that the VM UEFI is set to EFI mode and not CSM mode. The EFI should be loading the drivers for the card at boot time; that could stop the GPU from passing through. 2. If you have two identical GPUs, consider cross-flashing the vBIOS with one from a competing AIB with the same specs. The new vBIOS will change the PCI ID of the card without changing the functionality, letting you split up the two cards under IOMMU
@FlaxTheSeedOne a year ago
The latter isn't needed: as they are in different slots, they have different bus IDs and thus should never collide under IOMMU. You are still able to assign them to different VMs
@Momi_V a year ago
@@FlaxTheSeedOne But not to use one on the host and one for passthrough
@omegatotal a year ago
@@Momi_V You don't need to use one for the host; turn on the serial console if your CPU doesn't have an integrated GPU.
@Momi_V a year ago
@@omegatotal I know. But the original comment provides a way to solve the "two identical GPUs" issue (which is inherent to this method of passthrough, not just on Proxmox, where serial is an option) that also applies to other VFIO passthrough scenarios (like a desktop/workstation virtualization setup). And it's not solved by different slots (which the comment I replied to implied), though I must admit "PCIe ID" is not the right term; vendor/device ID is a more accurate name
@bradmorri a year ago
@@FlaxTheSeedOne I would have thought that too, but it contradicts what was said in the video, and it seems to be a quirk of the Chinese motherboard with the mobile CPU
@blastedflavor3604 a year ago
Man, I've run TrueNAS in a VM for years now. I never ran into issues.
@greenprotag a year ago
Thank you for this update. This is one of the more challenging tasks for me in Proxmox, and I was only successful through sheer dumb luck the last time I did this. The good news? It's still deployed, and the only things I have changed are the GPUs and the storage controller.
@ChrisJackson-js8rd a year ago
I always kinda liked IOMMU as a name. It's a mouthful, but at least it's not easily confused with the many other acronyms. I remember on some of the low-end Aorus gaming boards it used to be under Overclocking Settings > CPU > Miscellaneous CPU Settings
@kahnzo a year ago
I always think that IOMMU is just the thing that Doctor Strange battles in the movie.
@RichardLangis a year ago
It's awesome having a homelab, but not as awesome when you've put the server in a mildly inaccessible spot, headless. Especially when you follow a PCI passthrough tutorial and the system reboots and doesn't come back.
@smalltimer4370 9 months ago
There is no 'cmdline' in /etc/kernel :(
@ouya_expert 8 months ago
I created the /etc/kernel/cmdline file as well as edited GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub. Not sure which one ended up making iommu work though
@FelipeBudinich 4 months ago
FYI: while it's fine to run a TrueNAS VM with PCIe passthrough to a SATA controller, the problem you can stumble upon is IOMMU groups. If you can't separate the SATA controller from an IOMMU group that contains other important components (say, the APU), it may cause the Proxmox host to crash. I just tested this on an X300-STX motherboard with a Ryzen 4750G, and the SATA controller basically shares its IOMMU group with almost everything; no amount of GRUB parameters and blacklists allowed me to get it going. I was just expecting too much of a DeskMini X300 😆 You COULD just enable Samba on Proxmox itself, but that would be a very bad security risk (as VMs would get access to the host filesystem).
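To inspect the grouping yourself before committing to a board, the usual shell loop (plain sysfs, nothing beyond lspci assumed):
```bash
# List every PCI device by IOMMU group; a passthrough candidate should
# ideally sit in a group by itself (bridges aside)
for g in /sys/kernel/iommu_groups/*; do
  for d in "$g"/devices/*; do
    printf 'group %s: ' "${g##*/}"
    lspci -nns "${d##*/}"
  done
done
```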
@mattp3437 3 months ago
Ok Jeff, I have the Erying 11th-gen 0000 1.8GHz i7 ES motherboard and I gave it the old college try. I followed your tutorial (for GRUB), played with settings, and also followed a few other tutorials out there (they all seem to be slightly different). No luck. I was able to pass through the iGPU, but not my Nvidia GTX 1660S card. I even tried blacklisting and passing through all of the items in the same PCI group (VGA, audio, USB, etc.). At that point it borked my install, and I threw in the towel. Too bad; it would be really nice to have Proxmox on this MB, but I need to pass the GPU through to Plex. Unfortunately, everywhere I found someone saying they had successfully passed through a GPU on an Erying motherboard, there were little to no details on how it was done (BIOS settings, Proxmox settings, etc.). So I went back to my Windows 10 install with VMware Workstation to run VMs as needed.
@kodream316 a year ago
I would be interested in an LXC tutorial with GPU passthrough / sharing... especially with something like an Intel NUC with only one integrated GPU, or maybe just sharing/passthrough of an integrated GPU in general
@derekzhu7349 8 months ago
It's not passthrough for LXC; it'd just be using the host GPU directly in a virtual environment. It's the same kernel
@stevanazlen 11 months ago
At first, adding a GPU to one of my VMs also did not work, as you pointed out. I made it work by deleting that VM (Debian 12) and creating it again from scratch, BUT adding the PCI device and selecting my GPU before the first boot. I went through the installation process, and once done, lspci showed my GTX 1060 6G in the list. Hope this helps anyone else looking for this.
@deathpie5000 a year ago
Hey Jeff, I'm from Central Oregon and have been watching your channel for quite a while now. Thank you so much for the videos; please, please do more Proxmox videos, show any and everything. Great content :) I'm trying to learn all the ins and outs of Proxmox.
@dunknow9486 a year ago
Excellent tutorial on PCI passthrough. Could you cover how to pass through the motherboard SATA controller and an NVMe drive?
@Jan12700 6 months ago
6:55 Did the path change? I only have install.d, postinst.d and postrm.d in the /etc/kernel directory.
@kienanvella a year ago
EFI-booted host, cards don't have EFI firmware on them, so the vBIOS doesn't get mirrored into memory. Get a dump of the vBIOS and add it as a vBIOS file in the PCI device section of your VM config.
@CraftComputing a year ago
DOH! You're probably right.
@dozerd42 a year ago
I would love an explanation of this comment or further resources. I don't understand efi, vbios, why and how that gets mirrored, or really anything that was said.
@kienanvella a year ago
@@dozerd42 When a physical system boots, it copies the contents of your video card's BIOS (vBIOS) into main system memory, into the memory region reserved for communicating with the card. Some cards have a UEFI firmware in addition to, or instead of, a traditional vBIOS. Without it, though, the card won't initialize the display output during boot. In this case, the cards didn't initialize during boot at all, so providing the video BIOS to the VM gives it an opportunity to initialize the card on its own. While you can technically boot cards without supplying it, the in-memory copy can become overwritten in some cases - like if that memory region is needed for texture storage at some point. When that happens it's necessary to reload the vBIOS from the card, but if you don't supply the vBIOS separately, sometimes this reload fails, which will hard-lock your host.
@davidschutte2368 a year ago
I was surprised that GPU passthrough to Debian- or Windows-based VMs worked out of the box on my machine. I never configured anything inside Proxmox; I just made sure that the UEFI BIOS was set up correctly. But that was it. It has been running great for months. (I'm using an AMD 5900X on an MSI X570 Gaming Plus with a 1080 Ti)
@retrogear a year ago
Thanks Jeff. As always an excellent and succinct guide. Cheers for making the effort! 😊
@ozmosyd a year ago
Exactly what I had been looking for. Thanks for sharing.
@xXDeltaXxwhotookit a year ago
After the first video, I took the plunge on new server hardware... one of the ASMedia controllers passes through fine, but the other doesn't - so I'm waiting on an LSI HBA to arrive. Thanks Jeff... and thanks Jeff. It'd be interesting to see a Pt 3 / hear your thoughts on further VMs (Plex, for example), installed on Proxmox or through TrueNAS
@burnbrighter 7 months ago
Three questions: 1. Were any Torpedos harmed in the making of this video? 2. Did you film this fast enough to avoid warm beer? 3. Your beer glass seems to be leaking during the making of this video - the beer is magically disappearing sequentially throughout - what is happening?
@ElementX32 a year ago
I'm trying to learn how to install and implement VMs and ultimately build a homelab, just for self-satisfaction and knowledge.
@Skyverb 7 months ago
This worked like a charm for me! I turned a spare gaming laptop into a remote-access gaming server. The graphics card worked for me, and I removed the errors on my Nvidia card by not adding the sub-features of the card, like USB-C and the audio device, as advised in this tutorial. It gives an error saying I added the card twice if I do.
@AV-th6kn a year ago
Quality stuff again. I was excited when I saw the thumbnail, hoping I would finally see how to properly pass through an NVMe SSD to a TrueNAS VM. Unfortunately that didn't happen this time. I hope you will cover it at some point, and if you could explain how to get the TrueNAS VM to put the HDDs to sleep, that would be the cherry on top. Cheers Jeff.
@Lucas-av7 a year ago
Unfortunately, I'm receiving the error "No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync."
@bigun89 a year ago
This exact functionality I got working with an Nvidia P400 in Proxmox v7. I hadn't upgraded to 8 for fear of going through this again. Now I may have to take the dive.
@KomradeMikhail a year ago
Flash the vBIOS to force the GPU into UEFI mode and disable Legacy mode at boot? Do you need to alter any of those CLI strings depending on chipset-connected PCIe lanes vs. direct CPU lanes?
@w3isserwolf 9 months ago
Does someone know why I don't have a cmdline file (/etc/kernel/cmdline)? There is none. I have installed Virtual Environment 8.0.3 and also tried 8.1.2
@sstupa1 7 months ago
The same issue in 8.1.10. Where the heck is cmdline? 🤔
@Tterragyello 2 months ago
12:48 -- This source mentions IRQ remapping, which I think actually does allow the primary monitor of the server and a VM to 'share' the GPU. Have not tested it yet.
@DublinV1 a year ago
Hi Jeff, are there any drawbacks (i.e. performance) to not blacklisting your GPU from the host Proxmox OS? Currently I have GPU passthrough working, but I didn't blacklist that GPU from the host OS and everything seems to be working without issues. Thanks!
@T3hBeowulf a year ago
Same here. I did everything except the Proxmox blacklist and got it working in a Win11 VM. I also checked the "PCI Express" box in the passthrough dialog in Proxmox for the video card; it did not work without this. Additionally, my 1070 GTX needed a dummy HDMI plug (or external monitor) to initialize correctly.
@ericneo2 11 months ago
If you can convert a video or see apps use CUDA without crashing the VM, then no, you are completely golden.
@nte0631 a year ago
I've followed the instructions, but as soon as I add my HBA as a PCI device being passed through, my VM just boot-loops saying no boot device found. I checked the boot order and made sure it only had the LVM where TrueNAS was installed, but it still does this. If I remove the PCI device, TrueNAS boots fine.
@dimitristsoutsouras2712 a year ago
Nice presentation as always. At 11:26 you could also mention the IOMMU groups: each PCIe device you want to pass through must be in its own IOMMU group, not shared with any other device, because when a group is shared with another device, bad things start to happen. You could also tick the PCI Express option, and mention why you don't tick All Functions (for something like a GPU that consists of separate audio and graphics chips soldered onto the same card). New edit: ok, you've made a quick reference to it near the end of the video.
@Doc_Chronic a year ago
Thank you so much for this! Just what I was looking for
@ccoder4953 a year ago
One thing to mention is that it's not strictly necessary to use PCIe passthrough to get direct access to SATA disks. You can do SATA disk passthrough also: the controller stays on the host and just the disks get passed through. It's a lot more foolproof than PCIe passthrough. For some reason though, Proxmox doesn't have GUI support for it - you'll have to modify your VM config file by hand. Pretty easy though.
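A sketch of that per-disk approach, using qm set rather than editing the config file directly (the VM ID 100 and disk serial are hypothetical; /dev/disk/by-id paths stay stable across reboots):
```bash
# Attach a raw SATA disk to VM 100 as scsi1; the controller stays on the host
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-XXXXXXXX
```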
@truckerallikatuk a year ago
Disk passthrough works fine, but has the disadvantage of making you do work to pass through a replacement drive before you can fix a degraded pool. Probably not the extra steps you need when the array is in trouble.
@dustojnikhummer a year ago
And I should add that you should not do this with ZFS, i.e. TrueNAS. The VM will see those disks as "QEMU virtual disks". I ran that for about 2 weeks until I managed to pass through my HBA. You won't get things like proper S.M.A.R.T. in your VM.
@blkspade23 7 months ago
I've found that sometimes using the "All Functions" option is what is actually causing the failure. Just adding the secondary device manually is more compatible.
@mikequinn8780 a year ago
FYI, many Ryzen chipset drivers have a bug in their passthrough code. There was an early version that worked, then a new version that didn't, and a newer one that did. I spent hours troubleshooting, making sure everything was right in all the configs - nothing. Did a BIOS update, and it was perfect in a moment. I was on an X470 chipset with a Ryzen 2700.
@GeoffSeeley 10 months ago
It's possible to pass through just one of two identical cards using the driverctl package, and it's easier than adding kernel options and blacklists.
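For the curious, driverctl usage is roughly this (the PCI address is an example; find yours with lspci):
```bash
apt install driverctl
# Persistently bind just this one card to vfio-pci, leaving its twin on the host
driverctl set-override 0000:01:00.0 vfio-pci
# Revert later if needed
driverctl unset-override 0000:01:00.0
```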
@NorthhtroN a year ago
FYI, if you are passing through a storage controller and running into slow boot times on your VM, try disabling ROM-Bar on the passthrough device
@nadpro16 9 months ago
Thank you for explaining why you virtualize your file server. I do it through the CLI on Proxmox and wondered why you would do it through a VM. But HW passthrough of the SATA controller makes sense. I'm even thinking about trying it the way you do yours now.
@THEMithrandir09 an hour ago
I've had ballooning enabled on Proxmox 7 and it still worked. I wonder if ballooning knows which areas need to be directly mapped and still works normally.
@munthon a year ago
Thank you. One day, one day, I'll do a setup like this.
@dwrout 3 months ago
I have followed this process on a couple of Proxmox servers (a Chinese Machinist Xeon MB and a Supermicro i9); each time, the only way I could get the Nvidia GPU to pass through successfully was to set up the VM with SandyBridge as the CPU type and the BIOS set to SeaBIOS.
@dotanuki3371 8 months ago
I set up VGA passthrough (what we called it then) back in 2013. I ran Xen, with one GPU for a Windows VM, another GPU for a Linux VM, and a cheap GPU for the console on the host/dom0. Back then it was really messy with card and driver support. Nvidia supported it on Quadro but not on GeForce, so some people took a soldering iron to their GeForce cards to get them to identify as Quadro cards. Then it worked. I used AMD, which worked for setting it up, but not for tearing it back down cleanly, as the driver didn't manage to reset properly. As a result, if I needed to boot any of the VMs, I needed to reboot the whole system. Still, I could play Windows games in a VM with only a ~2% performance drop, and some charming artifacting in the top-left corner, while leaving anything serious to Linux, without having to reboot. Though if not for the tinkering in and of itself, I should have done what I recommended on the forums: "just get two computers".
@98f5 3 months ago
I remember soldering a few GeForce cards to trick them into being Quadros lol, those were the days.
@detectiveinspekta a year ago
I prefer using LXC for GPU passthrough. I'm using it for hardware encoding for Jellyfin. I installed the NVIDIA drivers on both the host and the LXC and did the device passthrough. Worked straight away.
@cts006 a year ago
Meanwhile, I spent 2 days pulling my hair out, barely able to install the NVIDIA drivers, and still don't have it working.
@KILLERTX95 a year ago
I watched this to make sure the process hasn't changed; it hasn't. A minor correction on top of what everyone else is saying: to get passthrough to work, sometimes you need to pass more values to the kernel via GRUB. For EXAMPLE, this:
```bash
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction video=efifb:off video=vesa:off vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1"
```
@Catge a year ago
I have a 3700X and a P1000. Is it not possible to use the P1000 for Plex transcoding, since Proxmox requires a display?
@Catge a year ago
I do have a Quadro K620 that could go into an x1 slot with an adapter. Would this resolve my issue?
@BlkRider a year ago
12:43 Not true, you can pass through the Intel iGPU even if you don't have any other GPU in the system. You do of course have to bind the VFIO driver to it at boot, and you will lose video output for Proxmox. But since you do everything through the web UI or SSH, you don't need video most of the time. You can always reboot into a kernel without the VFIO driver bound to the iGPU if you lose network connectivity or need to fix something. There is also GPU partitioning, which certain Intel GPUs support; then you can use one GPU for both Proxmox and even multiple VMs. That is a bit more hardcore for now though.
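A sketch of that boot-time binding (the vendor:device ID 8086:9a49 is a placeholder for an Intel iGPU; get yours from lspci -nn):
```bash
# Claim the iGPU with vfio-pci before the i915 host driver can grab it
cat > /etc/modprobe.d/vfio-igpu.conf <<'EOF'
options vfio-pci ids=8086:9a49
softdep i915 pre: vfio-pci
EOF
update-initramfs -u -k all
```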
@TedPhillips 4 months ago
I had nvidia-smi working fine for my Quadro, but Plex wasn't doing HW transcode. After throwing some semi-stale additional virtualization tweaks at the wall, the real issue was that I had used my distro's packaged NVIDIA driver, which didn't automatically include libcuda1 and libnvidia-encode1. Eventually figured it out by spelunking through the Plex debug logs. It looks like those two extra packages are enough to get full HW transcode going, but I'll update here if I notice anything else.
@lachaineguitarededavid 8 months ago
Hi. I am currently investigating the idea of building a Proxmox server to run various things, including macOS, since I definitely need/want that one for audio. I can't really find a clear answer, so I feel like asking you this: is it feasible to get low-latency audio in a VM? Not remotely - locally of course, through a USB audio interface. I feel like PCI passthrough of a dedicated USB card could give me something viable, but I'm not completely sure. Maybe I can just pass through my USB controller on the motherboard? But in the end, will it provide something usable for real-time audio processing, as in "I plug my guitar into the audio interface and I hear its sound, processed by the computer, on my loudspeakers in real time with low latency, under, say, 15-30ms"?
@werewolfman007 a year ago
Hey Jeff, have you ever tried Unraid? I'd like to know your point of view on it
@willfancher9775 a year ago
"TrueNAS, which runs the ZFS file system, and needs physical access to hard drives for it to work properly." This just isn't true. As far as ZFS is concerned, a block device is a block device. What matters is that the block devices actually behave like block devices. It's not uncommon for storage abstractions like hardware RAID cards or misconfigured virtual drives to behave badly, but that isn't ZFS's fault, and it would behave badly with any file system. It isn't difficult to correctly set up virtual drives, or pass through individual drives, such that they actually behave like block devices and ZFS (or any FS) is happy. That said, it's still quite nice to do PCIe passthrough, just because it removes unnecessary layers of complexity. But I see this myth about ZFS "needing physical access to the hard drives" all the time, and it simply isn't true.