Factory Bicycle

Adventures in Proxmox

Back in August of 2021, I was talking to my friend about possibly setting up a new rig. A while back he had sent me a GPU (an AMD RX 570) that he wasn't using, so I tried dropping it into my Unraid machine to see if I could get it to pass through for some VMs. That sorta worked, but not 100%: I could get video to pass through, but the way the motherboard in my Unraid machine is set up doesn't play nice with USB controller passthrough, so that was a bit of a deal-breaker.

We discussed some other options, and then he told me about Proxmox, a type 1 hypervisor capable of running multiple operating systems under one hood (including macOS, but not without some tinkering, which I'll get into later). Since I like to flip between OSes, Proxmox seemed like a better fit than building out another Unraid machine. While Unraid is a great option and has a lot under the hood, at its core it's a NAS, and you need a certain number of drives to make it work. Granted, I could've done this, but I didn't want to go through the hassle of finding a case to accommodate some larger HDDs just to run VMs effectively.

I went ahead and bought a used ASUS X299 motherboard and an Intel i7 processor from eBay. I wasn't in a big rush to build this out, so I picked up more pieces of the build as time went on. At the beginning of this month, I finally decided to pull the trigger and get the rest of the parts I needed to finish out the machine.

Once I got all the hardware slotted in, I installed Proxmox and began setting everything up.

Since I had never really used anything like this before (outside of Unraid), I needed some help. Luckily, there were two videos on YouTube that helped out:

Craft Computing: https://youtu.be/azORbxrItOo
Network Chuck: https://youtu.be/_u8qTN3cCnQ

The Craft Computing video is good for the overall setup, but the Network Chuck one was better on how to allocate storage effectively, since I wasn't planning on using a ZFS RAID across the three SSDs in my rig. Once I was able to successfully load up some ISOs and get them to run, I moved on to GPU passthrough.
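
For anyone who hasn't touched the Proxmox CLI, here's roughly what spinning up a test VM looks like from the shell. This is just a minimal sketch: the VM ID, names, disk size, and ISO filename are placeholder assumptions, not values from my actual setup.

# Assumes an ISO was already uploaded to the "local" storage via the web UI.
# 100 is an arbitrary VM ID; "local-lvm" is the default LVM-thin storage.
qm create 100 --name test-vm --memory 4096 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --cdrom local:iso/some-installer.iso
qm start 100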

Now, this is where things start to get tricky. There are a couple of really great guides out there that give you a complete walkthrough on getting a GPU to pass through via PCI Express and getting your VM to show on a monitor. But there are some steps involved before that magic can happen (more on that just below). What I found were this guide and this video, which walk through the steps:

Reddit Guide: https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/
Lo-Res DIY YouTube video: https://youtu.be/5ce-CcYjqe8
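
Before either of those gets to the fun part, the groundwork both guides cover is turning on IOMMU and loading the VFIO modules. On an Intel board it boils down to roughly this (a sketch of the standard steps from the guides, not a dump of my exact config):

# 1. In /etc/default/grub, add the IOMMU flags to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# 2. Apply the GRUB change:
update-grub

# 3. In /etc/modules, add the VFIO modules so they load at boot:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# 4. Reboot, then check that IOMMU came up:
dmesg | grep -e DMAR -e IOMMU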

Step by step, these two made it pretty easy, but there were a couple of issues I found that needed addressing in my setup, particularly around using an AMD GPU.

One thing is that they don't include amdgpu in the blacklisting part. So when you get to the driver blacklisting step, you can just run

nano /etc/modprobe.d/blacklist.conf

and then add these lines:

blacklist radeon
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist amdgpu

Then just write the file out, and that should also blacklist the drivers for any AMD-based GPU.
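
Worth noting (a general Debian detail rather than something from the guides): if the drivers still load after a reboot, the blacklist may need to be baked into the initramfs as well:

update-initramfs -u
reboot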

The other thing I was bashing my head against a wall with was the video=vesafb:off,efifb:off in the GRUB config. After setting this all up, I found out that it needs to be this instead:

video=vesafb:off video=efifb:off

That magically keeps it from crashing and burning when it tries to load the GPU for the framebuffer.
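
Putting it all together, the kernel command line in /etc/default/grub ends up looking roughly like the line below; the non-video flags are just the IOMMU ones from earlier, so your line may differ:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=vesafb:off video=efifb:off"

# Then apply and reboot:
update-grub
reboot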

One other key thing to point out when setting up a VM: do NOT have "Pre-Enrolled Keys" checked when selecting the BIOS in the System tab (I'll add a screenshot later). When that's checked, the VM borks on load and just hangs. This happens regardless of GPU passthrough, too. Leaving it unchecked from the get-go will save you a migraine.
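
If you've already created the VM with it checked, I believe you can recreate the EFI disk with the keys disabled instead of rebuilding the whole VM. Something like this should do it (assuming a VM ID of 100 and local-lvm storage; the pre-enrolled-keys option is there in recent Proxmox versions, so double-check yours):

# Drop the existing EFI disk, then recreate it without pre-enrolled keys.
qm set 100 --delete efidisk0
qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0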

But the last big hurdle I needed (wanted) to clear was installing macOS Big Sur and getting it to run. The two resources for that were the Nicholas Sherlock guide and another YouTube guide from here, which was the one that really helped.

I already had a Big Sur ISO made from when I was testing out running Big Sur in VirtualBox (which was... okay, but not that optimal), so that part was done and ready to go. I downloaded the OpenCore ISO from the repo listed in the steps from the YouTube video (which I think was the Nicholas Sherlock one anyway) and was able to set up the VM with those steps. In the end, I could get it to load and boot properly within the VNC viewer in the Proxmox web UI. But I wanted to get the GPU to pass through, and that's where I kept having issues.
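
For anyone following along, the non-obvious part of that VM setup is the custom args line the guide has you add to /etc/pve/qemu-server/<vmid>.conf. From memory it looks roughly like the line below; the OSK string is redacted here, and the exact CPU flags are my recollection of the guide rather than my own notes, so check against it:

args: -device isa-applesmc,osk="<OSK string from the guide>" -smbios type=2 -cpu host,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc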

Within the macOS boot, if you don't select the OS at the boot picker, it won't do anything, so you need at least a keyboard plugged in to make the selection. The problem was that when I was passing a USB device through, I had "enable USB 3.0" checked, and that was causing an issue. After digging through the logs, I realized it was crashing while trying to load. So I took those out and ended up passing through the USB controller via PCI Express, which seemed to help, but it was still freezing on boot. Then I thought the keyboard was causing an issue, so I logged in via the VNC viewer and edited the config.plist to boot automatically without needing a keyboard hooked up, but that wasn't working either. I was stumped. I posted on the Proxmox forum, and I also went through the process of fixing the AMD reset bug. Still no go. After digging through the internet, I stumbled upon a Reddit post from someone who seemed to be having the same issue I was. Turns out there was one piece missing in the VM config, which was this:
-global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off

Once I added that to the VM config file, it was able to boot with GPU passthrough.
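
To be clear about where that goes (using a hypothetical VM ID of 100 for illustration): it gets appended to the end of the existing args line in /etc/pve/qemu-server/100.conf, alongside whatever args the macOS guide already had you add:

args: <existing args from the macOS guide> -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off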

So, mission accomplished, right? Ehhhhhh...

While I have Big Sur loading up, there are still some other "bugs" I need to iron out. iMessage isn't working, and at the time of writing, the ethernet connection isn't working either, for some strange reason. So I need to go through the OpenCore docs and try to hash out some of these lingering issues if I want to make it my "daily driver". I also noticed that when I do a shutdown from macOS, Proxmox seems to go haywire a bit and not fully close off the process. Not sure what the problem is there, but I'll have to look into that a bit more as well.

Needless to say, running macOS on non-Apple hardware is tricky, and while you can have some success with it, it's never really a sure thing. I ran a hackintosh a few years ago, and while that was good and somewhat stable, it too was touchy, and updates were always hit or miss. So am I surprised this was also a PITA? No, but I'm glad it's through virtualization, so even if I don't get it to 100%, I can fall back to something else.

Aside from that, I want to get another GPU and be able to split it so I can run two monitors off two VMs sharing one card. But seeing as I've already invested a good amount of money and time into this build, I'm going to hold off on that for now.

Anyway, if anyone happens to come across this while trying to set up VMs in Proxmox, I hope this helps! If nothing else, it was a way for me to remember what I did.