FreeBSD Virtualization with bhyve
2023-12-11
When I rebuilt my home server, I mentioned that I wanted to use FreeBSD’s native hypervisor, bhyve.
I have wanted to do this for about… 11 years? When I was at Bay Photo Lab, running Xen and Convirture, I wrote a blog post about it. At a BSDCon, Michael Dexter was asking around about me and that post, because I had mentioned that I wished I could have used bhyve, since I prefer the stable nature of FreeBSD.
Well Mr. Dexter, I finally got around to it.
At home I tend to have a mix of some automation and some “click-ops”, because I just don’t have the time to automate when there is literally no scale here.
And I’m super duper lazy, so I looked for something like Proxmox, just on FreeBSD.
As it turns out, someone felt the same need and wrote BVCP
BVCP is really nice and easy to use, and you can get up and running in just a few minutes. Be warned, though: it ships as a binary and is not an open-source project. For all I know, it’s siphoning off CPU time to mine BTC. The maintainer is very helpful, though; I emailed them about a broken URL and they got back to me right away. So take and use this information as you see fit.
/etc/rc.conf:
kld_list="nmdm vmm"
vm_enable="YES"
vm_dir="zfs:data/virt"
bvcp_enable="YES"
Classic ifconfig output:
igb0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=4e527bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6,NOMAP>
ether 00:25:90:be:1f:ae
inet 192.168.1.13 netmask 0xffffff00 broadcast 192.168.1.255
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
igb1: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=4a520b9<RXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,NOMAP>
ether 00:25:90:be:1f:af
inet 192.168.1.14 netmask 0xffffff00 broadcast 192.168.1.255
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
inet 127.0.0.1 netmask 0xff000000
groups: lo
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
bridge300: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 58:9c:fc:10:b3:3f
id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
member: tap309 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 14 priority 128 path cost 2000000
member: tap304 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 11 priority 128 path cost 2000000
member: tap303 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 10 priority 128 path cost 2000000
member: tap302 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 9 priority 128 path cost 2000000
member: tap300 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 8 priority 128 path cost 2000000
member: tap305 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 7 priority 128 path cost 2000000
member: tap306 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 6 priority 128 path cost 2000000
member: tap301 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 5 priority 128 path cost 2000000
member: igb1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 2 priority 128 path cost 20000
groups: bridge
nd6 options=9<PERFORMNUD,IFDISABLED>
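BVCP creates the bridge and tap interfaces for you from its web UI, but for reference, a manual equivalent in /etc/rc.conf might look like the sketch below (an assumption on my part, using igb1 as the uplink to match the output above; the tap members get added per VM as each one starts):

```shell
# Persistent bridge with igb1 as the uplink; BVCP normally manages this
# itself, and the tap devices join the bridge as each VM boots.
cloned_interfaces="bridge300"
ifconfig_bridge300="addm igb1 up"
```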
BVCP can be pointed at any directory on your system. I’m using ZFS, so I just made a new data/virt dataset. Here is what the structure looks like:
tree /data/virt
/data/virt
├── iso_images
└── vm_images
├── 389-2_.img
├── 389_root.img
├── docker-1_data.img
├── docker-1_root.img
├── docker-2_data.img
├── docker-2_root.img
├── docker-3_data.img
├── docker-3_root.img
├── git_root.img
└── jenkins_root.img
3 directories, 10 files
$ sparseseek /data/virt/vm_images/docker-1_data.img
/data/virt/vm_images/docker-1_data.img:
Data: 1051197 kB 1026560 KiB
Holes: 106322984 kB 103831040 KiB
Total: 107374182 kB 104857600 KiB
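Those disk images are sparse files: they only consume real disk space for blocks that have actually been written, which is why the data portion above is so much smaller than the total. You can reproduce the effect with standard tools; a minimal sketch (the stat invocation is doubled up to cover both GNU and BSD variants):

```shell
# Create a 100 MB file that is one big hole: nothing is allocated yet.
f=$(mktemp)
truncate -s 100M "$f"

# Apparent (logical) size vs. blocks actually allocated on disk.
stat -c '%s bytes apparent' "$f" 2>/dev/null || stat -f '%z bytes apparent' "$f"
du -k "$f"     # allocated size in KB: effectively zero until data is written
rm "$f"
```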
FreeBSD reports whether it’s running under a hypervisor via the kern.vm_guest sysctl:
root@git-1:/var/log/gogs # sysctl kern.vm_guest
kern.vm_guest: bhyve
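This makes it easy for scripts to branch on where they’re running; a small sketch (kern.vm_guest reports "none" on bare metal, or the hypervisor name otherwise):

```shell
# Detect the hypervisor; fall back to "none" where the sysctl is absent.
guest=$(sysctl -n kern.vm_guest 2>/dev/null || echo none)

case "$guest" in
  bhyve) echo "running under bhyve" ;;
  none)  echo "bare metal (or unsupported platform)" ;;
  *)     echo "running under $guest" ;;
esac
```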
Pretty much everything worked right away. My little t2 VM here only had about 512 MB of RAM, which is plenty for FreeBSD. I was scratching my head over why Linux would crash during Anaconda’s startup; I finally gave the VM 2 GB and it worked.
I’m not sure why, but only FreeBSD VMs could use the NVMe disk driver. Linux had to use the VirtIO device type.
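For reference, the device type maps to the disk emulation named in bhyve’s -s (PCI slot) flag; a hedged sketch of what the panel is generating under the hood (slot numbers and elided arguments are illustrative):

```sh
# FreeBSD guest: the NVMe emulation works fine.
bhyve ... -s 3,nvme,/data/virt/vm_images/git_root.img ...

# Linux guest: fall back to the VirtIO block device.
bhyve ... -s 3,virtio-blk,/data/virt/vm_images/docker-1_root.img ...
```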
Some ps output to see what that’s all about:
root 63342 11.2 2.0 16927748 2715340 - SC 2Dec23 2862:27.24 bhyve: docker-1 (bhyve)
root 2998 6.4 0.8 8484336 1012892 - SC 28Nov23 1379:18.80 bhyve: 389 (bhyve)
root 2364 4.2 2.2 16927492 3000996 - SC 28Nov23 1055:14.66 bhyve: docker-3 (bhyve)
root 2469 1.6 7.8 16927492 10405056 - SC 28Nov23 1148:25.73 bhyve: docker-2 (bhyve)
root 2717 0.0 1.4 2197104 1840940 - SC 28Nov23 98:37.59 bhyve: git (bhyve)
root 2838 0.0 0.6 16889840 742428 - SC 28Nov23 213:56.95 bhyve: jenkins (bhyve)
root 58937 0.0 1.9 8484336 2574640 - SC 2Dec23 139:59.97 bhyve: 389-2 (bhyve)
Honestly, it’s very informative: accurate CPU utilization and memory allocation alongside the VM name. Perfect.
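If you only want the per-VM numbers, awk can pull them straight out of that ps output; a small sketch that assumes FreeBSD’s default `ps aux` formatting, where the command column reads "bhyve: <name> (bhyve)":

```shell
# Print VM name, %CPU, and resident memory (MB) for each bhyve process.
# In default `ps aux` output, field 11 is "bhyve:" and field 12 the VM name.
ps aux | awk '$11 == "bhyve:" { printf "%-10s cpu=%5s%%  rss=%6.0f MB\n", $12, $3, $6/1024 }'
```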
So what do I have? What am I even doing?
I don’t know! Nothing, really; it’s just fun!
My current VMs are:
- 389 LDAP server. I’ve always used OpenLDAP and I wanted to see what 389 was like
- Jenkins. I have a few scripts that do things, and they sometimes fail. I’d like to have Jenkins do it. Like package builds.
- Gogs git server. Just for private infrastructure only repositories
- Docker. I currently have three, running as a three-node k3s cluster
All of these VMs use DHCP. Since my Kea setup is wonderfully integrated with my home’s BIND servers via dynamic DNS, all of my VMs are reachable by hostname. The LDAP server has an SSH key schema extension, so while I only have one user and group (me), it lets me SSH with public-key authentication.
I have salt internally that wires my internal certificate authority, ssh, sssd, etc…
But most importantly, I wanted a way to use containers at home. Playing around with kubernetes is a bonus since it lets me do all the fun ingress stuff.
For my home, bhyve is fantastic, and so is BVCP. It works very well in my opinion. Would I run a company off of it? No, absolutely not. Do I think bhyve has a future? Yes; there is nothing wrong with having another completely open-source hypervisor in the mix.
What I would like to see:
- Terraform support. If I were to adopt this in a business (a big if, and purely hypothetical; FreeBSD’s lack of commercial support has always been a big problem), I would not use BVCP, but I also would not want to write my own templating and deployment pipeline for vm-bhyve. Terraform would be best.
- Something like vMotion
- OpenStack support
- Secure Boot, in case I want to create a new Windows VM (never mind, this appeared in 14.0-RELEASE)
I think it already has PCI device passthrough, and GPU passthrough is very close. Speaking of Windows, I did provision a 2022 server, but without Secure Boot support it would be a challenge booting Windows 12.
Okay, that is all. I’ll be sure to follow up over the next year to see what has worked and what has not worked.
I also still have a few jails (rsyslog, Minecraft, some other test database), and I’m still wondering whether I should move those or not. Jails are so lightweight and easy to forget, but iocage isn’t really being maintained, so I should just get off of it.