Setting up the Proxmox Nodes
On the first page of this chapter, we defined our requirements and made some decisions. Now let's set up the Proxmox nodes in VirtualBox and install Proxmox on them.
When designing a network for a Proxmox cluster, relying on a single network connection is not recommended because it can easily become saturated by different types of background traffic. To ensure cluster stability, you should configure a minimum of three separate network interfaces for your test environment. You will need to assign specific IP addresses for management, core sync (heartbeat), and replication.
- Management Interface: This connection serves as the primary way to access and configure your Proxmox hosts. For this specific setup, the management interface will simply sit on your existing internal network.
- Cluster Communication Heartbeat: The cluster communication heartbeat is essentially the lifeline of a Proxmox host. This traffic is incessant, particularly when High Availability is enabled and the nodes are constantly voting on the status of virtual environments. To prevent this constant chatter from flooding your main network link, the heartbeat traffic must be placed on its own isolated network segment, such as a VLAN or a physically isolated switch.
- VM Migration and Storage Replication: Moving machines between nodes and synchronizing data requires significant bandwidth. When you enable Storage Replication, the nodes will copy data across the network every 15 minutes. Combining this heavy file synchronization with the standard cluster chatter can quickly saturate a single network link, which might cause the system to incorrectly determine that cluster nodes are offline. Therefore, setting up an additional, dedicated interface specifically for VM migration and storage replication is required.
Note: While dedicating a network segment for NAS storage is also a highly recommended practice in production environments, it is excluded from the scope of this specific virtualized test exercise.
Creating the Master Virtual Machine (Hardware Mode)
To avoid missing important settings and having to start over, it is recommended to build a single VirtualBox Virtual Machine (VBVM) to act as a template. You can name this master image "proxtpl" so that it stays at the bottom of your VM list. Ensure that you never actually start up the master image; it should always remain powered off.
- Name the master VM "proxtpl" so it appears at the bottom of your VM list.
- Set the type to Linux, Debian (64-bit).
- Set RAM to 2 GB minimum (4 GB or 6 GB is better if you have the memory to spare).
- Create a 32 GB boot drive on SATA0; this is where we will install Proxmox.
- Create two more 32 GB dynamically allocated disks on SATA1 and SATA2; they will be used for our ZFS storage pool. You can also go for 64 GB if you have the storage available.
- Change the boot order so the hard disk is first and the optical drive second.
- Make sure to mount the Proxmox ISO to the optical drive.
- Attach 3 network adapters:
  - nic0 - BRIDGED (management interface)
  - nic1 - HOST-ONLY (cluster interface / core sync), with DHCP disabled
  - nic2 - HOST-ONLY (replication / migration interface), with DHCP disabled
- Before finishing, double-check the boot order: the clones will be booted in headless mode, and forgetting to eject the virtual CD-ROM is annoying.
Once done you should have something like this:
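If you prefer the command line over the GUI, the whole template can also be sketched with VBoxManage. The names used below (the host-only interfaces `vboxnet0`/`vboxnet1`, the bridged host adapter `eth0`, and the disk/ISO paths) are assumptions; adapt them to your host:

```bash
# Create two host-only networks (cluster sync + replication) and
# disable VirtualBox's DHCP server on both, if one was created.
VBoxManage hostonlyif create    # usually becomes vboxnet0
VBoxManage hostonlyif create    # usually becomes vboxnet1
VBoxManage dhcpserver modify --ifname vboxnet0 --disable   # --interface on newer versions
VBoxManage dhcpserver modify --ifname vboxnet1 --disable

# Create and register the template VM (4 GB RAM as suggested above,
# hard disk before optical in the boot order, three NICs).
VBoxManage createvm --name proxtpl --ostype Debian_64 --register
VBoxManage modifyvm proxtpl --memory 4096 --boot1 disk --boot2 dvd \
  --nic1 bridged  --bridgeadapter1 eth0 \
  --nic2 hostonly --hostonlyadapter2 vboxnet0 \
  --nic3 hostonly --hostonlyadapter3 vboxnet1

# Attach a SATA controller with the boot disk (port 0)
# and the two ZFS disks (ports 1 and 2), 32 GB each.
VBoxManage storagectl proxtpl --name SATA --add sata --controller IntelAhci
for i in 0 1 2; do
  VBoxManage createmedium disk --filename "proxtpl-disk$i.vdi" --size 32768
  VBoxManage storageattach proxtpl --storagectl SATA --port "$i" \
    --device 0 --type hdd --medium "proxtpl-disk$i.vdi"
done

# Mount the Proxmox installer ISO on a virtual DVD drive.
VBoxManage storageattach proxtpl --storagectl SATA --port 3 \
  --device 0 --type dvddrive --medium proxmox-ve.iso
```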
Once the master is configured, make three clones of it, naming them prox01, prox02, and prox03.
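On the command line, the cloning could be sketched as follows; `clonevm` produces full clones by default, so each node gets its own independent disks:

```bash
# Clone the powered-off template three times and register the clones.
for n in prox01 prox02 prox03; do
  VBoxManage clonevm proxtpl --name "$n" --register
done
```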
Installing Proxmox VE (Software Mode)
With your three clones created, start them up to begin the installation process.
- Ignore KVM virtualization errors: You will likely see a warning stating that KVM virtualization is not detected. This is expected, since VirtualBox may not pass hardware virtualization through to the nested Proxmox environment, and it is not a problem because the test cluster will run Linux Containers instead of full VMs.
- Pay close attention to hostnames: Ensure you correctly name each node (e.g., `prox01.local`, `prox02.local`, `prox03.local`) during setup, as fixing a misnamed node after the cluster is built is a difficult procedure.
- Select the 32 GB boot drive for the installation.
- Configure sequential IP addresses based on the hostnames to make management easier (e.g., `prox01` – `192.168.0.201`, `prox02` – `192.168.0.202`, `prox03` – `192.168.0.203`; see the predefined configuration below).
- Set the netmask to `255.255.255.0`, or use CIDR notation (`192.168.0.201/24`) in the previous step.
- Set the gateway to your network's gateway, most likely something like `192.168.0.1`.
- Set the DNS server to your network's DNS configuration, most likely also `192.168.0.1`.
- After the installation is complete, shut down the hosts and reboot them in headless mode. This is where the boot order change from earlier comes in handy.
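Headless mode can be selected in the VirtualBox Manager (Start → Headless Start), or from the shell:

```bash
# Boot all three nodes without opening a console window for each.
for n in prox01 prox02 prox03; do
  VBoxManage startvm "$n" --type headless
done
```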
Remember the IP configuration from the overview page:
- Node 1:
  - nic0: `192.168.0.201` (hostname `prox01.local`)
  - nic1: `172.20.1.201`
  - nic2: `172.20.2.201`
- Node 2:
  - nic0: `192.168.0.202` (hostname `prox02.local`)
  - nic1: `172.20.1.202`
  - nic2: `172.20.2.202`
- Node 3:
  - nic0: `192.168.0.203` (hostname `prox03.local`)
  - nic1: `172.20.1.203`
  - nic2: `172.20.2.203`
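The Proxmox installer only configures the management bridge (`vmbr0`); the two host-only interfaces have to be added afterwards, either in the web UI under Network or directly in `/etc/network/interfaces`. A minimal sketch for `prox01`, assuming the guest names the adapters `enp0s8` and `enp0s9` (typical for VirtualBox; verify with `ip link`):

```bash
# Append static addresses for the cluster and replication interfaces.
cat >> /etc/network/interfaces <<'EOF'

auto enp0s8
iface enp0s8 inet static
    address 172.20.1.201/24

auto enp0s9
iface enp0s9 inet static
    address 172.20.2.201/24
EOF

# Apply without a reboot (ifupdown2, the default on recent Proxmox VE).
ifreload -a
```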
After the reboot you should be able to access the web interfaces on port 8006 (e.g., https://192.168.0.201:8006). You will likely be prompted to accept the self-signed certificates.
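To quickly confirm that all three web interfaces are answering, a small check from the host (`-k` skips verification of the self-signed certificates):

```bash
# Expect HTTP 200 from each node's web interface.
for ip in 192.168.0.201 192.168.0.202 192.168.0.203; do
  curl -ks -o /dev/null -w "$ip: %{http_code}\n" "https://$ip:8006/"
done
```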