Setting up the Proxmox Nodes

On the first page of this chapter, we defined our requirements and made some key decisions. Now, let’s set up the Proxmox nodes in VirtualBox and install Proxmox VE on them.


To ensure cluster stability and prevent traffic saturation, your test environment requires a minimum of three separate network interfaces. The section About the Network Interface Design at the end of this page explains the role of each one.

Note: While dedicating a network segment for NAS storage is a best practice in production, it is excluded from the scope of this virtualized test exercise.

Creating the Master Virtual Machine (Hardware Mode)

To avoid missing critical settings or having to restart the process, it is recommended to build a single VirtualBox Virtual Machine (VBVM) to act as a template. You can name this master image proxtpl so that it remains at the bottom of your VM list. Ensure that you never actually start the master image; it must remain powered off.
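
If you prefer working from the host shell, here is a minimal sketch of the same template build using VBoxManage. The VM name, memory, disk size, ISO path, and the bridged host adapter eth0 are assumptions; adjust them to your environment:

# Sketch: create the proxtpl template from the host shell (names and sizes are assumptions)
VBoxManage createvm --name proxtpl --ostype Debian_64 --register
VBoxManage modifyvm proxtpl --memory 4096 --cpus 2
# VirtualBox counts adapters from 1, so --nic1 here is nic0 in the IP plan below
VBoxManage modifyvm proxtpl --nic1 bridged --bridgeadapter1 eth0
VBoxManage modifyvm proxtpl --nic2 intnet --intnet2 pvecluster
VBoxManage modifyvm proxtpl --nic3 intnet --intnet3 pvemigrate
VBoxManage createmedium disk --filename proxtpl.vdi --size 32768
VBoxManage storagectl proxtpl --name SATA --add sata
VBoxManage storageattach proxtpl --storagectl SATA --port 0 --device 0 --type hdd --medium proxtpl.vdi
# Attach the Proxmox installer ISO (the path is an assumption)
VBoxManage storagectl proxtpl --name IDE --add ide
VBoxManage storageattach proxtpl --storagectl IDE --port 0 --device 0 --type dvddrive --medium proxmox-ve.iso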

Once configured, your setup should look like this:

(Screenshot: the configured proxtpl master VM in the VirtualBox manager)

Once the master is configured, create three clones and name them prox01, prox02, and prox03.
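
The cloning can also be scripted. A minimal sketch, assuming the template is named proxtpl as above:

# Create three full clones of the powered-off template
for n in prox01 prox02 prox03; do
  VBoxManage clonevm proxtpl --name "$n" --register
done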

Installing Proxmox VE (Software Mode)

With your three clones created, start them up to begin the installation.

(Screenshot: Proxmox installer, network configuration step)


Recall the IP configuration from the overview page (a matching network configuration sketch follows the list):

  • Node 1:
    • nic0 192.168.0.201 (hostname prox01.local)
    • nic1 172.20.1.201
    • nic2 172.20.2.201
  • Node 2:
    • nic0 192.168.0.202 (hostname prox02.local)
    • nic1 172.20.1.202
    • nic2 172.20.2.202
  • Node 3:
    • nic0 192.168.0.203 (hostname prox03.local)
    • nic1 172.20.1.203
    • nic2 172.20.2.203
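
Note that the Proxmox installer only configures the management interface (nic0). The two internal interfaces have to be configured afterwards, e.g. in /etc/network/interfaces. The following sketch is for prox01; the guest interface names enp0s8 and enp0s9 are assumptions that depend on your adapter order, so verify them with ip link first:

# Sketch for /etc/network/interfaces on prox01 -- interface names are assumptions
auto enp0s8
iface enp0s8 inet static
    address 172.20.1.201/24

auto enp0s9
iface enp0s9 inet static
    address 172.20.2.201/24

Apply the change with ifreload -a (or a reboot) and repeat with the matching addresses on the other two nodes.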

After the reboot, access the web interface on port 8006 (HTTPS). You will likely be prompted to accept a self-signed certificate. Once you accept the security risk, you should see the Proxmox dashboard.

(Screenshot: Proxmox web interface after a fresh installation)
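
If the dashboard does not load, a quick reachability check from any machine on the management network helps narrow the problem down. The -k flag tells curl to accept the self-signed certificate:

# Should return the HTML of the Proxmox login page
curl -k https://192.168.0.201:8006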


Post-Installation

Once your Proxmox nodes have successfully booted, there are several essential post-installation tasks to complete, such as managing repositories and removing subscription warnings. The most efficient way to handle this is by using the Proxmox VE Community Post-Install Script. This interactive script automates the removal of the "No Valid Subscription" nag-screen, configures the correct non-subscription repositories, and optimizes system settings.

bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/tools/pve/post-pve-install.sh)"

These are my recommended choices:

(Screenshot: recommended answers to the script's prompts)

Installing in an Air-Gapped Environment

In an air-gapped (offline) environment, the one-liner above fails because the nodes cannot reach GitHub to download the script. To work around this, you can host a local copy of the script within your network.

1. Host the Script Locally

Download the script to a machine that has network access to your Proxmox nodes. Navigate to the folder containing the script and start a temporary web server using Python:

# Download the script once (on the machine with internet access)
curl -fsSL -O https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/tools/pve/post-pve-install.sh
# Start a local web server on port 8000
python3 -m http.server 8000

2. Execute on the Proxmox Node

Log into your Proxmox node's console and run the following command to pull and execute the script from your local server.

Note: Replace 192.168.0.10 with the actual IP address of the machine running the Python server.

bash -c "$(curl -L http://192.168.0.10:8000/post-pve-install.sh)"

This method allows you to maintain a consistent configuration across all nodes in your cluster without requiring direct internet access for each individual machine.
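
Building on that, all three nodes can be handled in one pass. The following sketch assumes root SSH access to every node and uses a forced TTY (-t) so the script's interactive prompts still work:

# Run the locally hosted script on every node, one after another
for h in prox01 prox02 prox03; do
  ssh -t root@$h.local 'bash -c "$(curl -L http://192.168.0.10:8000/post-pve-install.sh)"'
done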

Setting up ZFS Storage


Updating and final reboot



About the Network Interface Design

A successful Proxmox cluster requires careful network segregation to prevent traffic saturation. At a minimum, you will need three separate network interfaces for your VirtualBox test cluster:

  1. Management Interface: This connects to your internal network and serves as the primary way to access and manage the Proxmox hosts.
  2. Cluster Communication Heartbeat: This interface acts as the lifeline for the Proxmox hosts to communicate. It must be placed on its own isolated network segment (or physically isolated switch) because constant cluster "chatter" and High Availability (HA) voting can easily overwhelm a shared network link.
  3. VM Migration & Storage Replication: A dedicated interface is needed for migrating machines and handling Storage Replication, which by default copies data across nodes every 15 minutes. Combining this heavy file synchronization traffic with standard cluster communication can quickly saturate a single network connection, potentially leading to false-positive node failures. A sketch for pinning migration traffic to this segment follows below.
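
As a sketch of how the third point maps to actual configuration: Proxmox can pin migration traffic to the dedicated segment cluster-wide via /etc/pve/datacenter.cfg. The CIDR below matches the IP plan from the overview page; adjust it if your addressing differs:

# /etc/pve/datacenter.cfg -- route migration traffic over the dedicated network
migration: secure,network=172.20.2.0/24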
