
Reinstalling Proxmox VE Without Losing VM/LXC Data (ZFS Only)


TODO – WORK IN PROGRESS

Warning: This method only works when VMs and LXCs are stored on a ZFS pool separate from the Proxmox boot pool (rpool).
Selecting the wrong disks during installation will permanently erase all VM/container data.

Scope: Designed for single-node Proxmox systems using ZFS. Multi-node clusters require additional steps for cluster recovery and are not covered here.


Baseline Scenario

  • Host Type: Single Proxmox VE node using ZFS
  • Boot Pool: rpool --- ZFS mirror (2× SSD)
  • Data Pool: zfs-data --- ZFS RAIDZ2 (4× NVMe) --- contains all VM and LXC disks
  • Goal: Reinstall Proxmox VE without backing up full VM disk images


Step 1: Backup Critical Configuration Files

The VM and LXC "shells" (metadata) are stored in:

/etc/pve

These files define names, hardware settings, disks, and resources.

VM Configuration Files

/etc/pve/qemu-server/*.conf

Example:

root@pmx02:/etc/pve/qemu-server# ls
100.conf  101.conf  102.conf  104.conf  106.conf  119.conf  132.conf

LXC Configuration Files

/etc/pve/lxc/*.conf

Example:

root@pmx02:/etc/pve/lxc# ls
103.conf

Optional but Recommended

/etc/network/interfaces
/etc/hosts
/etc/pve/storage.cfg

Tip: Use scp, rsync, or WinSCP to copy these to a safe location (e.g., NAS, laptop).
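Step 1 can also be scripted. The sketch below is one way to do it under the assumption that the paths above exist; BACKUP_DIR and the archive location are placeholders you would point at your own safe destination:

```shell
#!/bin/sh
# Sketch: gather the critical Proxmox config files into one dated archive.
# BACKUP_DIR is an assumption -- point it at a NAS mount or other safe location.
BACKUP_DIR=/tmp/pve-config-backup
mkdir -p "$BACKUP_DIR"

# Copy VM/LXC definitions plus network and storage config.
# Errors are suppressed so the script also runs where some paths are absent.
cp -a /etc/pve/qemu-server "$BACKUP_DIR/qemu-server" 2>/dev/null || true
cp -a /etc/pve/lxc         "$BACKUP_DIR/lxc"         2>/dev/null || true
cp -a /etc/network/interfaces /etc/hosts /etc/pve/storage.cfg \
      "$BACKUP_DIR/" 2>/dev/null || true

# Bundle everything into a single archive ready to scp/rsync off-host.
tar -czf "/tmp/pve-config-$(date +%F).tar.gz" -C "$BACKUP_DIR" .
```

The resulting tarball is small (configuration only), so it fits anywhere, unlike full VM disk images.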


Step 2: Gracefully Shut Down All Resources

Ensure data consistency before the reinstall:

# Stop all VMs
qm stop $(qm list | awk 'NR>1 {print $1}')

# Stop all LXCs
pct stop $(pct list | awk 'NR>1 {print $1}')
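The awk 'NR>1 {print $1}' filter in the commands above skips the header row of qm list / pct list output and prints the first column (the numeric ID). A self-contained sanity check against simulated output (the table content is illustrative):

```shell
# Simulated `qm list` output -- real output has the same column layout.
printf '%s\n' \
  '  VMID NAME   STATUS   MEM(MB)  BOOTDISK(GB)  PID' \
  '   100 web01  running  4096     32.00         1234' \
  '   101 db01   stopped  8192     64.00         0' \
| awk 'NR>1 {print $1}'
# Prints 100 and 101, one per line -- the IDs fed to `qm stop`.
```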

Then shut down the host:

shutdown -h now


Step 3: Reinstall Proxmox VE (Carefully!)

  1. Boot from the Proxmox VE ISO (via USB or IPMI).
  2. During disk selection:
       • Select only the SSDs used for rpool
       • Do NOT select the NVMe disks from zfs-data
  3. Complete the installation normally.

Critical: Selecting the wrong disks will permanently destroy your VM data.


Step 4: Import the Existing ZFS Data Pool

After first boot:

zpool import

If zfs-data appears imported automatically, continue.

If not:

zpool import -f zfs-data

Verify the import:

zpool status zfs-data
zfs list

Confirm that the datasets containing VM disks are present.

Note: The -f flag forces the import if the pool was not exported cleanly.
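If the pool imported cleanly, zfs list will show the VM disk datasets. A quick grep can count them; the dataset names below are illustrative, but they follow the vm-<VMID>-disk-<N> / subvol-<VMID>-disk-<N> convention Proxmox uses:

```shell
# Simulated `zfs list -H -o name` output for demonstration.
printf '%s\n' \
  'zfs-data' \
  'zfs-data/vm-100-disk-0' \
  'zfs-data/vm-101-disk-0' \
  'zfs-data/subvol-103-disk-0' \
| grep -c -E 'vm-[0-9]+-disk|subvol-[0-9]+-disk'
# Prints 3 -- the number of VM/LXC disk datasets found.
```

Compare that count against the number of .conf files you backed up in Step 1.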


Step 5: Restore VM and LXC Configuration Files

Using scp, WinSCP, or rsync, copy the backed-up .conf files back:

# VM configs
scp qemu/*.conf root@<new-proxmox-ip>:/etc/pve/qemu-server/
# LXC configs
scp lxc/*.conf root@<new-proxmox-ip>:/etc/pve/lxc/

The files should appear in the web interface immediately after placement.


Step 6: Re-Add the ZFS Pool as Storage (GUI or Config)

Option A: Via Web GUI (Recommended)

  1. Log in to the Proxmox web interface
  2. Navigate to Datacenter → Storage → Add → ZFS
  3. Select pool: zfs-data
  4. Set content: Disk image, Container
  5. Adjust the remaining options as needed

Option B: Restore from Backup (if you saved storage.cfg)

scp storage.cfg root@<new-proxmox-ip>:/etc/pve/storage.cfg

Restart the pve-cluster service if the changes don't appear:

systemctl restart pve-cluster
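For reference, a typical zfspool entry in /etc/pve/storage.cfg looks like the fragment below; the values are illustrative ("images" corresponds to Disk image, "rootdir" to Container), and the sparse line is optional:

```
zfspool: zfs-data
        pool zfs-data
        content images,rootdir
        sparse 1
```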

Step 7: Verify /etc/pve Availability (Important)

If /etc/pve appears empty:

systemctl status pve-cluster
systemctl status corosync
journalctl -xe

The cluster filesystem must be active before configuration files are usable.


Final Check: Start a Test VM/LXC

qm start 100
pct start 103

If both start correctly, the system is operational.


Recovery Note

If configuration backups are missing, metadata may still be recoverable by inspecting:

/zfs-data

or:

zfs list

Dataset names often correspond to VM IDs.
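For example, the numeric IDs can be pulled straight out of Proxmox-style dataset names with a small sed pass; the names below are illustrative:

```shell
# Extract VMIDs from vm-/subvol- dataset names as listed by `zfs list`.
printf '%s\n' \
  'zfs-data/vm-100-disk-0' \
  'zfs-data/subvol-103-disk-0' \
| sed -E 's/.*(vm|subvol)-([0-9]+)-disk.*/\2/'
# Prints 100 and 103 -- the VM and container IDs to rebuild configs for.
```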


Summary

By preserving only the configuration files and re-importing the existing ZFS data pool, you can fully reinstall Proxmox VE without migrating terabytes of VM disk images, significantly reducing downtime and storage requirements. The added safety checks help ensure a reliable and recoverable process.