ProxmoxVE

Initial Installation / Configuration

Proxmox Virtual Environment is an open source server virtualization management solution based on QEMU/KVM and LXC. You can manage virtual machines, containers, highly available clusters, storage and networks with an integrated, easy-to-use web interface or via CLI.

Note

This document assumes you have a storage server that hosts ISO files via a CIFS/SMB share and can provide an iSCSI LUN for VM & container storage. It also assumes that you are using a TrueNAS Core server to host both of these services.

Create the first Node

You will need to download the Proxmox VE 8.1 ISO installer from the official Proxmox website. Once it is downloaded, use balenaEtcher or Rufus to write the image to a USB drive, then boot the server from it to install Proxmox.
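
Optionally, verify the download before writing it to the USB drive by comparing its SHA256 checksum against the one published on the download page (the filename below is illustrative and varies by release):

sha256sum proxmox-ve_8.1-1.iso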

Warning

If you are virtualizing Proxmox under a Hyper-V environment, you will need to follow the Official Documentation to ensure that nested virtualization is enabled. An example is listed below:

Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true # (1)
Get-VMNetworkAdapter -VMName <VMName> | Set-VMNetworkAdapter -MacAddressSpoofing On # (2)

  1. This tells Hyper-V to allow the guest VM to behave as a hypervisor nested under Hyper-V, passing the virtualization functionality of the host CPU through to the guest VM.
  2. This tells Hyper-V to allow your guest VM to run multiple nested virtual machines with their own independent MAC addresses. This is useful for nested virtual machines, and is also a requirement when you set up a Docker network leveraging MACVLAN technology.
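
Once Proxmox is installed inside the guest VM, you can confirm the extensions were passed through by counting the virtualization CPU flags (vmx for Intel, svm for AMD); a nonzero result means nested virtualization is available:

grep -Ec '(vmx|svm)' /proc/cpuinfo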

Networking

You will need to set a static IP address; in this case, it will be an address within the 20GbE network. You will be prompted to enter these values during the Proxmox VE installation. Be sure to set the hostname so that it matches the following FQDN: proxmox-node-01.MOONGATE.local.

| Hostname        | IP Address      | Subnet Mask         | Gateway | DNS Server | iSCSI Portal IP |
| --------------- | --------------- | ------------------- | ------- | ---------- | --------------- |
| proxmox-node-01 | 192.168.101.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.101.100 |
| proxmox-node-01 | 192.168.103.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.103.100 |
| proxmox-node-02 | 192.168.102.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.102.100 |
| proxmox-node-02 | 192.168.104.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.104.100 |
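
Proxmox expects the node's hostname to resolve to its primary IP address. If you need to adjust this after installation, the relevant /etc/hosts entries for the first node would look like the following sketch, based on the table above:

/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.101.200 proxmox-node-01.MOONGATE.local proxmox-node-01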

iSCSI Initiator Configuration

You will need to add the iSCSI initiator from the Proxmox node to the allowed initiator list in TrueNAS Core under "Sharing > Block Shares (iSCSI) > Initiators Groups".

In this instance, we will reference Group ID 2. We need to add the initiator to the "Allowed Initiators (IQN)" section. This group also allows the following networks to connect to the iSCSI portal:

  • 192.168.101.0/24
  • 192.168.102.0/24
  • 192.168.103.0/24
  • 192.168.104.0/24

To get the iSCSI initiator IQN of the current Proxmox node, navigate to the Proxmox server's web UI, typically located at https://<IP>:8006, and log in with the username root and the password you set during the initial installation.

  • On the left-hand side, click on the name of the server node (e.g. proxmox-node-01 or proxmox-node-02)
  • Click on "Shell" to open a CLI to the server
  • Run the following command to get the iSCSI Initiator (IQN) name to give to TrueNAS Core for the previously-mentioned steps:
    sed -n 's/^InitiatorName=//p' /etc/iscsi/initiatorname.iscsi
    

Example

Output of this command will look something like iqn.1993-08.org.debian:01:b16b0ff1778.
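
Once the IQN is in the TrueNAS allowed-initiators list, the LUN can be attached from the node's shell with pvesm. A minimal sketch; the storage ID and target IQN here are hypothetical, so substitute the target name from your TrueNAS iSCSI configuration:

pvesm add iscsi truenas-iscsi --portal 192.168.101.100 --target iqn.2005-10.org.freenas.ctl:proxmox --content none
pvesm status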

Disable Enterprise Subscription functionality

You will likely not be paying for the enterprise subscription, so we are going to disable that repository and enable the no-subscription repository. The no-subscription builds are less tested than the enterprise ones, but they are generally stable and should not cause you any issues.

Add No-Subscription Update Repository:

/etc/apt/sources.list
# Add to the end of the file
# Non-Production / No-Subscription Updates
deb https://download.proxmox.com/debian/pve bookworm pve-no-subscription

Warning

Note the reference to bookworm in the sections above and below this notice; the codename may differ depending on the version of Proxmox VE you are deploying. Reference the release used by the other entries in the sources.list file to know which codename to use in the added line.
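
If in doubt, the Debian codename the node is built on can be read directly from os-release:

grep VERSION_CODENAME /etc/os-release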

Comment-Out Enterprise Repository:

/etc/apt/sources.list.d/pve-enterprise.list
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

Pull / Install Available Updates:

apt update
apt dist-upgrade
reboot

NIC Teaming

You will need to set up NIC teaming to configure an LACP LAGG. This adds redundancy and gives devices outside of the 20GbE backplane a way to interact with the server.

  • Ensure that all of the network interfaces appear similar to the following:

    /etc/network/interfaces
    iface eno1 inet manual
    iface eno2 inet manual
    # etc
    

  • Adjust the network interfaces to add a bond:

    /etc/network/interfaces
    auto eno1
    iface eno1 inet manual
    
    auto eno2
    iface eno2 inet manual
    
    auto bond0
    iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode 802.3ad
            bond-xmit-hash-policy layer2+3
    
    auto vmbr0
    iface vmbr0 inet static
            address 192.168.0.11/24
            gateway 192.168.0.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
    #        bridge-vlan-aware yes # I do not use VLANs
    #        bridge-vids 2-4094 # I do not use VLANs (this could be set to any VLANs you want it to be a member of)
    

Warning

Be sure to include both interfaces of the dual-port 10GbE connection in the network configuration. The final example document will be updated once the production server is operational.

  • Reboot the server again so the networking changes take full effect. Use iLO / iDRAC / IPMI if your server has that functionality, in case the configuration goes awry and manual intervention / troubleshooting is needed to regain SSH control of the Proxmox server.
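
After the reboot, you can confirm that LACP negotiation succeeded by inspecting the kernel's bonding status file. Proxmox ships ifupdown2, so later edits to /etc/network/interfaces can also be applied without a full reboot:

# Verify bond mode, LACP partner details, and slave interface state
cat /proc/net/bonding/bond0

# Apply subsequent /etc/network/interfaces changes without rebooting
ifreload -a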

Generalizing VMs for Cloning / Templating

These are the commands I run after cloning a Linux machine so that it resets all identifying information carried over from the machine it was cloned from.

Note

If you use cloud-init-aware OS images as described under Cloud-Init Support on https://pve.proxmox.com/pve-docs/chapter-qm.html, these steps won’t be necessary!

Change Hostname
sudo nano /etc/hostname
Change Hosts File
sudo nano /etc/hosts
Reset the Machine ID
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure
Regenerate SSH Keys
rm -f /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
Reboot the Server to Apply Changes
reboot
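
If you template machines regularly, the steps above can be collected into a single script to run inside the clone before its final reboot. A sketch of the same commands; the script name and hostname argument are illustrative:

#!/usr/bin/env bash
# generalize.sh <new-hostname> -- reset clone-specific identity; run as root
set -euo pipefail

NEW_HOSTNAME="$1"
OLD_HOSTNAME="$(hostname)"

# Hostname and hosts file
echo "$NEW_HOSTNAME" > /etc/hostname
sed -i "s/$OLD_HOSTNAME/$NEW_HOSTNAME/g" /etc/hosts

# Machine ID
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure

# SSH host keys
rm -f /etc/ssh/ssh_host_*
dpkg-reconfigure -f noninteractive openssh-server

reboot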

Configure Alerting

Setting up alerts in Proxmox is critical to making sure you are notified if something goes wrong with your servers.

https://technotim.live/posts/proxmox-alerts/