# ZFS Over iSCSI
Purpose: ProxmoxVE and TrueNAS can be integrated more deeply using SSH, simplifying the deployment of virtual disks/volumes passed into GuestVMs in ProxmoxVE. Using ZFS over iSCSI gives you the following non-exhaustive list of benefits:
- Automatically create Zvols in a ZFS storage pool
- Automatically bind device-based iSCSI Extents/LUNs to the Zvols
- Allow TrueNAS to handle VM snapshots directly
- Simplify the filesystem overhead of using TrueNAS and iSCSI with ProxmoxVE
- Ability to take snapshots of GuestVMs
- Ability to perform live-migrations of GuestVMs between ProxmoxVE cluster nodes
## Environment Assumptions

This document assumes you are running at least two ProxmoxVE nodes; for the sake of the example, they are named `proxmox-node-01` and `proxmox-node-02`. It also assumes you are using TrueNAS Core. TrueNAS SCALE should work the same way, but there may be minor operational or setup differences between the two deployments of TrueNAS.

Secondly, this guide assumes the ProxmoxVE cluster nodes and the TrueNAS server exist on the same network, `192.168.101.0/24`.
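As a quick point of reference, each ProxmoxVE node should be able to reach the TrueNAS server before you begin (the address `192.168.101.100` is the one used for the SSH and iSCSI steps later in this guide):

```bash
# From each ProxmoxVE node: confirm the TrueNAS storage server is reachable
ping -c1 192.168.101.100
```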
## ZFS over iSCSI Operational Flow
```mermaid
sequenceDiagram
    participant ProxmoxVE as ProxmoxVE Cluster
    participant TrueNAS as TrueNAS Core (inc. iSCSI & ZFS Storage)
    ProxmoxVE->>TrueNAS: Cluster VM node connects via SSH to create Zvol for VM
    TrueNAS->>TrueNAS: Create Zvol in ZFS storage pool
    TrueNAS->>TrueNAS: Bind Zvol to iSCSI LUN
    ProxmoxVE->>TrueNAS: Connect to iSCSI & attach Zvol as VM storage
    ProxmoxVE->>TrueNAS: (On-Demand) Connect via SSH to create VM snapshot of Zvol
    TrueNAS->>TrueNAS: Create snapshot of Zvol/VM
```
## All ZFS Storage Nodes / TrueNAS Servers

### Configure SSH Key Exchange

You first need to make some changes to the SSHD configuration of the ZFS server(s) storing data for your cluster. This is fairly straightforward and only requires adjusting two lines, based on the Proxmox ZFS over iSCSI documentation. Be sure to restart the SSH service or reboot the storage server after making the changes below before proceeding to the next steps.
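The exact lines are not reproduced here, so the following is a hedged example only: guidance for this plugin commonly points at the two `sshd` settings below, which speed up and stabilize key-based logins. Verify the exact lines against the current Proxmox ZFS over iSCSI documentation before applying them.

```bash
# /etc/ssh/sshd_config on the TrueNAS / ZFS storage server
# (assumed values -- confirm against the Proxmox ZFS over iSCSI documentation)
UseDNS no                  # skip reverse-DNS lookups that slow down each SSH login
GSSAPIAuthentication no    # fall through to public-key authentication immediately
```

On TrueNAS, settings like these are typically entered via the WebUI (Services > SSH > Auxiliary Parameters) so they persist across reboots; remember to restart the SSH service afterwards as noted above.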
## All ProxmoxVE Cluster Nodes

### Configure SSH Key Exchange
The first step is creating SSH trust between the ProxmoxVE cluster nodes and the TrueNAS storage appliance. You will leverage the ProxmoxVE shell on every node of the cluster to run the following commands.

Note: The SSH key file is named after `192.168.101.100` for simplicity, so it is clear which server the identity belongs to. You could also name it something else, such as `storage.bunny-lab.io_id_rsa`.
```bash
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.101.100_id_rsa # (1)
ssh-copy-id -i /etc/pve/priv/zfs/192.168.101.100_id_rsa.pub [email protected] # (2)
ssh -i /etc/pve/priv/zfs/192.168.101.100_id_rsa [email protected] # (3)
```
1. Do not set a passphrase; it will break the automatic functionality.
2. Send the SSH public key to the TrueNAS server.
3. Connect to the TrueNAS server at least once to accept its host key and finish establishing the connection.
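As an optional sanity check (not part of the original steps), each node should now be able to run a remote command on the TrueNAS server without any password prompt:

```bash
# Should list the TrueNAS ZFS datasets without prompting for a password
ssh -i /etc/pve/priv/zfs/192.168.101.100_id_rsa [email protected] zfs list
```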
### Install & Configure Storage Provider

Now you need to set up the TrueNAS storage provider on the ProxmoxVE side. Run the commands below within a ProxmoxVE shell; when finished, log out of the ProxmoxVE WebUI, clear the browser cache for ProxmoxVE, then log back in. This adds a new storage provider called `FreeNAS-API` under the `ZFS over iSCSI` storage type.
```bash
# Import the repository signing key
keyring_location=/usr/share/keyrings/ksatechnologies-truenas-proxmox-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/gpg.284C106104A8CE6D.key' | gpg --dearmor >> ${keyring_location}

# Add the KSATechnologies repository
cat << EOF > /etc/apt/sources.list.d/ksatechnologies-repo.list
# Source: KSATechnologies
# Site: https://cloudsmith.io
# Repository: KSATechnologies / truenas-proxmox
# Description: TrueNAS plugin for Proxmox VE - Production
deb [signed-by=${keyring_location}] https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/deb/debian any-version main
EOF

# Install the plugin and bring the system up to date
apt update
apt install freenas-proxmox
apt full-upgrade

# Restart ProxmoxVE services so the plugin is loaded
systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
```
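Optionally, confirm the plugin package actually installed before continuing:

```bash
# Print the install status and version of the plugin package
dpkg -s freenas-proxmox | grep -E '^(Package|Status|Version)'
```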
## Primary ProxmoxVE Cluster Node

From this point, we are ready to add the shared storage provider to the cluster via the primary node in the cluster. Using the primary node is not strictly required; it just simplifies the documentation.
Navigate to "Datacenter (BUNNY-CLUSTER) > Storage > Add > ZFS over iSCSI"
| Field | Value | Additional Notes |
| --- | --- | --- |
| ID | `bunny-zfs-over-iscsi` | Friendly name |
| Portal | `192.168.101.100` | IP address of the iSCSI portal |
| Pool | `PROXMOX-ZFS-STORAGE` | The ZFS storage pool you will use to store GuestVM disks |
| ZFS Block Size | `4k` | |
| Target | `iqn.2005-10.org.moon-storage-01.ctl:proxmox-zfs-storage` | The iSCSI target |
| Target Group | `<Leave Blank>` | |
| Enable | `<Checked>` | |
| iSCSI Provider | `FreeNAS-API` | |
| Thin-Provision | `<Checked>` | |
| Write Cache | `<Checked>` | |
| API use SSL | `<Unchecked>` | Leave disabled unless you have SSL enabled on TrueNAS |
| API Username | `root` | The account that is allowed to create ZFS Zvols / datasets |
| API IPv4 Host | `192.168.101.100` | iSCSI portal address |
| API Password | `<Root Password of TrueNAS Box>` | |
| Nodes | `proxmox-node-01,proxmox-node-02` | All ProxmoxVE cluster nodes |
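For reference, the WebUI form above is persisted to `/etc/pve/storage.cfg`. The sketch below shows roughly what the resulting entry looks like, using the stock Proxmox options for the `zfs` (ZFS over iSCSI) storage type; the API username/password/SSL fields are stored as additional plugin-specific keys whose exact names depend on the freenas-proxmox version, so treat this as illustrative rather than exact:

```
zfs: bunny-zfs-over-iscsi
        blocksize 4k
        iscsiprovider freenas
        pool PROXMOX-ZFS-STORAGE
        portal 192.168.101.100
        target iqn.2005-10.org.moon-storage-01.ctl:proxmox-zfs-storage
        content images
        sparse 1
        nodes proxmox-node-01,proxmox-node-02
```

Here `sparse 1` corresponds to the Thin-Provision checkbox, and `content images` reflects that this storage type only holds VM disk images.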
**Storage is Provisioned**
At this point, the storage should propagate throughout the ProxmoxVE cluster and appear as a location to deploy virtual machines and/or containers. You can now also use this storage for snapshots and live-migrations between ProxmoxVE cluster nodes.
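For example, attaching a new disk to a guest and live-migrating it would look roughly like this (the VM ID `100` and the 32 GiB size are hypothetical):

```bash
# Allocate a new 32 GiB virtual disk for VM 100 on the shared storage
qm set 100 --scsi1 bunny-zfs-over-iscsi:32

# Live-migrate VM 100 to the other node; its disk stays on the shared TrueNAS storage
qm migrate 100 proxmox-node-02 --online
```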