Deploy Failover Cluster Node

Purpose: Deploying a Windows Server Node into the Hyper-V Failover Cluster is an essential part of rebuilding and expanding the backbone of my homelab. The documentation below goes over the process of setting up a bare-metal host from scratch and integrating it into the Hyper-V Failover Cluster.

Prerequisites & Assumptions

This document assumes you have installed and are running the latest build of Windows Server 2022 Datacenter (Desktop Experience) on a bare-metal Hewlett-Packard Enterprise server with iLO (Integrated Lights-Out).

This document also assumes that you are adding an additional server node to an existing Hyper-V Failover Cluster. This document does not outline the exact process of setting up a Hyper-V Failover Cluster from-scratch, setting up a domain, DNS server, etc. Those are assumed to already exist in the environment. Your domain controller(s) need to be online and accessible from the Failover Cluster node you are building for things to work correctly.

Download the newest build ISO of Windows Server 2022 from the Microsoft Evaluation Center.

Enable Remote Desktop

Enable Remote Desktop by whatever method is convenient, but be sure to disable NLA; see the notes below for details.

Disable NLA (Network Level Authentication)

Ensure that "Allow Connections only from computers running Remote Desktop with Network Level Authentication" is un-checked. This is important because if the domain controller(s) in a Hyper-V Failover Cluster are not running, NLA can effectively lock you out of Remote Desktop on the cluster's nodes, forcing you to use iLO or a physical console on the server to log in and bootstrap the cluster's guest VMs back online.

This step can be disregarded if the domain controller(s) exist outside of the Hyper-V Failover Cluster.

# Enable Remote Desktop (NLA-Disabled)
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 0
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" -Name "UserAuthentication" -Value 0
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"

Provision Server Roles, Activate, and Domain Join

# Rename the server
Rename-Computer BUNNY-NODE-02

# Install Hyper-V, Failover, and MPIO Server Roles
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools

# Change the edition of Windows to Datacenter if needed (interactive menu, then reboot)
irm https://get.activated.win | iex

# Force-activate the server (same interactive script, choose the KMS38 option)
irm https://get.activated.win | iex

# Configure DNS Servers
Get-NetAdapter | Where-Object { $_.Status -eq 'Up' } | ForEach-Object { Set-DnsClientServerAddress -InterfaceIndex $_.InterfaceIndex -ServerAddresses ("192.168.3.25","192.168.3.26") }

# Domain-join the server
Add-Computer -DomainName BUNNY-LAB.io -Credential (Get-Credential)

# Restart the Server
Restart-Computer

Failover Cluster Configuration

Configure Cluster SET Networking

Disable Embedded Ports

We want to use only the 10GbE Cluster_SET network for both the virtual machines and the virtualization host itself, so that all traffic flows through the 10GbE team; every other non-10GbE network adapter will be disabled.

Start by configuring a Switch Embedded Teaming (SET) team. This team is the backbone the server will use for all guest VM traffic as well as Remote Desktop access to the server node itself. Rename the network adapters first to make management easier.

  • Navigate to "Network Connections" then "Change Adapter Options"
    • Rename the network adapters with simpler names. e.g. (Ethernet 1 becomes Port_1)
    • For the sake of demonstration, assume there are 2 10GbE NICs (Port_1 and Port_2)
# Create Switch Embedded Teaming (SET) team
New-VMSwitch -Name Cluster_SET -NetAdapterName Port_1, Port_2 -EnableEmbeddedTeaming $true

# Disable IPv4 and IPv6 on all other network adapters
Get-NetAdapter | Where-Object { $_.Name -ne "vEthernet (Cluster_SET)" } | ForEach-Object { Set-NetAdapterBinding -Name $_.Name -ComponentID "ms_tcpip" -Enabled $false; Set-NetAdapterBinding -Name $_.Name -ComponentID "ms_tcpip6" -Enabled $false }

# Set IP Address of Cluster_SET for host-access and clustering
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster_SET)" -IPAddress 192.168.3.5 -PrefixLength 24 -DefaultGateway 192.168.3.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Cluster_SET)" -ServerAddresses ("192.168.3.25","192.168.3.26")

Configure iSCSI Initiator to Connect to TrueNAS Core Server

At this point, now that we have verified that the 10GbE NICs can ping their respective iSCSI target server IP addresses, we can add them to the iSCSI Initiator in Server Manager, which will allow us to mount the cluster storage for the Hyper-V Failover Cluster.
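If you have not yet verified reachability, a quick check can be done from PowerShell; the portal address 192.168.3.3 and default iSCSI port 3260 below are the values used later in this walkthrough:

```powershell
# Confirm the TrueNAS iSCSI portal is reachable over the 10GbE path
Test-NetConnection -ComputerName 192.168.3.3 -Port 3260
```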

  • Open Server Manager > MPIO

    • Navigate to the "Discover Multi-Paths" tab
    • Check the "Add support for iSCSI devices" checkbox
    • Click the "Add" button
  • Open TrueNAS Core Server

Navigate to the TrueNAS Core server and add the "Initiator Name" (shown on the "Configuration" tab of the iSCSI Initiator on the virtualization host) to Sharing > iSCSI > Initiator Groups > "iSCSI-Connected Servers"
  • Open iSCSI Initiator

    • Click on the "Discovery" tab
    • Click the "Discover Portal" button
Enter the IP address "192.168.3.3". Leave the port as "3260".
      • Example Initiator Name: iqn.1991-05.com.microsoft:bunny-node-02.bunny-lab.io
    • Click the "Targets" tab to go back to the main page
      • Click the "Refresh" button to display available iSCSI Targets
      • Click on the first iSCSI Target iqn.2005-10.org.moon-storage-01.ctl:iscsi-cluster-storage then click the "Connect" button
        • Check the "Enable Multi-Path" checkbox
        • Click the "Advanced" button
        • Click the "OK" button
    • Navigate to "Disk Management" to bring the iSCSI drives "Online" (Don't do anything else in Disk Management after this)
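The MPIO and iSCSI Initiator steps above can also be sketched in PowerShell. This is a hedged equivalent, not a replacement for the GUI walkthrough; the portal address and target IQN are the ones used in this document:

```powershell
# Claim iSCSI devices for MPIO (equivalent to checking "Add support for iSCSI devices")
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register the TrueNAS portal, then connect the cluster-storage target with multipath enabled
New-IscsiTargetPortal -TargetPortalAddress 192.168.3.3 -TargetPortalPortNumber 3260
Get-IscsiTarget | Where-Object NodeAddress -like "*iscsi-cluster-storage*" |
    Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```

A reboot may be required after enabling the MPIO claim before the multipath devices appear.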

Initialize and Join to Existing Failover-Cluster

Validate Server is Ready to Join Cluster

Now it is time to set up the Failover Cluster itself so we can join the server to the existing cluster.

  • Open Server Manager
    • Click on the "Tools" dropdown menu
    • Click on "Failover Cluster Manager"
    • Click the "Validate Configuration" button in the middle of the window that appears
      • Click "Next"
      • Enter Server Name: BUNNY-NODE-02.bunny-lab.io
      • Click the "Add" button, then "Next"
      • Ensure "Run All Tests (Recommended)" is selected, then click "Next", then click "Next" to start.
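The same validation can be run from PowerShell using the FailoverClusters module (installed earlier with the Failover-Clustering feature), with the node and cluster names from this walkthrough:

```powershell
# Run the full validation suite for the new node against the existing cluster
Test-Cluster -Node BUNNY-NODE-02.bunny-lab.io -Cluster USAGI-CLUSTER.bunny-lab.io
```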

Join Server to Failover Cluster

  • On the left-hand side, right-click on the "Failover Cluster Manager" in the tree
    • Click on "Connect to Cluster"
    • Enter USAGI-CLUSTER.bunny-lab.io
    • Click "OK"
  • Expand "USAGI-CLUSTER.bunny-lab.io" on the left-hand tree
    • Right-click on "Nodes"
    • Click "Add Node..."
      • Click "Next"
      • Enter Server Name: BUNNY-NODE-02.bunny-lab.io
      • Click the "Add" button, then "Next"
      • Ensure that the "Run Configuration Validation Tests" radio button is selected, then click "Next"
      • Validate that the node was successfully added to the Hyper-V Failover Cluster
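Joining the node can also be sketched in PowerShell from any existing cluster member, using the same names as above:

```powershell
# Add the new node to the existing failover cluster
Add-ClusterNode -Cluster USAGI-CLUSTER.bunny-lab.io -Name BUNNY-NODE-02.bunny-lab.io

# Confirm the node shows as Up
Get-ClusterNode -Cluster USAGI-CLUSTER.bunny-lab.io
```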

Cleanup & Final Touches

Ensure that you run all available Windows Updates before delegating guest VM roles to the new server in the failover cluster. This ensures you are up-to-date before you become reliant on the server for production operations.
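One way to script this is with the community PSWindowsUpdate module from the PowerShell Gallery (an assumption here, not something used elsewhere in this lab); the built-in sconfig menu works just as well:

```powershell
# PSWindowsUpdate is a third-party PowerShell Gallery module
Install-Module -Name PSWindowsUpdate -Force

# Install all available updates and reboot automatically if required
Get-WindowsUpdate -AcceptAll -Install -AutoReboot
```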