After squeaking by for years on standalone hosts running at 95% memory utilization while idling at 30% CPU, I finally got approval to implement a Hyper-V cluster at work. However, given that we have no money for IT, it had to be done on the cheap: an entry-level SAN, no 10GbE, and reusing the existing gear where possible.

That meant building the cluster around our existing best node, a Dell R630 with 128GB of RAM, dual Xeon E5-2630s, a quad-port Intel NIC, and software RAID for the local install; repurposing some old hardware into a physical domain controller; and adding relatively cheap shared storage. All the existing guides for this sort of setup assumed iSCSI, and while they helped me figure things out when they went wrong, they didn't help with the initial configuration.

Since we've already got one host in production, I'm creating a two-node cluster with a disk witness. An existing Dell switch handles Live Migration and heartbeat traffic over a dedicated NIC on each node. I'll also be creating a three-NIC LACP team on each host and connecting it to our existing infrastructure.
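As a rough sketch, the per-host networking and the cluster itself can be set up with PowerShell something like this. Every name here ("VMTeam", "NIC1" through "NIC3", "HVCluster", the node names, and the witness disk) is a placeholder for your own environment, and the cluster steps only come after both nodes are fully prepped:

```powershell
# Create the three-NIC LACP team for VM traffic on each host.
# (Names are placeholders -- substitute your own adapters.)
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2","NIC3" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

# Bind a Hyper-V virtual switch to the team.
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $true

# Once both nodes are ready: form the cluster, then point the
# quorum at a small shared LUN to act as the disk witness.
New-Cluster -Name "HVCluster" -Node "Node1","Node2"
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"
```

The LACP side of this has to match a port-channel configured on the switch, so get that done first or the team members will just sit there unbonded.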

I'm going to assume that if you're doing this, you know how to install Windows Server and configure the base install; that you aren't using SCCM/SCVMM; that you aren't planning on using Storage Spaces; and that all the cabling is done to allow a redundant MPIO connection from your server to your SAN. Instructions for how to cable things can be found HERE. Once that's done, install the SAN management tools on some other server (keep them off the Hyper-V hosts) and configure the SAN.
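On the host side, getting MPIO ready for those redundant paths is a couple of commands. This is a minimal sketch assuming a SAS-attached SAN; the `-BusType` value depends on how your SAN is actually connected (use `iSCSI` if that's your transport):

```powershell
# Install the Multipath I/O feature on each Hyper-V host.
Install-WindowsFeature -Name Multipath-IO

# Have the Microsoft DSM automatically claim devices on the SAN bus.
# (Assumes SAS attachment -- swap in -BusType iSCSI if applicable.)
Enable-MSDSMAutomaticClaim -BusType SAS
```

A reboot is typically required after adding the feature before the DSM starts claiming paths.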

Part 1 - SAN Configuration
Part 2 - Cluster Configuration