Posey's Tips & Tricks
How To Refresh Hyper-V Hardware, Part 2: Managing Replication Traffic
The final part of Brien's Hyper-V upgrade involves using a 10GbE connection for replication traffic -- without having access to enterprise-class hardware.
In Part 1, I talked about some of the challenges that I ran into while trying to get Hyper-V replication to work during a recent hardware refresh. To quickly recap, I removed a Hyper-V host from my network and replaced it with a new host. The host that I removed had been acting as a Hyper-V replica server, and I was eventually able to use Hyper-V replication to replicate my production virtual machine (VM) to the new host. So at that point, my production network contained one old host and one new host.
My plan for replacing the remaining legacy server was to perform a planned failover of the VM to the new replica server. Once the VM was running on that server, I would remove replication, take the old host off the network and replace it with another new host. From there, I would set up replication once again, this time replicating the VM to the newly added host.
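For anyone who would rather script that sequence than click through Hyper-V Manager, the same steps map onto the Hyper-V PowerShell module. Treat the following as a rough sketch rather than a finished procedure; the VM name is a placeholder, and a planned failover requires the VM to be shut down first:

# On the current primary host: shut the VM down, then prepare the planned failover
Stop-VM -Name 'ProductionVM'
Start-VMFailover -VMName 'ProductionVM' -Prepare

# On the replica host: complete the failover and bring the VM back online
Start-VMFailover -VMName 'ProductionVM'
Complete-VMFailover -VMName 'ProductionVM'
Start-VM -Name 'ProductionVM'

# With the VM running on the new host, remove the now-unneeded replication relationship
Remove-VMReplication -VMName 'ProductionVM'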
I will go ahead and tell you up front that this process worked exactly as intended. However, there was one aspect of the replication process that was a bit of an unknown.
As I mentioned in my previous column, my production VM contains many terabytes of data, and I had previously been replicating it across a gigabit Ethernet connection. That meant that the initial replication process took days to complete.
In an effort to reduce the initial replication time and to avoid replication failures, I decided to use a dedicated 10-gigabit connection for replication traffic.
There were a couple of reasons why replicating a Hyper-V VM across a 10-gigabit Ethernet connection was an unknown. The first was that, because I work out of my home, I do not have access to enterprise-class hardware. I had, of course, read about 10GbE, but I had never actually worked with 10-gigabit hardware before. Thankfully, the hardware side of things worked perfectly.
Both of my new servers were equipped with a single gigabit Ethernet port and two 10GbE SFP+ ports. My plan was to use the gigabit port for general network traffic, one of the 10-gigabit ports for replication traffic, and the other 10-gigabit port for iSCSI connectivity to a storage array.
The cool thing about SFP+ ports is that they are auto-sensing. Because I only needed to link two servers to one another, I was able to simply run a cable directly between them, without the need for a 10GbE switch. IP address assignment works the same way that it would on any other network adapter.
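To give you an idea of what that looks like in practice, here is a quick sketch of putting both ends of the direct 10GbE link onto their own subnet with PowerShell. The adapter aliases and addresses are examples only; use whatever fits your environment:

# On the first server (adapter alias and addressing are hypothetical)
New-NetIPAddress -InterfaceAlias 'SFP+ Port 1' -IPAddress 10.10.10.1 -PrefixLength 24

# On the second server
New-NetIPAddress -InterfaceAlias 'SFP+ Port 1' -IPAddress 10.10.10.2 -PrefixLength 24

# Confirm that the two hosts can reach one another across the direct link
Test-Connection -ComputerName 10.10.10.2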
Once I had established connectivity between the two servers, the next challenge was getting Hyper-V to use the 10-gigabit connection for replication traffic. Unfortunately, Hyper-V does not provide an option to use a specific network adapter for replication traffic. There are lots of resources explaining how to work around this limitation if you need to perform inter-cluster replication, but I couldn't find any explaining how to force Hyper-V to use a specific adapter for host-to-host replication.
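PowerShell makes the limitation easy to see. Enable-VMReplication addresses the replica by server name and port, and there is no parameter for pinning the traffic to a particular adapter. The example below is only a sketch with placeholder names, and it assumes Kerberos authentication over port 80:

# On the replica host: allow it to receive replication traffic
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation 'D:\Replica'

# On the primary host: replication is directed at a server name, not at a network adapter
Enable-VMReplication -VMName 'ProductionVM' -ReplicaServerName 'NewHyperVHost' -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'ProductionVM'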
Thankfully, this ended up being a non-issue. Windows Server is designed to use the fastest available connection between hosts. When I assigned IP addresses to the various adapters, Windows automatically created DNS entries for each adapter. As such, Windows knew that there were two different paths that could be used for communications between the Hyper-V hosts, and was smart enough to direct Hyper-V replication traffic across the 10GbE connection. Although the Hyper-V Manager does not explicitly tell you which connection it is using, I was able to use the Windows Performance Monitor to confirm that replication traffic was flowing across my 10GbE connection.
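If you prefer the command line to the Performance Monitor console, the same network interface counters can be sampled with Get-Counter. This is just a sketch; the wildcard samples every adapter, and the 10GbE link should be the one showing the heavy traffic while replication is running:

# List the network interface counter instances so you can identify the 10GbE adapter
(Get-Counter -ListSet 'Network Interface').PathsWithInstances

# Watch the send rate on each adapter while replication is in progress
Get-Counter -Counter '\Network Interface(*)\Bytes Sent/sec' -SampleInterval 2 -MaxSamples 10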
I used a completely different subnet for my server backbone connection than I use for general network traffic. That allowed me to force general network traffic to flow across my gigabit connection, thereby reserving the 10GbE connection for inter-server traffic. Incidentally, I used yet another subnet for connectivity to my storage arrays, thus ensuring that a 10GbE port would be reserved solely for iSCSI use.
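The iSCSI side of that arrangement can be expressed the same way. Again, this is only a sketch with made-up addresses; the portal address belongs to the storage array, and the MSiSCSI service has to be running before the initiator can connect:

# Put the second 10GbE port on the dedicated storage subnet (example addressing)
New-NetIPAddress -InterfaceAlias 'SFP+ Port 2' -IPAddress 10.20.20.1 -PrefixLength 24

# Start the iSCSI initiator service, register the array's portal and connect to the target
Start-Service -Name MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 10.20.20.100
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true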
So far, all of the new hardware is working flawlessly and my VM is performing better than it ever has before.
About the Author
Brien Posey is a 22-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.