Posey's Tips & Tricks
A Better Way To Upgrade Hyper-V Storage
It's time again for Brien to perform a major storage upgrade on his Hyper-V hosts. But this time, he's taking a new approach.
Throughout my years in IT, there are two things that I have constantly heard people say about storage:
- Storage is expensive.
- Storage consumption increases exponentially over time.
Personally, I think that there is a lot of truth to both statements.
Barely a year ago, I performed a major storage upgrade on my Hyper-V hosts. As luck would have it, I have already outgrown my available storage, and so another storage upgrade became necessary. This time, I decided to rethink my approach to storage.
Before I tell you what I did, I want to address the obvious objection. I'm sure that there are those who would tell me that I should be using cloud storage rather than constantly upgrading my on-premises storage. However, I hate the idea of paying for storage consumption month after month. I would much rather purchase storage once and be done with it than pay a per-gigabyte, per-month fee.
My other issue with using cloud storage is that because of where I live, Internet connectivity is slow and somewhat unreliable. Having a local copy of my data is the only way to ensure that the data will be available when I need access to it.
With that said, let me tell you a little bit about my existing infrastructure. My production environment consists of two custom-built Hyper-V hosts. I have one production virtual machine (a virtualized file server), and Hyper-V is configured to replicate that VM to the secondary host.
Both of my hosts are configured with internal storage arrays. Although this approach has worked well enough in the past, the hosts have a finite number of drive bays. This means that every time I need to upgrade my available storage, I have to purchase larger and larger disks.
The most obvious problem with this is that eventually I will get to the point at which my server is provisioned with the largest possible disks. The other problem is that large disks are expensive. The biggest consumer-grade hard disk that is readily available today is 14TB in size. Most retailers sell 14TB disks for somewhere between $500 and $600.
While that price might not necessarily be a deal-breaker, it is important to remember that hard disks have a limited lifespan. When a disk fails, it is far less expensive to replace a small disk than a large one. That being the case, I chose to use external storage arrays filled with a large number of small hard disks rather than loading up my Hyper-V hosts with the largest disks that money can buy.
I have to admit that I had seriously considered replacing my production Hyper-V hosts with a couple of standalone NAS servers. However, I did not want to give up native Hyper-V replication or the ability to fail over a VM whenever I need to perform maintenance.
Ultimately, I did choose to use NAS appliances for my storage upgrade, but rather than making them accessible to every computer on my network, I am going to be connecting each Hyper-V host to a NAS server by way of iSCSI. That way, I can still use the Hyper-V replica feature and I can maintain storage redundancy by using two separate NAS appliances.
In other words, each of the two NAS boxes will function as dedicated storage for a Hyper-V host.
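In case it's helpful, here is a rough sketch of what that connection looks like from the Windows side, using the built-in iSCSI initiator cmdlets. The portal address, the target filter string and the volume label are all placeholders that I made up for illustration; the real values will depend on how your particular NAS exposes its LUNs.

```powershell
# Make sure the Microsoft iSCSI Initiator service is running and set to start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the NAS as a target portal (the IP address is a placeholder)
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"

# Discover the targets that the NAS exposes and connect persistently,
# so that the session survives a reboot of the host
Get-IscsiTarget |
    Where-Object { $_.NodeAddress -like "*hyperv*" } |
    Connect-IscsiTarget -IsPersistent $true

# Bring the new LUN online as local storage for the host's virtual machines
Get-Disk | Where-Object { $_.BusType -eq "iSCSI" -and $_.PartitionStyle -eq "RAW" } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "HyperV"
```

The key point is that the host sees the iSCSI LUN as ordinary block storage, so the virtual hard disks can live on the NAS just as they would on an internal array.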
I gave serious thought to doing something similar a few years ago, but was concerned about the possibility of inadequate performance. To ensure a decent level of performance, I am implementing redundant 10 Gigabit Ethernet (10GbE) links between each host and its NAS box. I also plan to implement redundant 10GbE links between the two Hyper-V hosts. Those links will be used solely for carrying replication traffic. In addition, both NAS boxes will be configured to use storage tiering as a way of improving performance.
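For completeness, here is a hedged PowerShell sketch of the replication side of the design. The team name, adapter names, host name and VM name are all placeholders, and I'm assuming Kerberos authentication over port 80, which is the simplest arrangement when both hosts belong to the same domain.

```powershell
# Team the two dedicated 10GbE adapters that will carry replication traffic
# (the adapter names are placeholders; run Get-NetAdapter to find yours)
New-NetLbfoTeam -Name "ReplicaTeam" -TeamMembers "10GbE-1", "10GbE-2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# On the secondary host: allow it to receive replication traffic
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -KerberosAuthenticationPort 80 `
    -DefaultStorageLocation "E:\Replica" `
    -ReplicationAllowedFromAnyServer $true

# On the primary host: replicate the file server VM to the secondary host
Enable-VMReplication -VMName "FileServer" `
    -ReplicaServerName "HV2" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos

# Send the initial copy of the VM across the dedicated links
Start-VMInitialReplication -VMName "FileServer"
```

On Windows Server 2016 and later, a Switch Embedded Team created with New-VMSwitch and its EnableEmbeddedTeaming option would be an alternative to the LBFO team shown here; either way, the goal is simply to keep replication traffic off the general-purpose network.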
My new approach to storage also provides one more benefit. Because the NAS boxes contain so many drive bays, I am going to be able to designate one disk in each array as a hot spare. That way, the RAID rebuild process can begin immediately whenever a disk fails.
I plan to begin the build at some point in the next few days. Although it will likely take weeks for me to complete the project, I plan to blog about any lessons that I learn along the way. Stay tuned!
About the Author
Brien Posey is a 22-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.