Barney's Blog


Q&A with Rick Vanover: Avoiding Excessive Storage Consolidation

While bringing together disparate storage systems can reduce complexity, it's important to remember that you can overdo it. Rick Vanover, a software storage strategist at Veeam Software and a session speaker at this year's Live! 360 event, recommends separating storage resources when provisioning vSphere and Hyper-V environments.

Q: Are virtual environments more complex when it comes to storage?
A: Absolutely. Virtualization introduces abstraction and consolidation. Both of these factors, one good and one bad, add complexity to the design, troubleshooting and provisioning processes. In fact, many of us likely didn't spend much time using the term IOPS [I/O operations per second] before virtualization. There's a phenomenon called an I/O tipping point, which many people hit when critical workloads become virtualized.
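The I/O tipping point Vanover describes can be sketched with back-of-the-envelope arithmetic. The sketch below is purely illustrative: the counter values, the array's rated capacity and the 80 percent headroom threshold are hypothetical assumptions, not figures from any real environment or tool.

```python
# Illustrative IOPS math (hypothetical numbers, not measured data).

def iops(ops_start, ops_end, interval_seconds):
    """Average I/O operations per second over a sampling window."""
    return (ops_end - ops_start) / interval_seconds

def near_tipping_point(per_vm_iops, array_rated_iops, headroom=0.8):
    """True when consolidated VM demand exceeds a safety threshold
    (here, 80 percent of the array's rated IOPS)."""
    return sum(per_vm_iops) > headroom * array_rated_iops

# Three VMs sampled over a 60-second window.
demand = [
    iops(0, 30_000, 60),  # 500 IOPS
    iops(0, 48_000, 60),  # 800 IOPS
    iops(0, 72_000, 60),  # 1,200 IOPS
]

# 2,500 IOPS of aggregate demand against a 3,000 IOPS array crosses
# the 80 percent headroom threshold (2,400 IOPS).
print(near_tipping_point(demand, array_rated_iops=3_000))  # True
```

The point of the sketch is that each workload looks harmless in isolation; it's the consolidated total that quietly crosses the threshold.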

Q: If I have physical servers and move to virtual, shouldn't my existing storage infrastructure just work?
A: There's no general rule for guidance here. It's quite possible it will, but it's also possible it won't. The ability to virtualize all servers should be attainable, but there needs to be visibility into what's actually being used on the virtualized storage infrastructure. There are a number of ways to go about this task for both vSphere and Hyper-V environments, and this is the classic case where a tool needs to be in place to execute correctly.

Q: Does the dynamic nature of VMs with features such as live migration pose a challenge? How is that dealt with?
A: Migration technologies have a serious impact on storage for virtualization. Specifically, I/O channels and paths may become saturated on a single host, then mysteriously clear once a VM moves elsewhere. Correlating migration events with storage metrics is what makes this additional layer of abstraction manageable.

Q: What features must I demand of a storage vendor as I move to virtual servers?
A: Simply put, today's storage decision isn't necessarily the one we would've made just a few years ago. There's a new wave of features to consider: tiering, use of flash and SSD [solid-state drive] technologies, and virtualization awareness. The assessment process starts with who the stakeholders of the infrastructure are. Meaning, if most or all of the storage will be used by vSphere or Hyper-V VMs, you should select a storage product built for that technology.

Q: In essence, with VM sprawl you're backing up more and more servers. How does one manage this additional work? Does it have to add cost?
A: Not necessarily. In fact, many virtualization-specific backup approaches can significantly reduce the data-protection burden. This can be done with agentless backup technologies built specifically for vSphere, Hyper-V or other virtualization platforms. Consolidation comes into play again here. Done well, VM backups can actually be more efficient when they leverage the platform.

If you're heading to Orlando for this year's Live! 360 event in December, be sure to catch Rick's workshop, "Storage Best Practices for Virtualized Environments."

Posted by Doug Barney on 10/24/2012 at 1:19 PM
