Q&A: Microsoft Program Aims To Ease Windows Server 2016 Software-Defined Complexities
This week, Redmondmag.com talked with Microsoft about the Windows Server Software Defined (WSSD) program.
WSSD is a fairly new Microsoft validation program for OEM hardware makers, with the first partners announced this week. The program aims to help enterprises and service providers take advantage of the software-defined technologies in the Windows Server 2016 Datacenter edition via partner expertise.
In the following edited Q&A, Siddhartha Roy, group program manager for Windows Server, tells us about the program's background, its scope and how it's different from Azure Stack, which is Microsoft's Azure datacenter-in-a-box for organizations. In essence, Microsoft has already done the work to enable software-defined technologies in the Windows Server 2016 Datacenter edition. Now, it has its first batch of partners ready to lend a helping hand so that organizations don't have to "reinvent the wheel" deploying it.
Redmond: What does the WSSD partner program consist of, and when did it start?
Roy: We put this in place in the summer of last year and we announced it at Microsoft Ignite 2016. The program is a partnership between Microsoft and the OEM partner, and the goal is to make sure, when we are jointly putting out a software-defined offer based on Windows Server 2016, that we are putting out a very prescriptive hardware offer with a seamless Day Zero time to solution. So, basically making sure that when the partner deploys this system at the customer's site, we have a good Day Zero experience. The program really revolves around a few stages. The key pillars are "design," "validate," "deploy" and "operate."
With regard to "design," what we do is we work actively with the partner on their hardware roadmap and advise them on what is the best hardware configuration to choose in terms of the workload they want to put on that -- capacity-optimized or low-latency-optimized workloads. And we actively work on things like the hard drive and SSD ratio for example. So that's the design phase.
With regard to "validate," the partner has to run a very stringent set of tests that essentially validates their system as a software-defined offer.
And then with regard to "deploy" and "operate," we work with them. We offer scripts to make sure the Day Zero experience at the customer site is very smooth, the bits are laid out and the system is stood up, both for greenfield and brownfield deployments. In the "operate" phase, we give them best practices guidance on how to manage that system with System Center.
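The greenfield bring-up Roy describes (laying out the bits and standing up the storage and compute cluster) can be sketched with standard Windows Server 2016 PowerShell cmdlets. The cluster and node names below are hypothetical, and a WSSD partner's deployment tooling would wrap and pre-tune these steps rather than leave them to the customer:

```powershell
# Hypothetical node names; run from a host with the Failover Clustering tools.
$nodes = "Node01", "Node02", "Node03", "Node04"

# Validate the hardware configuration before forming the cluster.
Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Form the cluster without assigning shared storage, then enable Storage
# Spaces Direct, which pools each node's local drives into one storage pool.
New-Cluster -Name "HCI-Cluster" -Node $nodes -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession "HCI-Cluster"

# Carve a resilient, cluster-shared volume out of the pool for VM placement.
New-Volume -FriendlyName "VMs" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -Size 2TB
```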
What kinds of customers are supported under the WSSD partner program?
We don't target specific segments; we can go small to large. We have seen a lot of deployments in enterprises, in departments within enterprises, at service providers for their private clouds, and also in branch offices and remote offices, especially with two-node configurations.
What kinds of deployments are supported?
Broadly speaking, there are two modes of deployment. One is hyperconverged, which includes scenarios where compute, storage and networking are on one node. And then there is disaggregated, where it's a storage-only system.
What do the three partner scenarios in the program signify?
So we have three types of configurations. We talk about "Hyperconverged Standard," "Hyperconverged Premium" and "Storage only."
Hyperconverged Standard is for scenarios where you are trying to consolidate your compute and storage infrastructure, for cost reasons, into one layer. Hyperconverged Standard basically gives you Hyper-V as our workload platform, and all of the goodness of Storage Spaces Direct, and on top of that you can use Storage Replica for DR [disaster recovery], Storage QoS [Quality of Service] -- all those things. It comes with a little bit of networking for the storage networking pieces. So that's one configuration and the scenario for it. We might see scenarios where you need a 2-, 4-, 8- or 16-node hyperconverged offer because you are typically coming from Hyper-V with remote storage and you are trying to consolidate that down to a simple offer that combines compute and storage together for cost-reduction reasons.
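As one concrete example of the Storage QoS feature mentioned above, Windows Server 2016 lets administrators define per-VM I/O floors and ceilings on the cluster. The policy name, VM name and IOPS values here are hypothetical:

```powershell
# Create a QoS policy on the hyperconverged cluster (hypothetical values).
New-StorageQosPolicy -Name "Silver" -MinimumIops 500 -MaximumIops 5000

# Apply the policy to a VM's virtual disks so a noisy neighbor
# cannot starve other tenants of IOPS.
$policy = Get-StorageQosPolicy -Name "Silver"
Get-VM -Name "SQL01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```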
The second hyperconverged offer is Hyperconverged Premium. If you want to go beyond storage and compute, and you want additional richer Windows Server 2016 features like software-defined networking -- the software load balancer, the network controller, the edge gateway, all of the goodness of SDN -- then you go for Hyperconverged Premium. That also has Shielded VMs, which is one of the key differentiators we have in Windows Server 2016, where you can take a Hyper-V VM and shield it, and make sure its integrity is rooted in hardware via TPM 2.0 [Trusted Platform Module 2.0]. So, Hyperconverged Premium gives you an additional set of functionality and features above Hyperconverged Standard.
The third one is Storage only (Software Defined Storage, or "SDS"). Think of Storage only as where you have your Hyper-V compute infrastructure today talking to, let's say, NAS or SAN storage remotely over file or block protocols. You want to retire that SAN, but you're looking to keep your Hyper-V compute and roll in a replacement for the SAN that is still accessed remotely. So you could take that "SDS offer," as we call it. It's just storage. It doesn't have Hyper-V. It doesn't have networking. That is basically [Storage] Spaces Direct in the disaggregated mode.
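In the disaggregated (SDS) mode Roy describes, the storage cluster typically exposes its Storage Spaces Direct volumes to the separate Hyper-V compute layer over SMB3 via a Scale-Out File Server. A rough sketch, with hypothetical cluster, share and account names:

```powershell
# On the storage cluster: add the Scale-Out File Server role and expose a
# continuously available SMB share backed by Storage Spaces Direct.
Add-ClusterScaleOutFileServerRole -Name "SOFS" -Cluster "S2D-Cluster"
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\Shares\VMStore" -FullAccess "DOMAIN\HyperVHosts"

# On a separate Hyper-V compute node: place a VM's files on the remote share.
New-VM -Name "VM01" -MemoryStartupBytes 4GB -Path "\\SOFS\VMStore"
```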
Are the partners specialized on Hyperconverged Standard, Hyperconverged Premium and SDS technologies?
It's not something we control. We do work with the partners … to help them position it right in their portfolio for the right workload, deployment and verticals. But inherently, right now, there is no difference in terms of the consulting and sales motion. The partner or the affiliate basically helps with fulfillment, making sure the customer gets the offer and deploys it. All of the partners we announced yesterday had Hyperconverged Standard or Premium offers.
What's the value of software-defined infrastructure?
Basically, we want to enable as much of the server ecosystem as possible to adopt software defined. We are software. Our main charter is to light up and enlighten the hardware capabilities. So when we are working on the configuration, we have detailed reference architectures and hardware design guides we give to our partners. And those are very prescriptive. For example, when you are looking at hyperconverged, the East-West network traffic between the nodes has to stay within a certain latency. So we are very prescriptive [such that] when you are designing the system, you have to use this kind of network adapter or this kind of RDMA network adapter depending on the workload, or this kind of SSD form factor, or only NVMe can be used. That is the value of the program: we are very tightly controlling, together with the partner, the hardware enlightenment experience. Then, when it comes time to deploy this in an enterprise or a service provider, we are making sure that when the bits are being deployed, at the end of standing up the storage and compute cluster and standing up the East-West traffic, all of those are pretuned and preoptimized out of the box, whether in a greenfield or a brownfield deployment.
Could organizations enable the software-defined capabilities themselves?
Windows Server 2016 goes out to everybody. We will obviously not block people from deploying this on their own, but our advice to customers is, if you are doing steady-state, robust, software-defined deployments, then WSSD is the recommended place to start.
Does software-defined technology take functionalities away from hardware?
This is the way the industry is moving. I wouldn't say software is taking away functionality. For example, the next generation of this option is going to be around storage-class memory, and that's a hardware artifact. The hardware will be there, but really … software will light that up much more efficiently than a proprietary hardware ASIC or a proprietary hardware controller.
Isn't Azure Stack a better example of the software-defined approach?
When you want Azure consistency of workloads and management on your premises -- wanting to run the workloads on premises that you run on Azure -- then that's Azure Stack. But if you just want a traditional private cloud or a departmental compute-and-storage solution to host 100, 200, 500, or 1,000 VMs … then that's WSSD. The interesting thing is that the Windows Server software-defined stack is also what powers Azure Stack's infrastructure. So, if you want Azure consistency, WSSD is not the offer for that. If you just want traditional VM hosting to consolidate your existing compute infrastructure and storage together with hyperconverged, WSSD is the answer.