In-Depth

Inside Microsoft's Embrace of Container-Fueled Automation

The introduction of containers to Windows Server, Hyper-V and Microsoft Azure, paired with Windows PowerShell automation, paves Microsoft's path to DevOps and support for cross-platform modern apps.

If the revelation of Microsoft's plans to develop container technology for Windows Server that would be compatible with Linux sounded like a far-out technical blueprint, the future is arriving sooner than you might expect. The early components of Microsoft's new journey to bring more automation and cross-platform support to its infrastructure software and tools arrive this month, as Microsoft releases the second Technical Preview of Windows Server, code-named "vNext."

The new Windows Server preview introduces the support Microsoft promised last fall for Docker Inc.'s Linux-born container technology, the fruit of a once-unlikely partnership between Microsoft and a major open source player. The new Windows Server Technical Preview will also support recipes from Chef Software Inc., which automate datacenter and desktop configuration management via the Desired State Configuration (DSC) capability in the newest version of Windows PowerShell. Containers in Windows Server will function as lightweight runtime environments that offer many of the core capabilities of a virtual machine (VM) along with the isolated services of an OS, designed to package and execute applications as so-called micro-services.

Microsoft sees containers and the automation of routine processes as the key to providing the Web scale that modern applications require, while blurring the lines between compute services running in the on-premises datacenter and those in Microsoft Azure and other providers' public clouds. This is not lost on Microsoft's rivals, including Google Inc., Red Hat Inc. and VMware Inc. Just last month, for example, Google partnered with CoreOS Inc., developer of a lightweight Linux distribution designed to run containers. CoreOS, with help from a $12 million investment from Google Ventures, is building a new platform called Tectonic, which will host the Google-developed and now open source Kubernetes. Tectonic aims to provide a fault-tolerant platform for delivering containers, with APIs to manage such infrastructure services as search, load balancing and service discovery (Microsoft has also said it will support Kubernetes).

Shift in Computing
Bringing containers to the Windows platform is important because they provide lightweight services that can scale and transcend the functions of any one OS or VM. The quest to deliver automation is proceeding in lockstep with the move to hybrid cloud architectures, and enabling both is critical if these new modern applications are to scale.

"Right now, the biggest technology shift in the cloud is a rapid evolution from simple virtual machine (VM) hosting toward containerization, an incredibly efficient, portable, and lightweight cloud technology that saves significant operating system overhead costs and dramatically improves application time-to-market," wrote Mark Russinovich, the CTO for Azure and one of the most knowledgeable engineers on the internals of the Windows kernel, in a March blog post. "On top of that, we're seeing orchestration technologies emerge for VMs and containers."

"We're seeing orchestration technologies emerge for VMs and containers."

Mark Russinovich, Microsoft Azure CTO

This is all beginning to "blur the lines between traditional [Infrastructure-as-a-Service] (IaaS) and [Platform-as-a-Service] (PaaS) approaches to cloud computing, making it easier for customers to scale quickly and easily without sacrificing security or control," Russinovich explained. He noted this is a major departure from the way developers have historically built applications and maintained them in separate silos. While each app could communicate across a network with a client, apps couldn't easily share with one another. Now, developers are building applications that can be built, tested, deployed and monitored in a more "holistic" way.

"Everyone is driving toward a more simplified streamlined, deployment of resources," says IDC analyst Al Gillen. "The nice thing is if you can get to a model where you use a really thin OS you can have a great deal of consistency from one image to another. Which means that you don't have a lot of management overhead and you don't have a lot of variable configurations that cost a lot of money to support. So the Holy Grail is to get to a one-image operating scenario, and the advent of the thin OS promises to remove the complexity from all the layers of the stack."

Enabling DevOps
The benefit of getting to a single image for all of your workloads, he adds, is that management of the OS is minimized and potentially can go away: you manage that one image and qualify patches only once, because the applications are insulated in the containers above the OS. Getting to this stage is much more difficult than it may sound, Gillen warns. "It works great for net-new applications, but for all the old stuff, it's a lot harder to get a new thin OS implementation simply because you have to do a lot of code migration to get the old apps to run in that environment."

Mark Bowker, a senior analyst at Enterprise Strategy Group Inc., agrees, noting containers will allow developers to build applications that are more portable. The growth of environments that use containers could also benefit shops moving to a DevOps model, in which developers and IT pros work together so applications can become more infrastructure-aware. "So if I'm a developer and I need more capacity of some sort, which could be compute or maybe it's tied to networking, I may be able to have some PowerShell capabilities to include in my application and have that incorporated now, instead of having to break through walls inside my IT org to get that capacity," Bowker says.
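
To make Bowker's scenario concrete, here's a minimal sketch of the kind of self-service capacity request he describes, assuming the classic (Service Management) Azure PowerShell cmdlets of the era; every name in it is hypothetical:

    # Hypothetical self-service capacity request baked into a deployment
    # script; assumes the classic Azure PowerShell module of the era.
    $img = Get-AzureVMImage |
        Where-Object { $_.Label -like "Windows Server*" } |
        Select-Object -First 1
    $password = "P@ssw0rd!"   # demo only; pull from a vault in practice

    # Add one more instance to the app tier -- no IT ticket required.
    New-AzureQuickVM -Windows -ServiceName "myapp-svc" -Name "web02" `
        -ImageName $img.ImageName -AdminUsername "opsadmin" `
        -Password $password -Location "West US"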

Cooking Automation with Windows PowerShell
Indeed, Windows PowerShell and the DSC capability built into the newest version of the Microsoft scripting tool promise to play a key role as containers become more mainstream. The inventor of Windows PowerShell, Jeffrey Snover, a Microsoft distinguished engineer for Windows Server, has invested much of his effort of late in the open source community, notably with Chef, which has its own cross-platform automation architecture built on what the company calls Chef recipes.

DSC is a distributed, heterogeneous configuration management platform, as Snover described it to attendees at last month's ChefConf in Santa Clara. "We work with all of the partners in the industry so if they have anything that needs to be configured, they write a [DSC] resource, and then we work with the configuration management tool industry to get their tools to consume the [DSC] management APIs, and when they do that, they can manage every element that plugs into the [DSC] platform."
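
For readers new to DSC, the model Snover describes looks roughly like this: a configuration declares what a node should look like, and DSC resources do the work of converging the machine to match. A minimal sketch (the node name and output path are hypothetical):

    # Declare the desired state; the Local Configuration Manager
    # converges the node to match it.
    Configuration WebServer {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node "Server01" {                      # hypothetical node name
            WindowsFeature IIS {
                Name   = "Web-Server"
                Ensure = "Present"             # install IIS if missing
            }
            Service W3SVC {
                Name      = "W3SVC"
                State     = "Running"
                DependsOn = "[WindowsFeature]IIS"
            }
        }
    }

    WebServer -OutputPath C:\DSC               # compile to a MOF document
    Start-DscConfiguration -Path C:\DSC -Wait -Verbose

This is the plug-in point Snover refers to: a tool such as Chef doesn't need to know how to install IIS, only how to hand desired state to the DSC platform.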

Adam Edwards, a general manager for the Windows vertical at Chef, explains the integration between DSC and Chef Server. "We essentially now use it as a pass-through where you can offer Chef recipes that will talk to their configuration management system and it exposes their configuration points as if they are part of Chef," Edwards says. "Essentially we multiply. Microsoft has over 100 new configuration points that they've added to their configuration management system. We work closely with them, they've taken our feedback on some of the design of that configuration system so that we can make it work better with Chef. We actually work closely with their engineering team on that."

Chef and Microsoft have worked closely together over the past year, according to Edwards. "Microsoft has been trying to make Windows more automatable. They wanted feedback from us and they've taken our feedback and as a result, we got pretty direct support for configuring Windows," he adds.

These configuration capabilities enabled by DSC, tied to the release of the new Microsoft container technology, will bring about new requirements for automation, Russinovich tells Redmond. "PowerShell is great as a scripting tool for automation," Russinovich says. "Our Azure Automation service executes user-supplied PowerShell scripts for imperative workflows. However, we believe that declarative, rather than imperative, automation allows the infrastructure to take over activities that users would otherwise have to manage themselves. For example, declaring that there should be three instances of a micro-service allows an orchestrator to take over the job of deploying those instances, determining on which servers to place them, and repairing or moving them when there are failures. Such tasks are very hard to script and lead to fragile systems that are hard to diagnose and change."
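
Russinovich's distinction can be sketched in a few lines of PowerShell; the cmdlet and the orchestrator here are hypothetical, and the contrast between the two styles is the point:

    # Imperative: the script owns every step -- creation, placement and,
    # implicitly, all failure handling (hypothetical cmdlet).
    foreach ($i in 1..3) {
        New-MicroServiceInstance -Name "orders-$i"
    }

    # Declarative: state the goal and hand it to an orchestrator, which
    # decides placement and re-creates instances when they fail.
    $desired = @{ Service = "orders"; Instances = 3 }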

David Messina, vice president of marketing at Docker, says the automation enabled by its container platform is bringing operational benefits to systems administrators by simplifying the deployment of distributed applications on any type of infrastructure, whether on-premises or in the cloud. "The added benefit is that Docker is helping ops teams streamline the development process," Messina says. "We see operations teams setting up self-service capabilities for their development teams to rapidly build, ship, and run their apps and move them through all the stages of the application lifecycle seamlessly. Today this is true for Linux and moving forward, with all the great work that Microsoft is doing, it will be true for Windows Server, as well."

Simplified Operational Model
Bringing containers to Windows Server, Hyper-V and Azure will usher in new capabilities that will enable automation of many mundane processes now performed by IT pros. Russinovich says containers specifically enable "the decomposition of applications into containerized micro-services. Running in containers creates a stable environment for micro-service instances, which in turn results in a simplified operational model that's amenable to automation. Such an architecture enables an orchestrator to automatically scale up and down micro-services, as well as to track their health and perform automated recovery."

Containers also enable automation in that they allow for the sharing of kernel and critical system components. This provides quick startup times and lower resource overhead, making containers suitable for cross-platform services and application communications. The new Windows Server will enable sharing, publishing and shipping of containers "anywhere the next wave of Windows Server is running," Microsoft Corporate VP Jason Zander explained last fall. "With this new technology millions of Windows developers familiar with technologies such as .NET, ASP.NET, PowerShell and more will be able to leverage container technology. No longer will developers have to choose between the advantages of containers and using Windows Server technologies."

"Microsoft has over 100 new configuration points that they added to their configuration management system."

Adam Edwards, General Manager, Chef

Docker Integration
Extending beyond Windows, though, Microsoft says the new Windows Server will support a native Docker client. The Docker Engine for Windows Server, developed under the auspices of the Docker open source project, will run Windows Server container-based applications, and Windows Server container images will sit in the Docker Hub alongside the Linux images already hosted there.
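
In practice, that should mean the familiar Docker workflow carries over unchanged to a Windows prompt. A sketch, with a hypothetical image name:

    # The standard Docker client workflow, run from PowerShell; the
    # image name is hypothetical.
    docker pull myorg/orders-service        # fetch the image from Docker Hub
    docker run -d -p 8080:80 --name orders myorg/orders-service
    docker ps                               # list the running containers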

Meanwhile, the introduction of Hyper-V containers will offer a second option for deploying containerized applications on Windows Server. Hyper-V containers will support the same development and management tools as those designed for Windows Server Containers, wrote Mike Neil, Microsoft general manager for Windows Server, in a blog post last month. Moreover, he said developers won't need to modify applications built for Windows Server Containers in order to run them in Hyper-V containers.

The addition of Hyper-V containers will provide a deployment option with extended isolation, drawing on the attributes not just of the Windows OS but of Hyper-V virtualization itself, according to Neil.
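
Microsoft hasn't detailed how that deployment-time choice will surface, but one plausible shape is a per-container switch. A purely hypothetical sketch (the flag is assumed, not announced):

    # Hypothetical: the same image, redeployed with hypervisor-backed
    # isolation; the --isolation flag is assumed.
    docker run -d --isolation=hyperv --name orders-secure myorg/orders-service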

"Virtualization has historically provided a valuable level of isolation that enables these scenarios, but there is now oppor­tunity to blend the efficiency and density of the container model with the right level of isolation," Neil said. "Microsoft will now offer containers with a new level of isolation previously reserved only for fully dedicated physical or virtual machines, while maintaining an agile and efficient experience with full Docker cross-platform integration. Through this new first-of-its-kind offering, Hyper-V Containers will ensure code running in one container remains isolated and cannot impact the host operating system or other containers running on the same host."

Microsoft MVP Aidan Finn believes running containers in Hyper-V appears to be a more secure option than relying on OS-level isolation alone. "Hyper-V provides secure isolation for running each container, using the security of the hypervisor to create a boundary between each container," he wrote in a blog post last month. "How this is accomplished has not been discussed publicly yet. We do know that Hyper-V containers will share the same management as Windows Server containers and that applications will be compatible with both."

Neil pointed out that Microsoft has also made it easier to deploy the newest Docker Engine, released earlier this year, using Azure extensions to set up a Docker host on Azure Linux VMs and to deploy Docker-managed VMs from the Azure Marketplace.
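
The cross-platform Azure command-line tools reduce standing up a Docker host to roughly a single command. A sketch; the command form is assumed from the tooling of the era, and the names and credentials are hypothetical:

    # Assumed command form: create an Azure Linux VM with the Docker
    # extension applied, then drive it with a local Docker client.
    azure vm docker create -l "West US" my-docker-host <ubuntu-image-name> ops "P@ssw0rd!"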

Windows Nano Server
Container support is also coming to a scaled-down Nano Server for modern application scenarios where Hyper-V and full Windows Server would be overkill, Neil said. He described the new Nano Server as "a minimal footprint installation option of Windows Server that is highly optimized for the cloud, including containers. Nano Server provides just the components you need -- nothing else, meaning smaller server images, which reduces deployment times, decreases network bandwidth consumption, and improves uptime and security. This small footprint makes Nano Server an ideal complement for Windows Server Containers and Hyper-V Containers, as well as other cloud-optimized scenarios."

Finn warns the Nano Server is a bare-bones system. "The OS is beyond Server Core," he says. "It's not just Windows without the UI; it is Windows without the I [interface]. There is no log-on prompt and no remote desktop. This is a headless server installation option."
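
With no log-on prompt or remote desktop, day-to-day administration happens over the wire. A minimal sketch using PowerShell remoting, with a hypothetical computer name:

    # Nano Server is headless, so administer it remotely; the computer
    # name is hypothetical.
    Invoke-Command -ComputerName "nano01" -Credential (Get-Credential) -ScriptBlock {
        Get-Service | Where-Object Status -eq "Running"
    }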

Future Role of Automation
Automation is certainly not a new concept for IT; it's a matter of bringing it to new levels. In the end, though, the arrival of thin OSes and micro-services is sure to shed new light on automation. The open question is whether the drive for automation will usher in these new technologies or prove a mere outgrowth of them.

"It's safe to say the concept of these thin OSes go hand in hand with the container strategy, which then enables the automation, and certainly you want to get to a model where you have more automation and it's made possible by this," IDC's Gillen says. "As organizations wind up with more implementations of thin OSes, you need to have more automation simply because you need everything to just take care of itself. You can't really do it manually or you end up with this massive number of installations."
