Q&A

How Microsoft Is Building Windows Server and Service Fabric To Address Developer Needs

Microsoft experts on Windows and Azure responded to a few questions post-Build about how the company has been simplifying the developer experience, from spinning up containers to tapping microservices.

Microsoft experts recently talked with Redmondmag.com about options for developers using new Windows Server capabilities.

The following Q&A, conducted by phone after Microsoft's Build developer event this month, describes the kinds of options available to developers using Windows Server, as well as Azure Service Fabric, Microsoft's cloud-based microservices solution.

The themes of the talk include the predominance of containers in application development, along with Microsoft's efforts to simplify things for developers. On the server side, those efforts include making products more Kubernetes-friendly for container orchestration with the Windows Server 1803 semiannual channel release and the coming Windows Server 2019 product. The conversation also covered how the Nano Server install option of Windows Server is now reserved for hosting containers, both Linux and Windows, to better address customer needs.

Microsoft also is using its services to simplify matters for developers, regardless of whether the development happens in the cloud or via the "hybrid" model (cloud plus on-premises development). There's also an increased emphasis on support for open source tools at Microsoft. Azure Service Fabric was released as open source code about two months ago, although Microsoft is still working on delivering some of the tooling. Azure Container Instances, a serverless runtime for spinning up containers, was commercially released last month; it may be even easier for developers to use than the emerging Azure Kubernetes Service (AKS), formerly known as "managed Kubernetes on Azure." At Build, Microsoft announced Azure Service Fabric Mesh, which promises to remove cluster management burdens from organizations using containers on Azure infrastructure.

I spoke on May 11 with Taylor Brown, principal program manager for Windows, and Madhan Arumugam Ramakrishnan, partner director of program management for Azure compute focusing on developers and partners. What follows is an edited Q&A.

Redmondmag.com: Should developers think about using Windows Server to spin up containers or should they use Azure infrastructure?
Brown: Azure is a great place to run applications, whether they are Windows or Linux. At the same time, we have a tremendous number of Windows Server customers who are on-prem for any number of reasons, in a hybrid scenario for any number of reasons, as well as using other clouds -- a polycloud- or multicloud-type customer. And so we do work with customers in all of those constituencies.

We're starting to see this pattern really emerge where you take your existing code, bring it into a container, and start to modernize it and take advantage of new capabilities in the platform. That means starting to use things like microservice patterns with a container based on Nano Server, or even a Linux container running on Windows, but also starting to take advantage of services in Azure. Organizations can look at things like using Service Fabric or Azure Container Instances, and start to look at things like using Cosmos DB for the data stores. They can really get into this modern journey where they can go as far and as fast as they want to go, or they can leverage the capabilities they have on-prem and start to bring in things like Azure Stack.
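To make that "lift your existing code into a container" step concrete, here is a minimal sketch using the Docker SDK for Python to pull a Nano Server base image and run a throwaway container. The registry path, tag and command are illustrative only (Nano Server image names have moved between Docker Hub and mcr.microsoft.com over time), and the snippet assumes a local Docker engine switched to Windows containers mode.

    import docker

    # Connect to the local Docker engine (assumed to be in Windows containers mode).
    client = docker.from_env()

    # Pull a Nano Server base image; the registry path and tag are illustrative.
    client.images.pull("mcr.microsoft.com/windows/nanoserver", tag="1809")

    # Run a short-lived container and capture its output.
    output = client.containers.run(
        "mcr.microsoft.com/windows/nanoserver:1809",
        ["cmd", "/c", "echo Hello from Nano Server"],
        remove=True,
    )
    print(output.decode())

From there, the same image could be pushed to a registry and handed to Service Fabric, Azure Container Instances or a Kubernetes cluster, which is the "go as far and as fast as you want" part of the journey Brown describes.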

Ramakrishnan: Our goal in Azure, and definitely at Microsoft, is to make developers more productive and meet them where they are, whether they are creating, developing and deploying an application on-prem, in the cloud, in the hybrid form factor or on Azure. It's about how we help them create applications at any scale, running anywhere, regardless of the frameworks, languages and development environments they use, on laptops and Macs. Simply put, we want Azure and Windows Server to be the best environments for them to be productive and create enterprise-grade applications.

Why should a developer opt to use Azure Service Fabric? Is it about scale?
Ramakrishnan: You'd typically choose Azure Service Fabric if you are creating cloud-native applications. I would say that the deployment can be any size. Obviously, we use Service Fabric at really large scale, but a customer can install it locally on a Windows box. It's exactly the same runtime, whether you are running it in the cloud, on Windows or even on a Mac.

So while Service Fabric is proven for large scale, it's really a platform for any scale, and especially for Windows Server and .NET applications. We built it for the Microsoft developer. We have first-class support for the DevOps tools from Microsoft, whether it's Visual Studio Team Services or the release management that comes with Visual Studio. Right from your laptop, you can do local development, testing and debugging, and also deploy to the Azure-managed version of Service Fabric. It's a very seamless experience from laptop to production.

Service Fabric is just available from Microsoft's Azure datacenters as a service, right? Or is it available on-premises, too?
Ramakrishnan: Service Fabric is a microservices platform for developing applications with a microservices pattern. Is Service Fabric an Azure-only service? No. It runs on any cloud that supports VMs, on any physical machine, in containers -- it doesn't matter. It is also open source. We open sourced it a couple of months back -- the entire code base, two-and-a-half million lines of code.

For people who are deploying it on Windows Server on-premises, we call it "Service Fabric standalone on Windows Server." We are also planning to support standalone on Ubuntu Linux, but that will come later. And then the Azure implementation would be "Azure Service Fabric."

Because it's a standalone executable microservices platform, you can basically just download the runtime, and we support it on-prem. In fact, a good 30 percent of our customers run Service Fabric on-prem. They even do their own installations on Azure, Amazon or any cloud, for that matter -- even on VMware. And then on Azure, we also offer it as a service -- the same binaries, the same runtime. We basically create a management experience for customers, so it's much easier for developers and IT pros to stand up a cluster.

And at Build, we also announced Azure Service Fabric Mesh, where basically you don't even see the clusters. We manage the clusters. Developers just have to bring their applications and containers.

Are the use of microservices and the use of containers two different application development approaches?
Ramakrishnan: I would say that containers are both a deployment packaging format and a runtime execution environment. Microservices backed by containers is totally possible. In fact, a lot of customers on Azure do that today. It's not an "either/or" scenario. It's definitely an "and."

Brown: I think about microservices really as more of a design philosophy. I equate it to object-oriented programming for the cloud, just like you'd think of object-oriented programming for C++ or C# or Java or any number of different languages. Microservices can be built with any number of different technologies or techniques behind the scenes. So you may develop microservices using containers. You could even develop microservices using virtual machines, and certainly as PaaS-type applications using things like Azure App Service and Service Fabric and those kinds of approaches. I think they are a little bit orthogonal but happen to pair well together. And so we're starting to see people take their monolithic applications and break those up into more of a microservice-type architecture. And that's where they start looking for tools and assistance around how to deal with state, or how to deal with the reliable connections between these services, and that's where Service Fabric can really help developers and customers go through that journey.

How are the Windows Server products getting integrated with Kubernetes?
Brown: Our Windows Core team has been working really hard on the upstream Kubernetes project to spin up the Kubernetes ecosystem on Windows Server, as well as on the Windows Server platform itself, and we've been working with a number of different distributions of Kubernetes on support. During Build, we announced new work with Red Hat to bring OpenShift to Windows Server. I actually demoed it at Build.

For customers that are on-prem and looking for hybrid connectivity with Azure, we'll have the best-in-class Red Hat OpenShift service in Azure that can easily integrate with an OpenShift cluster running on-prem. That'll give customers a great Kubernetes orchestration experience for both Windows and Linux, cloud and on-prem. We also announced at Build the private preview signup for Windows Server support in Azure Kubernetes Service (AKS), our managed Kubernetes service in Azure.

Docker announced that they're bringing Kubernetes to Windows and Linux through Docker Enterprise Edition, as well. And this week we demoed an early version of Docker Desktop for developers running Kubernetes with Windows and Linux containers side by side.

How difficult is it to manage Kubernetes without using AKS or the Azure Container Instances serverless version?
Ramakrishnan: In general, if you think about Kubernetes as a container orchestration system, it has a master-slave architecture. You have a set of VMs that are essentially running the Kubernetes runtime and acting as masters, and you set up slaves that are Kubernetes worker nodes, which is where your applications are hosted and orchestrated. If you are doing it yourself on a bunch of VMs, say on-prem or on Azure or anywhere else, you are setting all of this up: creating a bunch of VMs, standing them up, installing the Kubernetes runtime, creating the masters, creating the cluster. With AKS, what we are doing is taking the operational pains out of setting this up. For example, the masters involved in setting up the cluster are completely managed. In fact, customers don't even see the VMs that are running the Kubernetes masters. So that's one operational pain that we are taking away. And we also set up the worker nodes such that they can run in a customer's private virtual network. We basically apply the necessary Kubernetes configurations so a customer can get up and running as fast as possible from a development point of view.
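As a rough sketch of the developer-facing side of that split -- the managed masters stay invisible, and you only describe what should run on the worker nodes -- the following example uses the official Kubernetes Python client to create a small Deployment pinned to Windows nodes. It assumes a kubeconfig is already in place (for AKS, the one written by "az aks get-credentials"); the image, names and replica count are placeholders, and the OS node label has varied across Kubernetes versions.

    from kubernetes import client, config

    # Load cluster credentials from the local kubeconfig.
    config.load_kube_config()

    container = client.V1Container(
        name="web",
        image="mcr.microsoft.com/windows/nanoserver:1809",  # illustrative image
        command=["cmd", "/c", "ping -t localhost"],          # placeholder workload
    )

    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(
            containers=[container],
            # Schedule only onto Windows worker nodes; older clusters used the
            # "beta.kubernetes.io/os" label instead.
            node_selector={"kubernetes.io/os": "windows"},
        ),
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=template,
        ),
    )

    # Create the Deployment; the scheduler places the pods on worker nodes.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Everything about the masters (the API server the client talks to, etcd, the scheduler) is provided by the managed service in the AKS case, which is the operational work Ramakrishnan describes AKS taking away.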

Why did Nano Server get relegated to hosting containers only?
Brown: Nano Server was a particularly interesting journey we took with our customers. When we started on Nano Server, we brought it out as a host for VMs, as a container host and as a kind of general-purpose server offering for those types of hosting operations, and at the same time as a new platform for developers to build new applications on. And the feedback we got from our customers was that on the application side, it was too big, and on the host side, it was too small. Finding the sweet spot where it's just right is what led us to make it a container runtime.

By doing that, we were able to reduce the size from around 600MB on disk down to about 100MB on disk and really make it a competitive offering with Linux for those developers who are building new .NET Core, cloud-ready or cloud-native microservices, regardless of whether they're going to run them on-prem, in the cloud or in a hybrid environment. And so we've got a lot of developers who are interested in that kind of environment. I shared some numbers at Build. In April alone, we saw 1.4 million people download Nano Server as a container image, for example. That is a very healthy number, and it has seen well over 100 percent month-over-month growth since we re-released it.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
