Q&A: Why Patrick Chanezon Left Microsoft To Join Docker

As a director for enterprise platform evangelism at Microsoft, Patrick Chanezon played a key role in working with ecosystem partners on Docker integration until the container vendor's founder lured him away.

When Microsoft was looking for open source specialists, the company lured Patrick Chanezon -- whose community work goes back to the Java Portlet spec and, more recently, to Google Inc., where he worked on HTML5 and the OpenSocial effort -- from VMware Inc. Chanezon played a key role in bringing technology partners together behind Microsoft's burgeoning effort to support Docker containers. Docker Inc. founder and CTO Solomon Hykes liked what he saw and made Chanezon an offer he couldn't refuse: to work on the team that will create the next generation of Docker containers. In a recent interview about the new Cloud Native initiative, Chanezon discussed his stint at Microsoft and why he jumped ship to Docker. Here are some edited excerpts from that interview.

Q: The Open Container Initiative (OCI) seems to have come together pretty quickly. How did that happen?
Chanezon: After DockerCon (in late June), we had the Docker maintainers summit, where all the maintainers of the Docker code base gathered to talk about the future of the project. There, the working group for the OCI spec was the largest, with 10 to 15 people joining, and we started working on the spec right away. After that, collaboration continued online, on the public GitHub repositories at GitHub.com/opencontainers. So we're using the classic open source process, where people send pull requests that are discussed and merged by maintainers. The runC reference implementation is tracking the spec very closely.

What have you accomplished at this point?
We now have a version of the spec that people can start commenting on, and our goal is to have a first draft available [in August]. That's the progress on the spec itself. The other aspect is governance: we've been working with the Linux Foundation to create a governance structure around the project. It is structured as a collaborative project under the umbrella of the Linux Foundation, and as part of the governance we will decide how the technical oversight committee works, how conflicts between maintainers can be brought to that committee and how that committee is elected, as well as set up a trademark board. So there has been progress on three fronts: new memberships, the spec being open for comments with a first draft coming soon, and governance, where the draft charter has been available for review and comment.

It sounds like things are moving quickly.
I've worked for many different companies, including Sun Microsystems, Google, VMware and Microsoft. In the past, I participated in lots of standards projects across the industry, like the portal server specs at Sun and OpenSocial and HTML5 at Google, and I've never seen an industry standard delivered so fast. To me, that's a testament to the need for a standard for container-based computing … to accelerate innovation at the higher levels, like orchestration.

How would you characterize Microsoft's involvement in the Open Container Initiative?
Microsoft really went all in with Docker and containers. I was at Microsoft before joining Docker, and my main role there was to bring all the Docker ecosystem partners onto Azure. Microsoft liked the Docker workflow so much that it decided to implement it for Windows. What Mark Russinovich described a year ago is happening right now: Microsoft built into Windows the same kind of isolation primitives that exist in the Linux kernel, so that Windows developers can package their applications in Docker containers and run them natively on Windows Server, in Windows containers. Because of that, Microsoft was one of the founding members of the OCI, and it has engineers working on the spec. We're trying to specify what's common across containers and what's OS-specific. So there's a common section for things like CPU and RAM, and then there's an OS-specific section -- for Linux that could be namespaces and cgroups, and the Windows engineers are specifying what will go into the Windows-specific section.
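As a rough sketch of the split Chanezon describes, the fragment below models a container config with a platform-neutral resources section and separate Linux- and Windows-specific sections. The field names here are illustrative, not the actual OCI schema.

```python
import json

# Illustrative sketch of the common/OS-specific split in an OCI-style
# container config. Field names are hypothetical, not the real spec schema.
config = {
    # Common section: settings any OS can honor, like CPU and RAM limits.
    "resources": {
        "cpu_shares": 512,
        "memory_limit_bytes": 256 * 1024 * 1024,
    },
    # Linux-specific section: namespaces and cgroups, as Chanezon notes.
    "linux": {
        "namespaces": ["pid", "net", "mnt", "ipc", "uts"],
        "cgroups_path": "/mycontainer",
    },
    # A Windows runtime would ignore "linux" and read its own section.
    "windows": {
        "isolation": "process",
    },
}

print(json.dumps(config, indent=2))
```

A Linux runtime reads the common section plus `linux`; a Windows runtime reads the common section plus `windows`, which is what lets one spec cover both kernels.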

With so much of this becoming standardized so quickly, how do you see Microsoft and the other orchestration layer players differentiating?
I'd say the differentiation is in the various strategies of the different players. Amazon has its own orchestration layer that works only on Amazon, called the EC2 Container Service (ECS); Google has Google Container Engine and the open source Kubernetes orchestration engine, which you can run behind the firewall and in other clouds. On the Microsoft side, there's an all-encompassing story: it works with everybody in the Docker ecosystem to make sure all of the orchestration engines run really well on Azure. In terms of other orchestration engines, you have Docker Swarm, which is very popular -- in one study done by ClusterHQ, it was the most popular among developers. It's very simple to set up, and it uses the same API as Docker itself. Then there's Apache Mesos, which is pretty popular as well. It came out of UC Berkeley, was adopted at Twitter and became an Apache project, and it's used by Netflix and many other companies. Recently at DockerCon, we announced integration between Docker Swarm and Mesos, so you're starting to see interoperability work between these different orchestration players.
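The "same API as Docker itself" point about Swarm can be made concrete with a small sketch: the Docker remote API path for listing containers is identical whether a client talks to a single engine or to a Swarm manager fronting a cluster, so only the endpoint changes. The hostnames below are made-up examples.

```python
def containers_url(docker_host: str) -> str:
    """Build the Docker remote API URL for listing containers.

    The path is the same whether docker_host is a single Docker engine
    or a Swarm manager; that is why existing Docker clients and tools
    work against a Swarm cluster unchanged.
    """
    return f"http://{docker_host}/containers/json"

# Hypothetical endpoints: one plain engine, one Swarm manager.
single_engine = containers_url("engine.example.com:2375")
swarm_manager = containers_url("swarm.example.com:3376")

# Only the host differs; the API call is identical.
print(single_engine)
print(swarm_manager)
```

In practice this is why repointing the `DOCKER_HOST` environment variable at a Swarm manager is enough to drive a whole cluster with the ordinary Docker client.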

Microsoft is among those who said they will support Kubernetes. How do the two differentiate their use of that automation layer?
If you go into the Kubernetes repository, you will see two installation guides for Kubernetes on Azure; one of them is from me and an engineer from Weave. We worked together to make sure Kubernetes works well on Azure. The story on the Microsoft side is that Microsoft has a very strong hybrid cloud strategy, with Azure in the public cloud and Azure Stack, which you [will be able to] install behind the firewall. They're playing a very open game there: whether it's Kubernetes, Mesos or Docker Swarm, they will make sure every orchestration engine works well and is really well integrated on Azure, so they don't force a particular choice on you.

Besides Azure, what about the private cloud side?
The Microsoft story is Azure Stack, an evolution of Azure Pack and System Center. It's a version of Azure with a reduced set of services, but things like Virtual Machines and Azure Web Sites run behind the firewall using the same API you use to access Azure. That means if you're using Docker Machine to provision machines in Azure, you can use the same Docker Machine setup to provision them on Azure Stack; you just point your client at a different API endpoint, but it's the same API, the Azure Resource Manager API introduced at Build this year.
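A minimal sketch of the "same API, different endpoint" idea: the shape of an Azure Resource Manager call stays the same against public Azure and an Azure Stack installation, with only the management endpoint swapped. The URL shape is simplified for illustration and the Azure Stack hostname is hypothetical.

```python
def arm_request_url(endpoint: str, subscription: str, resource_group: str) -> str:
    """Build an Azure Resource Manager style URL for a resource group.

    The call shape is identical against public Azure and an on-premises
    Azure Stack; only `endpoint` changes. (Simplified path, not the full
    ARM URL with API versions.)
    """
    return f"{endpoint}/subscriptions/{subscription}/resourceGroups/{resource_group}"

# Public Azure vs a hypothetical Azure Stack endpoint behind the firewall.
public = arm_request_url("https://management.azure.com", "sub-123", "docker-rg")
onprem = arm_request_url("https://management.azurestack.local", "sub-123", "docker-rg")

print(public)
print(onprem)
```

Tooling built against the ARM API, like a Docker Machine Azure driver, can therefore target either environment by changing one configuration value.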

What made you decide to leave Microsoft and join Docker?
I really had a lot of fun at Microsoft. My role there was to bring more of the open source culture and knowledge about Linux and Java developers to Microsoft, and to interact with partners and customers using those technologies. But then I met Solomon Hykes, and he offered me the opportunity to work with him directly on the future of the Docker platform, and I just couldn't refuse that. There was also a side aspect, which is that he's French, as I am.

Icing on the cake!
Exactly! I really like his design taste. I think with Docker he really nailed the developer workflow, making developers more productive, which has been one of my goals throughout my career. I've always worked for platform companies that make developers' lives easier, because I think developers are the ones inventing the future world we're going to live in. Docker's mission statement is to build tools of mass innovation, and that really resonated with me.

Given the fast pace of movement on this standard, how quickly do you see things coming together?
I think we'll have a first draft of the spec, with the runC reference implementation running, right away [in August]. After that there will be another round of comments and discussion on the specification with all the new members. I don't know how long it will take until we decide it's a final, formalized standard. On the governance side, we will also need a round of comments on the draft charter so all the members can agree and sign a formal letter; up until now, they had only signed a non-binding letter of intent. So they will need to sign the final papers, and then we'll need to issue a final spec, but I think it will be a matter of months, not years.

Once the standard is out of the way, what else is on your agenda at Docker?
The OCI is one of the projects I am working on. We have a lot of other pretty interesting projects for the future of Docker, but it's too early to talk about them. Maybe the next time we meet. Containers are a very exciting space, and it's moving super-fast.

Perhaps you can share what you believe is necessary to take containers to the next step?
Again, it comes back to Docker's mission statement, building tools of mass innovation. One of the things Solomon talked about during the keynote at DockerCon is going through layers, where Layer 1 is building a standard. That's what we're building here. Layer 2 is plumbing, and one of the things we announced at DockerCon is that we're going to start taking pieces out of Docker and spinning them out as separate plumbing projects that can be used independently. We started doing that with runC, which is the OCI reference implementation. We also announced another tool, in the area of security, called Notary, and I would expect in the near future there will be other pieces of Docker spun out, with Docker then rebuilt in terms of these different tools. That's the main plumbing effort, which we call Layer 2 of the stack. At Layer 3, we have the tools developers use directly, as well as the orchestration tools. Here we're talking about the Docker command line itself; Kitematic, our UI on top of Docker, which runs on Macs and Windows today, with a Linux version planned; and Docker Swarm, Compose and Machine for orchestration. On top of that, we have value-added online services that people use to connect all the workflows and collaborate: Docker Hub, which is a public hub for Docker images, [and] the Docker Trusted Registry that we announced at DockerCon, which you can install behind the firewall to manage your images. We also announced a new project called Orca, which is about the runtime for your containers.

What's the goal with Orca?
Right now with Docker we help you build your images, and with Docker Trusted Registry and Docker Hub we help you ship your images. We were kind of missing the run part, and Orca is going to be a project bringing together many technologies, including Compose, Machine, Swarm and another project called Interlock, in order to let you run your containers in the cloud and behind the firewall. This view in four layers is how we think of future product development. We have our work cut out for us. One quote I put at the front of my talks, when I explain why I am working at Docker, or why I am working in software, is from William Gibson -- he's the guy who invented the term "cyberspace." At the beginning of Neuromancer he says, "The future is already here." To me that has been exactly my experience at Sun, Google, VMware, Microsoft and now Docker: the most advanced developers in the world already have all these tools to build distributed applications, and now it's time to democratize all of that, for the rest of the industry to catch up and gain those productivity benefits for individual developers. That's very aligned with the mission statement of Docker, building tools of mass innovation.

At what point do you see Docker and other modern software container environments having an impact on client OSes such as Windows, Mac OS, iOS and Android?
Right now people are mostly using Docker for server-side workloads, but you can already see the most advanced developers going further. My colleague Jessie Frazelle gives a talk where she explains how to run desktop applications in containers. Right now this is with Linux containers; once containers are implemented on Windows, you can imagine the same kind of approach for Windows desktop applications. She has a lot of examples of running desktop applications in containers today.
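As a rough sketch of what such an invocation can look like on Linux, here is the kind of `docker run` command (assembled in Python for illustration) that shares the host's X11 socket and DISPLAY variable with a container so a GUI app can draw on the host's screen. The image name is hypothetical, and real setups also need X server access control (e.g. via `xhost`).

```python
# Sketch of a `docker run` command for a containerized Linux desktop app,
# in the style of Jessie Frazelle's demos. Assembled as an argument list so
# it could be passed to subprocess.run; the image name is hypothetical.
cmd = [
    "docker", "run", "--rm",
    "-e", "DISPLAY",                        # forward the host's X display name
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix",  # share the host's X11 socket
    "example/gui-app",                      # hypothetical GUI image
]

print(" ".join(cmd))
```

The container has no display of its own; it simply talks to the host's X server over the shared socket, which is what makes desktop apps work inside an otherwise ordinary container.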

