Rackspace Releases Free Software to Build Private Clouds

Rackspace is now offering free software that lets anyone build private clouds based on the same platform that runs its cloud hosting service.

Alamo, the codename for the company's Rackspace Private Cloud Software, is now available as a free download. The release, issued this week, marks a key milestone in Rackspace's plan to transition its cloud portfolio from its proprietary infrastructure to OpenStack, the open-source project the company helped launch with NASA two years ago.

Earlier this month, Rackspace completed converting the compute infrastructure behind its public cloud service to OpenStack.

By offering its OpenStack-based software free of charge, Rackspace is betting that it will seed enterprise deployments of private clouds based on its open source platform. In turn, Rackspace is hoping enterprise customers will subscribe to its support services while also using its public cloud infrastructure for cloudbursting, the deployment model a growing number of datacenter operators employ when they need extra capacity during peak periods.

Jim Curry, general manager of Cloud Builders, Rackspace's private cloud organization, explained that Alamo is geared to those looking to build such clouds who don't have backgrounds in OpenStack. "To date most of the market for OpenStack has been people who were experts in it," Curry said. "We wanted to make it so a systems administrator who doesn't know anything about OpenStack and maybe knows a little bit about cloud, can easily get an OpenStack cloud up and running so they can evaluate and determine if it's a good solution on the same day." Curry said the software can be deployed in an hour.

Customers can opt for additional fee-based services, beginning with Escalation Support, which starts at $2,500 plus $100 per physical node per month. At the next level, Rackspace will offer proactive support, which will include monitoring, patching and upgrading. Then sometime next year, Curry said, Rackspace plans to offer complete management of OpenStack-based private clouds. The company hasn't set pricing for those latter offerings.

The initial Alamo software consists of the standard Essex release of the OpenStack Nova compute infrastructure services, the Horizon dashboard, the Nova Multi Scheduler, Keystone authentication and the standard APIs. It also includes the Glance Image Library (a repository of system images), Canonical's Ubuntu Linux distribution as the host operating system with KVM-based virtualization, and Chef Cookbooks from Opscode, which provide various OpenStack configuration scenarios.
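For a sense of what those standard APIs look like in practice, here is a minimal sketch (not something shipped in the Alamo download) of how an administrator might authenticate against Keystone and boot a server through the Essex-era Compute API once the cloud is up. The endpoint, credentials, image ID and flavor below are placeholder assumptions.

```python
# Minimal sketch: exercise an OpenStack Essex-era cloud's standard APIs after
# an Alamo-style install. The endpoint, credentials, image and flavor IDs are
# hypothetical placeholders, not values shipped with Alamo.
import requests

KEYSTONE = "http://192.168.1.10:5000/v2.0"   # hypothetical Keystone endpoint

# 1. Authenticate against Keystone and grab a token plus the compute endpoint.
auth_body = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "admin", "password": "secret"},
    }
}
resp = requests.post(f"{KEYSTONE}/tokens", json=auth_body)
resp.raise_for_status()
access = resp.json()["access"]
token = access["token"]["id"]

# Find the Nova (compute) endpoint in the service catalog.
nova_url = next(
    svc["endpoints"][0]["publicURL"]
    for svc in access["serviceCatalog"]
    if svc["type"] == "compute"
)

# 2. Boot a server through the standard Compute API.
server_body = {
    "server": {
        "name": "alamo-test",
        "imageRef": "IMAGE_UUID_HERE",   # an image registered in Glance
        "flavorRef": "2",                # e.g. a small flavor
    }
}
resp = requests.post(
    f"{nova_url}/servers",
    json=server_body,
    headers={"X-Auth-Token": token},
)
resp.raise_for_status()
print("Building server:", resp.json()["server"]["id"])
```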

In addition to supporting the Ubuntu distribution of Linux, Rackspace intends to support Red Hat Enterprise Linux with its OpenStack release, which was made available for testing this week. That support will come later in the year. A later release will also add support for Swift-based object storage, according to Curry.

Asked if Windows Server support is in the works, Mike Aeschliman, Rackspace Cloud Builders head of engineering, said not at this point. "To be honest, I think we will stick with Linux for a while because that's what the market is asking of us," Aeschliman said.

As for Rackspace's outreach to its channel of systems integration partners, Curry said they are aware of Alamo but the company hasn't reached out further yet. "We absolutely want to do that," Curry said. Because Rackspace's Alamo software is "plain-vanilla" OpenStack, the company plans to look to its partners to customize, or fork, it, and contribute it back to the community, Curry said.

Rackspace's plan is to leverage its SIs to provide customization services, consulting, application migration and API integration into billing systems, he explained. "These are not things we specialize in," he said. "We don't want to be the guys that do that work. We have great partners to do that."

Posted by Jeffrey Schwartz on 08/16/2012 at 1:14 PM


Red Hat Issues OpenStack Preview

Red Hat released the Technology Preview of its OpenStack software, targeted at service providers and enterprises looking to build infrastructure as a service (IaaS) clouds based on the open source framework.

An early supporter of the two-year-old OpenStack project, Red Hat has kept a low public profile on its commitment to the effort. In its announcement Monday, the company pointed out it was the #3 contributor to the current Essex release of OpenStack.

"Our current productization efforts are focused around hardening an integrated solution of Red Hat Enterprise Linux and OpenStack to deliver an enterprise-ready solution that enables enterprises worldwide to realize infrastructure clouds," said Brian Stevens, RedHat CTO and vice president for worldwide engineering, in a statement.

Red Hat sees IaaS and OpenStack complementing its Red Hat Enterprise Virtualization (RHEV) software. While the former provides the infrastructure to manage hypervisors in a self-service, cloud-style provisioning model, RHEV targets servers and SANs and provides typical enterprise virtualization functions, the company said in a blog post.

OpenStack is one component of Red Hat's cloud stack. While it addresses IaaS, Red Hat's cloud portfolio also includes Red Hat Enterprise Linux, RHEV, its JBoss application middleware and OpenShift, which provides the company's framework for platform as a service (PaaS).

Red Hat plans to deliver its OpenStack release next year. The Technology Preview is available for download.

Posted by Jeffrey Schwartz on 08/16/2012 at 1:14 PM


Infosys Creates Rapid Cloud Setup Hub

IT consulting giant Infosys this week launched its Cloud Ecosystem Hub, aimed at helping enterprises build, deploy and manage hybrid clouds.

The Infosys Cloud Ecosystem Hub combines the global company's consulting and development resources with the assets of more than 30 partners now in the program including Amazon Web Services, CA Technologies, Dell, Hitachi Data Systems, HP, IBM, Microsoft and VMware.

Enterprises using the Infosys service can implement cloud services up to 40 percent faster, with 30 percent cost savings and a 20 percent increase in productivity versus going it alone, according to the company. Infosys said its self-service catalog allows enterprises to procure infrastructure and services to build and manage clouds running within an enterprise or through a cloud provider, while also enabling them to create a hybrid infrastructure.

In what Infosys calls the "smart brokerage" feature of the Cloud Ecosystem Hub, the system compares cloud services from its roster of partners, taking into account parameters such as quality-of-service requirements, IT assets, regulatory and compliance restrictions and cost; the hub then procures the appropriate hardware, software and services.

"Our clients are dealing with complexities of a fragmented cloud environment," said Vishnu Bhat, Infosys VP and global head of cloud, in a statement. "The Infosys Cloud Ecosystem Hub provides organizations a unified gateway to build, manage and govern their hybrid cloud ecosystem. This solution allows clients to fully realize the benefits from the long-standing promise of the cloud."

Interarbor Solutions principal analyst Dana Gardner described the offering as a "cloud of clouds." In a blog post, he said, "I think Infosys's move also shows that one-size-fits-all public clouds will become behind-the-scenes utilities, and that managing services in a business ecosystem context is where the real value will be in cloud adoption."

Posted by Jeffrey Schwartz on 08/09/2012 at 1:14 PM


Neebula Plans BSM and ITSM as a Service

Neebula, a startup provider of software that maps hardware and software to business service requirements, next week plans to release a preview of its ServiceWatch solution as a SaaS offering.

ServiceWatch is used by a number of large enterprises for so-called business service management (BSM) and IT service management (ITSM). Simply put, it models systems with business requirements to ensure organizations are meeting specific objectives.

Neebula -- not to be confused with hot private cloud infrastructure startup Nebula, the company founded by OpenStack vet and former NASA CTO Chris Kemp -- is taking on some established players in the BSM field, including BMC, CA Technologies, Compuware, HP, IBM, NetIQ and SAP. Its founders, veterans of BMC, EMC and HP, launched the company in 2009.

Early adopters of ServiceWatch include Amdocs, Bechtel, Ceva, EL AL Airlines and leading firms in the financial services market that the company is not identifying. ServiceWatch includes a dashboard to monitor the health of key IT components across silos while mapping them to business services such as CRM, billing, finance and inventory management.

ServiceWatch lets IT managers utilize administrators who may not have deep knowledge of systems, storage, network, virtualization and/or application infrastructure, while eliminating the need to manually link dependencies between those IT components, according to the company.

By moving to a software as a service model, Neebula said it can reduce the time and complexity involved in installing its software on premises or via a cloud hosting provider. By SaaS-enabling ServiceWatch, Neebula will also let customers deploy its BSM tools without having to add further infrastructure, while offering self-service deployment and administration.

To be determined is whether Neebula can take on its entrenched rivals, while also convincing large enterprises that deploying BSM and ITSM as a service is reliable and secure.

Neebula appears to be betting that the SaaS offering of ServiceWatch will appeal to those who will not or cannot go through the process and cost of deploying these solutions internally or through a third-party cloud provider. Neebula is letting prospective and existing customers test the trial version for 30 days.

Posted by Jeffrey Schwartz on 08/09/2012 at 1:14 PM


Nimbula Adds Hadoop to Private Cloud

Nimbula is now offering what it claims is the first Apache Hadoop-based distribution designed to process Big Data in private clouds.

The private cloud operating system provider today said it is combining its Nimbula Director platform with MapR Technologies' M3 and M5 Hadoop distributions. The combined offering will let organizations process and analyze large volumes of unstructured Big Data in private clouds.

Using MapR's Hadoop distros (M3 is a free edition while M5 is the subscription-based version for enterprise implementations) with Nimbula Director, a customer can set up a Hadoop cluster. The offering includes templates and test scripts.
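To give a flavor of the kind of job such a cluster runs -- purely a generic illustration, not one of the bundled templates or test scripts -- here is a classic word-count mapper and reducer written for Hadoop Streaming, which runs unchanged on a MapR or stock Apache Hadoop cluster:

```python
# Generic word-count job for Hadoop Streaming. This is an illustrative
# example only, not part of the Nimbula/MapR offering. It would be submitted
# with something like:
#   hadoop jar hadoop-streaming.jar -mapper "wordcount.py map" \
#       -reducer "wordcount.py reduce" -input logs/ -output counts/
import sys
from itertools import groupby

def mapper():
    # Emit "word<TAB>1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Hadoop delivers mapper output sorted by key, so equal words are adjacent.
    pairs = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```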

Customers can already process Big Data using MapR-based clusters in Amazon Web Services EC2 and will be able to do the same in the recently announced Google Compute Engine.

Nimbula, a startup launched in 2010 by key architects of Amazon's EC2, offers software that runs on bare-metal servers running VMware's ESXi hypervisor and creates multi-tenant pools of storage and compute that allow for self-provisioning of capacity. By adding the MapR Hadoop distribution to Nimbula Director, customers can also run automated Hadoop workloads in high-availability (HA) clusters. Users can provision and de-provision capacity as workload requirements dictate.

"Its all about taking unstructured data and getting meaningful results out of it in a short time," said Jay Judkowitz, Nimbula's director of product marketing. "The key thing about adding private cloud is you can add a lot more elasticity to it." The result, he explained, is rather than having a dedicated cluster assigned to a specific Hadoop job, now an individual can process data and scale as many VMs are needed for the duration of the job and then de-provision them upon completion.

Nimbula offers up to 40 cores free of charge and then offers subscriptions for larger workloads.

Posted by Jeffrey Schwartz on 08/07/2012 at 1:14 PM


Take Woz Cloud Fears with a Grain of Salt

Apple co-founder Steve Wozniak's prediction this past weekend that cloud computing may cause "horrendous problems" has gone viral, but with due respect to the visionary inventor of the Apple I and II PCs, take his fears with a grain of salt.

Wozniak raised his concerns during off-the-cuff remarks after performing in Mike Daisey's theatrical presentation The Agony and the Ecstasy of Steve Jobs, which exposes the labor conditions at Foxconn, the key manufacturer of Apple products in China.

In response to a question from an audience member after the two-hour performance, Wozniak revealed he's concerned about the growing trend toward storing data in cloud-based services, reported PhysOrg.com, a news service covering science and technology.

"I really worry about everything going to the cloud," Wozniak told the audience. "I think it's going to be horrendous. I think there are going to be a lot of horrible problems in the next five years." Wozniak's remarks came following the performance, which took place at the Woolly Mammoth theater in Washington, D.C.

Sure, there will be plenty of problems with cloud computing, just as there are issues with all forms of computing. We're already seeing numerous outages that have raised serious concerns.

Wozniak appeared worried not only about the reliability of cloud services; he also argued users risk giving up ownership of their data once they store it in the cloud. "With the cloud, you don't own anything," he said. "You already signed it away," he continued, referring to the terms of service users agree to when signing up with a cloud service. "I want to feel that I own things," he added.

It seems to me he was referring to social networks like Facebook and photo sharing services. Facebook raised concerns about data ownership when it changed its terms of service back in 2009, claiming it had rights to your data even after an account is terminated, a move that raised the ire of many critics. While social networks typically run in the cloud, and indeed consumers should be aware of the terms of using such services, that's where the similarities end.

Woz went on to say, "a lot of people feel, 'oh, everything is really on my computer,' but I say the more we transfer everything onto the Web, onto the cloud, the less we're going to have control over it." To that point, it is indeed hard to dispute that once data is in the cloud, users have less control over it than if it were on their own premises, but in many cases that gap is narrow. Giving up autonomous control is usually a tradeoff worth making for having data readily available and less subject to permanent loss.

Had Wozniak not chosen to use the word "horrendous" while suggesting the cloud would cause "horrible problems," his remarks probably would have gone unnoticed. But when someone of his stature predicts Armageddon it's inevitable it will spark debate.

Like any emerging technology, cloud computing will go through its fits and starts. But the cloud is not going away. Will Woz one day be able to say "I told you so?" I think not. What do you think?

Posted by Jeffrey Schwartz on 08/07/2012 at 1:14 PM


Rackspace OpenStack Cloud is Open for Business

Two years after publicly kicking off its effort to develop an open-source cloud computing platform and community, Rackspace can finally say its core infrastructure as a service runs on OpenStack.

As of August 1, Rackspace Cloud Servers, which offers Linux and Windows compute services, and Rackspace Cloud Databases, a database service powered by MySQL, run on the OpenStack platform. To administer the new open IaaS, Rackspace also released a new control panel. In the coming months, Rackspace will add an API-driven monitoring system, private software-defined networks that create virtual interfaces to one's own datacenter, and block storage.

Though Rackspace's Cloud Files storage service has been OpenStack-based since 2010, the availability of core compute services is a major milestone for the open source project. Rackspace has bet its business on OpenStack, and since its founding with co-developer NASA, the project has brought on more than 180 member companies, including household names AT&T, Cisco, Dell, Hewlett-Packard and IBM and cloud pure-plays AppFog, CloudBees, Cloudscaling, enStratus and RightScale. Many of these companies have committed to either contributing to the project or using OpenStack in their own offerings.

While this week's launch is an important stake in the ground for the OpenStack project, the cutover of the new Cloud Servers and Cloud Databases is an equally important step forward for Rackspace, which early on promised to revamp its entire cloud portfolio to support the OpenStack specs, just as it has encouraged other cloud providers to do. Moving forward, any Rackspace customer provisioning compute instances will be doing so on the new OpenStack-based Cloud Servers.

Customers can move their existing Rackspace instances to the new OpenStack servers with minimal downtime, as explained to me by Rackspace CTO John Engates. "You want to make sure you've stopped all the databases and prepped them to be migrated but the truth is the old cloud and new cloud are similar," Engates said. "They use the same hypervisors, the same fundamental virtual machine architecture, it's just a matter of making sure as we move those workloads over and we get the data captured in a state that allows customers to bring them back up without data corruption."

Engates said customers are under no deadline to move existing workloads to the new OpenStack servers. But anyone deploying new instances will probably want to run them on the new OpenStack infrastructure, he said. In the coming months, Rackspace will offer tools that will simplify the migration.

Rackspace's OpenStack transition is a big bet that will give the cloud provider a shot at taking on IaaS market leader Amazon Web Services. By building a coalition of other cloud providers committed to OpenStack, Rackspace is saying that customers don't have to put all of their eggs in one basket. That's because in addition to providing portability with public clouds offered by AT&T, Dell, HP and IBM, among numerous other public cloud operators, OpenStack, if it delivers as promised, will provide compatibility with private clouds running behind a customer's firewall.

"The advantage of working with Rackspace is you're not locked into a particular provider's platform, it's a platform that is open, ubiquitous and you can pick up and take elsewhere," Engates said. Well, the ubiquitous part remains to be seen. So far, Rackspace is the first major cloud provider to roll out a substantial OpenStack IaaS but many key functions such as network and block storage are still in preview mode.

Coincidentally or not, HP this week launched its OpenStack public cloud, though its compute services are still not generally available (what did reach general availability is its object storage and CDN). And there are Amazon alternatives offered by the likes of Eucalyptus and Nimbula.

Yet a number of Amazon customers in recent days have started provisioning servers on the preview of Rackspace Cloud Servers, observed AppFog marketing director Chris Tacy. Last week 10 percent of AppFog's customers moved their workloads from Amazon to the beta of Rackspace Cloud Servers running OpenStack, according to Tacy. AppFog is a growing cloud provider that offers a platform as a service (PaaS) layer on compute services from Amazon, Rackspace, Microsoft and others.

Tacy attributes the influx of movement largely to Amazon's recent spate of outages, coupled with high hopes for OpenStack and Rackspace's reputation. "Right now OpenStack is extremely hot, a lot of developers are excited about it," said Tacy, who has spent more than 20 years as a developer. Tacy believes many of the customers who defected from Amazon will return once it resolves the issues that have afflicted its east coast datacenters.

To be determined, he said, is whether OpenStack will live up to its promise. Until the governance issues of the OpenStack Foundation are ironed out, many will continue to view OpenStack as a Rackspace-controlled effort, though Engates emphasized that while at one point Rackspace contributed 90 percent of the OpenStack code, it now contributes about 50 percent. Rackspace Cloud founder Jonathan Bryce last month told me he is hopeful that the handoff to the foundation will happen by year's end. But some have complained about the pace of the transition, and there are reports that wrangling is bogging down that process.

And while OpenStack is positioned as an alternative to Amazon Web Services, other options include CloudStack, an open source cloud compute effort proposed by OpenStack member Citrix as an alternative, a move that has fragmented the OpenStack cause. There are other IaaS alternatives in the wings, including an upgrade of Microsoft's Windows Azure service and Google Compute Engine. Though those don't promise the portability that OpenStack espouses, that doesn't mean they won't offer such compatibility in some way down the road, either by joining the effort or by finding other ways to bridge the gap.

Regardless, OpenStack's success hinges on proven interoperability within its own universe. And that will only come to pass once several IaaS providers build out their OpenStack clouds.

Posted by Jeffrey Schwartz on 08/02/2012 at 1:14 PM


First Pieces of HP Public Cloud Now GA

Hewlett-Packard has talked up its plans to offer a public cloud service for well over a year. Now the first two components of HP Cloud are generally available.

As of August 1, customers can purchase the company's HP Cloud Object Storage and CDN. HP is backing the service with a 99.95 percent service level agreement (SLA). If the service can't meet that SLA, HP will offer customers credits.

Compute services and subscriptions to its cloud-based MySQL remain in beta, and while they are not generally available or backed by an SLA, customers can use them for production workloads, said Marc Padovani, director of product management for HP Cloud Services.

"We are still going though updates and hardening of the service," Padovani said. "Sometime later this year it will be at a point where it meets our levels of quality, availability and durability and we will apply the SLA and bring it to general availability status." Any customer can sign up for the compute services beta but the MySQL testing is somewhat more restrictive, Padovani said. HP will contact customers who sign up for the MySQL service beta and help set them up, he said.

As reported back in May, the HP Cloud Block Storage service lets customers add storage volumes of up to 2TB per volume. In addition to supporting multiple volumes atop HP Cloud Compute instances, customers can take point-in-time snapshots of their volumes and create new volumes from those snapshots, which can be backed up to HP Object Storage for added redundancy if needed.
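HP doesn't publish code in this context, but because HP Cloud follows OpenStack conventions, the volume-and-snapshot round trip might look roughly like the sketch below, which assumes an OpenStack-style (Cinder v1) block storage API. The endpoint URL and token are hypothetical placeholders.

```python
# Rough sketch of the volume/snapshot workflow described above, assuming an
# OpenStack-style (Cinder v1) block storage API. BLOCK_URL and TOKEN are
# hypothetical placeholders, not HP-published values.
import requests

BLOCK_URL = "https://region-a.example.com/v1/TENANT_ID"   # hypothetical
HEADERS = {"X-Auth-Token": "TOKEN"}

# Create a 100 GB volume (the service caps volumes at 2 TB).
vol = requests.post(
    f"{BLOCK_URL}/volumes",
    json={"volume": {"size": 100, "display_name": "data-vol"}},
    headers=HEADERS,
).json()["volume"]

# Take a point-in-time snapshot of that volume...
snap = requests.post(
    f"{BLOCK_URL}/snapshots",
    json={"snapshot": {"volume_id": vol["id"], "display_name": "data-snap"}},
    headers=HEADERS,
).json()["snapshot"]

# ...and create a new volume from the snapshot.
requests.post(
    f"{BLOCK_URL}/volumes",
    json={"volume": {"size": 100, "snapshot_id": snap["id"],
                     "display_name": "data-vol-restore"}},
    headers=HEADERS,
)
```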

The storage service is based on the OpenStack Swift storage system, a move that will ease portability of data to other OpenStack cloud services. For the content delivery network service, HP has created an interface to Akamai's CDN. Padovani said HP intends to contribute the code for the interface layer between the Swift object storage service and the CDN to the OpenStack project. According to Padovani, "it eliminates the need for someone to have to go through all the integration work we did with Akamai."
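Because the underlying store speaks the Swift API, getting content into it (for the CDN to then serve) takes little more than authenticated HTTP calls. Here is a minimal sketch, with a hypothetical storage URL and token standing in for the values Keystone would normally return.

```python
# Minimal sketch of storing an object in a Swift-based object store such as
# HP Cloud Object Storage. STORAGE_URL and TOKEN are hypothetical placeholders
# normally obtained from the Keystone service catalog.
import requests

STORAGE_URL = "https://objects.example.com/v1/AUTH_tenant"   # hypothetical
HEADERS = {"X-Auth-Token": "TOKEN"}

# Create a container, then upload an object into it.
requests.put(f"{STORAGE_URL}/site-assets", headers=HEADERS)
with open("logo.png", "rb") as f:
    requests.put(
        f"{STORAGE_URL}/site-assets/logo.png",
        data=f,
        headers={**HEADERS, "Content-Type": "image/png"},
    )

# Objects are then fetched back (or served from the CDN edge) with a GET.
resp = requests.get(f"{STORAGE_URL}/site-assets/logo.png", headers=HEADERS)
print(resp.status_code, len(resp.content))
```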

Posted by Jeffrey Schwartz on 08/02/2012 at 1:14 PM


Is VMware Planning a Public Cloud?

Could yet another major public cloud be in the works? CRN is reporting VMware plans to build out a public cloud that would aim to compete with Amazon Web Services, Rackspace, Microsoft and Google.

While VMware said its policy is not to comment on speculation, the report cites multiple unnamed sources who say VMware has acquired significant datacenter facilities in Nevada for what is known as Project Zephyr. VMware has gone this route to light a fire under its hosting partners to build out public cloud services based on vCloud, according to the report.

The move is surprising to hosting providers that have committed to offering their own public cloud services based on VMware's hypervisor and vCloud platforms. Hosting providers say VMware had given assurances it does not intend to compete with them, setting it apart from the company's arch-rival Microsoft, which is investing heavily in expanding its Windows Azure service.

"They have iterated and re-iterated that they had no plans to go into the cloud and infrastructure as a service themselves," said Aaron Hollobaugh, VP of marketing at Hostway. "It's a big surprise to me, but it's also an inevitable change in their desire to grow within the cloud marketplace because they are not having the traction they want from their service providers."

Hostway is not a VMware partner -- it has aligned itself with Microsoft's cloud platform -- but the hosting provider faces similar competition from Redmond. Nevertheless, Hollobaugh believes Hostway is poised to address customers who require higher levels of support. If the CRN report is true, his onetime employer, Denver-based Hosting.com, may face similar competition from VMware.

"This rumor has been around for a long time and I don't even know if it's real," said Hosting.com CTO Craig McLellan. "There's been no official communications with me. I think that in general, every technology manufacturer has embraced the channel while at the same time competing with the channel. The real emphasis has to be on working well together if this is in fact the case."

Despite VMware's insistence that it wasn't planning to offer a public cloud service, the company has made some moves in the past that could be construed as steps toward doing so. For example, VMware acquired a facility in Washington that the company ultimately used to build a green data center, a move that sparked scuttlebutt that it might be a front to operate a public cloud.

Another move that raised speculation that VMware may be eyeing its own public cloud came the same year, 2009, when the company took a 5 percent stake, worth $20 million, in Terremark, which was later snapped up by Verizon. Though VMware's stake in Terremark, which not surprisingly uses VMware's vCloud platform, was rather small, some wondered if the company wasn't positioning itself to buy the company outright before Verizon came in.

If in fact VMware decides to launch a public cloud, it makes one wonder whether the change of heart comes from parent company EMC, the management changes at VMware or the company's efforts to bolster its cloud infrastructure with the acquisitions of DynamicOps and Nicira.

VMware customers and partners will be anxious to hear the company's public cloud intentions, or lack thereof, at this month's VMworld conference, if not sooner.

Posted by Jeffrey Schwartz on 08/02/2012 at 1:14 PM


Oracle Joins SDN Party with Xsigo Deal

On the heels of VMware's bold $1.26 billion deal last week to acquire software-defined networking pioneer Nicira, Oracle Monday made its own SDN play with its agreement to purchase Xsigo for an undisclosed amount.

Venture-backed Xsigo, a San Jose, Calif.-based company founded in 2004, offers its Data Center Fabric portfolio. Simply put, Xsigo's network software and hardware connects virtual servers and storage with virtualized network infrastructure.

The two back-to-back SDN deals by two key IT infrastructure suppliers are "strong testament that the next optimization battleground in virtual environments is the network," said Forrester Research senior analyst Dave Bartoletti, who recalled that network virtualization was a tough sell three years ago. Now, with a slew of storage options optimized for virtual workloads, Bartoletti pointed out, the network still lags.

"You can provision new VMs and the storage they require in seconds, but reconfiguring the network to let them move easily and connect on the fly is still often a manual, cumbersome process that can take days or weeks," he added. These deals are "another step toward making the entire datacenter 'software-defined,' ie, based on a software layer the provisions compute, storage and network on demand for each workload - in just the right amounts."

Bartoletti's colleague, Forrester principal analyst Andrew Reichman, added Xsigo will help Oracle extend its push into the datacenter. "This will augment their portfolio of servers and storage acquired from Sun, and allow Oracle to better compete with VMware and Microsoft as the datacenter moves more towards application/software based infrastructure management," Reichman said.

Expect Oracle to optimize Xsigo for its own software and Sun server and storage environments. In a FAQ posted by Oracle, the company said it plans to optimize Xsigo for the Oracle VM, though it will continue to support other virtual machines. And Oracle emphasized its goal of integrating Xsigo to enable customers to dynamically reallocate resources on demand. Oracle said Xsigo will also reduce the cost and simplify management of datacenter and cloud-based systems.

Posted by Jeffrey Schwartz on 07/31/2012 at 1:14 PM


Can VMware Redefine Network Virtualization?

VMware's striking move Monday to acquire SDN pioneer Nicira caps a string of events this month that piece together the company's go-forward mission of automating the datacenter and creating next-generation clouds.

Kicking off what could prove to be key milestones for VMware was its July 2 announcement that it is acquiring DynamicOps, a cloud automation provider renowned for its support of multiple virtual environments. Then came last week's shocking news that VMware CEO Paul Maritz, who has led the company through stellar growth during his four-year tenure, is stepping aside to become chief strategist at parent company EMC, to be replaced by EMC veteran Pat Gelsinger.

Topping off this month's buzz was the Nicira deal, VMware's largest acquisition to date. The amount VMware is paying for Nicira, $1.26 billion, is eye-popping. Though Nicira is a hot software defined networking startup with some highly regarded engineers and executives whose Network Virtualization Platform (NVP) software is used by the likes of AT&T, eBay, Fidelity Investments and Rackspace, it only came to market six months ago.

The premium price tag notwithstanding, VMware's move has raised eyebrows as the company shows its determination not merely to extend further into virtual networking but to lead it. Cisco's stock on Tuesday dropped nearly 6 percent in reaction to the move. Adding insult to injury was Cisco's announcement that it is laying off 1,300 people on top of the more than 6,000 jobs already phased out this year.

Cisco investors' fear that VMware's move will marginalize the networking giant's hardware by virtualizing it is "way overblown," said Forrester Research analyst Andre Kindness. "The purchase puts another nail in the Cisco-VMware relationship but networking is more than the data center and more than layer 2 and layer 3 switches," Kindness said. "VMware has yet to address the Layer 4 to Layer 7 arena."

Kindness said Cisco has been down this road before: first when Juniper Networks entered the carrier routing market and later moved into enterprise switching, then when Hewlett-Packard acquired 3Com, and again when VMware launched vSwitch. But vSwitch alone didn't give VMware a strong enough virtual networking story, Kindness noted, hence the Nicira deal.

While vSwitch was a worthy start, VMware was held back by its hardware partners' agendas, Kindness said, noting Cisco, Dell, HP and IBM all released converged solutions that embedded their own virtual networking technologies. And those that don't have the necessary piece-parts are partnering, as with HP's announcement in May that it will use F5 Networks' Application Delivery Networking (ADN) to deliver policy-based automated networking.

"Basically, the big hardware vendors are developing their software and hardware solutions that would be controlled by their own management solutions," Kindness said. "This would minimize VMware's influence on the management stack and open the door to other hypervisors. Thus VMware needed a solution that would ride over any of the hardware vendors who themselves are fighting over virtual transport protocols between switches, between data centers and between data center and public cloud offerings."

Whether VMware can pave its own path remains to be seen but by acquiring DynamicOps and Nicira in tandem, VMware is taking some bold steps to lead the next phase of cloud and datacenter virtualization by evolving from core server pooling to incorporating the network gear.

Others that have jumped on the SDN bandwagon say VMware's move validates software defined networking as the next big trend in the evolution of the datacenter and cloud computing infrastructure. "This underscores just how phenomenal the surge is that's powering interest in SDN," said Kelly Herrell, CEO of networking software provider Vyatta, in a blog post. "The simple facts are irrefutable: virtualization and cloud have fundamentally altered compute and storage architectures, and networking now must adapt."

Jaron Burbidge, a business development manager for Microsoft New Zealand's Server and Cloud Platform business, pointed out in a blog post that Windows Server 2012 and System Center 2012 SP1 will allow for the creation of SDN-enabled clouds. "In many ways, VMware's acquisition of Nicira is a late acknowledgement of the importance of SDN as a critical enabler for the cloud," Burbidge noted. "However, I think our approach is substantially more mature, delivers end-to-end capability, and provides an open platform for partners to innovate. Most important, our implementation is available broadly for customers to begin deploying today."

At first glance, one might wonder why VMware needed to shell out so much money for Nicira. After all, as noted by Kindness and his Forrester colleague Dave Bartoletti, VMware has already moved down the road of SDN with vSphere, which provides virtual switching capabilities via vShield Network and Security services and support for the VXLAN protocol. "These go a long way to virtualizing networking hardware and put them under the hypervisor domain," Bartoletti said in a blog post, adding VMware's vSphere Storage Appliance and various array integrations simplify and automate the allocation of storage to virtual workloads.

While vSwitch was an appropriate entrée for VMware, the company until now has had a reputation for focusing on its own world. "The DynamicOps acquisition changed this conversation," Bartoletti added. "DynamicOps already manages non-VMware hypervisors as well as workloads running on open virtualization platforms in multiple clouds. (Read: heterogeneous VM support.) And now with Nicira, VMware owns a software defined networking solution that was designed for heterogeneous hypervisor and cloud environments."

Moreover, Nicira's NVP is a pure network hypervisor, effectively a thin software layer between the physical network and the virtual machine which enables network virtualization to work with existing physical datacenter network devices, IDC said in a research note.

"The Open vSwitch runs natively in the hypervisor and is managed by Nicira. Nicira is attractive in that it is designed to enable any hypervisor on the same logical network, providing a common network experience across the datacenter. The virtual networks bring flexibility and agility to datacenter designs while enabling isolation to support multi-tenancy," the note said.

"VMware clearly recognized the need for more advanced networking years ago and has been actively working with its networking partners to advance the network functionality in the virtual datacenter. To date, however, the company has not been perceived as a leading voice in the broader networking community. The acquisition underscores the fact that VMware can no longer rely on partners for networking expertise. Networking is a critical pillar in private cloud delivery, and Nicira gets VMware closer to having a full solution."

There are still many open questions. For one, what impact will VMware's move ultimately have on its alliance with Cisco and on VCE, the venture the two companies and EMC formed to develop converged cloud infrastructure? Just last week VCE named 19-year Cisco veteran Praveen Akkiraju as its CEO.

Another burning question is whether VMware will make a push into the OpenStack consortium. Nicira is a key contributor to the evolving Quantum networking component of OpenStack. Will VMware support other components of OpenStack?

As these and many other questions play out, a clearer picture of where VMware is headed has unfolded. And VMworld is still a month away.

Posted by Jeffrey Schwartz on 07/26/2012 at 1:14 PM


Amazon Unleashes Its Cloud Security Controls

While most cloud service providers have been reluctant to publicly disclose details that would better explain how they secure and ensure availability of their datacenters, some have pointed to Amazon Web Services as the most prominent holdout. But Amazon this week took an important step toward discrediting that claim by documenting its security practices.

Amazon submitted a 42-page document to the Cloud Security Alliance's (CSA) Security, Trust & Assurance Registry (STAR) that details its security practices. That Amazon has made its security practices public is significant in its own right. However, the fact that Amazon did so in line with the CSA's detailed questionnaire and filed it in the registry could motivate numerous other holdouts to answer the same questions. Cloud providers have dragged their feet on publicly disclosing their security controls, seemingly because they didn't want to give away any competitive secrets. But the largest public cloud provider taking this step diminishes that rationale.

Since STAR was launched last year, only a handful of companies have submitted profiles, among them Box, Microsoft, SHI International and Verizon's Terremark unit. Jim Reavis, the CSA's executive director, recently told me that while he was hoping to see more participation by now, he anticipates most major cloud providers will start to contribute later this year.

The CSA believes STAR will be a key step forward, though not the end-all, in adding transparency, on a level playing field, to how providers implement security controls. The appeal of STAR is that it is a publicly available registry open to any prospective customer, and cloud providers are all expected to answer the same 140 questions.

"In being added to the registry, Amazon Web Services is now among those recognized as a security conscious organization, and will gain added exposure to information security, assurance and risk management professionals who are a key part of the cloud service procurement process," the CSA said in a statement.

The document discloses Amazon's approach to risk management. For example, Amazon re-evaluates the controls it uses to mitigate risks at least twice per year. Security policies are based on the Control Objectives for Information and related Technology (COBIT) framework, ISO 27001/27002 controls, PCI DSS and the National Institute of Standards and Technology (NIST) Publication 800-53 Rev 3 (the latter recommends security controls for federal information systems).

On a general level, Amazon said it addresses risk management by providing security training to employees and conducting application security reviews aimed at ensuring confidentiality, integrity and data availability. Moreover, according to the filing, Amazon regularly scans all IP addresses that connect to the Internet for vulnerabilities, though the scans do not include customer instances.

When it discovers and remediates threats, Amazon alerts third parties. The company conducts its assessments both internally and through outside security firms. Amazon warned that the security scans are intended to address the health of its infrastructure and should not replace the need for customers to conduct their own inspections.

Instead of the common SAS 70 Type II audit, the report noted, Amazon now publishes its accepted successor, the Service Organization Controls 1 (SOC 1) Type II report, performed under the Statement on Standards for Attestation Engagements No. 16 (SSAE 16) and the International Standards for Assurance Engagements No. 3402 (ISAE 3402) standards.

The SOC 1 section of the report describes a number of controls including user access, logical security, data handling, physical and environmental safeguards, change management, data integrity and availability and incident handling.

The STAR filing also outlines key compliance issues, including control ownership, how IT can audit the cloud provider, how Sarbanes-Oxley compliance is achieved, HIPAA, data location, e-discovery, multi-tenancy, hypervisor vulnerabilities, encryption, data ownership, server security, identity and access management, availability, denial of service attacks, business continuity and access to the datacenter. On that last point (and as is fairly well known), Amazon does not allow customers to tour its datacenters, and third-party and internal employee access is limited.

Posted by Jeffrey Schwartz on 07/26/2012 at 4:59 PM


Subscribe on YouTube