CA To Offer Cloud Backup Via Windows Azure

CA Technologies on Tuesday said it will offer its popular ARCserve backup and recovery solution as a service using Microsoft's Windows Azure cloud platform.

The move makes CA the latest traditional backup and recovery software company to launch a Software as a Service (SaaS) offering. Just last week, CA's key rival in the backup and recovery field, Symantec, said it will offer a cloud-based version of its popular Backup Exec. The new Symantec service, called Backup Exec.cloud, will allow customers to stream their backups to Symantec datacenters.

CA's ARCserve will be available as a subscription service using Azure as the cloud platform and storage repository. It is the first time CA has offered ARCserve on a subscription basis. Azure will be the exclusive cloud platform for ARCserve's SaaS solution, though CA is developing other non-SaaS technologies that will work with other cloud providers.

"We think that their SLAs, their security, everything about the Azure service really services our market well, so we're excited to partner with those guys," said Steve Fairbanks, CA's VP of product management for data management. "What's unique here is you're getting the combined local backup capabilities and the cloud storage capabilities all sold as a convenient service."

CA last year released the latest version of its ARCserve software, called ARCserve D2D, which the company says allows backups and bare-metal restores from physical and virtual servers. It lets customers take a complete snapshot of a server or desktop and store a copy locally, shortening recovery time objectives, according to Fairbanks.

Customers will "have all of those very fast recovery capabilities locally and then we give them the ability to specify the critical files that they would like to store in the Azure cloud for a complete disaster scenario," he said. "Heaven forbid if they were to lose their entire site."

Enterprise Strategy Group analyst Lauren Whitehouse said the CA offering will appeal to shops that want to back up data both locally and offsite. "The ARCserve-Azure solution can be a hybrid approach where there is data stored locally (at the client's site) to facilitate day-to-day operational recoveries and data stored in the cloud for longer-term retention and, in some cases, the doomsday copy should the client's primary location or data copies not be available for recovery," Whitehouse said in an e-mail.

The service will use Secure Sockets Layer (SSL) connections and 256-bit AES encryption for security.
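
CA hasn't published implementation details beyond those two facts, but the general shape of client-side protection like this is easy to illustrate. Below is a minimal sketch in Python, assuming the `cryptography` package and its AES-GCM primitive (the article doesn't specify a cipher mode); the file name and key handling are hypothetical, not ARCserve's actual mechanics.

```python
# Hypothetical sketch: encrypt a backup file with 256-bit AES before it
# leaves the client. Key management and file names are invented; CA has
# not disclosed how the ARCserve service actually does this.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

with open("critical_files.tar", "wb") as f:    # stand-in backup payload
    f.write(b"example data")

def encrypt_backup(path: str, key: bytes) -> bytes:
    """Encrypt a file with AES-256-GCM; returns nonce + ciphertext."""
    assert len(key) == 32                      # 256-bit key
    nonce = os.urandom(12)                     # must be unique per message
    with open(path, "rb") as f:
        plaintext = f.read()
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

key = AESGCM.generate_key(bit_length=256)
blob = encrypt_backup("critical_files.tar", key)
# `blob` would then be streamed to the cloud store over an SSL/TLS connection.
```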

CA has not disclosed pricing, but ARCserve subscriptions will include the fees for using Azure. Setting up an ARCserve account will automatically establish an Azure subscription. The service is expected to go into beta this summer and to be commercially available this fall.

Posted by Jeffrey Schwartz on 05/10/2011 at 1:14 PM


Internap Plans Dual VMware-OpenStack Cloud Services

Internap plans to launch a dual-hypervisor compute cloud service giving customers the choice of a VMware-based stack or the open source OpenStack platform.

The move marks the company's first foray into the public cloud. Its current portfolio consists of colocation service, managed hosting (including dedicated private clouds) and a content delivery network (CDN).

Atlanta-based Internap offers a proprietary network backbone it calls Managed Internet Route Optimizer, or MIRO, which the company claims boosts network performance while reducing latency. The company also runs redundant carrier backbones throughout the world. Internap, which reported $244 million in revenue last year, says it serves 2,700 enterprises worldwide.

As for the new offering, which will also use MIRO, customers can choose between a more costly cloud service built on a VMware hypervisor and a less expensive OpenStack-based cloud service powered by Xen. The company said it is providing the ability to transition from the VMware stack to OpenStack.

"What we find when we talk to CIOs, almost all have done server consolidation and virtualization using VMware internally, so they are very comfortable with VMware, but they realize that open source hypervisors are likely the future just the way Linux sort of supplanted Solaris," explained Paul Carmody, Internap's senior vice president of business development and product management.

"They're interested in going with a partner that offers them a cloud that mimics what they've done internally with server consolidation, with the option to test out and eventually migrate into more of an open source hypervisor model, for better cost profiling," Carmody said.

Internap announced support for OpenStack in January, when it launched a public cloud storage service. The OpenStack-based service went into beta at the time and will be generally available by the end of June.

The cloud-based compute services will be generally available in the third quarter. Carmody said there will be a private beta with existing customers beforehand.

Posted by Jeffrey Schwartz on 05/09/2011 at 1:14 PM


Microsoft Argues Hidden Costs of Google Apps

With its beta of Office 365 now out, Microsoft has once again come out swinging against its archrival Google, this time arguing that the lower cost of Google Apps for Business may be a mirage.

Google Apps for Business costs $50 per user per year. Microsoft's equivalent offering, the forthcoming Office 365 Plan P1, which includes Exchange Online, calendaring and Office Web Apps, will cost $72 per user per year.

In a new whitepaper titled "Counting the Hidden Costs of Google Apps," Microsoft points to the following Google add-ons and their associated price tags:

  • Postini, which offers security and Gmail retention: $33 per year
  • Power Panel, which provides delegation of administrative tasks: $8 per year
  • Google Apps help desk support: $360 per year
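
Taken at face value, the arithmetic behind Microsoft's argument is easy to check. Here's a quick tally in Python using only the figures quoted above (the posts don't make clear whether each add-on is priced per user or per account, so treat the total as illustrative):

```python
# Back-of-the-envelope tally of the "hidden Google tax" claim, using only
# the figures quoted above. Whether a given shop needs every add-on is
# exactly the point of contention.
google_base = 50            # Google Apps for Business, per user per year
addons = {
    "Postini security and Gmail retention": 33,
    "Power Panel admin delegation": 8,
    "Google Apps help desk support": 360,
}
office365_p1 = 72           # Office 365 Plan P1, per user per year

google_total = google_base + sum(addons.values())
print(f"Google Apps plus all add-ons: ${google_total}/year")  # $451/year
print(f"Office 365 Plan P1:           ${office365_p1}/year")  # $72/year
```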

There's a laundry list of other potential costs, but you get the point. Tom Rizzo, Microsoft's senior director of online services, calls it "The Hidden Google Tax."

"The 'Google Tax' is unnecessary and can add up quite quickly," Rizzo said in a blog post. "This is especially true when running Google Apps alongside Microsoft Office. On the surface, Google Apps may seem like acceptable replacements for enterprise-grade products such as Microsoft Exchange Server or Microsoft Office. But many IT organizations have found that Google Apps bring extra hidden costs."

Opinions will vary as to whether this is Microsoft's latest attempt at FUD, or if these add-ons really add up. A commenter on Rizzo's own post raised the question of how many shops will opt to pay the $360 per year for Google Apps help desk support. Another said Rizzo's post was spot-on.

For shops where cost will dictate the choice of vendor, this debate will rage on, forcing companies to weigh their actual needs and migration costs.

What's your take on Microsoft's latest assault? Fact or FUD? Drop me a line at [email protected].

Posted by Jeffrey Schwartz on 05/05/2011 at 1:14 PM


HP Exec Leaks Cloud Plans

More information on Hewlett-Packard's public cloud strategy came to light on Tuesday when The Register's Cade Metz stumbled upon a key HP executive's LinkedIn profile.

The exec, Scott McClellan, chief technologist and interim VP of engineering and cloud services at HP, has since removed the info from his profile. But The Register captured the information, which reveals that McClellan was responsible for helping build, from scratch, a distributed object storage business; a service that offers compute, networking and block storage; and what appears to be a Platform as a Service (PaaS) offering optimized for Java, Ruby and other open source languages.

According to The Register, HP plans to announce more about its cloud services at VMworld this August, suggesting the offerings will be based on VMware's technology. Here's the LinkedIn profile information captured before McClellan removed the data:

HP "object storage" service: built from scratch, distributed system, designed to solve for cost, scale, and reliability without compromise.

HP "compute," "networking" and "block storage" service: an innovative and highly differentiated approach to "cloud computing" -- a declarative/model-based approach where users provide a specification and the system automates deployment and management.

Common/shared services: user management, key management, identity management & federation, authentication (incl. multi-factor), authorization, and auditing (AAA), billing/metering, alerting/logging, analytics.

Website and User/Developer/Experience. Future HP "cloud" website including the public content and authenticated user content. APIs and language bindings for Java, Ruby, and other open source languages. Full functional GUI and CLI (both Linux/Unix and Windows).

Quality assurance, code/design inspection processes, security and penetration testing.

HP CEO Leo Apotheker disclosed the company's plan to offer a cloud service during its analyst meeting back in mid-March. At the time, the company said it will first offer a storage service toward the end of this year or early next year. That would be followed by a compute service and, ultimately, a PaaS offering.

The LinkedIn entry appears consistent with that plan, though it suggests HP is favoring an open source approach to its PaaS. It remains to be seen whether HP will support Microsoft's .NET Framework or move forward with plans to deliver technology based on Microsoft's Windows Azure platform.

Last summer, HP, along with Dell and Fujitsu, announced plans to deliver technology based on Microsoft's Windows Azure Appliance. So far, none of the companies have delivered the appliances or services based on the platform.

Posted by Jeffrey Schwartz on 05/05/2011 at 1:14 PM


Rackspace To Shut Down Slicehost

Rackspace Hosting is shutting down its Slicehost service within the next 12 months, the company said in a letter to customers.

Acquired by Rackspace in 2008, Slicehost is a managed hosting provider that Rackspace maintained as a separate business unit. The move is likely to be unwelcome news to those who must migrate from the Slicehost service.

"With two brands, two control panels and two sets of support, engineering and operations teams, it has been a challenge to keep development parity between the products," wrote Mark Interrante, Rackspace VP of product.

Rackspace's emphasis on its OpenStack open source project and the need to convert to IPv6 are the two primary reasons it has decided to shut down Slicehost. Rackspace plans to convert all Slicehost accounts to Cloud Servers over the next year.

By converting to Rackspace Cloud Servers accounts, the company said, customers will be better positioned for IPv6 and will gain access to Cloud Files, the Cloud Files content delivery network and the recently released Cloud Load Balancers.

"Naturally, this decision has not been easy," Interrante said. "There has been extensive planning, and will continue to be more, to ensure this change is as seamless as possible for everyone."

Following the company's announcement, Interrante posted a detailed Q&A outlining the transition, and Rackspace's rationale for shutting down Slicehost. "We truly believe this change will be in the best interest of Slicehost customers over the long term," he said. "A big reason we purchased Slicehost was to learn from their technology and their customers so we could build up the Rackspace Cloud solutions to the Slicehost level of excellence. We want to retain or improve your product experience, not make it worse."

Posted by Jeffrey Schwartz on 05/04/2011 at 1:14 PM


Symantec Takes Backup to the Cloud

One of the most popular PC and server backup and recovery software products is Symantec's Backup Exec, and the company said this week that cloud-based support is on the way.

Symantec announced Backup Exec.cloud at its annual Vision conference in Las Vegas. The new offering is targeted at small and medium businesses and branch offices of larger enterprises that want to offload backups to a cloud-based service.

Backup Exec.cloud will complement Symantec's plans to offer expanded Software as a Service (SaaS) solutions for security, e-mail management and data protection, the company said. The new service, due out later this year, will allow customers to stream backups over SSL connections to Symantec datacenters.

While Symantec will compete with a slew of other cloud providers offering such services, Backup Exec's strong installed base should give it an advantage with shops looking to migrate their backups offsite.

The service will let users restore individual files, the company said. Pricing was not announced, but the service will be subscription-based.

Posted by Jeffrey Schwartz on 05/04/2011 at 1:14 PM


Amazon's Big Mistake

[UPDATE: Amazon released a detailed report explaining the cause of the outage on Friday. Read the story here.]

Amazon Web Services' four-day outage was a defining moment in the history of cloud computing -- not only for its impact but for the company's deafening silence.

The widely reported outage at Amazon's Northern Virginia datacenter left a number of sites crippled for several days, though Amazon most recently reported that service has been restored. However, the company has acknowledged that 0.07 percent of the Elastic Block Store (EBS) volumes apparently won't be fully recoverable.

"Every day, inside companies all over the world, there are technology outages," Rackspace Chief Strategy Officer Lew Moorman told The New York Times. "Each episode is smaller, but they add up to far more lost time, money and business."

As for the Amazon outage, he added: "We all have an interest in Amazon handling this well." Did Amazon handle this well? Let's presume the company did everything in its power to remedy the problem and get its customers back online. Amazon has promised to issue a post-mortem once it gets everyone restored and figures out what went wrong.

But the company went dark from a communications perspective. Sure, it posted periodic updates on its Service Health Dashboard, but the company issued no other public statements on the situation as it was unfolding (though it was in direct communication with affected customers). Considering how visible Amazon technologists are on social media, including Twitter, a mere reference to the dashboard felt shallow.

"Most customers are saying today they have not been very transparent and open about what has exactly happened," Forrester analyst Vanessa Alverez told Bloomberg TV. "Their public relations to date has not been up to par."

Consider the communiqué of one Amazon customer affected by the outage. In a blog post called "Making it Right..." HootSuite explained to customers what happened and how it would make good on the downtime it experienced. Although its terms of service require reimbursement only after a 24-hour outage, and it was down for just 15 hours, HootSuite said it would offer credits anyway.

"We acknowledge users were inconvenienced and we want to make things right," the company said.  "We are taking steps to increase redundancy of our services and data across multiple geographic regions. This was a bit of a unique outage which is highly unlikely to occur again, but we'll be even more prepared for future emergencies."

During the outage, and as of this writing a week after it first hit, no such communication has come from Amazon. Pund-IT analyst Charles King said in a research note that datacenter failures, even major ones, are inevitable, but communication is critical. He wrote:

"The fact that disaster is inevitable is why good communications skills are so crucial for any company to develop, and why Amazon's anemic public response to the outage made a bad situation far worse than it needed to be. Yes, the company maintained a site that regularly updated how repairs were progressing, and, to its credit, Amazon says it will publish a full analysis of the outage after its investigation is complete.

"But while the company has been among the industry's most vocal cloud services cheerleaders, it seemed essentially tone deaf to the damage its inaction was doing to public perception of cloud computing. At the end of the day, we expect Amazon will use the lessons learned from the EC2 outage to significantly improve its service offerings. But if it fails to closely evaluate communications efforts around the event, the company's and its customers' suffering will be wasted."

I remember during the dotcom boom over a decade ago when companies like Charles Schwab, E-Trade and eBay had highly visible outages that affected many thousands of customers. They took big PR hits for their lack of availability but their Web businesses prospered nonetheless.

While Amazon's outage will elevate the discussion of the importance of resiliency and redundancy (those discussions were already happening), it seems highly unlikely to alter the move to cloud computing, even if it serves as a historic speed bump. "We shouldn't let Amazon off the hook and should expect a very thorough postmortem. But in no way does this change the landscape for the age-old public-private debate," writes analyst Ben Kepes.

While Amazon's outage was a black eye for cloud computing, providers of all sizes, including Amazon, will undoubtedly learn from the mistakes that were made, both technical and procedural. Hopefully, that will include better communications moving forward.

Posted by Jeffrey Schwartz on 04/28/2011 at 1:14 PM


Dell Boomi Upgrades Cloud Middleware

Boomi, the provider of cloud integration software acquired by Dell late last year, has upgraded its AtomSphere software with improved middleware connectivity, support for large datasets and extended monitoring capabilities.

AtomSphere is designed to connect Software as a Service cloud offerings from the likes of Salesforce.com, NetSuite and others to on-premises systems.

"Larger enterprises are continuing to adapt SaaS, and as a result the integration requirements are growing in scale and complexity," said Rick Nucci, Boomi's founder and now CTO of the Dell business unit. "We are seeing enterprises look at cloud and look at Boomi to help them integrate and then proceed to fit them into their environment as efficiently as possible and adhere to current investments that they've made."

AtomSphere Spring 11 includes a new middleware cloud gateway based on a Java Message Service (JMS) connector that links to existing middleware offerings from IBM, Progress Software, Tibco and webMethods. The gateway connects to more than 70 SaaS applications, Nucci said.

Previously, AtomSphere connected directly to the apps but Nucci said customers wanted the ability to link to their existing middleware "because they've built intelligence or logic or validation routines into that middleware."

The new release also adds support for change data capture, or CDC, as well as large data processing on the order of hundreds of gigabytes per atom. For Salesforce.com shops, AtomSphere now offers optimized integration thanks to support for that company's Bulk API.

"It's a pretty complex API," Nucci said. "The approach we've taken abstracts a lot of those technical details and allows the user to give the data set to our connector and have our connector optimize and transmit the data up to Salesforce."

A new AtomSphere API allows customers to integrate its monitoring capabilities with their own systems management consoles.

The company also has launched a partner certification program. Dell Boomi now has 70 partners, many of them SaaS providers and systems integrators. Nucci said the company is looking to bolster that number, since partners are its primary route to market.

"As part of that scale and growth comes the need to ensure quality and make sure we have a very scalable and reliable and consistent means to acknowledge and accredit a partner who is investing in learning Boomi and really demonstrating that they get it," Nucci said.

To attain certification, partners will need to complete two implementations, pass an exam and commit to annual recertification.

Posted by Jeffrey Schwartz on 04/27/2011 at 1:14 PM


BMC Targets Cloud Lifecycle Management

BMC Software has upgraded its Cloud Lifecycle Management platform to support creation and management of complete private and hybrid cloud stacks.

The introduction of CLM 2.0 comes a year after the first release, which focused on virtualization management and datacenter automation, thanks to the company's $800 million acquisition of BladeLogic. BMC describes CLM 2.0 as a cradle-to-grave cloud provisioning and management platform.

BMC said it made architectural improvements to CLM with two key new features. One is "service blueprints," which are designed to let administrators create configurable, full-stack, multi-tiered cloud offerings for their users.

"They have been designed to be incredibly flexible and support a really broad range of cloud services being delivered through the environment," said Lilac Schoenbeck, BMC's senior manager for solutions marketing and cloud computing.

The second key feature is a service governor, which lets customers set policies governing how cloud services are configured and managed, Schoenbeck said.
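
BMC hasn't published the blueprint format in connection with this release, but the two concepts are easy to picture together. Here's a hedged sketch in Python: a multi-tier blueprint expressed as plain data, plus a governor-style check that rejects requests violating an administrator's policy. All names and limits are invented for illustration.

```python
# Illustrative only: a full-stack, multi-tier "service blueprint" as plain
# data, paired with a governor-style policy check. This mirrors the concepts
# described above, not BMC's actual blueprint format.
blueprint = {
    "name": "three-tier-web-app",
    "tiers": [
        {"role": "web", "image": "apache-2.2", "count": 2, "ram_gb": 4},
        {"role": "app", "image": "tomcat-6",   "count": 2, "ram_gb": 8},
        {"role": "db",  "image": "oracle-11g", "count": 1, "ram_gb": 16},
    ],
}

policy = {
    "max_total_vms": 6,
    "allowed_images": {"apache-2.2", "tomcat-6", "oracle-11g"},
}

def governor_allows(bp: dict, pol: dict) -> bool:
    """Return True only if the request fits the admin-defined policy."""
    total_vms = sum(tier["count"] for tier in bp["tiers"])
    images_ok = all(t["image"] in pol["allowed_images"] for t in bp["tiers"])
    return total_vms <= pol["max_total_vms"] and images_ok

if governor_allows(blueprint, policy):
    print("approved: provision each tier")   # hand off to the automation layer
else:
    print("rejected by policy")
```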

CLM 2.0 also includes a planning and design tool that helps determine capacity needs, and it allows the use of BMC's ProactiveNet Performance Management (BPPM) tool to monitor public cloud services running in Amazon's EC2 and Microsoft's Windows Azure environments. Schoenbeck said the company is working with other service providers to create adapters, but BMC also has an API that will work with any cloud provider.

Posted by Jeffrey Schwartz on 04/27/2011 at 1:14 PM


Is Microsoft Betting Too Much on the Cloud?

Microsoft International President Jean-Philippe Courtois earlier this month told Bloomberg that the company will spend a whopping 90 percent of its $9.6 billion research and development budget on cloud computing this year.

That raises the question: Is Microsoft putting all its eggs in one basket? Sourya Biswas asks the same thing in a blog post this week. A proponent of cloud computing and, according to his LinkedIn profile, an MBA student at the University of Notre Dame and a former risk analytics manager at Citigroup, Biswas wonders if Microsoft is throwing the baby out with the bathwater. He writes in his blog:

Make no mistake; I believe that cloud computing is the technology of and for the future. But allocating 90 percent of the research budget on an emerging technology without paying adequate attention to established products in which it has dominance is too big a risk in my book. Especially since that dominance is under threat, with the rise of Firefox and Chrome against the Microsoft Internet Explorer, and the growing popularity of Linux versus Microsoft Windows.

I believe there may be a sense of hubris in the way Microsoft is neglecting its established revenue lines. While its Windows still powers more than 80% of the computers in the world, there are several complaints against the operating system. In fact, many would argue that a lot of that $9.6 billion R&D should have been allocated to making the next edition of Windows bug-free, resource-light and malware-resistant.

Despite Microsoft's preaching that it is "all in" on the cloud, the company has taken a measured approach, emphasizing that users will continue to work on local client devices and have access to their data offline.

While keeping its eye on rivals such as Google, Salesforce.com and Amazon Web Services, Microsoft needs to keep investing in technologies such as Windows, Office, SharePoint and Lync. Even if they all ultimately have substantial cloud components, the offline world will remain a critical component to users and Microsoft customers will expect significant investments in technologies that support the local device. I think Microsoft knows and understands this.

Time will tell what Microsoft's R&D emphasis will bring. But Biswas' point that Microsoft needs to invest in Windows and Internet Explorer is important. Do you think that Microsoft's plan to invest 90 percent of its R&D budget on cloud computing is going too far? Or is the company just putting a cloud tag on everything it does? Drop me a line at [email protected].

Posted by Jeffrey Schwartz on 04/21/2011 at 1:14 PM


HP Upgrades Cloud Automation Software

Hewlett-Packard Co. last week released Cloud Services Automation 2.0, an upgraded version of its toolset aimed at simplifying the transformation of premises-based apps to those that can run in the cloud.

CSA 2.0 not only accelerates the deployment of cloud infrastructure but it expedites the deployment and configuration of the applications, said Paul Muller, VP of strategic marketing for HP Software products.

"Most applications take a considerable amount of manual time and effort to tune and configure," Muller said. "Even if the imaging of that application is being automated, it's often the configuration and tuning of that application to get it ready for production workloads that is the last mile required to make an application run in an optimal fashion in a cloud environment. That's exactly what we've done with Cloud Service Automation 2.0, is package up everything from infrastructure through platform through application deployment."

Among the key capabilities in CSA 2.0 are more than 4,000 new or updated workflows and best practices for deploying infrastructure, applications and middleware, or Platform as a Service (PaaS), according to Muller. That capability was enabled by HP's acquisition of Stratavia back in August.

Stratavia offers deployment, configuration and management software for databases, middleware and packaged apps. HP now calls that technology Database Middleware Automation, or DMA.

CSA 2.0 also includes a service request catalog aimed at minimizing the need to use multiple service providers' portals, providing a simpler, consumer-like interface for selecting and requesting services.

"Once the service is requested, the deployment is seamlessly automated behind the scenes," Muller said. CSA 2.0 employs new intelligent resource management and policy enforcement that can address the need for highly available infrastructure, least expensive service or infrastructure that's pinned to a specific geography.

Pricing for the software starts at $35,000.

Posted by Jeffrey Schwartz on 04/20/2011 at 1:14 PM


Rackspace Adds Load Balancers

Rackspace Hosting this week added a new load balancing service aimed at letting customers rapidly scale capacity.

Called Rackspace Cloud Load Balancers, the service is intended for those with mission-critical Web apps. It lets customers configure cloud servers or dedicated hosts with more capacity as workloads require.

"We designed it in a way where a load balancer is provisioned for a customer in literally a matter of seconds, always under a minute," said Josh Odom, a product line leader at Rackspace. "It's designed to be highly configurable."

Rackspace designed the product to be interoperable with its RackConnect solution, which allows Rackspace cloud customers to mix and match dedicated server infrastructure with cloud servers, according to Odom.

Upon establishing an account with a Rackspace Cloud Server, a customer can log into the control panel and select a cloud load balancer from the Hosting menu. Customers can also add a cloud load balancer via the API.
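
For the API route, creating a load balancer amounts to a single authenticated REST call. The Python sketch below follows the general shape of Rackspace's Cloud Load Balancers v1.0 API as documented at the time; the token, account ID and node address are placeholders, so treat the exact request format as an assumption and verify it against Rackspace's official documentation.

```python
# Sketch of creating a load balancer via the Cloud Load Balancers REST API.
# Token, account ID and node address are placeholders; the request shape
# approximates the documented v1.0 API and should be checked against
# Rackspace's official docs.
import requests

ACCOUNT_ID = "123456"             # placeholder account number
TOKEN = "your-auth-token"         # obtained from the Rackspace auth service
URL = (
    "https://ord.loadbalancers.api.rackspacecloud.com"
    f"/v1.0/{ACCOUNT_ID}/loadbalancers"
)

payload = {
    "loadBalancer": {
        "name": "web-lb",
        "protocol": "HTTP",
        "port": 80,
        "virtualIps": [{"type": "PUBLIC"}],
        "nodes": [
            {"address": "10.180.1.1", "port": 80, "condition": "ENABLED"},
        ],
    }
}

resp = requests.post(URL, json=payload, headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
print(resp.json()["loadBalancer"]["id"])   # ID of the newly created balancer
```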

The service is powered by Cambridge, U.K.-based Zeus Technology, and includes static IP addresses, built-in high availability, support for multiple protocols and algorithms, API and control panel access, and session persistence, Rackspace said.

Pricing for the load balancing service starts at 1.5 cents per hour, or about $10.95 per month (based on a 730-hour month). Customers are charged for a Cloud Server only if they build one.

Posted by Jeffrey Schwartz on 04/20/2011 at 1:14 PM

