Cloud provider Skytap is looking to simplify use of its service, particularly as it applies to providing compatibility with in-house datacenters.
Skytap said it is providing support for the Open Virtualization Format (OVF), a Distributed Management Task Force (DMTF) standard for packaging and distributing virtual machines.
By supporting OVF, users of Skytap's cloud service will have an efficient and flexible way to import and export existing virtualized configurations without making changes to them, said Brett Goodwin, the company's VP of marketing and business development. That means it will support Microsoft Hyper-V's VHD format, Amazon Web Services' Amazon Machine Image (AMI), Xen Disk Image and the QEMU qcow2 format associated with KVM.
Until now, Skytap users were confined to using VMware's VMDK file format. "It [OVF] improves the portability and decreases the platform dependence, and it also allows IT to leverage a common set of tools when they are working with VM workload software configurations on their end in the private infrastructure and on the hybrid and public cloud," Goodwin said.
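An OVF package is essentially an archive of disk images plus an XML descriptor that inventories them, which is what makes hypervisor-neutral import and export possible. The stripped-down descriptor below is a hand-rolled illustration (the file names and disk entries are invented, and the attribute namespacing is simplified relative to the full DMTF schema), but it shows the kind of metadata an import tool reads:

```python
import xml.etree.ElementTree as ET

# A minimal OVF envelope; real descriptors carry many more sections
# (hardware, networks, product info) defined by the DMTF OVF specification.
OVF_DESCRIPTOR = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File id="file1" href="webserver-disk1.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disk information</Info>
    <Disk diskId="vmdisk1" fileRef="file1" capacity="20"
          format="http://www.vmware.com/interfaces/specifications/vmdk.html"/>
  </DiskSection>
</Envelope>"""

NS = {"ovf": "http://schemas.dmtf.org/ovf/envelope/1"}

def referenced_files(descriptor):
    """Return the disk files an OVF package references."""
    root = ET.fromstring(descriptor)
    return [f.get("href") for f in root.findall(".//ovf:File", NS)]

print(referenced_files(OVF_DESCRIPTOR))  # ['webserver-disk1.vmdk']
```

Because the descriptor, not the hypervisor, defines the package, any OVF-aware platform can reconstruct the machine from the referenced disks.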
In addition, Skytap has added advanced notification, aimed at alerting both end users and IT when thresholds, such as compute or storage usage, are exceeded. The capability is intended to avoid surprise bills, Goodwin explained. IT can set customized alerts that inform administrators when users approach certain usage thresholds, such as 90 percent of a budgeted storage quota.
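The threshold check behind that kind of alerting can be sketched in a few lines. The resource names and 90 percent default below are illustrative, borrowed from the storage example above, not Skytap's actual schema:

```python
def usage_alerts(usage, quotas, threshold=0.9):
    """Flag any resource whose consumption crosses the alert threshold.

    `usage` and `quotas` map resource names to consumed and budgeted
    amounts respectively.
    """
    alerts = []
    for resource, quota in quotas.items():
        ratio = usage.get(resource, 0) / quota
        if ratio >= threshold:
            alerts.append(f"{resource} at {ratio:.0%} of budgeted quota")
    return alerts

print(usage_alerts({"storage_gb": 460, "compute_hours": 120},
                   {"storage_gb": 500, "compute_hours": 400}))
# ['storage_gb at 92% of budgeted quota']
```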
The company has also added self-healing network automation to its service for those running hybrid cloud deployments, which is quite common among its customer base, Goodwin said. The self-healing features include auto-detecting VPN connection failures and automatically re-establishing those links.
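Skytap hasn't published the mechanics, but detect-and-reconnect logic of this sort generally amounts to a probe loop with bounded retries, along these lines (the function names and callback structure here are an assumption for illustration):

```python
import time

def heal_vpn(check_link, reconnect, max_attempts=3, backoff_seconds=0):
    """Detect a failed VPN link and retry until it is re-established.

    `check_link` and `reconnect` are caller-supplied callables; a real
    monitor would probe the tunnel endpoint instead.
    """
    if check_link():
        return "healthy"
    for attempt in range(1, max_attempts + 1):
        reconnect()
        if check_link():
            return f"re-established after {attempt} attempt(s)"
        time.sleep(backoff_seconds)
    return "escalate to operator"

# Simulate a link that comes back on the second reconnect attempt.
state = {"up": False, "tries": 0}
def fake_reconnect():
    state["tries"] += 1
    state["up"] = state["tries"] >= 2

print(heal_vpn(lambda: state["up"], fake_reconnect))
# re-established after 2 attempt(s)
```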
Goodwin said its service is primarily used by those who develop and test applications, though it is also used for product and proof-of-concept demonstrations, as well as for IT and technical training.
Posted by Jeffrey Schwartz on 11/29/2011 at 1:14 PM
AT&T has extended its cloud portfolio with a Platform as a Service (PaaS) offering aimed at letting business users, enterprise developers and ISVs build, test and run their apps in the telco's hosted environment.
Launched this week, AT&T Platform as a Service will allow application developers and tech-savvy business people to build and deploy apps using either AT&T-provided tooling or Eclipse-based development tools. Those using AT&T's Web-based tools and templates don't require coding expertise, according to AT&T.
The tools consist of templates that let customers either develop their own apps or use 50 pre-built ones. The tools also allow developers to configure their apps for mobile devices and add social networking features. The service is built on LongJump's platform, a Java-based PaaS whose templates enable non-technical users to build line-of-business apps and let developers build custom applications with Eclipse-based tools.
AT&T's entrée suggests the PaaS market is poised to mature, according to Forrester analyst Stefan Ried. "AT&T has the potential to get into a real volume business with this offering bridging the gap between consumer style services and corporate usage of PaaS -- similar to what Google managed around email and the rest of Google's applications," Ried wrote in a blog post.
Will telecom giants such as AT&T and Verizon ultimately seize a big piece of the PaaS pie? While they have the advantage of their robust network infrastructures, players such as Google, Microsoft, Red Hat and VMware have aggressive plans with their own PaaS offerings. But the telcos promise to make it an even more heavily contested battle in 2012.
Posted by Jeffrey Schwartz on 11/17/2011 at 1:14 PM
ScaleXtreme, a company that lets IT administrators and service providers manage public and private clouds, this week updated its service to allow customers to model, configure and launch servers.
The company's new Dynamic Server Assembly lets IT pros who use ScaleXtreme's Web-based Xpress and Xpert services build templates that represent how a machine is built, rather than binding it to a specific cloud provider or virtual machine stack. Administrators can use those templates to manage systems and apps running on multiple public and private clouds.
"We are talking about a new way of modeling and templating machines that allows you to build a canonical expression of a machine and instantiate that on one or more cloud providers so the machine effectively gets built on demand," said ScaleXtreme CEO and Co-Founder Nand Mulchandani.
ScaleXtreme competes with cloud management providers such as RightScale, though Mulchandani argues that his company is better suited for managing both internal private clouds and public clouds. ScaleXtreme itself is a cloud-hosted service and puts agents on internal servers, allowing IT admins or service providers to create and manage virtual machine templates and VMs; start and stop VMs; and, at the OS layer, configure, patch, audit, monitor and remotely access the system.
The service provides consolidated views of multiple cloud services and internal servers and allows admins to browse the file system; monitor and graph OS metrics; and store, edit and run automation scripts in the cloud.
ScaleXtreme offers a free version of its service, which is limited to one administrator and one cloud. A paid service, which costs $15 per month for each server, provides management of an unlimited number of clouds and administrators.
ScaleXtreme manages clouds from Amazon Web Services, Rackspace and those based on OpenStack and VMware's vCloud. The company last month added support for Citrix Systems' CloudStack. With its support for CloudStack, which Citrix picked up earlier this year with its acquisition of Cloud.com, ScaleXtreme claims it can now manage most public and private clouds.
"We probably cover 80 to 90 percent of the footprint of public clouds or semi-private clouds that you can buy capacity from," Mulchandani said. Among those they don't cover are Eucalyptus and Microsoft's Windows Azure and Hyper-V.
"What Microsoft does not have that the other players have in the market have is a templating, cataloging API layer that allows you to programmatically access all the functions of Hyper-V so you can do things like provision machines and manage machines through the APIs," Mulchandani said. He believes once Microsoft delivers a new capability in its System Center 2012 called System Center App Controller 2012, code-named "Project Concero", that those barriers to managing Azure and HyperV will be lifted. Microsoft released System Center App Controller to beta late last month and said it expects it to be commercially available in the first half of 2012.
Posted by Jeffrey Schwartz on 11/16/2011 at 1:14 PM
CA Technologies wants to help enterprises determine what applications may be suited to move to the cloud.
The company launched Cloud 360 at its annual CA World conference, which took place this week in Las Vegas. Cloud 360 is a portfolio of consulting services bundled with CA's software to model and perform cost-benefit and performance analyses of moving apps to the cloud. It also is intended to help customers develop migration plans.
"It lets CIOs determine which apps or services they want to move to the cloud and which cloud they want to move them to, if any," said Andi Mann, CA's VP of strategic solutions. "Some apps and some services will never go to the cloud. This gives CIOs a real deterministic model for understanding what's in their portfolio that they might be able to get a benefit from moving to the cloud."
Among other things, Mann explained Cloud 360 will let CIOs understand what sort of performance, service levels, security and cost and reliability criteria they need to consider. It will match that up against different cloud options -- such as public clouds, private clouds and Software as a Service -- and it will let customers simulate and model their chosen apps and cloud environments to determine if the service they're considering is suited for their requirements, Mann said.
Since the outcome will ultimately be the purchase of CA's various software offerings, the service will appeal to customers comfortable with going that route. The offering starts off with an app portfolio analysis consisting of a one-day workshop followed by CA's Application Discovery and Portfolio Analysis conducted by the company's consultants using CA's Clarity PPM On Demand tooling.
Once it is determined what apps will be moved to the cloud, CA will help determine service-level agreement requirements using its CA Oblicore On Demand service-level management software. Among other CA wares to be used in helping simulate and determine capacity and virtualization requirements are CA Capacity Management and Reporting Suite, CA Virtual Placement Manager and CA LISA Suite.
The company also launched two new identity and access management (IAM) security services aimed at providing single sign-on to internal and cloud-based applications. Both are cloud-based services that provide access to apps delivered by online providers such as Salesforce.com as well as premises-based systems.
CA IdentityMinder as-a-Service offers password management, user provisioning, management of access requests and reporting and auditing. CA FedMinder as-a-Service offers the cross-domain single-sign-on capability. It supports the SAML 2.0 standard and has policy management capabilities.
Also at CA World, the company launched the Cloud Commons Marketplace and Developer Studio. The Cloud Commons Marketplace is a portal that lets ISVs put their applications up for sale. "This is essentially going to be an app store for the enterprise," Mann said. "Enterprises can go up onto the cloud commons marketplace and buy them and service providers can host them."
The Cloud Commons Developer Studio is a free service that allows developers to build and test apps using the CA AppLogic platform.
Posted by Jeffrey Schwartz on 11/16/2011 at 1:14 PM
Amazon Web Services has opened its seventh global datacenter and its second on the West Coast of the United States. The new facility in Oregon offers a lower-cost alternative to the cloud computing provider's Northern California location.
Like Amazon's other datacenters throughout the world, the Oregon facility will offer multiple Availability Zones, Amazon said on Wednesday. The addition of another datacenter should appeal to customers who want further redundancy, an issue that has come up more after the spate of Amazon outages this year.
Nevertheless, the company is emphasizing that the new Oregon datacenter has lower usage fees than its location in Northern California, on the order of about 10 percent.
"Launching this new lower-priced U.S. West Region today is another example of our commitment to driving down costs for our customers," said Amazon senior VP Andy Jassy in a statement. "Now developers and businesses with operations or end users near the west coast of the United States can use our U.S. West Infrastructure at an even lower cost than they could before."
While it's true that Amazon has frequently reduced its fees, the prices associated with the Oregon facility are the same as those of its East Coast datacenter in Virginia, Amazon said.
The new datacenter offers the portfolio of Amazon's cloud offerings, including Elastic Compute Cloud (EC2), Simple Storage Service (S3), Elastic Block Store (EBS), a variety of database services, Virtual Private Cloud (VPC) and Elastic Load Balancing (ELB).
Among those services not yet available in Oregon are Premium Support, HPC, CloudFront, ElastiCache, Beanstalk, Simple Email Service, Route 53, Direct Connect and Import/Export services, noted Gartner analyst Kyle Hilgendorf in a tweet. "If AWS follows past trends, these missing services will show up in Oregon over coming days/weeks/months," Hilgendorf added.
Posted by Jeffrey Schwartz on 11/10/2011 at 1:14 PM
Google said it is pulling support for the native Gmail app for the BlackBerry, a move not likely to be popular among users of that smartphone. But it doesn't mean Google is walking away from providing connectivity to the BlackBerry for enterprise users.
In a brief blog post on Tuesday, Google said it will no longer support the Gmail App for BlackBerry effective Nov. 22. Users can still run the existing app but it will no longer be supported, Google said. It will be available for download for the next two weeks. Google said BlackBerry users can still access their Gmail through the mobile Web app via the device's Web browser.
Even with a declining share of the overall smartphone market, the BlackBerry still has a sizeable chunk of business users that plan on sticking with the device. For users of Google Apps for Business, the company continues to offer support for the BlackBerry Enterprise Server, through a connector that provides synchronization. It would behoove Google to continue support for that connector if it wants to see more enterprise wins.
So far, Google has had mixed success on getting those huge enterprise wins. While its partner CSC has struggled to bring a large chunk of the city of Los Angeles online two years after winning that contract, it looks like Google is on the cusp of a huge win with General Motors; the auto giant is reportedly looking at Google Apps for up to 100,000 employees. Gartner also recently said Gmail is now viable for enterprises.
While not at the top of the list for those considering Google Apps or Office 365, BlackBerry support is important, especially among government users.
Also, Microsoft's lack of BlackBerry Enterprise Server integration with Office 365 at launch this summer did not go unnoticed by customers and partners, many of whom have said they would delay upgrading from the older Business Productivity Online Services (BPOS) until the BlackBerry support was made available.
Research In Motion two weeks ago released the beta of a service that will link the BlackBerry service to Office 365. As long as Microsoft continues to provide enterprise support for the BlackBerry service through Office 365, I can't imagine Google will kill off its own connector any time soon.
Posted by Jeffrey Schwartz on 11/09/2011 at 1:14 PM
Rackspace is a company synonymous with dedicated hosting and cloud computing services. While hosting and cloud services are different, the company's business model over the past 13 years has been predicated on customers using Rackspace's datacenters.
That has changed this week with the launch of Rackspace Cloud: Private Edition, an offering by which the company will help customers build clouds within their own datacenters. The debut of this new offering has been anticipated for some time.
The company launched Rackspace Cloud Builders back in March, a program aimed at providing training, education and certification to those who want to build clouds based on the OpenStack open source platform. Rackspace gained the capability to offer Cloud Builders from its acquisition of Anso Labs, a professional services firm that has helped several large organizations build OpenStack-based clouds.
In fact, the rationale behind Rackspace's decision to team up with NASA and create the OpenStack Project was to build an ecosystem that it hoped would give it an edge over Amazon Web Services and VMware. "Anyone who wants to build and run OpenStack clouds the way we do will have access to that technology," said Jim Curry, general manager of Rackspace Cloud Builders.
With this new private cloud offering, customers also will have the option of building their own OpenStack-based private clouds using a reference architecture published by Rackspace, which will offer optional remote administration. Alternatively, Rackspace is certifying its partners to build OpenStack-based private clouds for their customers. So far, Rackspace has certified Cloud Technology Partners (cloudTP), MomentumSI, and China's TeamSun.
The current hardware architecture is built on Cisco network switches and Dell servers, though Rackspace said it will add other options next year. Infrastructure automation and management vendors Opscode and RightScale also said they are supporting the new Rackspace private cloud offering.
Rackspace this week also upgraded its RackConnect service, which allows customers to securely link systems running in its public cloud with its managed hosting service. Launched a year ago, 10 percent of Rackspace customers are using RackConnect, said Toby Owen, senior manager of Rackspace Hybrid Cloud Product Solutions. "It has definitely moved from a niche offering to a mainstream capability for many of our customers," Owen said.
With the new release, Rackspace has added automation to RackConnect. For example, if a hosting customer needs more capacity, they can scale using the cloud service, Owen said. "We are trying to reduce the administrative burden and all the messiness around infrastructure provisioning and make it a lot more seamless from the customer perspective," he said.
Rackspace has also added a new user interface to the RackConnect portal, giving customers visibility to both their cloud and dedicated hosting environments and RackConnect itself. Also new in RackConnect is a network security policy interface for customers. With the new interface, customers can configure firewalls, both physical and cloud-based, on their own.
Posted by Jeffrey Schwartz on 11/09/2011 at 1:14 PM
Last week a group called the Open Compute Project held its first summit in New York, where it laid out its agenda for creating power-efficient and lower-cost datacenters based on open source hardware designs.
The OCP was formed by Facebook back in April as an effort to share the hardware design of its datacenter in Prineville, Ore. The social networking giant said at the time that its datacenter improved efficiency by 38 percent and lowered costs by 24 percent. Facebook said it achieved a power usage effectiveness (PUE) ratio of 1.07, compared with 1.5 for its other datacenters.
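PUE is simply total facility power divided by the power delivered to the IT equipment, so the cited ratios are easy to sanity-check. The kilowatt figures below are invented purely to reproduce them:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power over IT load.

    A ratio of 1.0 would mean every watt goes to the IT equipment itself;
    everything above 1.0 is cooling, power distribution and other overhead.
    """
    return total_facility_kw / it_equipment_kw

# Illustrative figures matching the ratios cited for Facebook's datacenters.
print(round(pue(1070, 1000), 2))  # 1.07 -- Prineville
print(round(pue(1500, 1000), 2))  # 1.5  -- older facilities
```

At a PUE of 1.07, only about 7 percent of the facility's power is overhead, versus 50 percent at 1.5, which is where the claimed efficiency gains come from.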
Facebook published the specs of the datacenter's servers, power supplies, server racks, uninterruptible power supplies and building design to the OCP. Fast forward to last week's Open Compute Project Summit. "Today open source is not just something that you can use to describe software but also to describe the hardware space as well," said Frank Frankovsky, director of technical operations at Facebook, in his keynote address.
"When we looked at the results of the datacenter we designed and built in Prineville and looked at the efficiency, we said why not share this, because the aggregate impact is if everyone started designing datacenters like this, we could lessen the impact on the environment pretty tremendously," he said.
Now Facebook is trying to hand this off to an independent community, though seemingly with a firm grip. Early last month it created the Open Compute Project Foundation. A board of directors led by Frankovsky includes Jason Waxman, GM for high-density computing in Intel's Data Center Group; Mark Roenigk, Rackspace Hosting's chief operating officer; Andy Bechtolsheim, the co-founder of Sun Microsystems and current chairman of Arista Networks; and Don Duet, a Goldman Sachs managing director.
Frankovsky told attendees that the board has developed a well thought-out intellectual property policy, co-developed with Intel and some of the other founders of the board. The OCP is modeled after the Apache Software Foundation. Everything goes through an incubation committee consisting of nine people with diverse backgrounds.
Even Facebook's contributions will go through the committee, he said. More important is getting other suppliers to contribute, and it looks like some key players are on board, including Amazon Web Services, Dell, Hyve Solutions (a new division of distributor Synnex), Intel and Red Hat, among others.
"We think most suppliers will feel comfortable now contributing their IP to this project to move things forward and also to create opportunities to the supply base so we really do focus here on mutual benefit," Frankovsky said. "Not only the consumers but also the suppliers in this community."
For its part, Intel appears to be actively engaged. "What it's going to do is democratize and bring together much more choice in the industry for how people can get these efficient platforms," Waxman told attendees. "We at Intel forecast that the growth rate of server deployments is going to double over the next five years. We forecast if you don't deploy greater efficiencies in the server infrastructure, that the equivalent of 45 coal power plants will need to be deployed just to keep up with that growth of server infrastructure. This is really not just an individual problem but a collective problem that we need to address."
Brian Stevens, Red Hat's CTO and VP of worldwide engineering, told attendees that until now it was difficult to share information among hardware vendors due to confidentiality agreements. "Now we can have a much more open dialog with our developers in the open source community as we go through this process based on the OCP specifications," Stevens said.
So far, it looks like Facebook has taken an interesting first step toward getting the hardware industry to talk about sharing IP that could help reduce the cost and energy requirements of running large scale datacenters. Let's see if a broad enough set of players step up and contribute, and perhaps establish more standards in the area of datacenter design and infrastructure.
Posted by Jeffrey Schwartz on 11/03/2011 at 1:14 PM
Business continuity has long been an afterthought for many small and mid-sized enterprises. Even those that do run backup and recovery software on their networks don't have a setup that allows them to recover from a disabling disaster at a given location.
The cloud is making it easier for companies of all sizes to establish disaster recovery plans without the cost and complexity traditionally associated with them. And just about every supplier of backup and recovery software is offering services that enable customers to use the cloud to store their data.
Two companies, Axcient and KineticD, last week launched hybrid cloud-based solutions and services aimed at making it easier for small and medium businesses to back up and recover data.
Axcient is taking a unique approach to backup and recovery. The Mountain View, Calif.-based vendor's Axcient Cloud Continuity is a service that lets organizations back up data and apps running on PCs and servers, and gain virtual access via the Web in the event a site goes down.
The hybrid cloud platform allows organizations to back up all of the applications, files and folders running on PCs, as well as on servers. Rather than installing software on every device, an appliance developed by Hewlett-Packard extracts images from systems using Active Directory APIs.
When a customer site has an outage, users can access their data and applications, including Microsoft Exchange and SharePoint as well as any other business apps running on a company's network. Users can log in to the Axcient service from any location, launch their virtual office and gain VPN access to all their apps and data, said Axcient's CEO Jason Moore. "We don't just focus on the data, we focus on the applications, and that's a huge difference," Moore said.
Meanwhile, KineticD's new KineticCloud for Servers is also a hybrid cloud offering that allows SMBs and branch offices of larger organizations to back up data locally to a server or upload those backups to the cloud. The service carries a monthly cost of $19.95 per network and 50 cents per gigabyte.
It uses data de-duplication on the servers, a process of backing up only data that has changed, which provides much faster backup times. "The idea is you sync to the cloud and have a full backup," said Jamie Brenzel, KineticD's CEO.
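Change detection of that kind is typically done by hashing fixed-size chunks and transferring only those whose hash differs from the previous run. This toy version (tiny 4-byte blocks, in-memory state) is a sketch of the principle, not KineticD's implementation:

```python
import hashlib

def changed_blocks(previous_hashes, data, block_size=4):
    """Return only the blocks whose content hash differs from the last run.

    `previous_hashes` maps block offsets to SHA-256 digests and is updated
    in place; real products persist this state per file or per chunk.
    """
    to_upload = {}
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if previous_hashes.get(offset) != digest:
            to_upload[offset] = block
            previous_hashes[offset] = digest
    return to_upload

hashes = {}
changed_blocks(hashes, b"AAAABBBBCCCC")          # first run: everything uploads
delta = changed_blocks(hashes, b"AAAAXXXXCCCC")  # second run: only the middle block
print(delta)  # {4: b'XXXX'}
```

After the first full backup, each subsequent sync moves only the changed blocks, which is why the vendor can claim much faster backup times.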
KineticD's new offering was made possible thanks to its acquisition of ROBOBAK earlier this year. That company's agentless disk-to-disk backup solution is delivered to customers via managed services providers. Slated for release early next year, the new offering is now available for beta testing.
Posted by Jeffrey Schwartz on 11/02/2011 at 1:14 PM
A group of seven Google partners this week formed an alliance that plans to work closely on helping customers add third-party business applications that run with Google Apps.
The Cloud Alliance for Google Apps claims it represents the most heavily deployed applications in the Google Apps Marketplace, and it appears to be looking to leverage its combined strength to reach enterprise customers through joint outreach and by ensuring interoperability where it makes sense.
Members of the alliance include Cloud Sherpas, Expensify, Insightly, Okta, RunMyProcess, SmartSheet and Spanning. Heading up the group is David Politis, chairman of Cloud Alliance and a vice president at Cloud Sherpas. Politis told me this is not a Cloud Sherpas effort, nor is Google involved.
"We believe there is an opportunity to improve the selection experience for end users and admins of Google Apps," Politis said. "As they go into the marketplace, it can be quite a process to evaluate, install, purchase, un-install and test different apps to figure out which ones to accept. We've taken the best of breed applications, we've created this alliance to go out and make that process easier for end users and administrators."
Each member will host a webinar starting Nov. 16 and running through March 14 on how to enhance Google Apps. Politis said over time, the group will add more members but first wanted to start out with a manageable number.
"I believe this alliance will evolve quite a bit," Politis said. "This ecosystem is still figuring out where it's going to go and it's still evolving."
Posted by Jeffrey Schwartz on 11/02/2011 at 1:14 PM
Application stores and marketplaces are becoming a popular means of distributing software and cloud-based apps for PCs and mobile devices.
Apple popularized the model with the iTunes App Store, and now it seems every major software and cloud provider has released one or has one in the works. App stores and marketplaces will be among the top 10 strategic technologies for enterprises next year, according to a forecast released by Gartner last week.
The market researcher is predicting that app stores will facilitate 70 billion downloads of mobile apps per year by 2014. While it is primarily consumer-driven today, it will gain momentum for enterprise apps as well, according to Gartner.
"With enterprise app stores, the role of IT shifts from that of a centralized planner to a market manager providing governance and brokerage services to users and potentially an ecosystem to support entrepreneurs," according to the Gartner report. "Enterprises should use a managed diversity approach to focus on app store efforts and segment apps by risk and value."
Looking to capitalize on that trend, FullArmor on Wednesday launched the AppPortal Marketplace, which allows enterprises, ISVs, solution providers, distributors and bandwidth providers such as telcos to create their own application stores. It is a cloud-based service that lets those who want to create their own app stores provision and host them.
It provides the functions needed to create an app store or marketplace, including setup of storefronts and catalogs, checkout and billing, explained FullArmor CEO Rich Farrell. "It's a turnkey system," he said.
For enterprises, it provides a form of governance over the use of applications that can be procured through third-party app marketplaces, Farrell said. IT organizations have lost control over the procurement of such applications since many are free or low-cost, allowing end users to bypass IT.
That creates all sorts of issues, from system configuration management to security and licensing. The AppPortal Marketplace creates an app store that IT can configure itself for employees to use, while providing a chargeback and tracking mechanism, according to Farrell.
For ISVs, resellers, distributors and telcos, the marketplace allows them to host their own stores or create them for their customers. The starting price is $50,000 but costs are determined based on the number of apps that are in the marketplace and how many systems need to be integrated.
AppPortal Marketplace is currently hosted on Microsoft's Windows Azure platform, though FullArmor plans to support other cloud services, including those provided by Amazon, VMware and others, Farrell said.
It is likely FullArmor will aim to sell this platform to a larger vendor. The company's business model is that of an incubator. Among the technologies it has developed and sold off are Group Policy Administrator, acquired by NetIQ; Workflow Studio, an IT automation tool that Citrix bought; and Mail Portal Migrator, a tool for moving in-house Exchange-based systems to the cloud, which Quest Software picked up last year.
Posted by Jeffrey Schwartz on 10/27/2011 at 1:14 PM
Internap on Thursday said it is offering the first public cloud service based on the compute service of OpenStack, the rapidly growing open source platform aimed at providing interoperable public and private clouds.
More than 100 companies have signed on to the OpenStack project, founded by NASA and Rackspace. Internap's release of what it calls Open Public Cloud is a noteworthy milestone in the evolution of OpenStack, which consists of open APIs that allow portability between cloud providers.
Of course, that portability will only become a reality as more cloud providers offer compute services based on OpenStack. Rackspace is in the midst of doing so, as recently reported, and others such as Dell and Hewlett-Packard are planning to do the same. But it also allows for compatibility with private OpenStack-based clouds.
For its part, Internap will offer two cloud services: Open Public Cloud and Custom Public Cloud. The latter is based on VMware's vCloud Director platform and is likely to be preferred for enterprise applications requiring high availability, said Paul Carmody, Internap's senior VP of product management and business development.
Enterprise CIOs "are looking for a place for their internal private cloud to land, they want a VMware based platform because that's what they already virtualized on," Carmody said. "In addition, they may be running some commercial applications that are only certified to run in VMware VMs, so they need that landing place, it's a great answer for that audience."
The OpenStack-based cloud service will appeal more to startups or those looking to host Internet-based applications, according to Carmody. It will also be appropriate for those who want to use it for development and testing.
Besides portability, the appeal of Internap's OpenStack-based service will be cost. While he wouldn't discuss pricing other than to say both offerings will be based on the common usage-based model, Carmody said Open Public Cloud will be a less expensive option. That said, it will lack the performance of its Custom Public Cloud offering, he acknowledged.
"Obviously, OpenStack cloud is priced to be more of a cost-effective type cloud offering, where VMware is a more feature-rich HA-type environment," Carmody said. "Clearly, OpenStack itself is a maturing platform, it's a great starting point. There are features that will develop over time. I'd say VMware is very mature from a networking sophistication standpoint. You can construct fairly complicated application, networking topologies inside of VMware using vCloud Director that aren't currently supported by OpenStack. We think those networking complexities will mature over time with OpenStack, as well."
Indeed, the next release of OpenStack, code-named Essex, will focus on improved networking. The consortium is developing an API that dynamically requests and configures virtual networks. It will also offer advanced networking and virtualization capabilities.
Posted by Jeffrey Schwartz on 10/27/2011 at 1:14 PM