Report: 'Direct Match' Migrations to Cloud Miss the Mark

Most organizations end up overpaying for cloud services when they migrate their on-premises workloads to the cloud using a "direct match" philosophy, according to a recent study by migration analytics firm TSO Logic.

Among the many potential benefits of moving on-premises workloads to the cloud is cost savings. Once in the cloud, however, many organizations find that achieving those savings is easier said than done. Attempts to rein in cloud spending are frequently thwarted by cloud vendors' complex billing models, constantly changing service offerings, and increasingly decentralized usage patterns.

Research indicates that 35 percent of an average company's cloud computing bill is wasted cost. Industrywide, that amounts to an estimated $10 billion per year in unnecessary cloud spending.

In a report released earlier this month, TSO Logic argued that the problem of cloud waste begins before most organizations even start their move to the cloud. The vast majority of on-premises workloads (84 percent) are already over-provisioned compared to their actual level of use, the company found. Therefore, performing a like-for-like migration of those on-premises workloads to the cloud -- without any changes to their configuration -- only results in continued wasted spend.

"Some workloads may be in use 100% of the time, some may be in use just 10% of the time. Some may reflect 5% utilization for 29 days per month, but 100% peak utilization at month's end. Organizations need to understand how they are truly utilizing each workload to accurately evaluate cloud options," the researchers said in the report.

TSO Logic recommends that organizations take the time to "right-size" their workloads for the cloud and take advantage of the many instance size options offered by cloud megavendors like Amazon Web Services (AWS) and Microsoft Azure. Otherwise, they risk missing out on the cloud's promise of significant cost savings.
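As a rough illustration of what right-sizing means in practice, the sketch below picks the smallest entry in a small, made-up catalogue of instance sizes that still covers a workload's measured peak needs, rather than matching its on-premises specifications. The instance names, vCPU and memory figures, and hourly prices are hypothetical placeholders, not actual AWS or Azure offerings or pricing.

```python
# Hypothetical right-sizing sketch: choose the cheapest instance that covers
# a workload's measured peak needs instead of mirroring its on-prem specs.

# Made-up catalogue: (name, vCPUs, memory in GiB, price per hour in USD).
CATALOGUE = [
    ("small",   2,   8, 0.10),
    ("medium",  4,  16, 0.20),
    ("large",   8,  32, 0.40),
    ("xlarge", 16,  64, 0.80),
]

def right_size(peak_vcpus: float, peak_mem_gib: float):
    """Return the cheapest catalogue entry that covers the measured peak."""
    candidates = [c for c in CATALOGUE
                  if c[1] >= peak_vcpus and c[2] >= peak_mem_gib]
    return min(candidates, key=lambda c: c[3]) if candidates else None

# On-prem box provisioned at 16 vCPUs / 64 GiB, but monitoring shows
# actual peaks of only 3 vCPUs / 10 GiB.
print("Direct match :", right_size(16, 64))  # ('xlarge', 16, 64, 0.8)
print("Right-sized  :", right_size(3, 10))   # ('medium', 4, 16, 0.2)
```

In this contrived example the like-for-like choice costs four times as much as the right-sized one, which is the kind of gap the report attributes to "direct match" migrations of already over-provisioned workloads.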

"Organizations have tried to manually map their current environments to cloud, yet accelerated change and the sheer number of cloud compute options make that impossible now," said Aaron Rallo, CEO of TSO Logic, in a prepared statement.

Besides the over-provisioning issue, TSO Logic noted that businesses that undertake "direct match" migrations to the cloud often do so based on two other incorrect assumptions -- first, that giant cloud vendors like AWS and Azure run the same type of hardware as they do, and second, that these megavendors don't enjoy significant economies of scale.

Regarding that first point, TSO Logic pointed out that public cloud vendors typically have much newer hardware than what a lot of businesses currently have on-premises. That hardware advantage means cloud providers can often provide better and faster performance at lower costs -- as long as businesses don't insist on maintaining the same exact configurations in the cloud that they used on-premises.

"The older the hardware in the current environment, the more economical cloud becomes," the researchers said. For example, migrating 2012-era servers to more modern hardware in the cloud could slash costs by about 50 percent, according to the study. The savings is even more pronounced -- 70 percent -- when migrating 2009-era servers.

The second incorrect assumption, about cloud vendors' economies of scale, is tied to the first. Operating these newer-generation servers at scale is far more economical for cloud megavendors than it would be for a single organization running them in its own datacenter. Those savings can be passed on to customers -- but only if customers know to take advantage of the right configurations.

"Considering that large public cloud providers use the latest-generation hardware, and receive bulk pricing for that hardware at massive scales, cloud providers are inherently positioned to deliver superior price/performance than most organizations can achieve on their own," the researchers wrote. "To realize those savings, however, organizations must be able to map current workloads to future cloud offerings."

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
