Posey's Tips & Tricks

Server Capacity Planning: How To Choose Between Current and Future Memory Needs

How much server memory is enough, and for how long? Brien works through the math as part of his ongoing hardware refresh.

I have been giving a lot of thought lately to the subject of system capacity planning. As I explained in some recent columns, I just finished a major upgrade of my production environment. With production now up-to-date, it's time to turn my attention to my lab environment.

Before I tell you about the capacity-planning factors that drove my decision-making process, let me take a step back and tell you a little bit about my lab environment. Since the mid-1990s, I have maintained a somewhat extensive lab in my home. I write about a wide variety of topics, and I need to be able to test whatever I happen to be writing about. While many of the products and services I cover are cloud-based, I still need an on-premises lab environment.

For many years, my home was jam-packed with dozens of servers. I'm sure my setup made my insurance agent nervous, but it's safe to say that the electric company loved me. Over time, I was gradually able to scale down my lab environment thanks to advances in server virtualization.

A few months ago, I further reduced the number of physical servers in my lab from seven to two -- a Hyper-V server and a VMware server. The VMware server is brand-new and, until recently, was running the bulk of my lab virtual machines (VMs), but that's only because my Hyper-V server was badly outdated. Although my Hyper-V server was running current-generation software, the underlying hardware was ancient. I built the machine way back in 2012. It is equipped with an eight-core AMD processor and 32GB of RAM.

Back in February, I nearly declared the machine completely useless because it had become so painfully slow. Even so, I was able to breathe some new life into it by installing an all-flash array.

Believe it or not, the machine's performance was acceptable for the most part. A few recent tasks ran more slowly than I would have liked, but on the whole, the server was getting the job done.

The real driver behind my decision to retire the server and replace it with something more up-to-date was its lack of memory. Having 32GB of memory was fine back in 2012, when the average lab VM required a couple of gigabytes of RAM. By today's standards, though, 32GB is woefully inadequate. Consider, for example, that Microsoft's recommendation for an Exchange 2019 mailbox server is 128GB. Unfortunately, 32GB is the maximum amount of memory that the server's system board supports, so simply upgrading the server's memory wasn't an option.
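To put that shortfall in perspective, here is a minimal Python sketch that tallies a lab's VM memory requirements against the host's capacity. The VM list and the per-VM memory figures are illustrative assumptions, not my actual inventory; only the 32GB host ceiling and the 128GB Exchange 2019 recommendation come from the discussion above.

    # Rough lab-host memory check. The VM list and per-VM memory figures
    # below are illustrative assumptions, not an actual inventory.

    HOST_RAM_GB = 32  # the old server's non-upgradable maximum

    lab_vms = {
        "Exchange 2019 mailbox server": 128,  # Microsoft's recommendation
        "Domain controller": 4,
        "SQL Server test VM": 16,
        "File server": 8,
    }

    total_needed = sum(lab_vms.values())
    print(f"Total VM memory required: {total_needed}GB")
    print(f"Host capacity: {HOST_RAM_GB}GB")
    print(f"Shortfall: {total_needed - HOST_RAM_GB}GB")

Even with hypothetical numbers, the gap is obvious: a single modern Exchange lab VM wants four times what the old host could physically hold.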

After a bit of shopping, I found a server that seemed to meet my needs. At that point, though, I had to make a decision as to how much memory and storage to provision it with. In a perfect world, I would have installed the maximum supported memory and the fastest high-capacity SSDs. In the real world, though, server hardware isn't cheap, so it's important to strike a balance between meeting current and future needs.

Based on the way that I work, I knew that I wanted to outfit the machine with at least 256GB of RAM, but I also considered installing 512GB (the server supports up to 1TB of RAM). Having the manufacturer provision the server with 512GB at the time of purchase would have been far less expensive in the long term (based on today's prices) than buying it with 256GB and then replacing that memory later on.

Even so, I ultimately decided to go with the 256GB option. It was a tough call, but the choice came down to a couple of deciding factors.

First, even 256GB would be a huge step up from a system that was only equipped with 32GB of RAM. Based on the way that I use the system, I won't even come close to using the full 256GB, at least not for a while. As such, I had to question whether it really made sense to order a server with 512GB of RAM, when it will likely be quite a few years before I need that much memory. It could even be that the entire server will need to be replaced before that day ever comes.

Eventually, there will presumably come a point at which 256GB of RAM is no longer enough. After all, 32GB was plenty when I built my previous server, yet it is severely lacking by today's standards.

That brings me to the second factor: price. Out of curiosity, I looked up the 2012 invoice for the parts used to build my old server. Back then, 32GB of memory cost roughly $300. Today, the price is less than half that. The point is that the price of computer hardware falls over time. Memory will likely be less expensive in a few years -- when I actually need it -- than it is today.

Today, purchasing 512GB of memory would cost thousands of dollars. The server has 16 memory slots, which means that a 512GB configuration would require me to purchase 16 32GB modules. The best price I could find was about $150 per module, which means that 512GB of RAM would cost about $2,400. In a few years, however, there is a realistic chance that the price could be half of what it is today.
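To put a number on that, here is a quick Python sketch that projects the future cost of the same 16-module purchase under an assumed price-decline rate. The 10 percent annual decline and the four-year timeline are my assumptions, not a forecast, though the 2012-to-today halving of 32GB pricing works out to roughly 9 to 10 percent per year.

    # Back-of-the-envelope projection: what the same 512GB purchase might
    # cost in a few years. The decline rate and timeline are assumptions.

    MODULE_PRICE_TODAY = 150   # today's street price for a 32GB module
    MODULES_NEEDED = 16        # 16 x 32GB = 512GB
    ANNUAL_DECLINE = 0.10      # assumed yearly drop in memory prices
    YEARS_UNTIL_NEEDED = 4     # assumed wait before 256GB runs out

    cost_today = MODULES_NEEDED * MODULE_PRICE_TODAY
    cost_later = cost_today * (1 - ANNUAL_DECLINE) ** YEARS_UNTIL_NEEDED

    print(f"512GB at today's prices: ${cost_today:,.0f}")
    print(f"Projected cost in {YEARS_UNTIL_NEEDED} years: ${cost_later:,.0f}")

Under those assumptions, the same 512GB of memory drops from $2,400 to roughly $1,575 -- not quite half, but close enough to make waiting the sensible bet.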

About the Author

Brien Posey is a 22-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.
