Random Access

The Storage Metrics That Matter

Scott Lowe outlines the top three things IT should keep an eye on when monitoring storage.

Storage is one of those foundational technologies that remains a mystery to many in the IT world.  For many, storage is just a bunch of high-capacity hard drives storing all of the organization's data assets, but as the layers are peeled back, it becomes apparent that storage is so much more.  Moreover, as organizations experience problems with their storage, nailing down the root cause of the problem can be an exercise in frustration, particularly when it's intermittent.

Which metrics really matter when it comes to monitoring storage?  Let's find out!

Capacity
Perhaps the most commonly considered storage metric is simple capacity.  When people talk about storage problems, they're most often discussing the fact that they're running out of disk space.  It should be a simple task to determine how much storage capacity remains.  Simply open the management platform for the storage array and look.  And, in many cases, it is that simple.

But, sometimes, it's not quite that easy.  Today's environments include all kinds of technologies that can complicate the capacity game and make it a bit tougher to determine how much space is being used and how much remains.  Among these technologies:

  • Thin provisioning:  Thin provisioning is an increasingly common way for organizations to extend the life of their storage investments.  Thin provisioning enables administrators to assign to virtual servers all of the storage they believe those servers will require over their lifetime, while not consuming space on the array until it's actually needed.  This is a space-saving technology, but it can complicate capacity planning efforts (see the sketch after this list).
  • Deduplication:  Deduplication is a feature found in just about every enterprise-class array sold today.  This technology looks for commonality in data on the array and, if there is duplicate data, discards the duplicates and, in their place, writes pointers to the original copy of the data.  Deduplication can yield significant capacity savings, but with it come some capacity-monitoring challenges.  Moreover, these challenges can be further complicated depending on the deduplication method.  With "inline" deduplication, data is deduplicated before it's written to the storage.  With "post-process" deduplication, the original duplicated data is written to the array and a later process takes care of deduplication.  This means that, for a period of time, the storage still needs to be able to hold some duplicated data.
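
To make the interaction concrete, here is a minimal back-of-the-envelope sketch, in Python, of how thin provisioning and deduplication change the "how much space is left?" question.  The volume names, capacities and deduplication ratio are all hypothetical, and a real array reports these figures through its own management tools; the point is simply that provisioned capacity, written capacity and physically consumed capacity are three different numbers.

# Illustrative sketch: how thin provisioning and deduplication change the
# "how much space is left?" question. All numbers are hypothetical.

RAW_CAPACITY_TB = 100.0          # usable capacity of the array

# Logical size promised to each virtual server vs. blocks actually written
volumes = {
    "vm-fileserver": {"provisioned_tb": 40.0, "written_tb": 12.0},
    "vm-database":   {"provisioned_tb": 60.0, "written_tb": 25.0},
    "vm-desktops":   {"provisioned_tb": 50.0, "written_tb": 18.0},
}

DEDUPE_RATIO = 2.5               # e.g. a 2.5:1 ratio reported by the array

provisioned = sum(v["provisioned_tb"] for v in volumes.values())
written = sum(v["written_tb"] for v in volumes.values())
physical_used = written / DEDUPE_RATIO   # space consumed after dedup

print(f"Provisioned (promised): {provisioned:.1f} TB")
print(f"Oversubscription ratio: {provisioned / RAW_CAPACITY_TB:.2f}:1")
print(f"Physically consumed:    {physical_used:.1f} TB")
print(f"Physically free:        {RAW_CAPACITY_TB - physical_used:.1f} TB")

Notice that the array is oversubscribed (more capacity promised than physically exists), so "free space" only holds as long as the thin volumes don't all fill up at once and the deduplication ratio doesn't drop.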

Even though these technologies complicate capacity monitoring at times, they serve an important purpose and can help organizations get a lot of additional capacity out of their arrays.

IOPS
Input/Output Operations Per Second (IOPS) is another common storage metric that describes how many read and write operations the storage is able to handle.  IOPS has gained new attention due to virtualization, particularly in desktop virtualization projects.

Different kinds of projects place different demands on storage's ability to read and write data.  For example, in a general file server, IOPS isn't usually a major concern, but in a virtual desktop environment, insufficient IOPS can doom the initiative as users suffer unacceptable levels of performance.

There are also other factors that affect IOPS:

  • Media type:  Different kinds of disks have different IOPS performance levels.  7,200 RPM SATA disks, for example, can push about 70 IOPS each.  15K RPM SAS disks can push around 175 to 200 IOPS.  Solid state disks can push hundreds, thousands or tens of thousands of IOPS per disk.
  • RAID level:  The level of RAID being used in a storage array directly impacts IOPS.  For example, with RAID 5, four back-end I/O operations are required for each write, and RAID 6 requires six.  If you're using RAID 6 and are attempting to run a write-heavy application, you should consider a different RAID level, such as RAID 1 or 10, which imposes one-third the write penalty (two I/Os per write operation).  The sketch after this list shows how these write penalties translate into back-end IOPS.
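
Here is a minimal sizing sketch, in Python, of how those write penalties play out.  The workload figures (4,000 front-end IOPS, 70 percent writes) are hypothetical, and the per-disk IOPS values are the ballpark numbers cited above; treat it as a rough illustration rather than a sizing tool.

# Rough sizing sketch using the write penalties mentioned above.
# Workload figures are hypothetical; per-disk IOPS are ballpark values.

RAID_WRITE_PENALTY = {"RAID 1/10": 2, "RAID 5": 4, "RAID 6": 6}
DISK_IOPS = {"7.2K SATA": 70, "15K SAS": 175, "SSD": 5000}

frontend_iops = 4000      # IOPS the workload generates
write_fraction = 0.7      # write-heavy workload (e.g., VDI steady state)

for raid, penalty in RAID_WRITE_PENALTY.items():
    # Reads cost one back-end I/O; each write costs 'penalty' back-end I/Os.
    backend = (frontend_iops * (1 - write_fraction)
               + frontend_iops * write_fraction * penalty)
    disks_needed = -(-backend // DISK_IOPS["15K SAS"])  # ceiling division
    print(f"{raid}: {backend:,.0f} back-end IOPS -> "
          f"~{disks_needed:.0f} x 15K SAS disks")

For this hypothetical workload, moving from RAID 1/10 to RAID 6 nearly triples the back-end IOPS the disks must deliver, which is exactly why RAID level matters so much for write-heavy applications.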

Although IOPS are extremely important, they're just one component of the storage performance equation.

Latency
To me, if you are allowed to monitor only a single storage performance metric, it should be latency.  In storage parlance, latency is the amount of time it takes the storage to complete an operation.  High latency has a direct and immediate impact on workloads running on that storage.
Latency takes into consideration all kinds of factors, including:
  • The amount of time it takes the disk to rotate and the drive heads to seek to the data's location on the platter.
  • The amount of time it takes for the system to read or write the data from or to the disk.
  • The amount of time it takes to transfer the data over the storage link (iSCSI or Fibre Channel).

If you're seeing ongoing latency figures above 20 to 30 milliseconds, latency is an issue.
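
If you want a quick host-side sanity check, here is a minimal sketch in Python that times a handful of small synchronous writes and compares the average against that 20-millisecond neighborhood.  The file path, sample count and 4 KB write size are arbitrary choices for illustration; the array's own performance counters or a proper benchmarking tool will give far more reliable numbers.

# Quick-and-dirty latency probe: time a handful of small synchronous writes
# and flag anything beyond the ~20-30 ms range discussed above. This is a
# rough host-side check, not a substitute for the array's own statistics.

import os
import time

TEST_FILE = "latency_probe.tmp"   # hypothetical path on the storage under test
THRESHOLD_MS = 20.0
block = os.urandom(4096)          # one 4 KB write per sample

samples = []
with open(TEST_FILE, "wb") as f:
    for _ in range(50):
        start = time.perf_counter()
        f.write(block)
        f.flush()
        os.fsync(f.fileno())      # force the write through to stable storage
        samples.append((time.perf_counter() - start) * 1000.0)
os.remove(TEST_FILE)

avg = sum(samples) / len(samples)
worst = max(samples)
print(f"average write latency: {avg:.2f} ms, worst: {worst:.2f} ms")
if avg > THRESHOLD_MS:
    print("average latency is above the threshold -- worth investigating")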

Summary
Storage metrics are important for organizations to understand in order to ensure that mission-critical business applications are able to keep up with demand.

 

About the Author

Scott D. Lowe is the founder and managing consultant of The 1610 Group, a strategic and tactical IT consulting firm based in the Midwest. Scott has been in the IT field for close to 20 years and spent 10 of those years filling the CIO role for various organizations. He has also authored or co-authored four books and is the creator of 10 video training courses for TrainSignal.

