Posey's Tips & Tricks
Dealing With Problems Related to Expanding a Windows Volume, Part 2
Selecting the right NTFS cluster size is key to balancing storage scalability, efficiency and performance when expanding a Windows volume.
In my previous article, I explained that I recently found myself unable to expand an NTFS volume because of the cluster size that was chosen when the volume was formatted many years ago. This raises the question of what cluster size a volume should use.
In Windows, the larger the cluster size that you use, the larger the maximum volume size that is supported. So does that mean that you should just use the largest possible cluster size? In a word, no. There are several things that you have to consider when choosing a cluster size.
The first thing that you have to consider is the Windows version that you are using. Windows Server 2019 and Windows 10 version 1709 support a maximum NTFS volume size of 8 PB. For older versions of Windows, the maximum volume size is 256 TB. As such, when choosing the cluster size that you want to use, you will have to consider what your particular Windows version will allow.
Another thing that you have to think about is the maximum volume size that can be created using various cluster sizes. Incidentally, Microsoft refers to clusters as allocation units. Here is a chart listing the various cluster sizes and their maximum supported volume sizes. Keep in mind that these numbers are only valid for NTFS.
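The values in that chart follow directly from how NTFS addresses clusters: a volume can contain at most 2^32 - 1 clusters, so the maximum volume size is simply the cluster size multiplied by that count. Here is a quick sketch of the arithmetic for the classic 4 KB through 64 KB cluster sizes (the variable names are mine, purely for illustration):

```python
# NTFS identifies clusters with a 32-bit number, so a volume can hold
# at most 2**32 - 1 clusters. The maximum volume size is therefore the
# cluster size times that cluster count.
MAX_CLUSTERS = 2**32 - 1

for kb in (4, 8, 16, 32, 64):
    max_bytes = kb * 1024 * MAX_CLUSTERS
    # Round to whole TB for readability; the true limit is a few KB shy.
    print(f"{kb:>2} KB clusters -> ~{round(max_bytes / 1024**4)} TB max volume")
```

Running this reproduces the familiar progression: each doubling of the cluster size doubles the maximum volume size, from roughly 16 TB at 4 KB clusters up to roughly 256 TB at 64 KB clusters.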
So what size clusters should you be using for your NTFS volumes? My advice is to estimate how large the volume might potentially become in the future and use a cluster size that will support that estimated volume size, but without using clusters that are any larger than necessary. Let me give you an example.
The volume that I had trouble expanding was initially just under 16 TB in size and used 4 KB clusters. I am increasing the volume size to about 40 TB. That means that my clusters would need to be at least 16 KB in size. That would allow the volume to be further expanded in the future, up to 64 TB in size. Given my current storage hardware configuration, however, my volume might one day grow to be as large as 100 TB, though not any time soon. As such, it may make sense to use 32 KB clusters instead of 16 KB clusters.
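That sizing logic is easy to mechanize: pick the smallest cluster size whose maximum volume size covers your projected volume size. A minimal sketch (the helper name is mine, not anything built into Windows):

```python
MAX_CLUSTERS = 2**32 - 1  # NTFS uses 32-bit cluster addressing

def smallest_cluster_kb(target_tb):
    """Return the smallest classic NTFS cluster size (in KB) whose
    maximum volume size can hold a volume of target_tb terabytes."""
    for kb in (4, 8, 16, 32, 64):
        if kb * 1024 * MAX_CLUSTERS >= target_tb * 1024**4:
            return kb
    return None  # target exceeds the classic cluster sizes

print(smallest_cluster_kb(40))   # 40 TB volume -> 16 KB clusters
print(smallest_cluster_kb(100))  # 100 TB volume -> 32 KB clusters
```

This matches the reasoning above: a 40 TB volume needs at least 16 KB clusters, while planning for 100 TB of eventual growth points to 32 KB clusters.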
So why not just set the cluster size to the maximum possible value? Unfortunately, there are some significant disadvantages to using a larger cluster size than you need.
The first disadvantage is that larger clusters tend to waste quite a bit of disk space. The cluster size reflects the minimum amount of space that a file can consume. If, for example, you had a 1 KB file and a 32 KB cluster size, then that 1 KB file would actually consume 32 KB of disk space.
The same can also be said for larger files. Imagine, for example, that the file in question was 40 KB instead of 1 KB. A 32 KB cluster cannot accommodate a 40 KB file, at least not by itself. As such, multiple clusters are required. The first cluster in this case stores the first 32 KB of file data. The second cluster stores the remaining 8 KB, but there is 24 KB of wasted space within that cluster. Remember, a cluster cannot be shared among multiple files, nor can it be subdivided.
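This wasted space (often called slack) is straightforward to compute: round the file size up to a whole number of clusters and subtract the file size. A short illustrative sketch:

```python
import math

def slack_bytes(file_kb, cluster_kb):
    """Bytes wasted when a file of file_kb KB is stored using
    clusters of cluster_kb KB. Illustrative calculation only."""
    clusters = math.ceil(file_kb / cluster_kb)  # whole clusters needed
    return (clusters * cluster_kb - file_kb) * 1024

print(slack_bytes(1, 32))   # 1 KB file in one 32 KB cluster -> 31 KB wasted
print(slack_bytes(40, 32))  # 40 KB file spans two 32 KB clusters -> 24 KB wasted
```

Multiply that per-file slack by hundreds of thousands of small files and the cumulative waste on a volume with oversized clusters becomes substantial.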
Another disadvantage to using larger cluster sizes is that storage performance will be diminished for small files. To see why this is the case, think back to my previous example in which a 1 KB file was occupying 32 KB of disk space. Even though that file is only 1 KB in size, the operating system has to read 32 KB of data to access that file. With that in mind, imagine that an application needed to read a large number of small files and you can begin to get a feel for just how big of an impact large cluster sizes can have on performance.
The bottom line is that, as a best practice, you should use a cluster size that fully meets your storage needs, but it's best to avoid setting the cluster size to anything larger than what is truly needed.
About the Author
Brien Posey is a 22-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.