
Are unfulfilling Fill Rates hurting your budget efficiency?

  • JB Baker 
  • 4 min read

Storage capacity goes underutilized to overcome the limitations of ordinary SSDs 

Data Infrastructure and Operations (I&O) teams closely monitor various aspects of their storage – total capacity, the utilization rate (or its inverse, free space), and the rate of new data generation and storage consumption – to properly forecast future storage capacity purchases and plan out deployment timelines. They also watch for alerts on problems, actual and potential, with their storage. One such alert is tied to the Fill Rate Threshold (FRT), also called the Maximum Utilization Rate: the percentage of each drive’s capacity that is occupied with data.

“In a poll we conducted in 2024, 52% of respondents said they set the Fill Rate Threshold at 70% or 50%.”

Let’s assume that 95% is the highest reasonable FRT, leaving just enough headroom to bring more storage online before running out of space. Relative to that 95%, a 70% FRT leaves another 25 points of your capacity unused and causes you to pay 1.36x the cost to store each terabyte of data. With a more conservative 50% FRT, you’d be leaving nearly half of your potential storage unutilized and paying 1.9x the cost to store your data, not to mention needing nearly twice as many drives.
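The cost penalty follows directly from the ratio of thresholds; a quick sketch of the arithmetic (the 95% baseline is the assumption stated above):

```python
def cost_multiplier(frt: float, baseline: float = 0.95) -> float:
    """Cost multiplier for storing data at a given Fill Rate Threshold
    (FRT), relative to a 95% baseline. Usable capacity scales with the
    FRT, so cost per stored terabyte scales with its inverse."""
    return baseline / frt

print(f"70% FRT: {cost_multiplier(0.70):.2f}x the cost per TB")
print(f"50% FRT: {cost_multiplier(0.50):.2f}x the cost per TB")
```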

It’s no surprise that respondents called out Lower Cost and Fewer Drives to Fail as top benefits of setting the FRT higher. 

When asked what the #1 reason for choosing a specific threshold trigger was, only 32% of respondents were worried about the time to install more capacity, while the other 68% noted concerns about performance degradation, drive endurance, and cluster reliability.  

Those concerns are certainly valid, since ordinary SSDs fall off in performance and chew through their endurance more rapidly as they fill up. Without going into great depth here: as SSDs fill up, they incur more background operations, which both slow down write performance (see our description of the write cliff for more details) and consume more of the NAND flash endurance for each unit of data written (see write amplification for more details). Testing of enterprise NVMe drives verified this phenomenon as well.
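To see why fill level drives write amplification, here is a deliberately naive model (an illustration only, not vendor data): assume garbage-collection victim blocks hold the same fraction of valid data as the drive overall, so freeing space for each unit of new writes forces a proportional amount of rewrites. Real FTLs with over-provisioning and smarter victim selection do better, but the trend is the same.

```python
def waf(fill: float) -> float:
    """Write amplification factor under a naive greedy-GC model:
    a victim block that is `fill` fraction valid yields (1 - fill)
    of free space per block erased, at the cost of rewriting the
    valid pages."""
    assert 0 <= fill < 1
    return 1.0 / (1.0 - fill)

for pct in (50, 70, 90, 95):
    print(f"{pct}% full -> WAF ~ {waf(pct / 100):.1f}")
```

The sharp growth past ~90% is the endurance-side counterpart of the write cliff mentioned above.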

Innovative SSDs can achieve consistent input/output operations per second (IOPS), throughput (GB/s), and latency at higher fill levels, eliminating the penalties typically associated with a high FRT.

To validate this, we ran 3 drives against a 70/30 mix of 4KB random reads and random writes in FIO after preconditioning, a common means of testing steady-state performance for enterprise and data center class SSDs. The nuance in this series of tests was to repeat the runs at various utilization levels to mimic FRTs from 50% to 100%. As expected, performance dropped off rapidly as the fill levels increased for the enterprise NVMe SSDs from well-known vendors. The innovative CSD 3000 from ScaleFlux maintained performance and latency consistency right up to 100% utilization (see figure 2).
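A test in this spirit might use an fio job along the following lines. This is a sketch only: the device path, queue depth, job count, and runtime are illustrative assumptions, not the parameters used in the tests described above.

```ini
; 70/30 4KB random read/write mix, run after preconditioning (sketch)
[global]
ioengine=libaio
direct=1
bs=4k
rw=randrw
rwmixread=70
iodepth=32
numjobs=4
time_based
runtime=600
group_reporting

[frt-70]
filename=/dev/nvme0n1   ; example device
size=70%                ; footprint mimicking a 70% fill level
```

Repeating the job with `size` set to each target fill level (50% through 100%) produces the utilization sweep described above.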

This is made possible by the transparent, hardware compression engines in the CSD. These compression engines compress the data before writing it to the NAND, resulting in fewer NAND writes, more free space on the drive, and lower write amplification. That means that a 95% FRT on a CSD yields even better performance and endurance than a 50% FRT on other vendors’ drives. You can finally use all that capacity you paid for, without adding any complexity to your systems or worrying about drive wear-out or slowdowns!

JB Baker

JB Baker is a successful technology business leader with a 25+ year track record of driving top and bottom line growth through new products for enterprise and data center storage. He joined ScaleFlux in 2018 to lead Product Planning & Marketing as we expand the capabilities of Computational Storage and its adoption in the marketplace.