In a prior webinar, we discussed how NVMe SSDs are failing your applications and how you can avoid those pitfalls with some help from ScaleFlux. Today, I’m going to focus on one specific aspect: latency. In the world of technology, latency is simply the time it takes to complete a task. I’ll start by sharing various ways to spend your time while waiting. I’ll also cover why latency is terrible, why it happens, how we can address it, and some next steps.
“Time is money”: 7 things to do while you wait
Latency matters to everyone. If you don’t optimize for it, you must find something else to do with your time. Here are several examples:
- Load a different mobile app
Sometimes you click on a mobile app, and the data takes forever to load. So, you may cancel that one and load another because it’s not responding.
- Call your mom back
I’m sure she would appreciate it.
- Get a cup of coffee while your Virtual Desktop loads
You walk into the office on Monday morning, log into your virtual desktop, and wait because everyone is doing that at the same time. Brewing and sipping a cup of coffee is a better use of your time.
- Hum a song
As a keyboardist, I find the opening bars of Too Much Time On My Hands by Styx especially fitting. What is your favorite humming song?
- Reschedule decision meetings while you wait for database analysis
Here’s one you don’t want to do: reschedule your meeting because you don’t have the data you need in time. Your database report took too long to run.
- Stare blankly at the wall
Or, even better, you zone out pondering why you don’t have the data and what else you could possibly be doing.
- Search new job listings because you can’t get your job done
Take a new headshot and update your LinkedIn profile; it’s time to move on to where you can be more productive.
The storage unit analogy
While these are all ways to fill time, none of them improves your business, or any business; they all hurt productivity. There must be a better way! Picking the right storage device can help you get that costly time back.
Stay with me as I build a real-world analogy for why storage latency exists. You can visualize an SSD as a group of storage units or garages, and writing data is similar to putting boxes and other items into one of those units. Once you fill a storage unit, you move on to the next one. At some point, you may decide that you no longer need that box of old books buried in the middle of a storage unit. You’ll need to pull everything out to get to it, but you don’t have the time today. Instead, you make a mental note that you’ll have to get to that someday. Then, one Saturday afternoon, when you finally have the time, you remove everything sitting in front of that box of books so you can discard them. You need to regularly free up space like this, or you’ll never have sufficient space for any new items.

That’s similar to the way NAND works inside NVMe SSDs. You fill devices (like the storage units), and that cleanup is what the industry calls “garbage collection.” Garbage collection is the process of reclaiming space from stale data you’ve tagged for eventual deletion: the drive goes into that location, pulls out the data that is in the way, discards the data you no longer need, and then puts everything back in its place. Unfortunately, you can’t read or write data at that particular location until the garbage collection process finishes.
Once the device is full, you’re stuck waiting for garbage collection to create space before you can write new data. This is when latency spikes appear, your applications start slowing down, and you start facing some of the frustrating wait times mentioned earlier.
The more an application writes to a device, the fuller the device gets, and the longer you’ll wait to read your data back.
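If you like back-of-the-envelope math, here’s a tiny Python sketch of why fullness hurts so much. It’s a toy model, not ScaleFlux code: it assumes the block being cleaned is roughly as full as the drive overall, so garbage collection ends up doing about 1 / (1 − fullness) physical writes for every write your application issues. The function name is just an illustrative helper.

```python
# Toy model of garbage collection overhead on an SSD (illustrative only).
# Assumption: a victim block is about as full of valid data as the drive is
# overall, so GC must relocate that valid data before it can erase the block.

def write_amplification(fullness: float) -> float:
    """Approximate physical writes per host write at a given drive fullness."""
    return 1.0 / (1.0 - fullness)

for fullness in (0.50, 0.70, 0.80, 0.90, 0.95):
    wa = write_amplification(fullness)
    print(f"{fullness:>4.0%} full -> ~{wa:4.1f}x physical writes behind every host write")
```

The curve stays gentle until the drive gets close to full, then shoots up steeply, which is why the slowdowns tend to feel like they arrive all at once.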
Want to see what this scenario looks like? Watch here.
One way to address this is to simply add more devices into the servers, or add more servers to the rack, or add more racks to your datacenter. That’s like buying more storage units. You never run out of space, but this is both inefficient and costly.
A more common way to address this is at the device level by reserving some space. Reserving space, known as overprovisioning, sets aside storage that isn’t available for user data. For example, you might take a 7.68 terabyte drive and overprovision it down to 6.4 terabytes, or even an extreme 3.2 terabytes. This practice keeps the device’s performance consistently high and its latency low. However, you’re paying for all that space and only get to use a fraction of it. That’s like only filling each storage unit halfway, which is both inefficient and costly.
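To put rough numbers on that trade-off, here’s a quick sketch using the capacities above (the arithmetic is mine, and real drives vary in how much raw NAND they carry; the helper function is just for illustration):

```python
# Back-of-envelope cost of overprovisioning, using the capacities from the text.

def overprovision_stats(raw_tb: float, usable_tb: float) -> tuple[float, float]:
    reserved = raw_tb - usable_tb
    op_percent = reserved / usable_tb * 100      # overprovisioning as usually quoted
    wasted_percent = reserved / raw_tb * 100     # share of purchased capacity you can't use
    return op_percent, wasted_percent

for raw, usable in [(7.68, 6.4), (7.68, 3.2)]:
    op, wasted = overprovision_stats(raw, usable)
    print(f"{raw} TB drive formatted to {usable} TB: "
          f"~{op:.0f}% overprovisioning, ~{wasted:.0f}% of purchased capacity reserved")
```

Formatting the drive all the way down to 3.2 terabytes reserves more than half of the capacity you paid for.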
Not everyone can keep adding more drives and wasting time, space, and money.
The ultimate goal is to always have some free space available when we have new “stuff” to store, whether it’s items in storage or data on a device. If there were a way to shrink that “stuff”, we could create more free space and minimize garbage collection collisions.

The ScaleFlux magic data shrinker
This is exactly what happens on the ScaleFlux CSD 3000 NVMe SSD! We leverage transparent, inline, device-level hardware compression to reduce the physical footprint of the data being stored. We’re not storing less data; we’re storing the same amount of data in a smaller physical space. This compressed data minimizes garbage collection collisions, resulting in lower latency and higher performance for applications.
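As a rough illustration (not a measurement of the CSD 3000, and the compression ratios below are hypothetical examples since real ratios depend on your data), here’s how a compression ratio translates into spare physical space that garbage collection can use:

```python
# Illustrative only: inline compression stores the same logical data in a
# smaller physical footprint, leaving more spare space on the drive.

def physical_footprint(logical_tb: float, compression_ratio: float) -> float:
    return logical_tb / compression_ratio

capacity_tb = 7.68
logical_data_tb = 6.0
for ratio in (1.0, 1.5, 2.0):
    used = physical_footprint(logical_data_tb, ratio)
    spare = capacity_tb - used
    print(f"{ratio:.1f}:1 compression -> {used:.1f} TB physically used, "
          f"{spare:.1f} TB spare on a {capacity_tb} TB drive")
```

More spare space means garbage collection can do its housekeeping without getting in the way of your reads and writes.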

Some of the ScaleFlux CSD 3000 NVMe SSD features:
- Transparent, Inline Compression
- Transparent, Inline Encryption
- Capacity Multiplier
- Higher Performance
- Insanely Less Latency
- Endurance Multiplier
- Tunable Overprovisioning
All of these features require zero application changes and no host memory or CPU resources! When we say transparent, we mean transparent: zero changes to the software stack, the OS, or anything else in that space. Just plug it in for instantly better results.
Don’t take my word for it. Request a demo with our team to learn how you, too, can shrink your data and become the wizard of your company. Hat and wand not included.