
In-storage Transparent Compression: A Catalyst for Innovating Infrastructure Software


This is the first part of our blog series on In-storage Transparent Compression: A Catalyst for Innovating Infrastructure Software. Part 1 is a prologue: why is in-storage transparent compression our best option to kick off the journey of commercializing computational storage drives (CSDs)?

In-storage transparent compression is the best option to kick off the journey of commercializing computational storage drives (CSDs) for two main reasons:

1. Zero adoption barrier: In-storage transparent compression does not demand any changes to the existing storage I/O software stack (e.g., filesystem, block layer, and driver) or I/O interface protocols (e.g., NVMe and SATA). This ensures seamless integration and deployment into existing infrastructure, without requiring user applications to change a single line of code.
2. Significant benefits: Lossless data compression (e.g., the well-known LZ77 and its variants such as lz4, zlib, and ZSTD) is applicable to almost any workload, yet it involves a significant amount of random data access that inevitably causes very high CPU/GPU cache miss rates, leading to very low hardware utilization efficiency. Hence, it is highly desirable to relieve the host CPU/GPU from executing lossless data compression.
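
To make the offload concrete, here is a minimal Python sketch of the host-side zlib work that in-storage compression eliminates. The block contents are made-up sample data, purely for illustration:

```python
import zlib

# A 4KB block filled with record-like sample data (made up for illustration).
block = (b"id=42,name=alice,balance=1000;" * 200)[:4096]

# Host-side zlib compression of one block: this is the CPU work that an
# in-storage compressor takes over for every 4KB block on the I/O path.
compressed = zlib.compress(block, level=6)
print(f"{len(block)} B -> {len(compressed)} B")
```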

After 3+ years of intense R&D, ScaleFlux recently released the world's first PCIe CSD (named CSD 2000), which internally carries out zlib (de)compression on each 4KB data block along the I/O path, transparently to the host. Its details were introduced in our previous blogs. At first glance, one may think that the full benefits of such in-storage transparent compression amount to transparently reducing storage cost and transparently improving IOPS, nothing more. In fact, that is exactly what we thought initially. As we carried out further research, we gradually realized that this is far from the complete picture. Beyond the obvious cost/IOPS benefits, transparent compression opens the door to very exciting system-level innovation opportunities that nevertheless remain largely unexplored. In several subsequent blogs, we will present case studies to demonstrate this innovation potential. Before that, this blog aims to explain where the innovation potential comes from.
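
Transparency here means the drive looks like any other NVMe block device to the host. The sketch below (device path hypothetical, illustrative only) writes one 4KB block through the unmodified POSIX I/O path; any compression happens entirely inside the drive:

```python
import os

# Hypothetical device path: CSD 2000 presents itself as a standard NVMe
# block device. Illustrative only -- writing to a raw device requires
# privileges and overwrites whatever data is stored there.
DEV = "/dev/nvme0n1"

fd = os.open(DEV, os.O_WRONLY)
os.pwrite(fd, b"\x00" * 4096, 0)  # an ordinary 4KB block write; the drive
os.close(fd)                      # compresses the block internally
```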

All data storage drives (e.g., SSD, HDD, optical disk, and tape) operate as block devices, where data are accessed in units of fixed-size blocks (e.g., 4KB) through standard block I/O interface protocols (e.g., NVMe and SATA). With normal storage drives (i.e., those without built-in transparent compression), computing systems naturally aim to fill each fixed-size 4KB block with useful data in order to minimize storage cost. It is well known that fixed-size block I/O imposes strict constraints on the design of data management software (e.g., relational databases, key-value stores, object stores, and filesystems). For example, to accommodate fixed-size block I/O, a B+ tree is forced to make all tree nodes the same size, a multiple of the block I/O size, and to fill each node as much as possible, which may not be theoretically optimal.

In sharp contrast, with in-storage transparent compression we no longer need to fill each fixed-size block with useful data, because any unoccupied space is simply compressed away inside the storage device. As a result, we can pack an arbitrary amount of useful data (e.g., 2KB, 1KB, or even 100B) into each 4KB I/O block. This leads to virtually variable-size block I/O: even though we still stick to the existing fixed-size block I/O interface such as NVMe or SATA, the amount of useful data per block can vary, while the true physical storage cost is not inflated. This essentially breaks the conventional fixed-size block I/O constraint, as illustrated in Fig. 1.

Figure 1: Illustration of the virtually variable-size block I/O enabled by in-storage transparent compression.
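
To see why the unused space costs (almost) nothing physically, consider the following Python sketch. Here zlib stands in for the drive's internal compressor, and random bytes stand in for useful data that is already incompressible; both are assumptions for illustration:

```python
import os
import zlib

BLOCK = 4096  # fixed-size I/O block

def padded_block(payload: bytes) -> bytes:
    """Pack a payload into one 4KB I/O block, zero-padding the unused space."""
    assert len(payload) <= BLOCK
    return payload.ljust(BLOCK, b"\x00")

# Blocks carrying 2KB, 1KB, and 100B of useful data: the zero padding
# compresses away, so physical cost tracks the useful data, not the block.
for n in (2048, 1024, 100):
    blk = padded_block(os.urandom(n))
    print(f"{n:>5} B useful -> ~{len(zlib.compress(blk))} B physical")
```

The padding is the simplest case; on a drive with built-in transparent compression, the same effect applies to any compressible content in the block.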

Over the past four decades, the entire data management software infrastructure has been built subject to the fixed-size block I/O constraint. The arrival of virtually variable-size block I/O, enabled by in-storage transparent compression, therefore brings unique opportunities to rethink the design and implementation of data management software. In this context, as illustrated in Fig. 2, all the innovations and modifications are confined entirely within the user application domain, while the entire I/O stack remains unchanged. This greatly lowers the development cost and adoption barrier, which is highly desirable to end users.

Figure 2: Innovations reside inside user applications only, without any changes to the I/O stack.

Based on how much the application must be modified, we can categorize the design space into three stages, as shown in Fig. 3. Each stage comes with a different software development cost and hence a different benefit vs. cost trade-off. Together, they form a new and very exciting frontier for innovating infrastructure software. In subsequent blogs, we will present case studies in each of the three categories, which we sincerely hope will attract more R&D activity towards this exciting but largely unexplored territory.

Figure 3: Illustration of the entire design space. 

ScaleFlux

ScaleFlux is the pioneer in deploying Computational Storage at scale. Computational Storage is the foundation for modern, data-driven infrastructure that enables responsive performance, affordable scaling, and agile platforms for compute- and storage-I/O-intensive applications. Founded in 2014, ScaleFlux is a well-funded startup with a leadership team proven in deploying complex computing and solid-state storage solutions in volume.