A brief overview of how the ScaleFlux CSD 2000 can deliver even more value to users of NVIDIA Magnum IO GPUDirect Storage.
ScaleFlux recently announced it will be supporting NVIDIA Magnum IO GPUDirect Storage (GDS) with the ScaleFlux CSD 2000.
GPUDirect Storage creates a direct path between storage and GPU memory. As data sets grow in volume and velocity and GPUs grow in performance and capability, AI/ML workloads need to move ever larger amounts of data from the storage media to the GPUs at ever increasing speeds. Traditionally, that data takes a detour through the CPU and host DRAM, which adds latency and constrains the volume and throughput of data moving from storage to the GPUs (the dreaded I/O bottleneck that hurts efficiency). GPUDirect Storage lets data take the express lane from the drives to the GPUs, skipping the detour through host memory.
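In code, that express lane is the cuFile API from the GDS user library. The sketch below is minimal and illustrative: the file path is hypothetical, error handling is abbreviated, and build details (linking against libcufile and the CUDA runtime) vary by system.

```c
/* Minimal sketch: read a file directly into GPU memory via cuFile (GDS). */
#define _GNU_SOURCE            /* for O_DIRECT, which GDS requires */
#include <cuda_runtime.h>
#include <cufile.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 1 << 20;               /* 1 MiB transfer (placeholder) */
    const char *path = "/mnt/csd/train.bin";   /* hypothetical file on the CSD */

    cuFileDriverOpen();                         /* initialize the GDS driver */

    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    CUfileDescr_t descr = {0};
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    descr.handle.fd = fd;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    void *dev_buf;
    cudaMalloc(&dev_buf, size);
    cuFileBufRegister(dev_buf, size, 0);        /* pin the GPU buffer for DMA */

    /* DMA straight from storage into GPU memory, no host bounce buffer */
    ssize_t n = cuFileRead(fh, dev_buf, size, /*file_offset*/ 0, /*buf_offset*/ 0);
    printf("read %zd bytes directly into GPU memory\n", n);

    cuFileBufDeregister(dev_buf);
    cudaFree(dev_buf);
    cuFileHandleDeregister(fh);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```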
ScaleFlux is the first “smart storage” entrant in the GPUDirect Storage ecosystem. ScaleFlux CSDs further increase the volume and velocity of data available to the GPUs for processing. The CSD 2000 deploys transparent compression/decompression, which differs from alternative solutions in that it requires no code changes to the application, incurs no latency or performance penalties, reduces data movement, and scales throughput with storage capacity. With this integrated feature, the CSD 2000 can amplify the value of GPUDirect Storage in two ways:
- Capacity expansion: the CSD 2000 can store up to 4x the amount of data per bit of Flash compared to ordinary SSDs. This both expands the amount of “high velocity” data available locally to the GPUs and reduces the cost of storing that data, freeing up budget dollars to buy more GPUs (see the back-of-the-envelope sketch after this list).
- Offloading decompression: developers often fetch compressed data from remote storage, but decompressing it on the host places a significant burden on the CPU before the preparation process can generate the appropriate training data. Offloading the decompression function to the CSD can accelerate data preparation and free up valuable compute cycles for the actual analytics and training activities.
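To make the capacity-expansion point concrete, here is a back-of-the-envelope sketch. The capacity and price figures are assumptions for illustration only; the up-to-4:1 ratio is the bound cited above, and real ratios depend on how compressible the data is.

```c
/* Illustrative arithmetic for compression-driven capacity expansion. */
#include <stdio.h>

int main(void)
{
    const double raw_tb          = 8.0;   /* physical Flash capacity, TB (assumed) */
    const double ratio           = 4.0;   /* compression ratio, up to 4:1 per the text */
    const double cost_per_raw_tb = 200.0; /* illustrative $/TB, not a real quote */

    double effective_tb          = raw_tb * ratio;          /* 32 TB of user data */
    double cost_per_effective_tb = cost_per_raw_tb / ratio; /* $50 per stored TB */

    printf("%.0f TB of user data on %.0f TB of Flash\n", effective_tb, raw_tb);
    printf("$%.0f per effective TB (vs $%.0f raw)\n",
           cost_per_effective_tb, cost_per_raw_tb);
    return 0;
}
```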
GPUDirect Storage, in turn, enhances the value of the CSD 2000. Because compression and decompression happen inside the drive, the data no longer needs the CPU on its way to a GPU buffer; GPUDirect Storage completes the picture by enabling a direct path, e.g., through a PCIe switch, that skips the CPU altogether.
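For contrast, here is a minimal sketch of the conventional route that this direct path replaces (the file path is again hypothetical): the same read takes two hops and a pinned host bounce buffer, exactly the staging that GDS eliminates.

```c
/* The traditional two-hop path: storage -> host DRAM -> GPU memory. */
#define _GNU_SOURCE
#include <cuda_runtime.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 1 << 20;
    int fd = open("/mnt/csd/train.bin", O_RDONLY);  /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    void *host_buf, *dev_buf;
    cudaMallocHost(&host_buf, size);   /* pinned host bounce buffer */
    cudaMalloc(&dev_buf, size);

    ssize_t n = pread(fd, host_buf, size, 0);        /* hop 1: storage -> host DRAM */
    if (n > 0)
        cudaMemcpy(dev_buf, host_buf, (size_t)n,     /* hop 2: host DRAM -> GPU    */
                   cudaMemcpyHostToDevice);

    cudaFree(dev_buf);
    cudaFreeHost(host_buf);
    close(fd);
    return 0;
}
```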
Key features of CSD 2000 with NVIDIA Magnum IO GPUDirect Storage include:
- Combines PCIe SSD performance levels with an innovative Flash mapping architecture and built-in compression/decompression engines
- Achieves “penalty-free compression”: users can scale compression/decompression throughput as they add storage capacity, and get compression/decompression without hurting latency, a benefit that host-based software compression cannot match
- Enables users to take full advantage of data compressibility to reduce the cost of storing each byte of user data, shaving up to 70% off the costs of ordinary enterprise SSD storage
We’re excited to see NVIDIA Magnum IO GPUDirect Storage enter the marketplace and to add our support for this initiative to improve data center efficiency.
Helpful Links:
- Read our Press Release announcing GDS support on the ScaleFlux CSD 2000
- Learn more about NVIDIA Magnum IO GPUDirect Storage