
ScaleFlux Delivers Computational Storage with Support for NVIDIA Magnum IO GPUDirect Storage


ScaleFlux CSD 2000 allows users to leverage Computational Storage for AI/ML and analytics acceleration with GPUs

ScaleFlux, Inc., the leader in deploying Computational Storage at scale, today announced its flagship CSD 2000 solution will support NVIDIA® Magnum IO GPUDirect® Storage, enabling users to take full advantage of Computational Storage for artificial intelligence and machine learning (AI/ML) and analytics acceleration. As NVIDIA Magnum IO™— the IO subsystem for the modern, GPU-accelerated data center — gains increasing popularity, users can combine it with ScaleFlux CSD 2000 to take their data center to the next level with Computational Storage: a new class of storage drive that brings compute right to the data.

Training with larger and more diverse data can both reduce bias and improve statistical variance, improving the overall accuracy of an AI/ML model. While effective in boosting accuracy, massive increases in data set sizes introduce new challenges around storing, preparing, and delivering the data to the GPUs. Developers often fetch compressed data from remote storage; however, this imposes a significant decompression burden on the host before the data can be prepared into training inputs.
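The conventional pipeline described above can be sketched in a few lines. This is an illustrative toy (not ScaleFlux or NVIDIA code): it uses Python's `zlib` to stand in for whatever codec the stored shards use, and the `load_shard` helper is a hypothetical name. The point is that every shard passes through a host-CPU decompression step before it can be staged for the GPU; an in-drive transparent decompression engine would remove exactly this step from the hot path.

```python
import zlib

# Simulate a compressed shard as it might arrive from remote storage.
record = b"sample training record " * 1000
compressed = zlib.compress(record)

# Conventional flow: the host CPU decompresses every shard before the
# data can be prepared and handed to the GPU. This is the step that a
# drive with built-in decompression would offload.
def load_shard(blob: bytes) -> bytes:
    return zlib.decompress(blob)

data = load_shard(compressed)
assert data == record
print(f"compressed {len(record)} bytes down to {len(compressed)} "
      f"({len(compressed) / len(record):.1%} of original)")
```

Even in this toy, the decompression call sits on the critical path of every load; at data-center scale that CPU time competes directly with data preparation for the GPUs.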

With today’s release, ScaleFlux addresses this problem by delivering the first use of Computational Storage to advance AI/ML and data analytics with GPUs. ScaleFlux CSD 2000 deploys transparent compression/decompression, which differs from alternative solutions in that it requires no code changes to the application, does not incur latency or performance penalties, reduces data movement, and scales throughput with storage capacity. CSD 2000 offloads and improves parallelism in the processor-intensive compression/decompression functions, freeing up GPU cycles to perform analytics and model training instead of bogging down in data preparation. It also expands the capacity per flash bit by 3-5x, without the added latency or reduced throughput of other compression options.

“As our data sets grow larger and more complex, we are constantly looking for ways to improve AI initiatives with the latest technology,” said Jeff Hookailo, CEO of Middle Canyon. “We are already seeing positive results in our testing from the compression/decompression function in the ScaleFlux CSD, combined with the direct transfer of data between the CSD and the NVIDIA GPUs using Magnum IO GPUDirect Storage. We are excited to see the ways this combination will enhance how we train and work with AI moving forward.”

“Modern AI and data science workloads are powered by vast amounts of data, which makes it critical to enable fast communications between GPU computing and data center storage systems,” said Kushal Datta, Senior Product Manager at NVIDIA. “ScaleFlux’s addition of NVIDIA Magnum IO GPUDirect Storage in their flagship CSD 2000 solution helps support AI deployments by significantly boosting system bandwidth while decreasing latency in the data center.”

Key features of CSD 2000 with NVIDIA Magnum IO GPUDirect Storage include:

  • Combines PCIe SSD performance levels with an innovative Flash mapping architecture and built-in compression/decompression engines
  • Achieves “penalty-free compression,” so users can scale compression/decompression throughput as they add storage capacity and deliver compression/decompression without hurting latency — a benefit that is impossible to achieve with host-based software compression
  • Enables users to take full advantage of data compressibility to reduce the cost of storing each byte of user data, shaving up to 70% off the costs of ordinary enterprise SSD storage

“Accelerated computing with NVIDIA GPUs is increasingly critical to data-driven businesses. However, data preparation and decompression consumes precious time. This is where CSD 2000 adds value to users. It handles the decompression process and eliminates up to 87% of the data loading time so the GPU can get to work faster on the training activity,” said Hao Zhong, Co-founder and CEO of ScaleFlux. “We at ScaleFlux have been collaborating with the NVIDIA team for the past year, and are thrilled to support NVIDIA Magnum IO GPUDirect Storage with the innovative capabilities of Computational Storage.”

The ScaleFlux® Computational Storage Drive CSD 2000 Series brings exceptional performance, scalability, and TCO savings to mainstream flash deployments. ScaleFlux drives combine up to 8TB of the latest 3D NAND Flash technology with hardware-accelerated compute engines, achieving incredible data read/write speeds and consistent low latency.

Visit www.ScaleFlux.com or contact us via [email protected] to arrange a demo or learn more about Computational Storage and how data center applications can benefit.

About ScaleFlux, Inc.

ScaleFlux is the pioneer in deploying Computational Storage at scale. Computational Storage is the foundation for modern data center infrastructure that provides responsive performance, affordable scaling, and an agile platform for data-driven, compute and storage I/O intensive applications. Founded in 2014, ScaleFlux is a well-funded startup with a team proven to deploy complex computing and solid-state storage solutions in volume. For more information, visit www.scaleflux.com.

Contact

Shannon Campbell
Offleash for ScaleFlux
[email protected]
