Performance and trade-offs
Yes. Because each drive includes compression engines sized to handle its own per-drive data rate, you can scale throughput linearly with each incremental drive you add: the compression work is distributed and parallelized across the drives.
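As a minimal sketch of that parallelism (the mount points, file names, and sizes here are assumptions for illustration, not a ScaleFlux tool): each drive receives its own write stream, and because each CSD compresses its own data, aggregate throughput grows with the drive count.

```python
import concurrent.futures
import os

# Hypothetical mount points, one per CSD (names are assumptions).
DRIVES = ["/mnt/csd0", "/mnt/csd1", "/mnt/csd2", "/mnt/csd3"]
CHUNK = b"x" * (4 << 20)  # 4 MiB buffer; compression happens on the drive

def write_stream(mount: str, chunks: int = 256) -> str:
    """Write ~1 GiB to one drive; the host does no compression work."""
    with open(os.path.join(mount, "bench.dat"), "wb") as f:
        for _ in range(chunks):
            f.write(CHUNK)
    return mount

# One writer per drive: each CSD compresses its own stream independently,
# so aggregate throughput scales with the number of drives.
with concurrent.futures.ThreadPoolExecutor(len(DRIVES)) as pool:
    for done in pool.map(write_stream, DRIVES):
        print(f"finished {done}")
```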
ScaleFlux frees up CPU cycles for your applications by taking on the burden of compression and decompression. With consistently lower response times from the drives, users can realize an increase in CPU efficiency (less time lost to “wait” cycles).
The ScaleFlux CSDs enable users to cache larger volumes of data in the compute nodes, reducing the cache thrash that would otherwise generate additional network traffic.
Applications and servers see performance gains with ScaleFlux drives vs. other enterprise NVMe SSDs. By offloading storage processing from the CPU and optimizing flash memory to reduce write amplification, latency can be greatly improved and bottlenecks alleviated by keeping traffic off the bus. By using dedicated hardware on the drive, compression can be processed nearly 100x faster than in the CPU. The latency and performance improvements are direct gains from doing hardware-based compression in the SSD controller ASIC. The benefits for network traffic are more second-order improvements: by increasing the effective storage space in the server, the server reduces the number of times it needs to fetch data from across the network, alleviating network traffic.
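To put a rough number on the host-side burden being offloaded, here is a small, self-contained sketch that times software compression on one CPU core (zlib stands in for generic software compression; the data shape and sizes are assumptions):

```python
import os
import time
import zlib

# 128 MiB of semi-compressible data: random halves interleaved with zeros,
# a rough stand-in for mixed real-world records (shape is an assumption).
block = os.urandom(512) + bytes(512)
data = block * (128 * 1024 * 1024 // len(block))

t0 = time.perf_counter()
compressed = zlib.compress(data, level=1)  # fastest software setting
elapsed = time.perf_counter() - t0

print(f"compression ratio: {len(data) / len(compressed):.2f}:1")
print(f"host CPU time: {elapsed:.2f} s "
      f"({len(data) / elapsed / 2**20:.0f} MiB/s on one core)")
# In-drive compression removes this entire cost from the host: writes run
# at NVMe speed while these CPU cycles go back to the application.
```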
In mixed read/write workloads such as OLTP, customers report a 2-4x increase in transactions per second. Since the CSDs do not use CPU resources or host DRAM to manage compression, performance scales with each drive you add… until you max out how many database transactions the CPU can handle, of course!
In any workload involving a mix of read and write traffic, you can see 2x or better performance compared to ordinary NVMe SSDs.
In the 4th generation of drives with our PCIe 5 ASIC, expect performance to be approximately 2x on all of the performance metrics. The current generation ships with up to 16TB of physical capacity and supports up to 24TB of data storage. The next generation plans for up to 32TB physical and 64TB of data storage, though higher capacities are possible as NAND densities increase.
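The capacity math behind those figures: usable data capacity is physical capacity multiplied by the achieved compression ratio, capped by the drive’s maximum addressable (logical) capacity. A small sketch of that arithmetic (the function name is mine, not a ScaleFlux API):

```python
def effective_capacity_tb(physical_tb: float,
                          compression_ratio: float,
                          max_logical_tb: float) -> float:
    """Usable data capacity: compression gains are capped by the drive's
    maximum addressable (logical) capacity."""
    return min(physical_tb * compression_ratio, max_logical_tb)

# Current generation: 16 TB physical, up to 24 TB of data storage.
print(effective_capacity_tb(16, 2.0, 24))  # 24.0 -> capped at the 1.5:1 logical limit
print(effective_capacity_tb(16, 1.2, 24))  # 19.2 -> limited by data compressibility
```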
Potentially in the future. The current drives are intended for servers.
No, though when data is incompressible (e.g., pre-encrypted), we won’t be able to leverage compression to improve performance and QoS. If you suddenly switch from sending compressible data to sending incompressible data, the write performance will eventually decline as the NAND on the drive fills. However, it’s the average compressibility of the data that will influence performance more than short-term shifts.
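To see why pre-encrypted data yields no compression benefit, compare ratios on log-like versus random bytes. The snippet below uses zlib as a stand-in for the drive’s hardware compression engine (an assumption, for illustration only); ciphertext is statistically indistinguishable from random data, so it compresses to roughly its original size.

```python
import os
import zlib

def ratio(buf: bytes) -> float:
    """Compression ratio, original size over compressed size."""
    return len(buf) / len(zlib.compress(buf))

log_like = b"ts=2024-01-01T00:00:00 level=INFO msg=request ok\n" * 20_000
encrypted_like = os.urandom(len(log_like))  # ciphertext looks like random bytes

print(f"log-like data:       {ratio(log_like):.1f}:1")        # large gains
print(f"encrypted-like data: {ratio(encrypted_like):.2f}:1")  # ~1:1, no gain
```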