Workloads
No, there is no dedicated video transcoding accelerator, though this is a market we monitor closely.
The CSDs improve application performance in two ways: (1) the drives respond with lower latency under heavier workloads (as reported by Percona in their testing of a MySQL workload); (2) they offload compression/decompression from the CPU and stop that work from polluting host DRAM, resulting in better overall system performance.
The engines and processing capability are aligned with the overall throughput of the CSD. Each CSD you add to your system adds just enough compute capability to handle that CSD’s tasks, so compression throughput scales with capacity. Trying to use CPU cores to scale compression throughput to match the capability of multiple NVMe drives is a losing battle.
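To get a feel for the mismatch, here is a minimal Python sketch that measures single-core compression throughput, using zlib purely as a stand-in for inline compression done on the CPU (the actual algorithms and levels used in production systems will differ):

```python
import os
import time
import zlib

# Build a 2 MiB test buffer: half random (incompressible), half zeros
# (highly compressible), as a rough mixed workload.
data = os.urandom(1 << 20) + b"\x00" * (1 << 20)

reps = 20
start = time.perf_counter()
for _ in range(reps):
    zlib.compress(data, 6)  # zlib level 6 on one CPU core
elapsed = time.perf_counter() - start

mb_per_s = reps * len(data) / (1 << 20) / elapsed
print(f"single-core zlib throughput: ~{mb_per_s:.0f} MB/s")
```

On typical server cores this lands in the hundreds of MB/s, while a single PCIe Gen4 NVMe drive can sustain several GB/s of sequential throughput, so matching even one drive, let alone several, would tie up many CPU cores.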
That’s one of the cool things with transparent compression – it works seamlessly with any application. The host sees a block storage device. The drive automatically compresses data on writes and decompresses the data on reads.
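Conceptually, the transparency works like the toy model below: the host sees ordinary block reads and writes, and compression happens entirely inside the device. This is an illustrative Python sketch (a dictionary standing in for the media, zlib standing in for the on-drive compression engine), not the drive's actual implementation:

```python
import zlib


class TransparentCompressionDevice:
    """Toy model of a block device with transparent compression."""

    def __init__(self):
        # Logical block address -> compressed bytes stored on "media".
        self._blocks = {}

    def write(self, lba: int, data: bytes) -> None:
        # The device compresses on the write path, invisibly to the host.
        self._blocks[lba] = zlib.compress(data)

    def read(self, lba: int) -> bytes:
        # ...and decompresses on the read path, so the host always
        # gets back exactly the bytes it wrote.
        return zlib.decompress(self._blocks[lba])

    def physical_bytes(self) -> int:
        # Capacity actually consumed on media (after compression).
        return sum(len(b) for b in self._blocks.values())


dev = TransparentCompressionDevice()
payload = b"A" * 4096  # one highly compressible 4 KiB logical block
dev.write(0, payload)
assert dev.read(0) == payload  # host sees its data unchanged
print(dev.physical_bytes() < len(payload))  # fewer bytes hit the media
```

The key point the sketch illustrates: nothing in the application or filesystem changes; only the bytes that actually land on media shrink.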
Across a variety of applications (databases, analytics, HPC, AI), customers report improvements in data access times (latency) and system responsiveness (work per second). We’ve seen improvements of up to 4x.
Users see the biggest benefits under heavier workloads. In light workloads, the storage capacity gain will be the primary benefit. A Ferrari won’t get you there any faster if you don’t press the gas pedal 😊