At FMS 2024, Kioxia presented a proof-of-concept demonstration of its proposed RAID offload methodology for enterprise SSDs. The impetus for this is fairly clear: as SSDs get faster with every generation, RAID arrays face a serious problem in maintaining (and scaling up) performance. Even in cases where the RAID operations are handled by a dedicated RAID card, a simple write request in, say, a RAID 5 array involves two reads and two writes to different drives. In cases where there is no hardware acceleration, the data from the reads must travel all the way back to the CPU and main memory for further processing before the writes can be performed.
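To make that write penalty concrete, here is a minimal C sketch of the RAID 5 read-modify-write parity update implied above; the function and buffer names are illustrative, and the actual drive I/O is abstracted away.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * RAID 5 partial-stripe update:  P_new = P_old ^ D_old ^ D_new
 * Servicing one logical write costs two reads (old data, old parity)
 * and two writes (new data, new parity).
 */
void raid5_partial_stripe_write(const uint8_t *d_old,  /* read #1: old data block    */
                                const uint8_t *p_old,  /* read #2: old parity block  */
                                const uint8_t *d_new,  /* incoming host data         */
                                uint8_t *p_new,        /* new parity to be written   */
                                size_t block_len)
{
    for (size_t i = 0; i < block_len; i++)
        p_new[i] = p_old[i] ^ d_old[i] ^ d_new[i];
    /* The array would then issue write #1 (d_new to the data drive)
     * and write #2 (p_new to the parity drive). */
}
```

Without hardware acceleration, both source blocks are staged in host DRAM and the XOR loop above runs on the host CPU.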
Kioxia has proposed using PCIe direct memory access along with the SSD controller's controller memory buffer (CMB) to avoid moving data up to the CPU and back. The required parity computation is performed by an accelerator block resident within the SSD controller.
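The sketch below is conceptual only, not Kioxia's firmware: the helpers cmb_alloc, p2p_dma_read, and xor_accel are hypothetical stand-ins (stubbed with plain host memory so the code compiles) for the CMB allocator, the PCIe peer-to-peer DMA engine, and the in-controller XOR accelerator described above. The point is the data path: the source blocks never pass through host DRAM.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for controller-firmware primitives (assumptions). */
static uint8_t *cmb_alloc(size_t len)            { return malloc(len); }   /* buffer in local CMB   */
static void p2p_dma_read(const uint8_t *peer_cmb, uint8_t *dst, size_t len)
{                                                /* peer BAR read over PCIe, no host DRAM hop */
    memcpy(dst, peer_cmb, len);
}
static void xor_accel(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t len)
{                                                /* in-controller XOR engine */
    for (size_t i = 0; i < len; i++) dst[i] = a[i] ^ b[i];
}

/*
 * Offloaded RAID 5 parity update: old data and old parity are pulled from
 * peer SSDs' CMBs via peer-to-peer DMA, the new parity is computed by the
 * controller-resident accelerator, and only then programmed to NAND.
 */
void offloaded_parity_update(const uint8_t *peer_d_old_cmb,
                             const uint8_t *peer_p_old_cmb,
                             const uint8_t *d_new, uint8_t *p_new, size_t len)
{
    uint8_t *d_old = cmb_alloc(len);
    uint8_t *p_old = cmb_alloc(len);

    p2p_dma_read(peer_d_old_cmb, d_old, len);
    p2p_dma_read(peer_p_old_cmb, p_old, len);

    xor_accel(p_new, p_old, d_old, len);   /* P_old ^ D_old         */
    xor_accel(p_new, p_new, d_new, len);   /* ... ^ D_new = P_new   */

    free(d_old);
    free(p_old);
}
```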
In Kioxia’s PoC implementation, the DMA engine can access the entire host address space (including the peer SSDs' BAR-mapped CMBs), allowing it to receive and transfer data as required from neighboring SSDs on the bus. Kioxia noted that its offload PoC saw close to a 50% reduction in CPU utilization and upwards of a 90% reduction in system DRAM utilization compared to software RAID performed on the CPU. The proposed offload scheme can also handle scrubbing operations without taking up host CPU cycles for the parity computation.
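For context, a scrub pass boils down to a parity check along the lines sketched below; this is a generic RAID 5 check assumed for illustration, not taken from Kioxia's PoC. Under the proposal, this XOR work would run on the drive-side accelerator rather than the host CPU.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Verify one stripe: the XOR of all data blocks must equal the stored parity. */
bool stripe_parity_ok(const uint8_t *const *data_blocks, size_t n_blocks,
                      const uint8_t *parity, size_t block_len)
{
    for (size_t i = 0; i < block_len; i++) {
        uint8_t acc = 0;
        for (size_t b = 0; b < n_blocks; b++)
            acc ^= data_blocks[b][i];
        if (acc != parity[i])
            return false;   /* mismatch: stripe needs repair */
    }
    return true;
}
```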
Kioxia has already taken steps to contribute these features to the NVM Express working group. If accepted, the proposed offload scheme could become part of a standard that would be widely available across multiple SSD vendors.