We’ve discussed how NVMe opens several technical opportunities, as well as challenges, for today’s data centers. In theory, implementing NVMe unlocks the storage device from the hardware controller and delivers performance far beyond what is possible with SATA and SAS.
Aside from performance considerations, one of the biggest concerns for data center managers is redundancy. While NVMe storage can be attached to traditional hardware controllers, a more efficient approach to redundancy is via a software-defined storage (SDS) platform.
When an organisation switches to NVMe, it must re-examine how it will continue to meet its high-availability requirements. This is particularly true for an organisation with very demanding SLAs.
Hardware-based RAID controller manufacturers will need to adapt to the emergence of NVMe and offer solutions that connect to existing U.2 server backplanes to support hardware-based NVMe RAID. A few RAID controller cards on the market already support NVMe, but the market is still young. With hardware-based NVMe RAID in a fairly early stage of development, organisations making the switch will face architectural design decisions: whether to rely on software-based HCI solutions such as vSAN or Ceph, Linux software RAID or LVM mirroring, or application-level high-availability replication such as SQL Server Always On or Oracle ASM mirroring. One can argue that these software-based design decisions should be made even with hardware RAID controllers, since a controller protects against only a single point of failure.
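As a concrete illustration of the software-based route, a mirrored volume across two NVMe drives can be built with Linux software RAID (mdadm). This is a minimal sketch, not a production recipe: the device names (/dev/nvme0n1, /dev/nvme1n1), array name, and mount point are assumptions and will differ per system, and the commands require root privileges.

```shell
# Identify NVMe devices first (names below are assumptions for this sketch).
lsblk -d -o NAME,SIZE,MODEL

# Create a RAID-1 mirror across two NVMe drives with Linux software RAID.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1

# Persist the array configuration so it assembles on boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Format and mount the mirror like any other block device.
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/nvme-mirror

# Verify mirror health and resync status.
cat /proc/mdstat
```

Note that a mirror like this protects against a drive failure within one server; it does not replace node-level redundancy from an HCI layer or application-level replication, which is why those options appear alongside it above.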