Last month, Samsung launched a 30TB SSD with one of the highest single-drive capacities we’ve yet seen. Seagate announced a 60TB SSD even earlier, though that appears to have been more a publicity stunt than a shipping product; the company’s Nytro enterprise SSD landing page shows no such capacity available, topping out at 15TB. But the question of whether Seagate or Samsung deserves credit for the largest SSD has been rendered somewhat moot by the claims of another challenger: Nimbus Data and its ExaDrive DC100, with a whopping 100TB capacity, at least in theory (more on that in a moment).
Nimbus Data is claiming that the ExaDrive DC100 consumes 85 percent less power per terabyte than competing drives (just 0.1W per TB). There’s some implication that the device hits these low power targets by emphasizing affordability and capacity rather than sheer speed. Unlike the Seagate and Samsung hardware, the ExaDrive uses both conventional SATA and SAS (Serial Attached SCSI).
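For a sense of scale, here’s a quick back-of-the-envelope calculation of what that per-terabyte figure implies for the whole drive, assuming (our assumption, not Nimbus Data’s statement) that the 0.1W-per-TB claim scales linearly across the full 100TB capacity:

```python
# Implied total power draw, assuming the claimed 0.1W/TB scales linearly
WATTS_PER_TB = 0.1       # Nimbus Data's claimed power per terabyte
CAPACITY_TB = 100        # ExaDrive DC100 capacity

total_watts = WATTS_PER_TB * CAPACITY_TB  # ~10W for the entire drive
print(f"Implied total draw: {total_watts:.0f}W")
```

If the figure holds, roughly 10W for 100TB would be remarkably low for an enterprise drive of this size.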
Nimbus has blown past its competitors in raw capacity by leveraging what it refers to as a multi-processor architecture, and what sounds like a RAID-like method of splitting data between multiple NAND controllers. The company writes:
Conventional SSDs are based on a single flash controller. As flash capacity increases, this monolithic architecture does not scale, overwhelmed by error correction operations and the sheer amount of flash that must be managed. In contrast, ExaDrive is based on a distributed multiprocessor architecture. Inside an ExaDrive-powered SSD, multiple ultra-low power ASICs exclusively handle error correction, while an intelligent flash processor provides wear-leveling and capacity management in software.
We’re not aware of any third-party analysis of this method, its performance, or its suitability for various workloads compared with more traditional NAND controllers and interfaces. Nimbus Data claims up to 100K IOPS in random read/write workloads, and while that’s a standard figure for random reads, it’s above-average for random writes. Overall drive throughput is listed at 500MBps, without any clarification on how these figures were measured. Nimbus is claiming that the drive is rated for an “unlimited” number of drive writes per day, but this may reflect how long it takes to actually write a full drive of data rather than any significant improvement in longevity.
Consider: An SSD capable of sustaining 500MBps of throughput can write a gigabyte of data every two seconds, or 30GB of data per minute. That’s 1.8TB of data per hour, or 43.2TB of data per day. But how meaningful is the notion of drive writes per day when the SSD is literally incapable of writing a full drive of data in the relevant time frame? It’s not clear that Nimbus Data has made a meaningful improvement in NAND reliability, so much as it jettisoned a metric that might not make sense if applied to its own enormous products.
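The arithmetic above can be sketched in a few lines, using the article’s figures (decimal units assumed throughout):

```python
# How long does it take to fill a 100TB SSD at a sustained 500MBps?
THROUGHPUT_MBPS = 500    # sustained write throughput, MB/s
CAPACITY_TB = 100        # ExaDrive DC100 capacity

gb_per_minute = THROUGHPUT_MBPS * 60 / 1000   # 30 GB/minute
tb_per_hour = gb_per_minute * 60 / 1000       # 1.8 TB/hour
tb_per_day = tb_per_hour * 24                 # 43.2 TB/day

days_to_fill = CAPACITY_TB / tb_per_day       # ~2.3 days per full drive write
print(f"{tb_per_day:.1f} TB/day -> {days_to_fill:.2f} days per full write")
```

At roughly 2.3 days per complete write, the drive physically cannot complete even one full drive write in a day, which is why an “unlimited DWPD” rating tells us little about the underlying NAND’s endurance.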
We’ll be curious to see whether any of these drives actually ship or win wide deployment. Announcing enormous SSDs has become something of a storage market pastime and a way for NAND manufacturers to claim advances in sheer size over spinning disks, but the massive difference in cost has thus far blunted the impact of these enormous capacities. Right now, even the cheapest 4TB SSDs are $1,800 (for non-enterprise models), while a 12TB WD Gold enterprise drive is $489. That gives spinning disks an ongoing ~10x advantage in cost per GB, and while that gap has shrunk compared with what it used to be, it’s not nothing, either. Still, the company claims some major advances in power consumption and cost per GB overall, so if this multi-controller approach bears fruit, we’ll likely see it deployed more widely going forward.
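The ~10x figure falls out directly from the prices quoted above (decimal gigabytes assumed):

```python
# Cost-per-GB comparison using the article's prices
ssd_price, ssd_tb = 1800, 4    # cheapest 4TB consumer SSD
hdd_price, hdd_tb = 489, 12    # 12TB WD Gold enterprise HDD

ssd_per_gb = ssd_price / (ssd_tb * 1000)   # $0.450/GB
hdd_per_gb = hdd_price / (hdd_tb * 1000)   # ~$0.041/GB

ratio = ssd_per_gb / hdd_per_gb            # ~11x in favor of spinning disks
print(f"SSD ${ssd_per_gb:.3f}/GB vs HDD ${hdd_per_gb:.3f}/GB (~{ratio:.0f}x)")
```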
Now read: How do SSDs work?