Vendors typically only benchmark their fastest systems in any one class, which means careful reflection is required to draw a good understanding from a handful of results. The usual “nyah nyah ours is faster” kind of analysis and comment that seems to permeate the blogosphere doesn’t really achieve anything, that’s for sure.
Let’s talk about benchmarking more generally…
1. Bragging Rights
There has been a lot of focus on storage product performance in recent years, and enterprise storage benchmark results now generally outstrip the performance that most customers require. I always saw benchmarks as being about transparency, but to many people they are about bragging rights. If the XIV motto is “fast enough” and benchmarking is about bragging rights, then I guess I can understand why benchmarking isn’t always a good fit.
In 2008, when NetApp benchmarked an EMC CX3-40 and published the SPC-1 result, they showed why EMC had avoided SPC-1. I guess benchmarks haven’t traditionally been considered good marketing unless you can somehow claim to be number 1 in a segment, or at least stick one to a key competitor.
2. Badly Balanced Systems
No vendor ever benchmarks a badly balanced volume layout, yet most customers run exactly such systems because it’s too hard to keep them balanced. What we see in the field is that when apps are migrated to XIV they go faster than they did on competing systems. Some of that, though not all, must be better load balancing, because the improvement is out of proportion to any benchmark and seems more noticeable the more jumbled up the workload is. The sketch below illustrates why an unbalanced layout costs so much.
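Here’s a minimal sketch of that effect (a toy model with made-up numbers, not how any particular array actually places data): confine each volume to a few drives and the hottest drive ends up carrying a big slice of all the I/O, which caps the whole system; stripe every volume across every drive and no single spindle is ever the bottleneck.

```python
import random

# Toy model: compare a hand-placed layout, where each volume lives on a few
# drives, against a layout that stripes every volume evenly across all
# drives. Throughput is capped by the busiest drive, so skew costs you.

DRIVES, VOLUMES, IOS = 48, 12, 100_000
random.seed(1)

# Skewed demand: a couple of volumes are much hotter than the rest.
demand = [random.paretovariate(1.2) for _ in range(VOLUMES)]
total = sum(demand)
weights = [d / total for d in demand]

def busiest_drive_share(placement):
    """Fraction of all I/O landing on the single busiest drive."""
    load = [0.0] * DRIVES
    for vol, w in enumerate(weights):
        for drive in placement[vol]:
            load[drive] += w * IOS / len(placement[vol])
    return max(load) / IOS

# Hand-placed: each volume confined to 4 consecutive drives.
manual = {v: list(range(v * 4, v * 4 + 4)) for v in range(VOLUMES)}
# Balanced: every volume striped across every drive.
balanced = {v: list(range(DRIVES)) for v in range(VOLUMES)}

print("busiest drive, manual layout:   %.1f%% of all I/O" % (100 * busiest_drive_share(manual)))
print("busiest drive, balanced layout: %.1f%% of all I/O" % (100 * busiest_drive_share(balanced)))
```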
3. Breakage, Recovery & Other Admin Tasks
Benchmarks don’t test scenarios like what happens to performance when a controller fails or a drive is rebuilding. NetApp’s SPC-1 benchmarks in 2008 tried to show the difference in performance when snapshots were running, and highlighted the difference in efficiency between redirect-on-write (NetApp and XIV) and copy-on-write (most others).
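As a rough illustration of where that efficiency difference comes from, here’s a toy model (mine, not either vendor’s implementation): with copy-on-write, the first overwrite of each snapshotted block costs extra back-end I/Os to preserve the original, while redirect-on-write just writes the new data somewhere fresh and leaves the snapshot pointing at the old block.

```python
# Toy model contrasting the two snapshot techniques. The point: copy-on-write
# pays extra back-end I/O on the first overwrite of a snapshotted block,
# while redirect-on-write does not.

class CopyOnWrite:
    """First overwrite of a snapshotted block costs 3 I/Os:
    read the original, copy it to the snapshot area, write new data in place."""
    def __init__(self, blocks):
        self.live = dict(blocks)      # block -> data
        self.snap = {}                # preserved originals
        self.ios = 0

    def write(self, block, data):
        if block not in self.snap:    # first overwrite since the snapshot
            self.ios += 2             # read original + write it to snap area
            self.snap[block] = self.live[block]
        self.ios += 1                 # write new data in place
        self.live[block] = data

class RedirectOnWrite:
    """Every overwrite costs 1 I/O: new data goes to a fresh location and the
    live map is repointed; the snapshot keeps the old pointer."""
    def __init__(self, blocks):
        self.live = dict(blocks)
        self.snap = dict(blocks)      # snapshot pins the original pointers
        self.ios = 0

    def write(self, block, data):
        self.ios += 1                 # single write to a new location
        self.live[block] = data

if __name__ == "__main__":
    initial = {b: "old" for b in range(100)}
    cow, row = CopyOnWrite(initial), RedirectOnWrite(initial)
    for b in range(100):              # overwrite every snapshotted block once
        cow.write(b, "new")
        row.write(b, "new")
    print("copy-on-write I/Os:    ", cow.ios)   # 300
    print("redirect-on-write I/Os:", row.ios)   # 100
```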
So while I still like benchmarks, I do have to concede that they are a bit of a blunt instrument.
The only benchmark with a decent smattering of results that includes XIV is the Microsoft Exchange 2007 ESRP. What follows is my summary of those results. It’s worth noting that these numbers are way lower than most vendors claim for generic IOPS, way lower than equivalent SPC-1 numbers, and way lower than the real-life results most customers see in the field.
Worth noting when reading these is that vendors often stop adding drives once the bottleneck moves somewhere else in their system.
Update: Also worth noting is that vendors usually benchmark with end-to-end RAID10, but seldom install with end-to-end RAID10. I wonder what the benchmark results would be if they used RAID5/6 like they do in real life? Perhaps only the XIV and NetApp results are realistic in that respect. A back-of-envelope sketch after the list below shows how much parity RAID changes the picture.
- DS8300: 33,600 IOPS with replication. 640 drives, RAID10
- DMX4-4500: 28,800 IOPS with replication. 480 drives, RAID10
- DS5300: 26,400 IOPS with replication. 252 drives, RAID10
- FAS3170: 23,000 estimated IOPS (28,800 without replication). 184 drives, RAID6
- CX4-480: 21,000 IOPS with replication. 186 drives, RAID10
- SVC Entry Edition (2 nodes with 3x DS3400): 16,128 IOPS (20,160 without replication). 144 drives, RAID10
- XIV: 16,000 IOPS with replication. 15 nodes, each with 8GB cache and 12 drives (180 drives total), RAID10
- DS5020: 11,200 estimated IOPS (14,000 actual without replication). 112 drives, RAID10
- FAS2050: 3,200 estimated IOPS (4,000 actual without replication). 44 drives, RAID6
- EVA4400: 2,460 estimated IOPS (3,072 actual without replication). 46 drives, RAID10
- Sun 7310: 1,536 estimated IOPS (1,920 without replication). 7x SSD + 66x SATA, RAID10
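Here’s the kind of back-of-envelope arithmetic I have in mind, using the standard rule-of-thumb write penalties (2 back-end I/Os per random write for RAID10, 4 for RAID5, 6 for RAID6). The drive count, per-drive IOPS, and read/write mix below are all assumptions I picked purely for illustration, not figures from any ESRP report:

```python
# Back-of-envelope sketch: front-end random IOPS from the same set of drives
# under different RAID levels. Write penalties are the standard rule-of-thumb
# figures; everything else here is an assumed input.

def front_end_iops(drives, iops_per_drive, read_fraction, write_penalty):
    """Front-end IOPS F such that the back-end load matches drive capability:
    F * (read_fraction * 1 + (1 - read_fraction) * write_penalty) = backend."""
    backend = drives * iops_per_drive
    cost_per_io = read_fraction + (1 - read_fraction) * write_penalty
    return backend / cost_per_io

# Assumed: 480 drives at 150 back-end IOPS each, 50/50 read/write mix.
for name, penalty in [("RAID10", 2), ("RAID5", 4), ("RAID6", 6)]:
    print(f"{name}: {front_end_iops(480, 150, 0.5, penalty):,.0f} front-end IOPS")

# Prints roughly 48,000 for RAID10, 28,800 for RAID5 and 20,571 for RAID6:
# the same spindles deliver 40-60% fewer IOPS once parity RAID is in play.
```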