Everyone agrees that enterprise-class SSDs from companies like STEC Inc. are fast, run cool, and are generally very attractive. Most people also realise that SSDs are an order of magnitude more expensive than SAS drives, and that this is not expected to change dramatically within the next five years. That means we have to figure out how to leverage SSDs without buying a whole lot of them.
This is part of a broader question. How do we best leverage memory technologies to improve the performance of storage systems? Teams under Tom West at Data General (CLARiiON) and Moshe Yanai at EMC (Symmetrix) helped pioneer the use of cache in big shared storage systems, but that was 20 years ago, and in the last few years we have seen the first real hints of change in those architectures.
Examples of attempts to leverage a high speed layer include:
- IBM Storwize V7000 uses Easy Tier to automatically move sub-LUN data (typically 256 MiB chunks) to and from SSD, improving both read and write performance. EMC's CX4 FAST, by comparison, uses 1 GiB chunks, so it may be less efficient. (Similar technology is also available on the IBM DS8800, Hitachi's VSP and 3PAR, but interestingly still not on EMC's VMAX.)
- NetApp (IBM N series) Flash Cache uses PCIe solid-state technology as a read cache. This is particularly effective for NetApp, because ONTAP delivers very good write performance but has traditionally been weaker on reads.
- EMC FAST Cache uses SSD drives to extend both the read and write cache.
- IBM XIV uses distributed read/write caches in its grid architecture: each module of 12 drives has its own dedicated 16 GB cache. The key to this approach is that the caches are not centralised.
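The sub-LUN tiering idea behind Easy Tier and FAST can be sketched as a simple heat map: count I/Os per fixed-size chunk over a measurement period, then keep only the hottest chunks on the limited SSD tier. The `SubLunTierer` class below is an invented toy for illustration only; the real products use far more sophisticated migration logic.

```python
from collections import defaultdict

CHUNK_SIZE = 256 * 1024 * 1024  # 256 MiB chunks, as on Easy Tier


class SubLunTierer:
    """Toy heat-map tiering: count I/Os per chunk, then promote the
    hottest chunks to a capacity-limited SSD tier. Illustrative only."""

    def __init__(self, ssd_chunk_capacity):
        self.ssd_chunk_capacity = ssd_chunk_capacity
        self.heat = defaultdict(int)   # chunk id -> I/O count this period
        self.ssd_chunks = set()        # chunk ids currently on SSD

    def record_io(self, byte_offset):
        """Record one I/O; return True if it would be served from SSD."""
        chunk = byte_offset // CHUNK_SIZE
        self.heat[chunk] += 1
        return chunk in self.ssd_chunks

    def rebalance(self):
        """Promote the hottest chunks to SSD, demote everything else."""
        hottest = sorted(self.heat, key=self.heat.get, reverse=True)
        self.ssd_chunks = set(hottest[:self.ssd_chunk_capacity])
        self.heat.clear()              # start a fresh measurement period


tierer = SubLunTierer(ssd_chunk_capacity=2)
for _ in range(100):
    tierer.record_io(0)                  # chunk 0: very hot
for _ in range(50):
    tierer.record_io(CHUNK_SIZE * 4)     # chunk 4: warm
tierer.record_io(CHUNK_SIZE * 9)         # chunk 9: cold
tierer.rebalance()
print(sorted(tierer.ssd_chunks))         # [0, 4] - only the hot chunks fit
```

The chunk size matters here: the smaller the chunk, the more precisely the SSD capacity tracks the genuinely hot data, which is why a 256 MiB granularity can make better use of the same flash than a 1 GiB one.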
So which one is best? That's the classic dumb question, and the answer is "it depends". It depends on ease of use, cost, your workload, and your requirements. If I were buying, I wouldn't care too much which architectural approach was taken, but I'd definitely want to make sure it was easy to use.
Getting into a religious war about where best to stick your high speed layer or what to build it from isn’t ever likely to be productive. The important thing is that vendors are using innovative approaches to provide faster access, and that sounds like a good thing to me.