Where Should I Shove This Solid State Drive?

Everyone agrees that enterprise-class SSDs from companies like STEC Inc are fast, cool, and generally pretty nice. Most people also realise that SSDs are an order of magnitude more expensive than SAS drives, and that nobody expects this to change dramatically within the next five years. This means we have to figure out how to leverage SSDs without buying a whole lot of them.

This is part of a broader question: how do we best leverage memory technologies to improve the performance of storage systems? Teams under Tom West at Data General (CLARiiON) and Moshe Yanai at EMC (Symmetrix) helped pioneer the use of cache in big shared storage systems, but that was 20 years ago, and only in the last few years have we seen the first real hints of change in those architectures.

Examples of attempts to leverage a high speed layer include:

  • IBM Storwize V7000 uses Easy Tier to automatically move sub-LUN data (typically 256 MiB chunks) to and from SSD, improving both read and write performance. EMC CX4 FAST, by comparison, uses 1 GB chunks, so it might be less efficient. (Similar technology is also available on the IBM DS8800, Hitachi’s VSP, and 3PAR, but interestingly still not on EMC’s VMAX.)
  • NetApp (IBM N series) Flash Cache uses PCIe solid-state technology as a read cache. This is particularly effective for NetApp because ONTAP delivers very good write performance but has traditionally been weaker on reads.
  • EMC Fast Cache uses SSD drives to expand both read and write cache.
  • IBM XIV uses distributed read/write caches in its grid architecture: each module of 12 drives has its own dedicated 16 GB cache. The key to this approach is that the caches are not centralised.
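The sub-LUN tiering idea in the first bullet can be sketched in a few lines. This is an illustrative toy, not any vendor's actual algorithm: count I/O activity per fixed-size chunk, then on each rebalance cycle promote the busiest chunks to a limited pool of SSD slots. The chunk size echoes Easy Tier's 256 MiB; the slot count and class names are assumptions made up for the example.

```python
# Toy sub-LUN automated tiering monitor (illustrative only):
# track I/O per fixed-size chunk, promote the hottest chunks to SSD.
from collections import Counter

CHUNK_SIZE = 256 * 1024 * 1024  # 256 MiB chunks, as in Easy Tier
SSD_CHUNK_SLOTS = 4             # hypothetical SSD tier capacity, in chunks

class TieringMonitor:
    def __init__(self):
        self.io_counts = Counter()   # chunk index -> I/O count this cycle
        self.ssd_resident = set()    # chunk indices currently on SSD

    def record_io(self, byte_offset):
        """Attribute one host I/O to the chunk containing this offset."""
        self.io_counts[byte_offset // CHUNK_SIZE] += 1

    def rebalance(self):
        """Promote the busiest chunks; demote whatever fell off the list."""
        hottest = [c for c, _ in self.io_counts.most_common(SSD_CHUNK_SLOTS)]
        promote = set(hottest) - self.ssd_resident
        demote = self.ssd_resident - set(hottest)
        self.ssd_resident = set(hottest)
        self.io_counts.clear()       # start a fresh measurement cycle
        return promote, demote

mon = TieringMonitor()
for offset in [0, 0, 0, CHUNK_SIZE, CHUNK_SIZE, 5 * CHUNK_SIZE]:
    mon.record_io(offset)
promote, demote = mon.rebalance()   # chunks 0, 1 and 5 move to SSD
```

Real implementations of course add hysteresis, migration-cost awareness, and background data movement, but the core loop is this measure/promote/demote cycle.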

So which one is best? That’s a typical dumb question, and the answer is “it depends”. It depends on ease of use, cost, your workload, and your requirements. If I were buying I wouldn’t care too much which architectural approach was taken, but I’d definitely want to make sure it was easy to use.

Getting into a religious war about where best to stick your high speed layer, or what to build it from, isn’t ever likely to be productive. The important thing is that vendors are using innovative approaches to provide faster access, and that sounds like a good thing to me.

4 Responses

  1. Nice blog with a lot of information. I hope everyone will like it, and I will wait for your next good post.

  2. Jim,

    good stuff here, no doubt in my mind. I wonder if you have time to answer this question: I know nothing about SSDs, but I occasionally hear that over time (as the number of reads and writes increases) their capacity decreases. Is this true?

    All the best & thanks in advance,

    MarkD:-)

    • I’m not an expert at the component level, and there are a lot of different SSD implementations at various price/quality points, but I have never heard of any loss of capacity on the kinds of SSDs that the big vendors use in their storage systems. I guess it’s theoretically possible if you got a whole bunch of failed cells, but I expect that would entitle you to a replacement under warranty anyway.
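      To put rough numbers on the wear question: flash cells are rated for a limited number of program/erase cycles, and a back-of-the-envelope calculation (illustrative figures below, not any specific drive's spec) shows why enterprise drives tend to be retired or replaced long before wear becomes visible as lost capacity:

```python
# Back-of-the-envelope flash endurance estimate. All numbers here are
# illustrative assumptions, not a real product's specification.
# Enterprise SSDs reserve spare cells (over-provisioning), so worn
# cells are remapped rather than shrinking the advertised capacity.

def years_to_wearout(capacity_gb, pe_cycles, writes_gb_per_day,
                     write_amplification=2.0):
    """Estimate years until the rated program/erase cycles are exhausted."""
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / writes_gb_per_day / 365

# e.g. a hypothetical 200 GB SLC drive rated for 100,000 P/E cycles,
# absorbing 500 GB of host writes per day:
lifetime = years_to_wearout(200, 100_000, 500)   # ~55 years
```

The write-amplification factor is a guess; real values depend on the controller and workload, but even pessimistic assumptions leave enterprise SLC parts with decades of rated write endurance.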

  3. Hi StorageBuddhist, thanks for your reply!

    markD:-)
