Storwize V7000 Easy Tier: SATA RAID10 vs SAS RAID6

When IBM released its SPC-1 Easy Tier benchmark on DS8000 earlier this year, it was done with SATA RAID10 and SSD RAID10, so when we announced Storwize V7000 with Easy Tier for the midrange, the natural assumption was to pair SATA RAID10 with SSD RAID10 again. But it seems to me that 600GB SAS RAID6 + SSD might be a better combination than 2TB SATA RAID10 + SSD.

Let’s examine the relative pricing and performance, as modeled in IntelliMagic’s Disk Magic (licensed by IBM for marketing use and made available to IBM Business Partners). First, consider this configuration:

  • Storwize V7000 head unit plus expansion tray
  • 44 x 600GB 10K RPM SAS configured as 2 hot spares, plus 3 arrays of 12+2 RAID6.
  • 3 x 300GB SSDs configured as 1 hot spare, plus 1 mirrored pair.
  • That’s 20 TB (18 TiB) allowing about 7% for internal overheads
  • No capacity contribution from SSD since all extents start life on the magnetic tier
  • By the way, be careful if you plan to use Capacity Magic for calculating capacity and drive counts, as it tends to overdo hot spares for SSDs

And compare it to this configuration:

  • Storwize V7000 head unit plus two expansion trays
  • 24 x 2TB SATA configured as 2 hot spares, plus 11 mirrored pairs.
  • 3 x 300GB SSDs configured as 1 hot spare, plus 1 mirrored pair.
  • That’s also about 20TB (18 TiB) by my calculations (sanity-checked in the quick sketch below)
  • No capacity contribution from SSD since all extents start life on the magnetic tier
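
For the record, here’s a quick back-of-envelope check of those usable-capacity figures. It’s a minimal Python sketch, and the ~7% overhead factor is my working allowance for internal overheads rather than a published IBM figure:

    # Rough usable-capacity check for both configs (decimal TB and binary TiB).
    # The ~7% overhead allowance is an assumption, not a published IBM figure.
    GB = 1000**3   # decimal gigabyte, as drive vendors count capacity

    def usable(data_drives, drive_gb, overhead=0.07):
        net_bytes = data_drives * drive_gb * GB * (1 - overhead)
        return net_bytes / 1000**4, net_bytes / 1024**4   # (TB, TiB)

    # SAS config: 3 arrays of 12+2 RAID6 -> 3 x 12 = 36 data spindles of 600GB
    print("SAS RAID6  : %.1f TB / %.1f TiB" % usable(36, 600))
    # SATA config: 11 mirrored pairs -> 11 drives' worth of 2TB capacity
    print("SATA RAID10: %.1f TB / %.1f TiB" % usable(11, 2000))

Both configurations land at roughly 20 TB / 18 TiB, which is what makes this a fair like-for-like comparison.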

So first, let us compare the relative purchase price of the two configurations (based on list price, and on good authority that any discounts would be applied evenly to both configs). Surely SATA will be cheaper, right? But no, it isn’t…

  • SAS config cost approx 100,000 groats
  • SATA config cost approx 100,000 groats

I’m not disclosing details of the groat-to-dollar exchange rate, but I think it’s clear that the decision between SATA RAID10 and SAS RAID6 is not a price decision.

The reasons that SATA isn’t cheaper include:

  • SATA is only available in a 3.5″ format, which requires a 12-bay disk tray (rather than the 2.5″ 24-bay)
  • Storwize V7000 has significant license charges per disk tray
  • The SATA configs I used did not have an existing 2.5″ tray to house the SSDs, so even though there were only a few SSDs, they required a whole 2.5″ tray to be added and licensed.
  • Note that more trays also means higher 7×24 maintenance upgrade costs if the standard warranty (next business day) is not for you
  • Note too that remote replication, if you use it, is licensed per tray
  • And note finally and very importantly that Storwize V7000 is very well-priced overall – all we are talking about here is how the structure of the pricing affects your choice of configuration

So now let us compare performance, using Disk Magic. I will measure to the nearest 100 IOPS and work to a maximum I/O latency of 20 ms (MS Exchange Jetstress requires sub-20 ms response, for example, so it’s a reasonable cut-off to choose).

First I try the OLTP workload preset, which is based on 50/50 read/write with a 10KB I/O size and with 28% of reads being sequential. I also enable Easy Tier based on light skewing (i.e. assuming I/Os are reasonably spread across the data on the disk, rather than heavily localized to a few extents).

  • SAS config peaks at 5,800 IOPS (disk bound)
  • SATA config peaks at 5,100 IOPS (disk bound)
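
For intuition (and not as a substitute for Disk Magic), here’s a crude spindle-only sketch of why more SAS spindles can out-run SATA RAID10 even with the RAID6 write penalty. The per-drive IOPS figures (~140 for 10K RPM SAS, ~80 for 7.2K RPM SATA) and the write penalties (6 for RAID6, 2 for RAID10) are rule-of-thumb assumptions, and the model deliberately ignores cache and the SSD tier:

    # Spindle-only host IOPS estimate: raw drive IOPS divided by the blended
    # back-end cost per host I/O. Per-drive IOPS and RAID write penalties are
    # rule-of-thumb assumptions; cache and the Easy Tier SSD layer are ignored.
    def host_iops(drives, iops_per_drive, write_penalty, read_frac=0.5):
        raw = drives * iops_per_drive
        cost_per_io = read_frac + (1 - read_frac) * write_penalty
        return raw / cost_per_io

    # 42 active 10K SAS spindles (3 x 12+2 arrays); 22 active SATA (11 pairs)
    print(f"SAS RAID6  : ~{host_iops(42, 140, 6):,.0f} IOPS")
    print(f"SATA RAID10: ~{host_iops(22, 80, 2):,.0f} IOPS")

The absolute numbers come out much lower than Disk Magic’s, because Disk Magic is also modeling the cache and the SSD tier absorbing the hottest I/O, but the direction of the gap is the same.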

On that workload it’s clearly better to use SAS RAID6 rather than SATA RAID10. But my experience is that most customers run read/write ratios more like 70/30.

As an aside, one of the other workload choices in Disk Magic is SPC-1, which shows as a 40/60 read/write ratio. I can’t help feeling that would favour NetApp, who have some very cool write technology (WAFL and write journaling) but are not so strong on reads. But I digress…

Anyway, for the second run I’ll manually set the I/O size to 10KB and use Disk Magic’s basic default of 70/30, with only 15% of reads being sequential.

  • SAS config peaks at 4,800 IOPS (disk bound)
  • SATA config peaks at 4,200 IOPS (disk bound)
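
Re-running the same rough spindle-only sketch for this mix (same rule-of-thumb drive and penalty assumptions as before, cache and SSD still ignored) shows the gap pointing the same way:

    # Same spindle-only sketch as above, re-run with a 70/30 read/write mix.
    for name, drives, per_drive, penalty in (("SAS RAID6  ", 42, 140, 6),
                                             ("SATA RAID10", 22, 80, 2)):
        print(f"{name}: ~{drives * per_drive / (0.7 + 0.3 * penalty):,.0f} IOPS")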

So it does seem that the overall results are fairly consistent – 10K RPM SAS RAID6 will give you 10-15% better performance than SATA RAID10 for the same money. I tried a few other combinations (2 mirrored pairs of SSDs, higher skew on Easy Tier, and so on), and the relative cost and performance ratio stayed about the same.

Overall I think I’d rather have SAS RAID6 than SATA RAID10. Of course you could use SAS RAID5, but I’m not a fan of RAID5 (maybe it’s that NetApp influence coming through again).

If I were going to buy a Storwize V7000, I think I would want to embrace Easy Tier for most if not all of the pools on the system. As my last blog post said, embrace the architecture; don’t try to bend it to your old way of thinking. Storwize V7000 is not just a box of disks – it’s much smarter than that.

15 Responses

  1. Hi Buddhist,

    RAID6 should be used for all drives larger than 500GB. Rebuild times are problematic, so dual-parity is needed for full protection.

    But I still can’t figure out why you’d use RAID10 on SATA drives. You are losing half the capacity on very slow drives. I would rather use RAID6 for SATA drives too. SATA drives are not meant for performance storage applications in any way. This is why tiering is done in the first place.

    Instead of using 44 x 600GB SAS, it would be much better to use, let’s say, 20 x 600GB SAS in RAID10 (or RAID6) for performance-hungry applications and 8 x 2TB SATA for data storage and non-performance applications.

    But I’d avoid using RAID6 for any kind of high-performance production environment.

    NetApp’s WAFL develops a real problem after you’ve used it for some time, because of how WAFL writes and reads data to/from the array. Over time, what was supposed to be a sequential-write-optimized workload becomes a highly random read/write environment.

    • @Damir,

      If you were to use a mix of SAS and SATA drives, you’d be missing out on the benefits of Easy Tier and the SSD layer. While Easy Tier can move data between SAS and SATA, IBM advises against it.

      The 20 x 600GB SAS drives might be of a similar performance level to the 3 x 300GB SSDs with Easy Tier backed onto 20 SATA drives, but your problem would always be one of balancing which applications should be on the fast vs the slow storage, and when application usage changes, how do you quickly move that data to faster storage without disruption?

      • Which drive technology does Storwize use as a starting point for Easy Tier? SAS or SSD? If it is SAS, then you can ‘upgrade’ data to SSDs and ‘downgrade’ to SATA. I see no problem with that, as this is how EMC’s FAST works.

        Is there any technology in Storwize that can do online LUN migration between different drive technologies or different drive groups? Let’s say from RAID10 to RAID6, or from SATA to SAS?

        • Damir, Easy Tier will work off either SATA or SAS as a base. Easy Tier is a 2-tier system, so you can use SAS and SSD, or SATA and SSD. Of course Storwize V7000 also allows non-disruptive volume movement between tiers, but Easy Tier is about sub-LUN automation.
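
          Conceptually, the sub-LUN piece works something like this toy Python sketch (my illustration only, not the actual Easy Tier algorithm, heat metric, or migration thresholds):

              # Toy sub-LUN tiering sketch: keep the hottest extents on SSD.
              # Purely conceptual; not IBM's actual Easy Tier algorithm.
              def rebalance(extent_heat, ssd_extent_slots):
                  """extent_heat maps extent id -> recent I/O count; returns
                  the extents that should live on the SSD tier this cycle."""
                  hottest = sorted(extent_heat, key=extent_heat.get, reverse=True)
                  return set(hottest[:ssd_extent_slots])

              heat = {"e1": 900, "e2": 15, "e3": 430, "e4": 2, "e5": 310}
              print(rebalance(heat, ssd_extent_slots=2))   # -> {'e1', 'e3'} (order may vary)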

          Like

    • Damir, the only reason for using RAID10 on SATA instead of RAID6 is performance. With Easy Tier we are trying to use large drives for a production workload that would normally require small drives. SATA RAID6 is going to be another whole step down in performance.

      • I see no point in using SATA drives for performance at all. SATA drives (7.2k) are rated at around 80 raw IOPS, while 15k SAS drives have around 180 raw IOPS. Pure mechanics, no cache involved.

        This is the reason why I can’t figure out this setup. :) But if it’s only working between two tiers, as you mentioned, then it’s understandable.

        Again, I don’t see the point in using SATA drives for performance. :)

        • C’mon. We all know why he chose this absolutely silly config: to make up something to back up his rather pointless exercise here, namely that SAS in RAID6 is faster than SATA in RAID10 but they cost the same, thanks to IBM’s sky-high license fees.

          Moreover, using data migration as an argument for SATA in RAID10 instead of RAID6 makes this article even dumber: the whole point of tiering is that you have faster and slower tiers, and software takes care of data migration between them according to (pre)set policies (frequency of access, size, age, etc.).
          Setting up one fast-but-small tier and another not-quite-as-fast-but-somewhat-bigger tier is a lack of understanding, nothing else.

  2. […] good article from the Storage Buddhist (thanks Jim !) on SATA RAID10 vs SAS RAID6 on the Storwize V7000 (Easy […]

  3. Hi everyone, Dimitris from NetApp here.

    Jim, thanks for showing people RAID6…

    @Damir and Jim: Indeed, NetApp is heavily write-optimized. However, the read degradation you’re referring to was addressed long ago (and can only be reproduced today by not following best practices).

    Volumes can have automatic read reallocation, effectively keeping the system humming even with workloads that used to be hostile to NetApp systems several years back (sequential read after random write, for instance).

    So, with 44 x 600GB drives in RAID-DP (plus 4 spares), a NetApp 3210 (similar in capacity to the V7000) with Flash Cache gets about:

    18K sustained IOPS with 15K RPM drives (25% 64K large-block sequential, the rest random, at 8K I/O sizes since our sizing tools don’t do 10K blocks), and about 12K with 10K RPM drives.

    5,700 IOPS with SATA RAID-DP, which is eerily close to the V7000’s RAID10 result for SATA, and the reason why we keep saying RAID-DP can give you about RAID10 performance with better protection.

    I assumed a 1TB working set.

    I know this is not the exact same sizing Jim used, but it’s close (Jim, since you’re IBM and resell our gear, you probably have access to the NetApp sizing tools; give it a whirl).

    We win performance bakeoffs all the time, even when simulating a 5-year churn as is the case with some Exchange PoCs we’ve done.

    D

  4. […] Storwize V7000 Vs the Rest – a Quick SPC-1 Performance Roundup Posted on November 18, 2010 by storagebuddhist This post is in response to the discussion around my recent Easy Tier performance post. […]

  5. This article makes ZERO sense – you are arguing SAS is the same price as SATA because of IBM’s rip-off licensing fees? Wow, talk about powerful stupid, self-defeating articles…
    In addition to this joke you are simply ignoring the elephant in the room, the OBVIOUS capacity advantage of SATA – y’know, the ONLY reason why people go with SATA, in case you haven’t heard of it – by simply throwing in a “plus 11 mirrored pairs” super-dumb note there…
    …and finally you are concluding that SAS in RAID6 is still faster than SATA in RAID10.
    Umm, yeah… errr, thanks Cpt Obvious, here’s your cookie: http://1.bp.blogspot.com/_fWLtJmEhLG0/SWy4IyP3_yI/AAAAAAAADVs/uUcTL59byyc/s400/captain+obvious.jpg

    This piss-poor article is rather an insult to anyone’s intelligence – the reader has to be just as powerful stupid to eat up this crap as the writer is to pull these cheap, lowlife tricks, I have to say.
