Storwize V7000 four-fold Scalability takes on VMAX & 3PAR

IBM recently announced that two Storwize V7000 systems can be clustered, in much the same way that two iogroups are clustered in a SAN Volume Controller environment. Clustering two Storwize V7000s creates a system with up to 480 drives (each control enclosure supports up to 240 drives), and any of the paired controllers can access any of the storage pools. Barry Whyte went one step further and said that if you apply for an RPQ you can cluster up to four Storwize V7000s (up to 960 drives).

This now seems to leave the low entry point as the only thing ‘midrange’ about the Storwize V7000 and creates an interesting situation where Storwize V7000 finds itself competing with systems like VMAX and 3PAR.

I have occasionally heard folks describe VMAX and 3PAR as being like XIV, but that never made sense to me; the architectures are miles apart. XIV is a grid where every group of 12 drives has its own local cache and its own four local Intel cores. That allows you to do things with cache that traditional paired-controller architectures (albeit clusters of pairs) can only dream of. 300 drives behind a controller pair is a whole different game from a grid node with 12 drives. To further highlight the difference, consider that most XIV systems installed are 15 nodes, but I’m guessing most VMAX and 3PAR deployments are 2 nodes.

But what struck me is that it’s not XIV but Storwize V7000 that is similar to VMAX and 3PAR.

Take a look at this simple table:

| System             | Ctlrs | Pairs           | Architecture           |
|--------------------|-------|-----------------|------------------------|
| VMAX-SE            | 2     | 1               | Controller Pair        |
| 3PAR F200          | 2     | 1               | Controller Pair        |
| 3PAR F400          | 2-4   | 1-2             | Clustered Pairs        |
| 3PAR T400          | 2-4   | 1-2             | Clustered Pairs        |
| 3PAR T800          | 2-8   | 1-4             | Clustered Pairs        |
| IBM Storwize V7000 | 2-8   | 1-4             | Clustered Pairs        |
| VMAX               | 2-16  | 1-8             | Clustered Pairs        |
| IBM XIV            | 6-15  | Grid, not pairs | Distributed Cache Grid |

And to make the point pictorially, take a look at these mockups of the architectures…

1. VMAX (which, by the way, uses 4Gbps FC on the drive side). The interconnect is RapidIO-based (a technology IBM used to use in its POWER servers). The RapidIO spec allows an absolute maximum distance of 1 metre – which means that all eight VMAX engines need to be in the same rack.

2. Storwize V7000 takes commodity interconnect one step further by using 8Gbps FC fabric as the interconnect (6Gbps SAS is used on the drive side). Best advice is to assume the iogroups are within shortwave connectivity of each other, i.e. 2 x 190m = 380 metres at 8Gbps using OM4 cables and a switch in the middle (see the short sketch after this list). Longer distances are theoretically possible, but not currently recommended. In most cases it may just be the convenience factor of being able to put the iogroups into separate racks in your data centre as the system grows.

3. 3PAR has a 4Gbps host-side attach, and 4Gbps FCAL on the drive side. The interconnect is based on a custom ASIC. I note that 3PAR have 4 separate models to cover the 2 to 8 node range. The T Class architecture doc says that the F Class is a scaled down version of the same architecture, but I couldn’t find details and it does raise the question of what the difference is between an F400 and a T400 (apart from clock speed and drive count supported). I’m not sure what the max supported distance is for the custom ASIC interconnect.
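
To make the shortwave distance arithmetic in point 2 concrete, here is a minimal sketch. The per-link limits below (150m for OM3, 190m for OM4 at 8Gbps) are the commonly quoted figures and should be treated as assumptions to verify against your optics and cable vendor’s specifications.

```python
# Rough estimate of how far apart two clustered iogroups can sit when they
# connect through FC switches using shortwave optics at 8Gbps.
# Per-link limits are assumptions based on commonly quoted 8Gbps FC figures.
SHORTWAVE_LIMIT_M = {"OM3": 150, "OM4": 190}  # metres per 8Gbps shortwave link

def max_iogroup_separation(cable_type: str, switches_between: int = 1) -> int:
    """Each switch in the path adds another full-length link segment."""
    per_link = SHORTWAVE_LIMIT_M[cable_type]
    return per_link * (switches_between + 1)

if __name__ == "__main__":
    # One switch in the middle on OM4: 2 x 190m = 380 metres, as above.
    print(max_iogroup_separation("OM4"))
```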

Your well-considered thoughts are welcome… If you spot any errors, please let me know.

16 Responses

  2. Hi Jim, D from NetApp here.

    Maybe explain a bit what is meant by SVC clustering – at least in the older code, there were some limitations for data movement between clusters and striping between clusters, weren’t there?

    The way I understood it, each pair of nodes had the full complement of features, but between node pairs some things were missing.

    Thx

    D

    • Back-end disk pools can be shared amongst all iogroups (node pairs = engines in VMAX-speak) but front-end volumes are owned by a single iogroup at any one time. Typical examples might be that you assign all your SAP volumes to one iogroup and your VMware and Microsoft volumes to another iogroup; or all your database volumes to one iogroup and all your logs to another iogroup; or OLTP volumes to one iogroup and OLAP volumes to another.
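
      As a minimal sketch of what that looks like in practice (the pool and volume names and the cluster address below are made up, and the CLI syntax should be checked against the Infocenter for your code level): volumes are pinned to an iogroup at creation time with the -iogrp parameter on mkvdisk.

      ```python
      import subprocess

      CLUSTER = "admin@v7000-cluster.example.com"  # hypothetical management address/user

      def svc(cmd: str) -> str:
          """Run a Storwize/SVC CLI command over SSH and return its output."""
          return subprocess.check_output(["ssh", CLUSTER, cmd], text=True)

      # Example split: SAP volumes owned by iogroup 0, VMware volumes by iogroup 1.
      for name, iogrp in [("sap_data01", 0), ("vmware_ds01", 1)]:
          svc(f"svctask mkvdisk -mdiskgrp Pool0 -iogrp {iogrp} "
              f"-size 500 -unit gb -name {name}")

      # lsvdisk reports the owning (caching) IO group for each volume.
      print(svc("svcinfo lsvdisk -delim :"))
      ```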

      • Hi Jim,

        Just to confirm… in a V7000 clustered system one IO Group (V7000) can create a volume with its space allocated from a Storage Pool on the other IO Group?

        If so, which IO Group does the RAID processing (presumably the IO Group owning the Storage Pool)?

        The IO presumably goes out the 8Gbps FC interconnect to the other IO Group. Is this shared with host IO, or do you have to dedicate FC ports for this purpose?

        Cheers
        MJG

        • Correct, in a V7000 clustered system one IO Group (V7000) can create a volume with its space allocated from a Storage Pool on the other IO Group.
          Correct, the RAID processing is done by the IO Group owning the Storage Pool.
          Correct, the IO goes out the 8Gbps FC interconnect to the other IO Group. All I/O traffic between iogroups, between the cluster and its hosts, and between sites in the case of remote replication goes out through the shared 8Gbps ports (8 ports on each iogroup). There are no dedicated ports.
          Regards, Jim

          • Jim.

            Thanks and this is an interesting view on the similarities that I had not considered before.

            There are some differences with a V7000 Clustered System vs a 3PAR. Whilst two 3PAR Controller Nodes “own” physical disk like a V7000 IO Group “owns” Storage Pools, a 3PAR volume is typically striped across all 3PAR Controller Nodes and their disks to get the greatest performance and balance possible.

            The 3PAR “Custom ASIC Connect” is a proprietary backplane built into the particular 3PAR Storage Server Controller enclosure that houses the Controller Nodes. This low-latency mesh interconnect presumably goes a long way towards their ability to distribute a volume’s activities across multiple nodes.

            Again an interesting comparison though.

            Would you typically share Storage Pools between V7000 IO Groups? Or would it be more common practice to silo the IO Groups and attempt to distribute the hosts’ volumes across the IO Groups to balance performance?

            Cheers
            MJG

            • The clustering feature is brand new on Storwize V7000, so questions of common practice are perhaps a little early to call. Initial best practice will be to be conservative: if you spread volumes from one iogroup across different back-ends, you wouldn’t want to do that with your tier 1 volumes, not initially anyway, until you have seen how it works for you. IBM is generally conservative about things like this until they have been field proven. No amount of lab testing can compare to the variety you get in the field. We took the same approach with Thin Provisioning, initially suggesting caution with tier 1 volumes but then taking the brakes off once it was shown to perform really well in the field. So my advice is to start conservatively, but expect to deploy more freely in time.

  3. I love the pictures…. great article.

  4. F400 vs. T400 is an interesting question.

    The T-Class controllers support a lot more disks, as you know, double the data cache, and 2 x dual-core CPUs vs. 1 x quad-core on the F Class. The T Class also supports a lot more IO slots (64 host ports vs. 24).

    Apart from drive count, that is about it really. Yes, there is an overlap between the F400 and the smaller T400 configurations, but the T Class scales a lot better.

  5. After a long time, I think IBM has come up with a great architecture in the midrange.
    The HDS AMS series is rock solid at delivering great performance and availability, and 3PAR is good, but its performance slows down once it goes beyond the 60% utilization level.

  6. That’s a really interesting post Jim.

    I’ve previously made similar observations about how both the XIV and SVC could be diagrammed in a manner similar to the VMax.

    It’s also amusing to graph the amount of CPU/Cache vs. storage capacity as the different systems scale up.

    Regards,
    Steven P.

  7. With the volume of data we are generating, the only thing I see that is going to have a good GHz/TB ratio would be a grid type of system. The V7000, VMAX, etc. will not be able to handle the volume of IO. Therefore, a federated grid-like system is the answer for me. Currently we have 120 PBs and growing daily, expected to reach 200 PBs by mid-2012. I need density and performance at the same time. We now buy storage based on TB per square foot: how much can a vendor fit in a tile with high performance? The other storage architectures are now Mickey Mouse!

  8. Hello Jim,

    It’s a very interesting article. Now that we are in 2012, is there any updated information in relation to your article?

    I have searched a lot through the V7000 admin guides, but it’s a little bit difficult to find the prerequisites for implementing a 2-node or 4-node V7000 cluster (is a 3-node cluster supported?). Is it mandatory to have an NTP server?

    Thank you,
    Kamal M.

  9. Good questions…

    Yes, just like SVC you can have one iogroup, two iogroups, three iogroups or four iogroups (each iogroup being a pair of Storwize V7000 canisters).

    Regarding NTP, Storwize V7000 is essentially the same as SVC. SVC can use NTP or an internal cluster time set on the console. In SVC6 the console is no longer required, which raises the question of where you set the time, and whether you need an NTP server if you don’t have a console. I don’t really do any hands-on these days, so these implementation-oriented questions are a bit out of my zone, but if you have a genuine requirement I can make some enquiries of the folks who do have up-to-date hands-on skills…

    The Storwize V7000 Infocenter has some info at http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp
    Look for “Adding another control enclosure into an existing system”.

    I agree that the documentation is light. It would be a good idea to submit an RPQ (lab support request) for your planned solution via your local IBM team; the approval will often come with some guidelines.

    There will likely be some additional features announced on April 12th and I expect better documentation to follow that.

  10. Same here. We need to add another control enclosure into an existing system, but the documentation in the Infocenter is very poor.
    I suppose I should add the new control enclosure as another expansion enclosure, so the SAS cabling will start from the 9th expansion enclosure. The Infocenter only says: “Procedure: Configure the Fibre Channel switch to allow for correct zoning between the control enclosures. The correct zoning provides a way for the Fibre Channel ports to connect to each other.”
    Is there any resource about this?

    Thank you.

    • Mark Eino said: “I suppose I should add the new control enclosure as another expansion enclosure, so the SAS cabling will start from the 9th expansion enclosure.”

      Don’t do that. Each head unit is FC cabled to the FC switches, and each V7000 is self-contained physically with its own disk drawers. The clustering is a logical configuration, not a cabling configuration.

      I agree that the documentation is fairly weak on this. It is perhaps explained better in some of the PowerPoints than it is in the Infocenter or Redbooks. I will see what I can find in the way of documentation and post a follow-up link.
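
      In the meantime, as a rough illustration of what “correct zoning” means here (a sketch only, with made-up WWPNs; the actual zoning commands depend on your switch vendor and your local standards): every node canister FC port in the cluster needs to be zoned so it can see every other node canister port, and a single intra-cluster zone containing all of those ports is one simple way to achieve that.

      ```python
      # Hypothetical WWPNs for the four node canisters (two per control enclosure).
      # Replace these with the real port WWPNs reported by your Storwize V7000s.
      node_ports = {
          "enclosure1_node1": ["50:05:07:68:02:10:00:01", "50:05:07:68:02:20:00:01"],
          "enclosure1_node2": ["50:05:07:68:02:10:00:02", "50:05:07:68:02:20:00:02"],
          "enclosure2_node1": ["50:05:07:68:02:10:00:03", "50:05:07:68:02:20:00:03"],
          "enclosure2_node2": ["50:05:07:68:02:10:00:04", "50:05:07:68:02:20:00:04"],
      }

      # A real deployment splits ports across two fabrics; this sketch ignores that
      # and simply prints one membership list to feed into whatever zoning tool
      # your switches use.
      zone_members = [wwpn for ports in node_ports.values() for wwpn in ports]
      print("V7000_intracluster_zone members:")
      for wwpn in zone_members:
          print(" ", wwpn)
      ```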
