The new Storwize V7000 Unified (Storwize V7000U) enhancements mean that IBM’s common NAS software stack (first seen in SONAS) for CIFS/NFS/FTP/HTTP/SCP is now deployed into the midrange.
Translating that into simpler language:
IBM is now doing its own mid-range NAS/Block Unified disk systems.
Anyone who has followed the SONAS product (and my posts on said product) will be familiar with the functions of IBM’s common NAS software stack, but the heart of the value is the file-based ILM capability, now essentially being referred to as the Active Cloud Engine.
The following defining image of the Active Cloud Engine is taken from an IBM presentation:
For example: when disk tier 1 hits 80% full, move any files that have not been accessed for more than 40 days to tier 2.
Importantly these files keep their original place in the directory tree.
The file-based disk to disk migration is built-in, and does not require any layered products or additional licensing.
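To make the policy example above concrete, here is a toy Python sketch of the decision logic (the names, thresholds and structure are my own illustration, not the actual SONAS/Active Cloud Engine policy engine): once the tier passes its fullness threshold, files idle for longer than the cutoff become migration candidates, and only the data moves, never the file's place in the tree.

```python
from dataclasses import dataclass
import time

DAY = 86400  # seconds in a day

@dataclass
class FileInfo:
    path: str           # position in the directory tree (unchanged by migration)
    size: int           # bytes
    last_access: float  # epoch seconds

def select_for_migration(files, tier1_used_pct, threshold_pct=80, idle_days=40):
    """Return files eligible to move from tier 1 to tier 2.

    Mirrors the example policy in the text: once tier 1 passes the
    threshold, any file not accessed for more than idle_days is a
    candidate. Only the data blocks move; the path stays the same.
    """
    if tier1_used_pct < threshold_pct:
        return []
    cutoff = time.time() - idle_days * DAY
    return [f for f in files if f.last_access < cutoff]
```

In the real product this kind of rule is evaluated by the built-in policy engine rather than application code, but the shape of the decision is the same.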
Files can also be migrated off to tape as required without losing their place in the same directory tree, using HSM which is licensed separately.
Another important feature that IBM’s competitors don’t have: although there are two file services modules in every Storwize V7000U, operating in an active/active configuration, they present a single namespace to the users, e.g. all of the storage can be presented as a single S: drive.
And the final key feature I wanted to mention was the unified management interface for file and block services, another feature which some of our competitors lack.
Today IBM also announces SONAS 1.3, as well as a 243 TB XIV model based on 3 TB drives, SVC split cluster at distances up to 300 km, block replication compatibility between SVC and Storwize V7000, a snapshot-based replication option for SVC and Storwize V7000, and an assortment of Tivoli software enhancements.
Meanwhile talking about Active Cloud Engine as a kind of robot reminded me of another robot. Although I have never really been at ease with the ugly competitiveness of capitalism, I do hate losing, so perhaps this is a more apt image to show how we see the Active Cloud Engine ‘robot’ stacking up against the competition.
And here are some other Killer Robots:
The Big Bang Theory “The Killer Robot”
Jamie Hyneman’s (MythBuster) robot Blendo in action against DoMore
SAN Volume Controller
Late in 2010, Netapp quietly announced that it no longer planned to support V Series (and by extension IBM N Series NAS Gateways) for use with any recent version of IBM’s SAN Volume Controller.
This was discussed more fully on the Netapp communities forum (you’ll need to create a login) and the reason given was insufficient sales revenue to justify on-going support.
This is to some extent generically true for all N Series NAS gateways. For example, if all you need is basic CIFS access to your disk storage, most of the spend still goes on the disk and the SVC licensing, not on the N Series gateway. This is partly a result of the way Netapp prices their systems – the package of the head units and base software (including the first protocol) is relatively cheap, while the drives and optional software features are relatively expensive.
Netapp, however, did not withdraw support for V Series NAS gateways on XIV or DS8000, nor, as best I can tell, do they have any intention to, considering that support to be core capability for V Series NAS Gateways.
I also note that Netapp occasionally tries to position V Series gateways as a kind of SVC-lite, to virtualize other disk systems for block I/O access.
Anyway, it was interesting that what IBM announced was a little different to what Netapp announced “NetApp & N Series Gateway support is available with SVC 6.2.x for selected configurations via RPQ [case-by-case lab approval] only”
What made this all a bit trickier was IBM’s announcement of the Storwize V7000 as its new premier midrange disk system.
Soon after on the Netapp communities forum it was stated that there was a “joint decision” between Netapp and IBM that there would be no V Series NAS gateway support and no PVRs [Netapp one-off lab support] for Storwize V7000 either.
Now the Storwize V7000 disk system, which is projected to have sold close to 5,000 systems in its first 12 months, shares the same code-base and features as SVC (including the ability to virtualize other disk systems). So think about that for a moment, that’s two products and only one set of testing and interface support – that sounds like the support ROI just improved, so maybe you’d think that the original ROI objection might have faded away at this point? It appears not.
Anyway, once again, what IBM announced was a little different to the Netapp statement: “NetApp & N Series Gateway support is available with IBM Storwize V7000 6.2.x for selected configurations via RPQ only”.
Whither from here?
The good news is that IBM’s SONAS gateways support XIV and SVC (and other storage behind SVC), and SONAS delivers some great features that N Series doesn’t have (such as file-based ILM to disk or tape tiers), so SVC is pretty well catered for when it comes to NAS gateway functionality.
When it comes to Storwize V7000 the solution is a bit trickier. SONAS is a scale-out system designed to cater for hundreds of terabytes up to 14 PB. That’s not an ideal fit for the midrange Storwize V7000 market. So the Netapp gateway/V Series announcement has created potential difficulties for IBM’s midrange NAS gateway portfolio… hence the title of this blog post.
HSM is essentially a way to push disk files to lower tiers, mainly tape, while leaving behind a stub-file on disk, so that the file maintains its accessibility and its place in the directory tree.
I say tape because there are other ways to do it between disk tiers that don’t involve stub files. For example, IBM’s SONAS uses its built-in virtualization capabilities to move files between disk tiers without changing their place in the directory tree, but SONAS can also use Tivoli Space Management to migrate those files to tape using HSM.
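The stub-file idea can be sketched in a few lines of toy Python (my own illustration, not Tivoli Space Management’s actual on-disk mechanism): migration parks the data on tape and leaves a small stub at the same path, and a subsequent read triggers a transparent recall.

```python
class HsmFile:
    """Toy model of an HSM-managed file (illustration only)."""

    def __init__(self, path, data):
        self.path = path      # place in the directory tree never changes
        self.data = data      # resident file contents
        self.stub = False     # True once migrated; data then lives on tape
        self._tape = None     # stand-in for the tape copy

    def migrate_to_tape(self):
        self._tape = self.data   # stand-in for a tape write
        self.data = None         # free the disk blocks
        self.stub = True         # leave only a stub behind

    def read(self):
        if self.stub:            # transparent recall on access
            self.data = self._tape
            self.stub = False
        return self.data
```

The point of the stub is exactly what I experience on the mainframe: the application just opens the file as usual and waits while the recall happens underneath.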
HSM started life as DFHSM [DFSMShsm] on the IBM mainframe, and I use it most weeks in that context when I log into one of IBM’s mainframe apps and wait a minute or two for it to recall my database query files to disk. That’s some pretty aggressive archiving going on, and yes, it’s bullet-proof.
I know of a couple of instances in the early 2000’s when companies got excited about file-based Information Lifecycle Management, and implemented HSM products (not IBM ones) on Microsoft Windows. Both of those companies removed HSM not long after, having experienced blue screens of death and long delays. The software was flaky and the migration policies probably not well thought out (probably too aggressive given the maturity of open systems HSM at the time). Being conservative, IBM came a little late to the game with Open Systems HSM, which is not necessarily a bad thing, but when it came, it came to kick butt.
Tivoli Space Management is a pretty cool product. Rock solid and feature rich. It runs on *NIX and our customers rely on it for some pretty heavy-duty workloads, migrating and recalling files to and from tape at high speed. I know one customer with hundreds of terabytes under HSM control in this way. TSM HSM for Windows is another slightly less sophisticated product in the family, but one I’m not so familiar with.
One could argue that Space Management has been limited as a product by its running on *NIX operating systems only, when most file servers out in the world were either Windows or Netapp, but things are changing. HSM is most valuable in really large file environments – yes, the proverbial BIG DATA, and BIG DATA is not typically running on either Windows or Netapp. IBM’s SONAS for example, scalable to 14 petabytes of files, is an ideal place for BIG DATA, and hence an ideal place for HSM.
As luck would have it, IBM has integrated Space Management into SONAS. SONAS will feed out as much CIFS, NFS, FTP, HTTP etc as you want, and if you install a Space Management server it will also provide easy integration to HSM policies that will migrate and recall data from tape based on any number of file attributes, but I guess most typically ‘time last accessed’ and file size.
Tape is by far the cheapest way to store large amounts of data, the trick is in making the data easily accessible. I have in the past tried to architect HSM solutions for both Netapp and Windows environments, and both times it ended up in the too hard basket, but with SONAS, HSM is easy. SONAS is going to be a really big product for IBM over the coming years as the BIG DATA explosion takes hold, and the ability to really easily integrate HSM to tape, from terabytes to petabytes, and have it perform so solidly is a feature of SONAS that I really like.
Tape has many uses…
I was recently mulling over some examples of OEM co-op-etition in our industry:
- During the early 00’s IBM and Compaq OEM’d each others disk systems, the MA8000 from Compaq (sold as the MSS by IBM) and the ESS from IBM (sold as the CSS by Compaq) to give each other coverage in midrange and high-end storage. The fact that so few people know this even happened tells you something about how successful it was. I know that among some IBM sellers, the MSS was certainly considered ‘last cab off the rank’ when it came to solutioning.
- Dell has a long-standing OEM arrangement to sell EMC CLARiiON and VNX products, which compete with their own Compellent and Equallogic disk systems. In fact the OEM arrangement with Dell goes right back to the Data General CLARiiON days. Dell’s acquisition of Compellent must have decreased the value of the relationship from EMC’s point of view. Sure Dell has helped EMC to penetrate the SMB market, but now Dell has a foothold, skills and credibility which they can exploit with Compellent going forward.
- Netapp had a brief OEM agreement with Dell between ’98 and ’00. I don’t know what happened there, but I do know that Netapp tries to sell value, technology, integration and innovation. Back in the late 90’s Dell was all about price and urgent delivery. That’s a pretty big culture divide. I’m guessing that Dell simply didn’t sell much of the high-priced Netapp kit.
- Again, Netapp had an OEM agreement with Hitachi between ’02 and ’04, but it was just for gateways. A gateway-only OEM agreement doesn’t really work for Netapp as a glance at their list prices will tell you that they make a lot of their margin from disk drives. I expect the agreement failed because most of the benefits fell on Hitachi’s side of the ledger.
- Most major vendors OEM low end tape products from ADIC/Quantum or similar. This has worked well for years because there is relatively minimal competition between the big vendors and their other own-branded channels. Occasionally there is disruption, e.g. when Sun bought STK and then Oracle bought Sun, the STK OEMs were naturally a bit unsettled.
So what we learn from co-op-etition is that it’s designed to benefit both parties and their customers, but if it works it sometimes leads to changes in the dynamics between the three. If the relationship lasts only a couple of years it may be a sign that the dynamics weren’t right in the first place and the setup and tear-down costs are unlikely to have been recovered. If it lasts 5 or 10 years then I think you’d have to consider that a big success.
The IBM OEM agreement with Netapp dates from 2005 and continues to benefit both parties. IBM has provided Netapp with entry into large enterprises around the world and contributes about 10% of Netapp’s revenues. Netapp has leveraged IBM’s channel and benefited from the credibility endorsement. These days Netapp is on a roll fueled by VMware but they weren’t such a high profile contender back in 2005. One long-term benefit to IBM is that it now has a worldwide workforce experienced with NAS.
An example of the competition side of co-op-etition is that IBM has never taken Netapp’s Spinnaker/GX/Cluster-Mode product. Instead IBM was busy developing its own Scale-Out NAS offering which in 2010 was refined into SONAS, targeted at customers who have plans to grow to hundreds of terabytes or petabytes of file storage. In large environments the file-based ILM features of SONAS (including integration of HSM to tape) can be quite compelling.
While co-op-etition sometimes looks like a strange vendor dance to an outside observer, as long as the customers get value from the arrangements then it’s really just a practical way of doing business.
Maybe you think NL-SAS is old news and it’s already swept SATA aside?
Well if you check out the specs on FAS, Isilon, 3PAR, or VMAX, or even the monolithic VSP, you will see that they all list SATA drives, not NL-SAS on their spec sheets.
Of the serious contenders, it seems that only VNX, Ibrix, IBM SONAS, IBM XIV Gen3 and IBM Storwize V7000 have made the move to NL-SAS so far.
First we had PATA (Parallel ATA) and then SATA drives, and then for a while we had FATA drives (Fibre Channel attached ATA) or what EMC at one point confusingly marketed as “low-cost Fibre Channel”. These were ATA drive mechanics, with SCSI command sets handled by a FC front-end on the drive.
Now we have drives that are being referred to as Capacity-Optimized SAS, or Nearline SAS (NL-SAS) both of which terms once again have the potential to be confusing. NL-SAS is a similar concept to FATA – mechanically an ATA drive (head, media, rotational speed) – but with a SAS interface (rather than a FC bridge) to handle the SCSI command set.
When SCSI made the jump from parallel to serial the designers took the opportunity to build in compatibility with SATA via a SATA tunneling protocol, so SAS controllers can support both SAS and SATA drives.
The reason we use ATA drive mechanics is that they have higher capacity and a lower price. So what are some of the advantages of using NL-SAS drives, over using traditional SATA drives?
- SCSI offers more sophisticated command queuing (which leads directly to reduced head movement) although ATA command queuing enhancements have closed the gap considerably in recent years.
- SCSI also offers better error handling and reporting.
- One of the things I learned the hard way when working with Engenio disk systems is that bridge technology to go from FC to SATA can introduce latency, and as it turns out, so does the translation required from a SAS controller to a SATA drive. Doing SCSI directly to a NL-SAS drive reduces controller latency, reduces load on the controller and also simplifies debugging.
- Overall performance can be anything from slightly better to more than double, depending on the workload.
And with only a small price premium over traditional SATA, it seems pretty clear to me that NL-SAS will soon come to dominate and SATA will be phased out over time.
NL-SAS drives also offer the option of T10 PI (SCSI Protection Information), which adds an 8-byte data integrity field to each 512-byte disk block. The 8 bytes are split into three fields: a guard tag (a cyclic redundancy check of the block data), an application tag (e.g. RAID information), and a reference tag to make sure the data blocks arrive in the right order. I expect 2012 to be a big year for PI deployment.
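The guard tag is a CRC-16 over the block’s data bytes, computed with the T10-DIF polynomial 0x8BB7. In real systems this is done in hardware along the I/O path, but a minimal bit-by-bit Python sketch shows what is being computed:

```python
def t10dif_crc16(data: bytes) -> int:
    """CRC-16 guard tag as used by T10 PI/DIF.

    Polynomial 0x8BB7, initial value 0, no bit reflection, no final XOR.
    Bit-by-bit for clarity; real controllers use hardware or table-driven
    implementations.
    """
    crc = 0
    for byte in data:
        crc ^= byte << 8                 # fold the next byte into the top bits
        for _ in range(8):
            if crc & 0x8000:             # top bit set: shift and apply polynomial
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

For a 512-byte block, `t10dif_crc16(block)` yields the two guard-tag bytes of the 8-byte field; the drive and controller each recompute it to catch corruption in flight.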
I’m assured that the photograph below is of a SAS engineer – maybe he’s testing the effectiveness of the PI extensions on the disk drive in his pocket?
Not here this time… over there >>>
This week I’m doing a guest blogging spot over at Barry Whyte’s storage virtualization blog, so if you want to read this week’s post head over to: https://www.ibm.com/developerworks/mydeveloperworks/blogs/storagevirtualization/entry/infinity_and_beyond?lang=en
p.s. Infiniband is the new interconnect being used in XIV Gen3