Some quotes from the web about XIV

There are of course the official IBM references, but below are a few unofficial public comments from customers and analysts that I found after a quick sweep of the web. I was looking for something else and started stumbling across these, so I thought I would post them.

jdandur2 on vmware.com

“After a lengthy POC we put our XIV into production last week. We successfully migrated our Vmware environment to XIV over the weekend. Vmware is running better than ever. The XIV Migration tool was fast and easy to use. We migrated from a Htachi AMS500. 7 ESX servers, 150 VM’s, 9TB. The Software management tool is great. The statistics and reporting is great. No agents needed to get Host statistics. In our POC we did test pulling multiple drives at the same time on different trays and no data was lost.”

Ben Daniels at sdn.com

“We are running… Netweaver 7.0 on AIX 5.3L TL 9 (11 on NIM and VIO) and Oracle 10.2.0.4. Our ERP DB is about 10TB and we are running a small CRM and BI landscape at about 2TB each. We were previously splitting our landscape on EMC, ERP was on a DMX1000 and our advanced apps and QA env. were on a Clarrion CX380.  We have about 2500 concurrent users during peak hours. We were an early adopter of XIV and so far it has exceeded our expectations. It has been very easy to maintain and after tuning [Oracle] we easily maintain all of our landscapes between about 6000 and 10,000 IOPS around regular peak hours. This obviously goes up around quarter close and other periods of intense usage. My best advice is that the XiV itself has very little to configure, outside of either “round robin” or “failover” for your disk depending on your environment.”

“We even lost a drawer at one point and experienced no downtime and only a minimal, short performance hit in our production landscape, so their claims about the grid architecture actually stood up to a real world test and performed well.”

Pokrface on arstechnica.com

“We’ve got a pair of XIV loaners in our lab right now for evaluation, and the arrays have shocked us speechless. Cheaper than a Clariion CX4 (sticker price is $5k per TB, with a single 80TB config available), with performance that stomps a Symmetrix DMX-4 literally across the board in all workload simulations (especially general user CIFS NAS usage, which for my metrics is 80% tiny reads and 20% tiny writes).”

Chalex on arstechnica.com

“One of my neighboring departments has an XIV. They are very happy with it.”

Richard McDermott, IT Director, Intertrust Services on recartait.co.uk

“The performance of the new IBM XIV solution to date is excellent.”

5 Responses

  1. What’s missing in those comments? The poor implementation of iSCSI. Don’t get me wrong, I love my XIV for the FC side, but the iSCSI was an afterthought, and a poorly implemented one at that. No Jumbo frames, no teaming, incomplete implementation of the iSCSI standard, poor performance on fanouts greater than 2-1, etc. I’m sure you are aware of this.


    • I’m sure you’re right that the development team was initially focused on FC, but I know there are plans for ongoing iSCSI enhancements so I expect things will keep getting better. Some of the points you raise might depend on which version of firmware you are running. Port aggregation is supported and I’m told that Windows iSCSI CHAP was delivered in 10.2 firmware. I know there were initial delays in the support of async replication over iSCSI but that was delivered with 10.2.1 and I saw a related IBM recommendation to use jumbo frames as part of that, so maybe that has been addressed also.
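For anyone following that jumbo-frames recommendation, it is a host-level (and switch-level) change rather than an array setting. A minimal sketch for a Linux iSCSI initiator, assuming the iSCSI traffic rides on an interface named `eth1` (the interface name and the switch configuration are assumptions for illustration):

```shell
# Hypothetical sketch: enable jumbo frames on a Linux iSCSI initiator.
# Assumes the iSCSI interface is "eth1" and that every switch port on
# the path is also configured for a 9000-byte MTU.
ip link show eth1                  # check the current MTU
ip link set dev eth1 mtu 9000      # raise it to jumbo frames (needs root)
# Verify end-to-end: 8972 payload + 8 ICMP + 20 IP header bytes = 9000,
# with fragmentation disabled so an undersized hop will fail loudly.
ping -M do -s 8972 192.168.10.50   # target IP is a placeholder
```

If any device in the path is still at a 1500-byte MTU, the ping with fragmentation disabled will fail, which is exactly the misconfiguration that makes jumbo frames look slower than standard frames.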


  2. I’m sorry to say that your quotes seem a bit biased. For example, in the same thread on VMware people were saying:

    hmarcel: “We have since last week 2 XIV installed but are running in some issues. When connecting more than 12 or 13 VMFS the XIV stops “responding”. Storage Vmotion times out, even vm migration doesn’t work anymore. We see some slowness starting by using 6 to 7 VMFS volumes. ”

    or

    bobross: “We evaluated XIV two months ago, and were underwhelmed. In a 30-day period, we had two controller h/w failures, necessitating replacement, and several failed drives. We also saw that when running a halfway decent ESX load (several ESX servers, roughly 40 VMs) the performance was quite poor.”


  3. Quotes are quotes. I didn’t see the ones you mentioned; they might have been added later, or I might have missed them in my quick scan. I was focusing on the positive. There are always people having setup issues with every technology — that’s what gets posted to most threads (the first one sounds like a queue depth setting issue) — but it’s nice to highlight the positive stuff as well.
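    On the queue-depth point: on a Linux host the per-LUN queue depth can be inspected and adjusted through sysfs (ESX has an equivalent setting on the HBA driver). A minimal sketch, assuming a device named `sda` — the device name and the value 64 are placeholders, since the right depth depends on the array and how many hosts share it:

    ```shell
    # Hypothetical sketch: inspect and raise the per-LUN queue depth on
    # a Linux host. "sda" is an assumed device name; 64 is an example
    # value, not a recommendation for any particular array.
    cat /sys/block/sda/device/queue_depth                  # current depth
    echo 64 | sudo tee /sys/block/sda/device/queue_depth   # raise it
    ```

    A too-small depth shows up exactly as described in the first quote: the host throttles outstanding I/Os long before the array is busy, so adding more VMFS volumes makes everything appear to stall.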


  4. Our company had an opportunity to test XIV’s performance against our existing internal storage solution.

    XIV’s sales/tech contacts indicated that our existing workload would run smoothly and allow us for additional application growth once we became comfortable.

    However, the tests indicated something entirely different. XIV was unable to run our single most active environment without significant performance hits. Although we use less than 1/3rd of the storage, it simply doesn’t provide quick enough response, and that adds up on long-running jobs to big hits in runtime.

    I’m sure there are plenty of ways to use XIV, but for us, where response time was critical… there was no hope.

