What is object storage and how does it differ from block and file?
Sign up for a free Object Storage seminar – discussion & examples – Tues 16th Feb, 12–1.30pm, ViFX Auckland. Lunch will be provided.
The market for IT infrastructure components, including servers and storage, continues to fragment as the few big players of five years ago are joined by a constant stream of new entrants and maturing niche players. But some things haven’t changed.
It should go without saying that choices in IT infrastructure should be driven by identified requirements. Requirements are informed by IT and business strategy and culture, and it is also perfectly reasonable that requirements are influenced by the personal comfort zones of those tasked with accountability for decisions and service delivery.
I once had a customer tell me that “My IT infrastructure strategy is Sun Microsystems” which was perhaps taking a personal comfort zone and brand loyalty a little too far. His statement told me that he did not really have an IT infrastructure strategy at all since he was being brand-led rather than requirements-driven.
Comfort zones can be important because they send us warning signals that we should assess risks, but I think we all recognise that they should not be used as an excuse to repeat what we have always done just because it worked last time.
I had an astute customer tell me recently that his current very flexible solution had served him well through a wide range of changes and significant growth over the last ten years, but that his next major infrastructure buying decision would probably be a significant departure in technology because he was looking to establish a platform for the next ten years, not the last ten years.
Any major investment opportunity in IT infrastructure is an opportunity to move the needle in terms of value and efficiency. To simply do again what you did last time is an opportunity missed.
Most of us realise that we are all prone to apply our own style of decision-making with its inevitable strengths and weaknesses. Personal decision-making is then layered with the challenges of teams and interaction as all of the points of view come together (hopefully not in a head-on collision). Knowing our strengths and weaknesses and how we interact in teams can help us to make a balanced decision.
Some multi-criteria mathematical theories claim that there is always ultimately a single rational choice, but ironically even mathematicians can’t agree on that. Bernoulli’s expected utility hypothesis for example suggests that there are multiple entirely rational choices depending on simple factors like how risk-averse the decision-makers are. Add to that the effect of past experience (Bayesian inference for the hard core) and mathematics can easily take you in a circle without really making your decision any more objective.
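To make the point concrete, here is a worked illustration with numbers of my own choosing (they are not from the original discussion). Under the expected utility hypothesis, a decision-maker values an uncertain option at

```latex
\mathbb{E}[u] = \sum_i p_i \, u(x_i)
```

where risk aversion corresponds to a concave utility function u. Offered a sure $50 versus a 50/50 gamble on $0 or $100, a risk-averse evaluator with u(x) = sqrt(x) scores the sure thing at about 7.07 and the gamble at 5, so the sure thing wins; a risk-neutral evaluator with u(x) = x scores both at 50 and is indifferent. Two entirely rational decision-makers, two different conclusions.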
Knowing all of this, it is still useful to layer some structure onto our decision-making to ensure that we are focused on the requirements and on the end goal of essential business value. For example, the use of weightings has been a relatively common way of trying to introduce some objectivity into proposal evaluations.
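As a sketch of how a weighted evaluation might be wired up (the criteria, weights and scores below are invented placeholders, not taken from any real evaluation):

```python
# Minimal weighted-scoring sketch: weights sum to 1.0 and each option is
# scored 0-5 per criterion. All names and numbers here are invented.
WEIGHTS = {"cost": 0.30, "scalability": 0.25, "supportability": 0.25, "risk": 0.20}

options = {
    "Option A": {"cost": 4, "scalability": 3, "supportability": 5, "risk": 3},
    "Option B": {"cost": 3, "scalability": 5, "supportability": 3, "risk": 4},
}

def weighted_score(scores):
    """Sum of weight x score across all criteria."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Rank the options from best to worst weighted score.
for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

Much of whatever objectivity this buys you comes from forcing the evaluators to agree on the weights before they see the scores.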
Many of you will be familiar with the NIST definition of Cloud as having five essential characteristics which we have previously discussed on this blog. One way to measure the overall generic quality of a cloud offering is to evaluate that offering against the five characteristics, but I am suggesting that we take that one step further and that these essential characteristics can also be applied more broadly to any infrastructure decision as a first pass highlighter of relative merit and essential value.
In client-specific engagements, if you were going to measure five qualities, it might make more sense to tailor the characteristics measured to specific client requirements, but as a generic first-pass tool we can simply use these five approximated NIST characteristics:
- On-demand self-service
- Broad network access
- Resource pooling
- Rapid elasticity
- Measured service
In pursuit of essential value, the modified NIST essential characteristics can be evaluated to arrive at a “web of essential value” (WEV) by rating the options from zero to five and plotting them onto a radar diagram.
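A minimal sketch of that plotting step, assuming Python with matplotlib and using ratings invented purely for illustration:

```python
# Plot a "web of essential value": two options rated 0-5 against the five
# characteristics on a radar diagram. Ratings here are invented examples.
import numpy as np
import matplotlib.pyplot as plt

characteristics = ["On-demand self-service", "Broad network access",
                   "Resource pooling", "Rapid elasticity", "Measured service"]
ratings = {"Option A": [4, 3, 5, 2, 4], "Option B": [3, 5, 2, 4, 3]}

# One spoke per characteristic; repeat the first point to close the web.
angles = np.linspace(0, 2 * np.pi, len(characteristics), endpoint=False).tolist()
angles += angles[:1]

ax = plt.subplot(polar=True)
for name, scores in ratings.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(characteristics)
ax.set_ylim(0, 5)
ax.legend(loc="lower right")
plt.show()
```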
You still have to do all your own analysis, so I don’t think we’re going to be threatening the Forrester Wave or the Gartner Magic Quadrant any time soon. Rather than being a way to present pre-formed analysis and opinion, WEV is a way for you to think about your options with an approach inspired by NIST’s definition of Cloud essential characteristics.
WEV is not intended to be your only tool, but it might be an interesting place to start in your evaluations.
The next time you have an IT infrastructure choice to make, why not start with the Web of Essential Value? I’d be keen to hear if you find it useful. The only other guidance I would offer is not to be too narrow in your interpretation of the five essential characteristics.
I wish you all good decision-making.
I’ve been brushing up on my W. Edwards Deming recently, so I’ve been thinking a lot about change and how change does not always come easily. Markets keep changing, and companies, as well as people, need to learn to adapt.
We can probably all recall brands that were once dominant but have now faded. Some of the famous brands of my parents’ generation, like Jaguar cars and British Seagull, faded quickly in the face of German and Japanese innovation and quality.
One of the most shocking examples is Eastman Kodak. Founded in 1888, they held around 90% of the United States film market during the 1970s and went on to invent much of the technology used in digital cameras. But they were essentially a film company, and when their own invention overtook them they were not prepared. Kodak eventually emerged from Chapter 11 in 2013 with a very different and much smaller business. Do we say that this was a complete failure of Kodak’s management in the 1970s, or do we say that it was almost inevitable given the size of Kodak and the complete and rapid technological change that occurred?
Even very large, well established companies can cope with rapid technological change if they react appropriately. It is possible to turn a big ship. Two examples from the Information Technology industry, Hewlett Packard (est. 1939) and IBM (est. 1911) have, so far, managed to adapt as their markets undergo huge change. The future is always uncertain and both have suffered major setbacks at times, but both continue to be top tier players in their target markets.
Of those who failed to adapt, another famous example is Firestone. Founded in 1900, they followed Ford into the automobile era, but failed to keep up with the market move to radial tyres in the late 1960s. They then ran into several years of serious manufacturing quality problems which greatly reduced their brand value, and in 1988 they were purchased by Bridgestone of Japan. One interesting thing about Firestone was their unusually homogeneous management team, most of whom lived in Akron, Ohio, and many of whom were born there. In management studies this homogeneity has come under some suspicion as a contributing factor in their reluctance, and then inability, to innovate. Compare that with the strife IBM got into in the early 1990s and the board’s decision to appoint the company’s first outsider CEO, who subsequently turned the company around, a feat attributed at least in part to his lack of emotional attachment to past decisions.
These are big bold examples and we can no doubt find other examples closer to home.
With brands and whole companies, failures to innovate will eventually become obvious, but with projects and IT departments, the consequences of failure to innovate can be less obvious for a time.
So why would anyone choose to avoid innovation and change? I can think of four reasons straight off.
These are all examples of what Edward de Bono would call Black Hat thinking, which is very common in the world of I.T. Black Hat thinking is of course valid as part of a broader consideration, but it is not a substitute for one.
Perhaps the biggest thematic change in I.T. Infrastructure over the years has been commoditization. My background is largely in storage systems and I see commoditization as having a huge impact. In the past storage vendors have made use of commodity hardware, but integrated it into their products so that the products themselves were not commoditized.
It is no secret among I.T. vendors that manufacturer margins are dramatically higher on storage systems than on servers so new storage solutions based on truly commoditized servers can be expected to have a significant impact.
Hardware commoditization not only underpins most cloud services like Azure, AWS and vCloud Air; it is also a driver behind on-premises hyper-converged storage systems like VMware Virtual SAN. With hardware commoditization, the value shifts much more towards the software function.
But hardware commoditization is only one example of change in our industry. The real issue is one of being willing to take advantage of change.
I started off this post with a reference to W. Edwards Deming. Seventy years ago he put forward his ideas on continuous improvement, and those ideas are currently enjoying a new lease of life through ITIL.
Three of the questions Deming said we need to ask ourselves are:
Together the answers to these questions help us to form a strategy.
External IT consultants can be useful in all three of these steps, helping to frame the challenges against a background of cross-pollinated ideas and capabilities from around the market. Consultants can also help you to consider realistic bite sizes for innovation and the associated risks. But ultimately change and innovation are things that we all have to take responsibility for. And like they sing in Memphis, “Change Don’t Come Easy”.
[a version of this post was originally released at http://www.vifx.co.nz/blog/embracing-cloud-innovation]
I have been watching the multi-site distributed NAS space for some years now. There have been some interesting products, including NetApp’s FlexCache, which looked nice but never really gained market traction, and similarly IBM’s Global Active Cloud Engine (Panache), which was released as a feature of SONAS and Storwize V7000 Unified. Microsoft have played on the edge of this field more successfully with DFS Replication, although that does not handle locking. Other technologies that encroach on this space are Microsoft SharePoint and WAN acceleration technologies like Microsoft BranchCache and Riverbed.
What none of these have been very good at however is solving the problem of distributed collaborative authoring of large complex multi-layered documents with high performance and sturdy locking. For example cross-referenced CAD drawings.
It’s no surprise that the founders of Panzura came from a networking background (Aruba, Alteon), since the issues to be solved are those introduced by the network. Panzura is a global file system tuned for CAD files, and it’s not unusual to see Panzura sites experience file load times of less than one tenth, or sometimes even one hundredth, of what they were before Panzura was deployed.
Rather than just provide efficient file locking however, Panzura has taken the concept to the Cloud, so that while caching appliances can be deployed to each work site, the main data repository can be in Amazon S3 or Azure for example. Panzura now claims to be the only proven global file locking solution that solves cross-site collaboration issues of applications like Revit, AutoCAD, Civil3D, and Bentley MicroStation as well as SOLIDWORKS CAD and Siemens NX PLM applications. The problems of collaboration in these environments are well-known to CAD users.
Panzura has been growing rapidly, with 400% revenue growth in 2013, and they have just come off another record quarter and a record year in 2014. Back in 2013 they decided to focus their energies on the Architecture, Engineering & Construction (AEC) market, since that was where the technology delivered the greatest return on customer investment. In that space they have been growing at more than 1000% per year.
ViFX recently successfully supplied Panzura to an international engineering company based in New Zealand. If you have problems with shared CAD file locking, please contact ViFX to see how we can solve the problem using Panzura.
My wife has been complaining that we don’t have enough cupboard space, both in the kitchen, and also for linen. On the weekend we bought a dining room cabinet, and that allowed my wife to reorganize the kitchen cupboards and pantry.
What came to light was that the pantry in particular was so overloaded that it was very difficult to tell what was in there, and as a result we discovered that there were six bottles of cooking oil (three of rice bran oil, three of olive oil), three containers with standard flour, two with high grade flour, two with rice, two with brown sugar, two with white sugar, two with opened packets of malt biscuits, two with opened packets of crackers etc.
More capacity is always nice. My wife’s solution involved spending money on buying additional capacity, and also effort to select and install the cabinet, and hours to sort through the existing cupboards and drawers and pantry to work out what was there and decide where best to put things.
I have however always maintained that the real problem is that we own too much stuff. If the cupboards had been better organised in the first place, we would have owned fewer duplicates, and the odds are we would not have needed the new cabinet. But new capacity is always nice.
I am sure you have realised by now that the parallel with the world of IT Storage did not escape me. If I had to pay for ongoing support on the new cabinet and I knew it was only going to last 5 years, I would have been less keen on the acquisition and would have pushed back harder with the “we own too much stuff” line.
It seems that it’s easier to add more capacity than to ask the hard questions, but that’s not always a wise use of money.
To read more about right-sizing check out http://www.vifx.co.nz/iaas-not-as-is/
The anatomy of the purchase was, however, a little different this time. Over the last 4 years I found that the official Jaguar service agents (25 km away) offered excellent support. 25 km is not always a convenient distance, however, so I did try using local neighbourhood mechanics for minor things, but quickly realized that they were going to struggle with anything more complicated.
When it came to buying a replacement, the proximity of a fully trained and equipped service agent became my number one priority. There is only one such agency in my neighbourhood, and that is Toyota, so my first decision was that I was going to buy a Toyota.
Coming from a traditional I.T. vendor background, my approach to I.T. support has always been that it should be fully contracted 7×24, preferably with a 2-hour response time, for anything the business depended on. But something has changed.
The support requirements for software haven’t really changed, but hardware is now a different game. Clustered, scale-out and web-scale systems, including hyper-converged (server/storage) systems, will typically re-protect data quickly after a node failure, removing the need for a panic-level hardware support response. Scale-out systems have a real advantage over standalone servers and dual-controller storage systems in this respect.
It has taken me some time to get used to not having 7×24 on-site hardware support, but the message from customers is that next-business-day service or next+1 is a satisfactory hardware support model for clustered mission-critical systems.
Nutanix gold-level support, for example, offers next-business-day on-site service (after failure confirmation), or next+1 if the call is logged after 3pm. Given a potential delay of a day or two, it is worth asking the question: “What happens if a second node fails?”
If the second node failure occurs after the data from the first node has been re-protected, then the impact is the same as a single host failure. You can continue to lose nodes in a Nutanix cluster provided the failures happen after the short re-protection time, and until you run out of physical space to re-protect the VMs. (Readers familiar with the IBM XIV distributed cache grid architecture will recognise this rinse-and-repeat approach to re-protection.)
This is discussed in more detail in a Nutanix blog post by Andre Leibovici.
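To make the rinse-and-repeat idea concrete, here is a back-of-the-envelope sketch; the cluster sizes and capacities are invented, and this is a simplification rather than any vendor’s actual algorithm:

```python
# Back-of-the-envelope model of sequential node failures with re-protection.
# Numbers are invented for illustration; this is not any vendor's algorithm,
# and details like replication factor and rebuild time are deliberately ignored.

def survivable_failures(nodes: int, node_capacity_tb: float, data_tb: float) -> int:
    """Count how many sequential node losses a cluster can fully absorb,
    assuming each loss is re-protected before the next one occurs."""
    failures = 0
    # A further loss is survivable while the remaining nodes can hold all data.
    while nodes > 1 and data_tb <= (nodes - 1) * node_capacity_tb:
        nodes -= 1
        failures += 1
    return failures

# Example: 8 nodes of 10 TB each, 55 TB of protected data -> 2 sequential
# losses can be fully re-protected; a third would leave data exposed.
print(survivable_failures(nodes=8, node_capacity_tb=10.0, data_tb=55.0))  # 2
```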
To find out more about options for scale-out infrastructure, try talking to ViFX.
Just a quick blog post today in the run-up to Christmas week. I thought I’d briefly summarize some of the things I have been dealing with recently, and also touch on the role of the I.T. department as we move boldly into a cloudy world.
We have seen I.T. move through the virtualization phase to deliver greater efficiency and some have moved on to the Cloud phase to deliver more automation, elasticity and metering. Cloud can be private, public, community or hybrid, so Cloud does not necessarily imply an external service provider.
One of the things that has become clear is the need for right-sizing as part of any move to an external provider. External provision has a low base cost and a high metered cost, so you get the best value by making sure your allowances for CPU, RAM and disk are a reasonably tight fit with your actual requirements, relying on service elasticity to expand as needed. The traditional approach of building a lot of advance headroom into everything will cost you dearly. You cannot expect an external provider to deliver “your mess for less”; if you don’t right-size, what you will get is “your mess for more”.
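As an illustration of why advance headroom hurts in a metered model (all rates and sizes below are invented, not real provider pricing):

```python
# Compare over-provisioned vs right-sized monthly cost under metered pricing.
# All rates and sizes are invented for illustration only.
VCPU_RATE, RAM_GB_RATE, DISK_GB_RATE = 25.0, 5.0, 0.10  # dollars per month

def monthly_cost(vcpus: int, ram_gb: int, disk_gb: int) -> float:
    return vcpus * VCPU_RATE + ram_gb * RAM_GB_RATE + disk_gb * DISK_GB_RATE

# Traditional sizing: generous headroom baked into the VM up front.
oversized = monthly_cost(vcpus=8, ram_gb=64, disk_gb=2000)
# Right-sized to measured demand, relying on elasticity to grow later.
rightsized = monthly_cost(vcpus=4, ram_gb=24, disk_gb=800)

print(f"Over-provisioned: ${oversized:,.2f}/month")   # $720.00
print(f"Right-sized:      ${rightsized:,.2f}/month")  # $300.00
```

The headroom itself is what you pay for every month, whether or not you ever use it.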
And it’s not necessarily true that all of your services are best met by the one or two tiers that a single Cloud provider offers. This is where the Hybrid Cloud comes in, and more than that, this is where a Cloud Management Platform (CMP) function comes in.
“Any substantive cloud strategy will ultimately require using multiple cloud services from different providers, and a combination of both internal and external cloud.” Gartner, September 2013, (Hybrid Cloud Is Driving the Shift From Control to Coordination).
A CMP such as VMware’s vRealize Automation, RightScale, or Scalr can actually take you one step further than a simple Hybrid Cloud. A CMP can allow you to right-locate your services in a policy-driven and centrally managed way. This might mean keeping some services in-house, some in an enterprise I.T. focused Cloud with a high level of performance and wrap-around services, and some in a race-to-the-bottom Public Cloud focused primarily on price.
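A toy sketch of what policy-driven right-location might look like; the services, tiers and providers below are hypothetical, and real CMPs such as vRealize Automation express this through their own policy engines:

```python
# Toy policy-driven placement: map each workload's requirements to a cloud.
# Services, tiers and providers below are hypothetical examples.
PLACEMENT_POLICY = {
    # (data sensitivity, performance tier) -> target cloud
    ("restricted", "high"):     "in-house private cloud",
    ("internal",   "high"):     "enterprise-grade IaaS provider",
    ("internal",   "standard"): "enterprise-grade IaaS provider",
    ("public",     "standard"): "commodity public cloud",
}

workloads = [
    {"name": "payroll",  "sensitivity": "restricted", "tier": "high"},
    {"name": "erp",      "sensitivity": "internal",   "tier": "high"},
    {"name": "test-dev", "sensitivity": "public",     "tier": "standard"},
]

for w in workloads:
    target = PLACEMENT_POLICY[(w["sensitivity"], w["tier"])]
    print(f"{w['name']}: place on {target}")
```

The point is that placement decisions are made once, in policy, rather than service by service in an ad hoc way.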
Some organisations are indeed consuming multiple services from multiple providers, but very few are managing this in a co-ordinated, policy-driven manner. The kinds of problems that can arise are:
This layer of policy and management has a natural home with the I.T. department, but as an enabler for enterprise-wide in-policy consumption rather than as an obstacle.
With a service brokering capability, I.T. becomes the central point of control, provisioning, self-service and integration for all IT services, regardless of whether they are sourced internally or externally. This allows an organisation to mitigate the risks, and take the opportunities, associated with Cloud.
I will be enjoying the Christmas break and extending it well into January, as is traditional in this part of the world, where Christmas coincides with the start of summer.
Happy holidays to all.