All HCI Solutions Are Not Created Equal – Fortunately
With many organizations' first hyperconverged infrastructure refreshes behind them, we have enough information to examine what happens when upgrading or refreshing HCI solutions.
October 14, 2017
By Barry Phillips, Maxta
New technologies often fundamentally change how IT procures and operates infrastructure, and hyperconverged infrastructure (HCI) is no exception.
HCI is the combination of compute and storage into an easy-to-manage solution that doesn’t require dedicated storage experts to keep it running.
The business case for selecting HCI over traditional infrastructure has long since been made.
As with other new technologies, customers often do not know at the time of initial purchase what system upgrades or refresh cycles will look like down the road.
We are now at a stage where HCI solutions from multiple vendors have been proven in the field by thousands of organizations for at least one refresh cycle.
With many organizations’ first HCI refreshes behind them, we have enough information to examine what happens when upgrading or refreshing HCI solutions.
Perhaps the most important lesson to come from analyzing data center HCI refreshes and upgrades is that not all HCI solutions are created equal.
Much of the promised value of HCI requires an HCI solution with the fewest restrictions possible.
Unfortunately, hardware- or appliance-based HCI as implemented by many vendors didn’t live up to those flexibility claims, leaving customers unable to capitalize on this value when it came time to upgrade or refresh their HCI solution.
The initial sales cycle of HCI focused on the high cost of storage arrays.
Storage arrays from the big players were eye-wateringly expensive and there were straightforward savings to be had by just ejecting the traditional array vendors.
One did not have to invoke HCI TCO benefits or complicated grow-as-you-need economic models.
The margins the major array vendors were used to were just that high.
Today, this is different.
Competition in the storage industry has driven the cost of storage arrays down.
There are lots of vendors to choose from if you want an array.
Now, many hardware-based appliance HCI solutions are just as expensive to procure as storage array architectures, and they follow the same refresh model in which the customer has to repurchase the software license when refreshing the hardware. The difference is that with hardware-based HCI it’s a three-year server refresh cycle instead of a five-year storage array refresh cycle.
If HCI is going to see mass adoption, then it is worth understanding some of the roadblocks that might be faced later in the product lifecycle.
Flexibility of HCI
Much of the long-term value from HCI comes from its ability to grow with the organization organically, ditching the fixed refresh period.
Unfortunately, it’s very common for vendors not to understand where and how this applies.
When most HCI vendors talk about the flexibility of HCI, they talk about adding nodes when you need more storage or compute.
Their discussions and examples are all focused on their own product; they assume that capacity or performance constraints of the HCI solution will be the driver of upgrades or refreshes to the HCI platform.
For those vendors whose HCI solutions are rigid and inflexible, this is probably true.
For those HCI vendors whose focus has been on delivering a truly flexible solution, one of the most valuable features is that the HCI solution itself no longer has to drive the upgrade cycle.
There are lots of other components in a data center.
It is, for example, usually a lot more burdensome to swap out switches or increase the number of switch ports in a rack than it is to swap out some compute or storage nodes. Power and cooling considerations dictate how much equipment we can pack into a square foot.
WAN connectivity puts a limit on how much data churn can occur on a site before we can no longer meet backup and DR requirements.
A flexible HCI solution offers customers the ability to upgrade or replace nodes as needed.
It allows administrators to move software licenses from host to host, or even to license in more flexible terms, such as total storage under management, without fretting about the number of hosts.
A flexible HCI solution doesn’t punish or seek to exploit organizations that change.
Instead, it embraces change and makes it a selling point.
Actually accomplishing this isn’t easy.
HCI solutions, like all other technology offerings before them, have proven to have their problems.
HCI solutions can be fragile and brittle, making flexible implementation impossible.
Despite these problems, industry experts agree that HCI has been normalized and is quickly becoming the mainstream way to do IT infrastructure.
So where are the potential gotchas, and what should organizations look out for when selecting an HCI solution?
Success through flexibility
One lesson learned is that HCI solutions can be quite inflexible.
Offerings from multiple vendors – including industry-leading nameplate vendors – have proven to be uncomfortably restrictive.
The dominant example of this is restrictions on the use of dissimilar nodes within a cluster; the balance between storage capacity, storage performance, CPU capabilities and amount of RAM isn’t allowed to vary overmuch.
These restrictions have made themselves known in multiple ways.
In practice, the lack of node diversity has led to stranded resources.
The most common complaint amongst HCI customers is that they have found themselves adding more nodes to a cluster than their performance requirements would otherwise call for, just to get additional storage capacity.
Some customers have the opposite problem: they cannot add compute-intensive nodes with adequately powerful CPUs or enough RAM without also committing to more storage capacity than the cluster actually needs.
In short, one of the most common criticisms of HCI is that the limited variation in appliances offered by vendors doesn’t satisfy the diversity of demand in the real world.
This lack of flexibility amongst major HCI vendors has some real-world consequences for those seeking to refresh existing HCI solutions.
One of the major promises of HCI was an end to forklift upgrades.
All upgrades would be non-disruptive.
Workloads wouldn’t have to come down during refreshes.
Truly, a new age had dawned on the data center.
This non-disruptive upgrade only works if the HCI solutions in question are flexible enough to cope with clusters that have dissimilar nodes.
Non-disruptive upgrades require adding new nodes to an existing cluster, live-migrating workloads over and then failing out the older nodes until they’re all gone.
Perhaps the older nodes get reused for less critical workloads, or given over to test and dev, but the critical workloads need to move from the old to the new without fuss or headache.
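As a rough illustration of what that add-migrate-retire loop looks like, consider the sketch below. Every object and method in it is a hypothetical placeholder rather than any particular vendor's API; real HCI platforms expose equivalent operations through their own management tooling.

```python
# Illustrative sketch only: the add-migrate-retire loop behind a rolling,
# non-disruptive HCI refresh. The cluster, node and vm objects and their
# methods are hypothetical placeholders, not any vendor's actual API.

def rolling_refresh(cluster, new_nodes, old_nodes):
    # Phase 1: join the new hardware and let the storage layer re-protect
    # data copies onto it before anything is removed.
    for node in new_nodes:
        cluster.add_node(node)
        cluster.wait_for_rebalance()

    # Phase 2: drain and retire the old generation, one node at a time,
    # so workloads never go down.
    for node in old_nodes:
        for vm in node.running_vms():
            cluster.live_migrate(vm)        # move workloads off the old node
        node.enter_maintenance_mode()       # flush remaining data replicas
        cluster.remove_node(node)           # fail the node out of the cluster
        # A retired node can be re-imaged for test/dev or a secondary cluster.
```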
Fortunately, not all HCI was created equal.
Refreshing the refresh cycle
Before HCI, storage refreshes were horrible.
Despite this, there was often still economic incentive to do them “early,” rather than in sync with compute – especially during those lazy years when Intel had no real competition and this refresh cycle’s CPUs didn’t offer much over the last.
HCI changed this by allowing for rolling upgrades.
A flexible HCI vendor provides organizations the ability to grow clusters as they need and to organize phased refreshes.
Flexible HCI vendors offer not only pre-canned appliances; they also work with channel partners to offer reference-architecture-based solutions with quite a bit of variability allowed per node.
The highest level of flexibility among HCI vendors comes from those willing to offer their solution as software-only, giving customers choice and allowing them to define their own needs.
Software-only HCI solutions will run on any x86 server, whether brand-name or white box.
They will run on any server model, from aged systems to the latest and greatest.
They can work with any storage media and, in some cases, multiple hypervisors.
This is a dramatic change from how storage was done before HCI.
There weren’t a lot of white-box storage array offerings, nor was there much in terms of channel-driven reference architectures.
Organizations could pick from among the pre-defined offerings and just sort of had to pray that when the next refresh came, the vendor they had selected had some idea how to migrate from one array to the next.
An entire industry sprang up around data migration.
Fortunes were made by those offering the ability to take data from one vendor’s storage solution and move it to another.
Growing only as needed and smoothly migrating from one generation to the next were pipe dreams.
Against this backdrop, it’s little wonder that even inflexible HCI vendors able to deliver only limited customer choice were still able to achieve success.
Now, it’s time for organizations to demand more.
Revisiting the promise of HCI
For all that HCI can seem inflexible, time does change the economics.
Today’s HCI solutions will give you more for your money than the appliances that were sold five years ago.
One can run more workloads in fewer rack units, an important consideration for organizations with growing IT demands but a fixed amount of data center space.
Organizations may also be looking to take advantage of the commoditization of flash to improve performance.
Five years ago, NVMe flash was not available in HCI solutions.
Meeting the performance demands of primary workloads required expensive half-height, half-length (HHHL) PCIe flash cards and/or numerous SATA/SAS drives.
Today, a single NVMe drive can inexpensively deliver the performance needed by all but the most demanding workloads.
HCI solutions equipped with multiple NVMe drives can rival even top-tier dedicated all-flash arrays.
HCI solutions use standard Ethernet networking, meaning no dedicated storage networking components are required.
HCI solutions also make use of enterprise data services to provide built-in data protection, a capability that reduces the need for additional data center complexity.
The data center refresh isn’t dead, even for organizations that invest in HCI.
Taking advantage of all that a refresh cycle done today can offer, however, requires choosing an HCI vendor that’s flexible.
HCI software delivers
To meet organizational needs as they emerge, HCI solutions need to be able to support a diversity of node types in a single cluster.
Failing out older nodes needs to be simple, and not require taking down the whole cluster.
The HCI solution should support reusing those old nodes for different use cases, ultimately allowing an organization to rebalance their HCI clusters as their needs change and the nodes age.
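To make that rebalancing point concrete, here is a minimal sketch of how one might model a mixed-generation cluster and sanity-check the resources that remain before failing out the oldest nodes. The Node class and every figure in it are invented for illustration and are not drawn from any vendor's sizing data.

```python
# Minimal sketch, assuming a mixed-generation cluster: model each node's
# resources and check what remains before failing out the oldest nodes.
# All node names and specs are illustrative, not vendor data.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    generation: int
    cpu_cores: int
    ram_gb: int
    storage_tb: float

cluster = [
    Node("node-01", generation=1, cpu_cores=16, ram_gb=128, storage_tb=8.0),
    Node("node-02", generation=1, cpu_cores=16, ram_gb=128, storage_tb=8.0),
    Node("node-03", generation=2, cpu_cores=32, ram_gb=512, storage_tb=30.0),
    Node("node-04", generation=2, cpu_cores=32, ram_gb=512, storage_tb=30.0),
]

def totals(nodes):
    """Aggregate raw resources across a set of nodes."""
    return (sum(n.cpu_cores for n in nodes),
            sum(n.ram_gb for n in nodes),
            sum(n.storage_tb for n in nodes))

# Confirm the cluster still meets its commitments once gen-1 nodes retire.
remaining = [n for n in cluster if n.generation > 1]
cores, ram_gb, storage_tb = totals(remaining)
print(f"After retiring gen-1 nodes: {cores} cores, {ram_gb} GB RAM, {storage_tb} TB raw")
```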
In other words: proper HCI should be flexible.
Licensing can enable or interfere with solution flexibility.
When sold as a software solution not tied to a physical appliance, HCI licensing can provide organizations with a powerful storage solution that adapts to ever-changing needs.
When tied to a physical appliance, HCI licensing becomes just another throwaway cost, its value ending when the hardware is retired.
With a software-driven HCI model, organizations don’t pay for storage twice.
If an existing server is no longer adequate, it can simply be upgraded or replaced.
The licensing will move to the new equipment.
A software approach to HCI is a flexible approach to HCI and means organizations can invest in only the resources they need, when they’re needed, without having to worry about stranded resources.
Organizations can keep their existing equipment for as long as it is serviceable and economically useful to operate.
No one can really know what the future holds.
At best, we can make educated guesses.
We provision our data centers based on careful analysis and prediction, but the span between refresh cycles is long.
In that time, anything can happen.
A single year can see mergers, acquisitions and, if all goes well, explosive and unpredictable growth.
Keeping one’s options open means more than saving a little money; it means preserving the ability to adapt to change.
Picking the right technology is a start, but picking the right vendors is the path to success.
Barry Phillips is responsible for marketing at Maxta, which has a unique software approach to hyperconvergence that enables service providers to choose their own servers and hypervisors.