Scale-Out NAS: The Cornerstone of a Hybrid Cloud

Old-school scale-up storage systems are no match for next-gen cloud architectures.

Channel Partners

April 30, 2016


Use of hybrid clouds is on the rise, for good reason — they can maximize budget efficiency and meet performance goals at the same time. But there are challenges, including storage. Today’s data explosion is putting a strain on traditional storage architectures that are built to scale vertically. That model was simply not created to handle such huge quantities of data — at least, not cost-effectively, and not with high performance.

The storage status quo definitely can’t make the leap to support a hybrid cloud.

Your customers need scalability with affordability, and that’s the promise of software-defined and scale-out storage solutions. Let’s address some design elements, with guidance to ensure your customers’ hybrid clouds deliver the storage performance, flexibility and scalability they need.

Be Consistent

A scale-out NAS is the critical building block for a hybrid cloud storage solution. Since hybrid cloud architectures are relatively new to the market – and even newer in production deployments – many organizations are unaware of the importance of consistency in a scale-out NAS.

Many environments are what’s called “eventually consistent,” meaning that files written to one node are not immediately accessible from other nodes. This lag can be caused by improper implementation of protocols, or not having tight enough integration with the virtual file system.

The opposite of this is being strictly consistent: Files are accessible from all nodes at the same time. Compliant protocol implementations and tight integration with the virtual file system are a good recipe for success.
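To make the distinction concrete, here is a minimal Python sketch of the two models. The Node and Cluster classes and their methods are hypothetical illustrations, not any vendor’s API: the strict path applies a write on every node before acknowledging it, while the eventual path acknowledges after one node and propagates later.

```python
# Minimal sketch contrasting strict and eventual consistency.
# Node and Cluster are hypothetical illustrations, not a real API.

class Node:
    def __init__(self, name):
        self.name = name
        self.files = {}          # path -> bytes visible on this node

class Cluster:
    def __init__(self, nodes, strict=True):
        self.nodes = nodes
        self.strict = strict
        self._pending = []       # writes not yet propagated (eventual mode)

    def write(self, path, data):
        if self.strict:
            # Strictly consistent: the write is applied on every node
            # before the client gets an acknowledgment.
            for node in self.nodes:
                node.files[path] = data
        else:
            # Eventually consistent: ack after one node; the rest lag.
            self.nodes[0].files[path] = data
            self._pending.append((path, data))

    def propagate(self):
        # Background replication that an eventually consistent
        # system runs some time after the ack.
        for path, data in self._pending:
            for node in self.nodes[1:]:
                node.files[path] = data
        self._pending.clear()

cluster = Cluster([Node("a"), Node("b")], strict=False)
cluster.write("/share/report.txt", b"v1")
print(cluster.nodes[1].files.get("/share/report.txt"))  # None: the lag
cluster.propagate()
print(cluster.nodes[1].files.get("/share/report.txt"))  # b'v1' eventually
```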

A three-layered hybrid cloud architecture that incorporates a scale-out NAS is one way to ensure tight data consistency. Each server in the cluster will run a software stack based on these layers (sketched in code after the list):

  • The first layer is the persistent storage layer. This layer is based on an object store, which provides advantages like extreme scalability. However, the layer must be strictly consistent in itself. A key responsibility of the storage layer is to ensure redundancy, so a fast and effective self-healing mechanism is a must. To keep the data footprint low, the storage layer needs to support multiple file encodings: some favor performance, others a smaller data footprint.

  • The virtual file system is the heart of any scale-out NAS. It is in this second layer that features like caching, locking, tiering, quota and snapshots are handled.

  • The third layer contains the protocols, like SMB and NFS, but also integration points — for hypervisors, for example.
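Here is a toy Python sketch of one server’s stack under this three-layer assumption. The class names (ObjectStore, VirtualFileSystem, ProtocolFrontend) are illustrative inventions, not a real product API:

```python
# A toy sketch of the three-layer stack on one server; all class and
# method names here are illustrative, not a vendor API.

class ObjectStore:
    """Layer 1: strictly consistent persistent storage with redundancy."""
    def __init__(self):
        self._objects = {}
    def put(self, key, blob):
        self._objects[key] = blob   # a real store also replicates/self-heals
    def get(self, key):
        return self._objects[key]

class VirtualFileSystem:
    """Layer 2: the heart of the NAS -- caching, locking, tiering, quotas."""
    def __init__(self, store):
        self.store = store
        self.cache = {}
    def write(self, path, data):
        self.cache[path] = data      # cache first ...
        self.store.put(path, data)   # ... then persist in the object store
    def read(self, path):
        return self.cache.get(path) or self.store.get(path)

class ProtocolFrontend:
    """Layer 3: protocol gateways (SMB, NFS) over the same file system."""
    def __init__(self, vfs, protocol):
        self.vfs = vfs
        self.protocol = protocol
    def handle_write(self, path, data):
        return self.vfs.write(path, data)

store = ObjectStore()
vfs = VirtualFileSystem(store)
smb, nfs = ProtocolFrontend(vfs, "SMB"), ProtocolFrontend(vfs, "NFS")
smb.handle_write("/projects/plan.doc", b"draft")
print(vfs.read("/projects/plan.doc"))  # same data visible via either protocol
```

Keeping the layers this cleanly separated is what lets every protocol frontend see the same virtual file system, which in turn sees the same strictly consistent object store.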

It is very important to keep the three-layer architecture symmetrical and clean. If you manage to do that, future architectural challenges will be much easier to solve, and your customers’ systems can scale up to exabytes of data and trillions of files.

Now let’s look at three other critical areas.

  • Metadata: In a virtual file system, “metadata” are the critical pieces of information that describe the system’s structure. One metadata file may contain information about what documents and other assets are contained in a single folder within the file system. That means a customer will have one metadata file for each folder in its virtual file system, and as the virtual file system grows, it will accumulate more and more metadata files. Smaller deployments might benefit from centralized storage of metadata, but we’re talking about scale-out storage. So, let’s look at what not to do: Storing metadata in a single server causes poor scalability, poor performance and poor availability. Since our storage layer is based on an object store, a better place to keep metadata is in the object store — particularly when we are talking about large quantities of metadata. This setup ensures good scalability, good performance and good availability; the first sketch after this list illustrates the idea.

  • Cache: To increase performance, software-defined storage (SDS) solutions need caching devices. Speed and size matter, as does price, so finding the sweet spot is important. It is also important for an SDS solution to protect data by replicating it to another node before destaging it to the storage layer; the second sketch after this list illustrates that write path. Particularly in virtual or cloud environments, supporting multiple domains and file systems becomes more critical as the storage architecture grows in both capacity and features. Again, supporting multiple file systems is key: Different applications and use cases prefer different protocols, and sometimes it is necessary to access the same data across various protocols.

  • Flexibility: In a hybrid cloud, hypervisor support is an obvious priority. Does your customer have a flat or hyperconverged architecture with no external storage systems? Then the scale-out NAS must be able to run as a virtual machine and make use of the hypervisor host’s physical resources. Guest VM images and data will be stored in the virtual file system that the scale-out NAS provides. Guest VMs can use this file system to share files among themselves, making an SDS solution perfect for VDI environments as well.
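First, a minimal sketch of the metadata approach: one metadata object per folder, kept in the object store itself and keyed by the folder path, so lookups never funnel through a single metadata server. All names here are hypothetical.

```python
import json

# Hypothetical sketch: one metadata object per folder, stored in the
# same object store as the data rather than on a central metadata server.

class ObjectStore:
    def __init__(self):
        self._objects = {}
    def put(self, key, blob):
        self._objects[key] = blob
    def get(self, key, default=None):
        return self._objects.get(key, default)

def add_entry(store, folder, name, kind):
    # Read-modify-write of the folder's metadata object; the folder path
    # itself serves as the object key, so metadata scales with the store.
    key = "meta:" + folder
    meta = json.loads(store.get(key, b"{}"))
    meta[name] = {"type": kind}
    store.put(key, json.dumps(meta).encode())

store = ObjectStore()
add_entry(store, "/projects", "plan.doc", "file")
add_entry(store, "/projects", "archive", "folder")
print(json.loads(store.get("meta:/projects")))
```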
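Second, a sketch of the caching write path: a write lands in the cache, is mirrored to a peer node, is acknowledged, and is only later destaged to the storage layer. Again, the classes are illustrative, not a real implementation.

```python
# Sketch of the cache write path: replicate a cached write to a peer
# node before acknowledging the client, destage to storage later.
# All names are illustrative.

class CacheNode:
    def __init__(self, name, peer=None, store=None):
        self.name = name
        self.peer = peer
        self.store = store
        self.dirty = {}           # writes not yet destaged

    def write(self, path, data):
        self.dirty[path] = data
        if self.peer is not None:
            # Mirror to the peer's cache first, so a node failure
            # before destage does not lose the write.
            self.peer.dirty[path] = data
        return "ack"              # safe to acknowledge the client now

    def destage(self):
        # Later, flush dirty data down to the persistent storage layer
        # and drop the cached copies on both nodes.
        for path, data in list(self.dirty.items()):
            self.store[path] = data
            self.dirty.pop(path)
            if self.peer is not None:
                self.peer.dirty.pop(path, None)

backing = {}
b = CacheNode("node-b")
a = CacheNode("node-a", peer=b, store=backing)
a.write("/vm/disk0.img", b"block-0")
a.destage()
print(backing["/vm/disk0.img"])   # b'block-0' now persistent
```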

These functionalities taken together – a software-defined architecture, supporting both fast and energy-efficient hardware; an architecture that allows users to start small and scale up; support for bare-metal as well as virtual environments; and support for all major protocols – make for a very flexible and useful storage solution.

At some point, many customers will want to share the system among locations that need both private storage and an area shared with other branches. In that model, each site has its own independent file system, and only parts of that file system are shared with others. Choosing a portion of a file system that will be open for others to mount gives customers the flexibility to scale the file system outside the four walls of the office — but make sure that synchronization is performed at the file-system level in order to maintain a consistent view across sites.

Being able to specify different file encodings at various sites is useful if, for example, one site is used as a backup target.
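As a rough illustration of this multi-site model, the sketch below shows a per-site configuration in which each site keeps an independent file system, exports only one subtree for other branches to mount, and chooses its own file encoding. The site names, paths and encoding labels are invented for the example.

```python
# Illustrative multi-site layout: independent file system per site, one
# exported subtree, and a per-site file encoding (e.g., replication for
# speed at an office, erasure coding at a backup target). Hypothetical names.

SITES = {
    "stockholm": {
        "filesystem": "/fs/stockholm",
        "export": "/fs/stockholm/shared",   # only this subtree is mountable remotely
        "encoding": "replication-3x",       # favors performance
    },
    "backup-site": {
        "filesystem": "/fs/backup",
        "export": "/fs/backup/shared",
        "encoding": "erasure-8+2",          # favors footprint, fits a backup target
    },
}

def remote_mounts(local_site):
    # Every other site's exported subtree, mounted into the local namespace.
    # Synchronizing these subtrees at the file-system level is what keeps
    # the view consistent across sites.
    return {
        f"/mnt/{name}": cfg["export"]
        for name, cfg in SITES.items()
        if name != local_site
    }

print(remote_mounts("stockholm"))
# {'/mnt/backup-site': '/fs/backup/shared'}
```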

Valued Adviser

All of these elements work together to form a hybrid cloud storage system that offers what the data-besieged enterprise needs: clean, efficient and linear scaling up to exabytes of data. Because a single file system spans all servers, there are multiple points of entry and no performance bottlenecks. This approach offers flexibility, since nodes can be added easily, along with native support for all major protocols and flash support for high performance.

One overriding principle to keep in mind is, again, to support many protocols — that keeps the architecture flat and, to some extent, enables customers to share data among applications that speak different protocols.

A scale-out NAS gives your customers far greater control of their investments while enabling them to grow into private clouds without breaking the bank or sacrificing performance. They will be grateful for your guidance into this brave new world of storage.

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions built to be cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.
