Meet the Channel: Vincent Hsu, IBM
This week we chatted with Vincent Hsu, IBM Fellow, vice president and CTO for Storage and SDE.
TVG: What are some exciting trends you’re seeing in the storage industry right now?
Hsu: I want to talk about flash for a second. I know, don’t roll your eyeballs yet. Flash is an old story. But people are starting to ask us about flash object storage. You know, object storage was traditionally considered a kind of cold storage, but people are asking for those kinds of capabilities now. [For partners,] it isn’t just “is this faster storage?” It’s “can I create a more efficient operating model?”
In 2017, you will see a new I/O protocol interface become available for accessing flash, based on the NVMe protocol, and IBM is also promoting the new OpenCAPI protocol. You will see these new protocols show up, and because they’re new, our partners will be the ones to figure out how to integrate those solutions together, which servers go with which protocol, and which devices provide the most benefit.
TVG: What about trends you are seeing in regard to analytics or software-defined storage?
Hsu: We see 2017 as a time for the transformation of analytics. Up until this moment, when most people think of big data, they typically think about HDFS. We expect to see a sea change in 2017. With people creating more and more data, [analyzing it] takes a long time, and it’s not very efficient. Not only do I have to ingest the data; at times I have to keep two copies of it. It’s not just a capacity problem, there’s also a network bandwidth problem.
The world has come to say, “We want to analyze data in place.” That’s the new model for operating on analytics data.
TVG: Why is this becoming interesting for 2017?
Hsu: Mostly it’s that the data has become so big. In the past, people needed to analyze data at a much smaller scale: a few terabytes, maybe 100 terabytes. Now it’s easy to get into the petabyte scale for the data being analyzed. Making a replica of the data is very costly, and ingesting it is very costly. That’s why we see major revolutions in analytics operations: in-memory analytics capability that works where the data is, instead of moving the data out.
My premise is, you cannot move the data to one location before you do things with it. The easier thing to do is move the operations to where the data resides. Well, that’s easy to say. Now that your data is scattered, or not centralized, you need to move the workload to the right place at the right time with the right data. That coordination calls for a higher level of workload management working in conjunction with this analytics capability.
There’s really no standard way to do it, and I think that, in terms of a business partner or a general partner of ours, they play a very important role in tying those technologies into an integrated solution.
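To make “analyze the data in place” concrete, here is a minimal PySpark sketch of the general pattern Hsu describes: the query runs against data sitting in object storage, so only the small result moves rather than a full copy being ingested into a separate HDFS cluster. The bucket path and column names are hypothetical, and this assumes a Spark environment with an S3-compatible connector configured; it illustrates the pattern, not IBM’s specific implementation.

# Illustrative sketch only: analyze data where it lives (object storage)
# instead of copying it into a separate analytics cluster first.
# The bucket, path, and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("analyze-in-place")
    .getOrCreate()
)

# Read Parquet data directly from the object store (e.g. via an s3a:// connector),
# so no second copy of the data has to be ingested into HDFS.
events = spark.read.parquet("s3a://example-bucket/events/2017/")

# Run the aggregation where the data resides; only the small summary moves.
summary = (
    events.groupBy("region")
    .agg(F.count("*").alias("event_count"))
)

summary.show()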
TVG: Will converged or hyper-converged infrastructure play a large part in the future of flash storage?
Hsu: Converged infrastructure will play a part in it. I don’t know how major a part, but I believe it will play an important part. The thing is that this is a very data-centric view. A lot of times when people talk about hyper-converged infrastructure, they are not looking at it from the data-centric perspective. They’re talking about how many servers, how much storage, and network connectivity. Those are all good things, but they are not operating on the same level; they are not as focused on the data.
That brings us to software-defined storage. I think software-defined storage has been maturing for the last several years. Now you can see software-defined storage running on premises, on converged platforms, or in the cloud, which is very important. Now that clients can run those, they are able to experience the same data services across multiple different implementations. That still leaves the most important problem: how do I make sure it doesn’t become cluster sprawl? In the past, people worried about compute and storage getting out of control. Now it’s easy for the clusters to get out of control.
With the technologies [we’ve talked about], combined with cognitive capability, we start seeing that those things are no longer one size fits all. When we look at this problem and apply the concept of machine learning, the systems start to learn the behavior of an organization and come up with recommendations. Analysis of the environment becomes a very important part of the ecosystem.
TVG: Will that present a lot of opportunities for the channel?
Hsu: Absolutely, because the conversation is no longer just, “Well, I want this dollar per gigabyte and this many I/Os per second.” This is a lot more complicated. By the way, it provides a lot more value. Many people think that all they need are I/Os and dollars per gigabyte. That’s not true. They need to be able to understand the organizational behavior around the data.
I think this is a new opportunity. The best part is that there’s no standard way to do it. People are still learning these things, and this is where the huge opportunity for the business partner is. Because at the end of the day, they’re the ones that know the most about the client environment. They are the ones with the most familiarity with the client. The client has certain regulations and requires certain compliance. Partners are the ones that must be able to educate the system, if you will, to get things right.