Lumen CTO, Partner Leader Dish on 'AI Fabric' Pitch to Partners

Lumen's chief technology officer and channel leader say the company is incentivizing partners to sell the company's mesh network architecture solutions to AI-hungry enterprises.

James Anderson, Senior News Editor

August 2, 2024


As Lumen Technologies builds fiber routes and partnerships that will support AI deployments in the data center, the "techco" is working hard to bring its channel partners along in the go-to-market.

Lumen on Thursday announced that Corning will supply it with a large volume of optical cable to build fiber routes connecting "AI-enabled data centers." Just a week prior, Microsoft announced that it would use Lumen's Private Connectivity Fabric custom networks to support its data centers.

These partnerships support what Lumen and chief technology officer Dave Ward are calling an "AI fabric." In addition to building more fiber in less populated areas where data centers may reside, the fabric offers a mesh architecture to help enterprises customize the cloud and colocation components of their network to match their demands.


"The thing that hasn't been discussed that we are building is how you get your data to the GPUs [and] how you get your data to the AI data centers. Those pipes were not discussed in any of the AI economy, and that's what we're bringing to the table. That's what the AI fabric is: how to get your data to the GPUs," Ward told Channel Futures in an interview.

In the meantime, Ward and Breanna Kuhl, senior vice president of global partner solutions, said Lumen is incentivizing its advisor (agent) and systems integrator (SI) partners to participate in this AI fabric. Adding this "outcome-based" sale could require a big adjustment for some partners, as the offering will be more than point-to-point circuits. Kuhl said Lumen is seeking to equip interested partners to make the change without alienating other partners.



"This is the telco channel. You can't you and expect this massive amount of change. You'll start losing agents, and you don't want that. You've got to be delicate, and you've got to make sure that we're feeding them in the right way and we're incentivizing them in the right way," Kuhl said.

Kuhl and Ward both joined Lumen in 2024 as part of a larger leadership refresh under new CEO Kate Johnson. Ward, a Cisco and Juniper alum, came over from PacketFabric in February. Kuhl, who joined in March, has worked for Salesforce, GE Digital and Vonage.

They shared more about the AI fabric and their corresponding go-to-market strategy in an interview with Channel Futures. The Q&A transcript has been edited for length and clarity.

Channel Futures: Will you tell us about your recent partnership announcements around Microsoft and Corning?

Dave Ward: What you're going to be hearing from us into the near future is that, in fact, we're partnering with many AI providers and many AI companies. What we're trying to do is build in the cloud economy the notion of an AI fabric, where Lumen has constructed a national network between all of the places the industry or enterprises need to go to find an AI solution. That means that it's agnostic and multiparty. If somebody is partnering with, let's say, an AI cluster provider that's in one particular data center operator, what does this mean to an enterprise, and how do I get there? That's what our AI fabric is. So we're creating and constructing with our infrastructure the fiber, the ability to have waves on demand, and the ability to have Ethernet and IP, which are just layers of connectivity on top of that, such that enterprises and our partners can now build an AI-based solution.

Most folks are still training their data and need to move large data sets into these GPU farms (GPU clouds, GPU data centers) to train their data, because the value of AI for an enterprise is to unlock data. What I mean by that is inside Lumen and most enterprises, we've got SharePoint and we've got all sorts of network attached servers. We've got directories and we've got wikis, and we've got PowerPoints and documents like everybody else. Unless they've been used or looked at in the last month or so, they get buried somewhere in the company, and all that information is lost. What a large language model over all this data means is that we can then query, and AI will have already pre-read all of that material and give answers out of the entire history of all the electronic data that we have inside the company. That value proposition of unlocking the locked data in an enterprise means that you need to train it on these models. You have to get that data to these data centers. So we're building an AI fabric among these data centers and AI providers and model providers such that those solutions can be created.

CF: You used the term "GPU data centers," and I see the term "AI-enabled data centers" in some of the press releases from Lumen. Are those synonymous?

DW: Right now all training is basically being done on GPUs, and so they almost are synonymous. There are providers who are doing AI training that are CPU-based or especially ASIC-based. That's what AI-enabled is. It's ... just a big umbrella over all the things that could be doing AI training. But in fact, most often it's GPUs.

In the slideshow above, read the full conversation with Ward and Kuhl.


About the Author(s)

James Anderson

Senior News Editor, Channel Futures

James Anderson is a news editor for Channel Futures. He interned with Informa while working toward his degree in journalism from Arizona State University, then joined the company after graduating. He writes about SD-WAN, telecom and cablecos, technology services distributors and carriers. He has served as a moderator for multiple panels at Channel Partners events.

