What’s Behind AT&T’s Big Bet on Edge Computing
Hyper-scale cloud has big advantages in scale and efficiency, but for some things you need to have the computation done closer to the problem.
July 31, 2017
By Pino Vallejo
Brought to you by Data Center Knowledge
Hyper-scale cloud has big advantages in scale and efficiency, but for some things you need to have the computation done closer to the problem. That’s what AT&T is promising with its upcoming edge computing services that will put micro data centers in its central offices (think telephone exchange), cell towers, and small cells.
Eventually this edge computing network will use the future 5G standard to lower the latency even further. That will open up possibilities like using high-end GPUs AT&T says it will place at the edge of its network for highly parallel, near-real-time workloads. Take off-board rendering for augmented reality for example. Instead of rendering the overlay for AR frame by frame on your device, a cloud system that doesn’t have to worry about using up too much battery power could pre-render an entire scene and then quickly send what’s relevant as you turn your head.
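To make that split concrete, here is a minimal sketch of the idea, assuming a hypothetical edge service that pre-renders a full 360-degree panorama while the headset only crops out the slice matching the current head yaw. All names, resolutions, and functions below are illustrative, not AT&T's design.

```python
import numpy as np

# Illustrative only: the edge node renders one wide panorama per scene update,
# and the headset extracts the viewport for the current head orientation
# instead of rendering every frame locally (saving battery on the device).

PANORAMA_WIDTH = 7680   # pixels covering 360 degrees of yaw (hypothetical)
PANORAMA_HEIGHT = 1080
VIEWPORT_WIDTH = 1920   # pixels the headset actually displays

def render_panorama_on_edge(scene_id: str) -> np.ndarray:
    """Stand-in for the GPU-heavy rendering step that would run on edge hardware."""
    rng = np.random.default_rng(hash(scene_id) % (2**32))
    return rng.integers(0, 255, (PANORAMA_HEIGHT, PANORAMA_WIDTH, 3), dtype=np.uint8)

def viewport_for_yaw(panorama: np.ndarray, yaw_degrees: float) -> np.ndarray:
    """Cheap, battery-friendly per-frame crop the headset can do itself."""
    center = int((yaw_degrees % 360.0) / 360.0 * PANORAMA_WIDTH)
    cols = (np.arange(VIEWPORT_WIDTH) + center - VIEWPORT_WIDTH // 2) % PANORAMA_WIDTH
    return panorama[:, cols, :]

panorama = render_panorama_on_edge("lobby")           # done once, off-device
frame = viewport_for_yaw(panorama, yaw_degrees=42.0)  # done as the head turns
print(frame.shape)  # (1080, 1920, 3)
```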
“Today, one of the biggest challenges for phones running high-end VR applications is extremely short battery life due to the intense processing requirements,” an AT&T spokesperson told us. “We think this technology could play a huge role in multiple applications, from autonomous cars, to AR/VR, to robotic manufacturing and more. And we’ll use our software-defined network to manage it all, giving us a significant advantage over competitors.”
More Than a Dumb Pipe to Cloud
The definition of edge computing is a little fuzzy; it often refers to aggregation points like gateways or hyperconverged micro data centers on premises. What AT&T is promising from its tens of thousands of sites, “usually never farther than a few miles from our customers,” is perhaps closer to fog computing.
“Edge is different things to different people. Every vendor defines the edge as where they stop making products, and for AT&T the edge of their network is the RAN (Radio Access Network),” Christian Renaud, IoT Research Director at 451 Research, told Data Center Knowledge. “They’re talking about multi-access edge computing, MEC, which is a component of fog computing. For AT&T, it’s their way of saying ‘don’t just treat us as backhaul, or as a dumb pipe to hyper-scale cloud’.”
New categories of applications — from data analytics using information from industrial sensors to upcoming consumer devices like VR headsets — are pushing the demand for compute that’s closer to where data is produced or consumed. “This is because of applications like autonomous vehicles co-ordination — vehicle to vehicle and vehicle to infrastructure — or VR, where because of the demands of your vestibulo-ocular reflex for collaborative VR, there are fixed latencies you have to adhere to,” he explains. In other words, a VR headset has to render images quickly enough to trick the mechanism in your brain responsible for moving your eyes to adjust to your head movements.
“There are applications that demand sub-10 millisecond latency, and there’s nothing you can do to beat the speed of light and make data centers respond in five or 10 milliseconds,” Renaud said. “It’s impossible to haul all the petabytes of data off the sensors in a jet engine at the gate to the cloud and get the analysis that says the engine is OK for another flight in a 30-minute turnaround time.”
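A back-of-the-envelope calculation shows why the speed of light sets a hard floor. Light in optical fiber travels at roughly 200,000 km/s (about two-thirds of c), so even before any routing, queuing, or processing, a round-trip latency budget caps how far away the responding data center can be. The budgets below are illustrative.

```python
# Rough illustration of how a latency budget bounds distance.
# Assumes ~200,000 km/s for light in fiber (about 2/3 of c); real networks
# add routing, queuing, and processing delay on top of this physical floor.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond

def max_one_way_distance_km(round_trip_budget_ms: float) -> float:
    """Farthest a server can be if the entire budget went to propagation alone."""
    return round_trip_budget_ms / 2 * SPEED_IN_FIBER_KM_PER_MS

for budget_ms in (5, 10, 30):
    print(f"{budget_ms:>2} ms round trip -> server within ~{max_one_way_distance_km(budget_ms):.0f} km")

# Output:
#  5 ms round trip -> server within ~500 km
# 10 ms round trip -> server within ~1000 km
# 30 ms round trip -> server within ~3000 km
```

In practice the workable distance is far shorter, because protocol overhead and processing consume most of the budget before propagation is even counted.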
See also: Edge Data Centers in the Self-Driving-Car Future
Physics dictates that the compute-analysis-action loop happen closer to where the data is produced. It needs to happen in milliseconds, preferably single-digit or low-double-digit milliseconds, and that dictates the geographical placement of edge computing capacity. It can take the shape of onboard compute on the device itself, a dusty old PC on the manufacturing floor, or a server in a colocation data center. Data from non-stationary devices has to go to the MEC (via a RAN) as its first stop, and there’s plenty of opportunity for network operators to add value beyond transport by pushing compute closer to the edge.
That’s what AT&T is betting on. The company says it will be able to offer edge computing at tens of thousands of locations because it already has its network. Initially it’s targeting ‘dense urban areas’, where it has more bandwidth and more cell tower sites.
“They have a unique geographical footprint that affords them all sorts of advantages they can exploit in the IoT compute and analysis battles to come by putting things into their small cells and macro cells and central offices, which are so geographically distributed and often placed very close to the source of data origination,” Renaud said.
The Role of Software-Defined, Virtual Networking
AT&T highlighted software-defined networking (SDN) and network functions virtualization (NFV) as key to delivering these services efficiently, and Renaud agrees. “You might have vehicles in motion, moving from cell tower to cell tower, while you’re also trying to dynamically say ‘here is your resource allocation’. The NFV and SDN piece is critical because of the coordination. If you’re shifting compute to be closest to the point of data origination to reduce latency, and that point of data origin is going 150 miles per hour — or 400 miles an hour if it’s an aircraft — that makes it a harder problem to solve for.”
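One way to picture that coordination problem is as a placement decision that has to be re-evaluated continuously: as the device moves, an orchestrator keeps asking which edge site is now closest and hands the workload, or the traffic, over accordingly. The sketch below uses invented site names and coordinates and is not AT&T's orchestration logic.

```python
import math

# Hypothetical edge sites (name, latitude, longitude) -- purely illustrative.
EDGE_SITES = [
    ("central-office-a", 30.27, -97.74),
    ("cell-tower-b",     30.40, -97.70),
    ("small-cell-c",     30.51, -97.67),
]

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_site(lat, lon):
    """Pick the edge site the workload should currently be pinned to."""
    return min(EDGE_SITES, key=lambda s: distance_km(lat, lon, s[1], s[2]))

# A vehicle reporting positions as it moves north: the chosen site changes,
# and that change is the handoff the SDN/NFV layer would have to orchestrate.
for lat, lon in [(30.25, -97.75), (30.38, -97.71), (30.50, -97.68)]:
    print(nearest_site(lat, lon)[0])
# central-office-a, cell-tower-b, small-cell-c
```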
AT&T CFO John Stephens told investors last week that 47 percent of its network functions are already virtualized, and he expects that to reach 55 percent by the end of the year and 75 percent by 2020. Network virtualization and automation are key to AT&T lowering its operating costs, hence its purchase of Brocade’s Vyatta network operating system, which includes the Vyatta vRouter and a distributed services platform. But this also gives it the tools it needs to build edge computing services.
See also: Telco Central Offices Get Second Life as Cloud Data Centers
AT&T already offers local network virtualization with its FlexWare service, which uses a local, MPLS-connected appliance to let businesses run virtual network functions, such as Palo Alto Networks or Fortinet firewalls, Juniper or Cisco routers, or Riverbed’s virtual WAN, on standard servers.
“Pandora’s Box of Quality of Service”
The applications AT&T eventually wants to support are more complex, though, and will raise issues like multi-tenancy and ‘noisy neighbors’ once many latency-sensitive applications share the network.
“If you have a service that allows you to hail autonomous cars, and they’re one of multiple tenants on the MEC, on the radio access network, you have to work through issues like billing and prioritization,” Renaud said. “If you have emergency vehicles, they might need to commandeer the bandwidth, so would other vehicles have to pull over and stop? If there’s a VR gaming competition going on and everyone is hitting the network, and they need super low latencies, but you have all the cars driving themselves around in the autonomous zone, hitting the same data center’s resources and the same radio network, which one do you prioritize? We’re opening Pandora’s box of quality of service.”
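The prioritization question in that quote can be pictured as a simple strict-priority scheme for whatever low-latency capacity a single edge site has. The tenant classes and their ranking below are invented for illustration and deliberately ignore the fairness, preemption, and billing policy a real answer would need.

```python
import heapq

# Illustrative only: strict-priority scheduling of latency-sensitive requests
# at one edge site. Lower number = served first. The classes and ordering are
# invented; a real multi-tenant design needs fairness, preemption, and billing.
PRIORITY = {"emergency-vehicle": 0, "autonomous-car": 1, "vr-gaming": 2}

def schedule(requests):
    """Return the order a strict-priority scheduler would serve the requests."""
    heap = [(PRIORITY[kind], i, kind) for i, kind in enumerate(requests)]
    heapq.heapify(heap)
    ordered = []
    while heap:
        _, _, kind = heapq.heappop(heap)
        ordered.append(kind)
    return ordered

print(schedule(["vr-gaming", "autonomous-car", "emergency-vehicle", "vr-gaming"]))
# ['emergency-vehicle', 'autonomous-car', 'vr-gaming', 'vr-gaming']
```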
AT&T may also have an advantage in rolling out the 5G technology it will need for some of the more ambitious uses of edge computing, like multiplayer VR at a low enough latency to avoid nausea. It’s already talking about expanding its fixed wireless 5G using millimeter wave (mmWave) beyond a pilot in Austin, Texas. The carrier also has the contract to design and operate the national FirstNet first-responder network (and it sees opportunities for edge computing for public safety in that network).
Running the FirstNet network gives AT&T access to the 700MHz spectrum, and it will start deploying wireless services in that spectrum by the end of 2017. AT&T will be deploying LTE License Assisted Access (LAA) with carrier aggregation as part of building out the network, delivering additional bandwidth that lets it start preparing for 5G without having to do multiple updates to its cells and towers. With network virtualization, Renaud points out, “they don’t have to do truck rolls; they can update their infrastructure at the flip of a software switch”.
No Need to Wait for 5G to Arrive
But AT&T can start offering some services without waiting for 5G, Renaud believes.
“When 5G comes, it’s going to shave a lot of the latency off and increase the speeds. If I’ve got a latency budget for an app like multi-party participatory VR with a shared 3D space, I can trim that on the transport side to the data center, and I can trim that inside the data center by using SDN profiles. So this much of that goes to TCP windowing, and this much goes to the speed of light, and this much goes to the fundamental RAN speed and the latencies there; I’m going to be shaving off as much as I can in each of those areas already. 5G is just going to shave [more latency off] on the access side to the MEC resources, to their central offices and data centers. But I can put assets in the MEC or the RAN or my data centers that are not predicated on standards bodies coming to a consensus on 5G. I can still solve problems with 4G and judicious placement of compute resources; maybe I’m able to solve 80 percent of those now.”
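Renaud’s notion of a latency budget can be made concrete with a toy breakdown. Every figure below is a made-up placeholder, chosen only to show how the components add up and which of them 5G (the radio leg) versus compute placement (the transport leg) would shrink.

```python
# Made-up numbers purely to illustrate how a latency budget decomposes;
# none of these figures come from AT&T or 451 Research.
budget_ms = 20.0  # e.g. what a shared-space multiplayer VR app might tolerate

components_ms = {
    "radio access (4G today; shrinks with 5G)": 8.0,
    "transport to the edge site (shrinks with closer compute)": 4.0,
    "TCP windowing / protocol overhead": 3.0,
    "processing inside the edge data center": 4.0,
}

total = sum(components_ms.values())
for name, ms in components_ms.items():
    print(f"{name:<58} {ms:4.1f} ms")
print(f"{'total':<58} {total:4.1f} ms (budget {budget_ms:.1f} ms)")
```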
Yet Another Unanswered Question
It’s worth noting that AT&T’s network only covers the US, and that multinational customers will likely want this kind of service at all their locations worldwide. “Either AT&T will end up contracting with Telefonica and similar carriers to leverage comparable infrastructure, or this will be US-only until those standards bake a bit more,” Renaud notes.
AT&T couldn’t tell us more precisely when this service might go live. “We don’t have any target markets or trials to announce at this time,” the spokesperson told us. “We’re still in the early development phase. We hope to start deploying this capability over the next few years, as 5G standards get finalized, and that technology gets ready for broad commercialization.”