IoT Redefined by Machine Learning Advances, Edge Computing
Different philosophies are emerging when it comes to “smart” products.
October 18, 2019
By Brian Buntz
From IoT World Today
Smart, connected products are changing the face of competition. That was the thesis of a formative 2014 article in Harvard Business Review that highlighted the transformative potential of information technology integrated into an array of products.
In the past five years, however, the seemingly straightforward terms “smart” and “connected” have become more enigmatic and, arguably, more loaded, and their meaning continues to evolve. Five to 10 years ago, a “smart” product was one with embedded sensors, processors and software. These days, to qualify as “smart,” a device needs to take advantage of at least some form of basic machine learning.
While most assessments conclude that IoT adoption has been steady over the past decade, neural network and machine learning advances have been swift.
Arm’s Steve Roddy
“It’s blossomed at a much faster rate than people thought, even three, four years ago,” said Steve Roddy, vice president of products in Arm’s machine learning group.
Neural Network and Machine Learning Advances Redefine ‘Smart’
One factor driving the acceleration is progress with convolutional neural networks. A pivotal moment came when Alex Krizhevsky, then a graduate student at the University of Toronto, entered the ImageNet competition along with colleague Ilya Sutskever. A visual database that began life more than a decade ago, ImageNet grew into a repository of images spanning thousands of categories. That volume of data supported the launch of a contest, the ImageNet Large Scale Visual Recognition Challenge, in 2010. Two years later, Krizhevsky entered and ultimately won, defeating the then state-of-the-art, hand-written image recognition code.
“There were people who would spend decades of their life writing image recognition software by hand,” Roddy said.
Then, suddenly, a grad student from the University of Toronto created a neural net dubbed “AlexNet” that beat researchers who had spent their careers on the problem.
“Oops,” Roddy joked. “Within two years of that, there’s an explosion of interest in neural nets from researchers.”
Big tech companies also threw their hats into the ring. In 2015, neural nets from Microsoft and Google defeated humans at image recognition. That was the aha moment.
“Neural nets [are] better than what we previously had been able to code by hand with tens of thousands of lines of C code,” Roddy said. In addition, because researchers can train with large enough data sets, “neural [nets] have higher recognition rates on images flashed on the screen than humans,” he added.
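For readers who have not worked with one, a convolutional network for image recognition is surprisingly compact to express in code. The sketch below, written in PyTorch, is illustrative only; the layer sizes are arbitrary and far smaller than AlexNet’s, but the principle is the same: the filters are learned from labeled data rather than written by hand.

```python
# A minimal convolutional image classifier, sketched in PyTorch.
# Layer sizes are illustrative; AlexNet itself is far larger
# (five conv layers, three fully connected, ~60M parameters).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learn compound shapes
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# The filters are learned from data during training, not coded by hand:
model = TinyConvNet()
logits = model(torch.randn(1, 3, 32, 32))  # one 32x32 RGB image
print(logits.shape)                        # torch.Size([1, 10])
```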
Partly as a result of the impressive breakthroughs in image recognition, commercial machine learning and neural network applications are proliferating. Gartner’s most recent hype cycle for AI projects that these technologies are two to five years away from widespread adoption. Already, several consumer electronics manufacturers are scrambling to incorporate hardware to support the use of neural networks in their products. As a case in point, look at smartphone makers like Samsung that have incorporated neural processing units into their products for “visual intelligence.”
“The idea of a dedicated NPU has rapidly proliferated down in price points in phones and is going into lots of other markets,” Roddy said.
More broadly, the pace of machine vision adoption is ramping up for industrial applications as well. An assessment from MarketsandMarkets referenced by Advanced Manufacturing estimates the technology’s compound annual growth rate at 54.8% between 2018 and 2025.
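For context on what that figure implies, a 54.8% compound annual growth rate roughly multiplies a market 21-fold over the 2018-to-2025 window, as a quick calculation shows (this is the standard CAGR compounding formula; the base-year market size is left abstract):

```python
# What a 54.8% compound annual growth rate implies from 2018 to 2025.
cagr = 0.548
years = 2025 - 2018              # seven compounding periods
multiplier = (1 + cagr) ** years
print(f"{multiplier:.1f}x")      # ~21.3x over the window
```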
When the concept of IoT devices began to gain traction several years ago, the notion of rampant connected “things” raised eyebrows in some quarters. The most obvious concern was the security ramifications of a world with billions of headless IoT devices. The other criticism was simply that many emerging IoT devices – especially in the consumer space – applied information technology in ways that seemed silly. Using a smart toaster as an example, ExtremeTech declared in 2017, “The internet of things has officially hit peak stupid.” The most recent Gartner hype cycle dedicated to IoT anticipates that the umbrella technology is five to 10 years away from mainstream adoption.
But the ramifications of smart, connected IoT devices as data gatherers have only recently become apparent. If attending the mammoth Consumer Electronics Show for the past two years is any indication, IoT has in many cases become more of a given than a novelty. And the rationale behind such devices is steadily growing more apparent, as is their occasionally privacy-eroding potential.
Connectivity in a Dawning Edge Computing Era
Advances in voice-recognition technology have helped mainstream IoT in the consumer sector. Researchers have made swift progress with speech recognition in the past decade. Tech heavyweights such as Amazon and Google helped drive improvement in this area through smart speaker technology. But the capabilities of an average so-called “smart device” have paled in comparison to, say, a smartphone. In a device such as a smart speaker, the capabilities of the onboard computing have tended to be relatively limited. The first generations of the technology have relied on cloud connectivity to carry out voice-recognition tasks.
A string of critical articles from the likes of The Washington Post, The Guardian, CNET and others have highlighted the potential of smart speakers and other voice-recognition-capable devices to invade user privacy.
This model risks chipping away at user privacy today and, especially, over the long term. But the situation could be improving. For one thing, smart assistant vendors have recently taken steps to alleviate privacy concerns.
And in the long run, smart home devices and many other types of IoT products will make greater use of edge processing, projected Geoff Lees, senior vice president and general manager of microcontrollers at NXP. The trend has the potential to further address privacy concerns while removing a barrier to adoption in more privacy-conscious markets such as Germany, Lees said.
IoT devices such as smart speakers that send all the data they gather to the cloud for processing are, by definition, privacy-invasive. And the ability of the companies operating such devices to analyze that data will only improve over time, complicating matters further.
“Why does a command of ‘turn the lights on’ have to go to the cloud to activate a light?” Roddy asked. “You want some speech processing locally, because people will realize that if every time they get up to go to the bathroom in the morning and say, ‘Turn the lights on,’ that gets reported back to the hyperscale cloud provider, then all of a sudden the provider can intuit that you’re, say, a man in his 50s who must be having prostate problems. Or, if you are always turning the lights on at 3 a.m., that you must have insomnia,” Roddy added.
The companies responsible for such devices can then use that information for targeted advertising.
But IoT device makers “can still have a device that is connected to the internet and still maintain privacy,” Roddy said. In some cases, having such devices connect to the internet is logical. “If I said: ‘Hey, XYZ assistant, tell me the score of the ballgame,’ at that point in time, that particular snippet is sent to the cloud, parsed, processed and a reply comes back,” Roddy said. “And your assistant tells you: ‘Your team is trailing.’” That’s not intrusive because it was a response to an explicit request. “It’s still a connected device. It’s still a smart device, but it’s now sort of prefiltering and only sending certain things to the cloud,” Roddy said.
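A minimal sketch of the prefiltering pattern Roddy describes might look like the following. Everything here is hypothetical: a real smart speaker would run a small on-device keyword-spotting model rather than string matching, and the function names are placeholders.

```python
# Sketch of edge prefiltering: handle known local commands on-device,
# forward only unrecognized requests to the cloud. All names here are
# hypothetical stand-ins for real device APIs.

def set_lights(on: bool) -> None:
    # Stand-in for a local radio/GPIO call; nothing leaves the device.
    print(f"lights {'on' if on else 'off'}")

def send_to_cloud(utterance: str) -> str:
    # Placeholder for the cloud round trip (parse, process, reply).
    return f"cloud answer for: {utterance!r}"

LOCAL_COMMANDS = {
    "turn the lights on": lambda: set_lights(True),
    "turn the lights off": lambda: set_lights(False),
}

def handle_utterance(utterance: str) -> str:
    action = LOCAL_COMMANDS.get(utterance.lower().strip())
    if action:
        action()                       # handled entirely on the device
        return "done locally"
    return send_to_cloud(utterance)    # explicit requests still go out

print(handle_utterance("Turn the lights on"))
print(handle_utterance("Hey, XYZ assistant, tell me the score of the ballgame"))
```

The design point is simply that connectivity and privacy are not mutually exclusive: routine commands never leave the device, while queries that genuinely need the cloud are sent on request.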
NXP’s Gowrishankar Chindalore
“Having local voice control is the key thing,” agreed Gowrishankar Chindalore, Ph.D., head of technology and business strategy, embedded processors at NXP. “So there’s the voice control, and then there’s local natural voice recognition and understanding. The more of that capability that comes to the edge, to the end device, the better for all these concerns around privacy — and power consumption.”
Ultimately, extensive use of cloud computing for voice recognition scaled across millions of devices is a power-hungry activity.
“We did some quick calculations. We could do about 1,000 operations on the MCU with the same power it takes to send 1 bit of data to the cloud over LTE and bring it back,” Chindalore said. “That’s the order-of-magnitude difference in power consumption. So the more it happens at the edge, the better it is, in general.”
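Taken at face value, that ratio makes the tradeoff easy to estimate. Below is a back-of-the-envelope sketch; the 1,000-operations-per-bit figure is Chindalore’s, while the audio format and the local model’s operation count are illustrative assumptions.

```python
# Back-of-the-envelope energy comparison using Chindalore's ratio:
# sending 1 bit over LTE costs roughly the energy of ~1,000 MCU
# operations. The workload numbers below are illustrative assumptions.

OPS_PER_BIT = 1_000  # MCU operations per bit of LTE traffic (quoted ratio)

def cloud_cost_in_ops(payload_bytes: int) -> int:
    """Energy of a cloud round trip, expressed in equivalent MCU ops."""
    return payload_bytes * 8 * OPS_PER_BIT

def cheaper_locally(local_ops: int, payload_bytes: int) -> bool:
    return local_ops < cloud_cost_in_ops(payload_bytes)

# One second of 16 kHz, 16-bit audio is 32,000 bytes = 256,000 bits,
# costing ~256 million op-equivalents to ship to the cloud.
audio_bytes = 16_000 * 2
keyword_spotting_ops = 5_000_000  # assumed cost of a small local model

print(cloud_cost_in_ops(audio_bytes))                      # 256000000
print(cheaper_locally(keyword_spotting_ops, audio_bytes))  # True
```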