AI Trends to Watch in 2020
AI is a shepherd of change and a certified disruptor in 2020. Here we break down key AI trends to expect.
![AI 2020 AI Trends](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt56faeee2cf157d0b/652459453ce442ed5fff7181/AI-2020-AI-Trends.jpg?width=700&auto=webp&quality=80&disable=upscale)
(Image: Shutterstock)
Connected devices are on track to proliferate: Statista estimates that by 2025, there will be 75 billion connected devices. This influx of data requires architectures such as edge computing to handle the information these devices generate. To process that information rapidly, without a round trip to a data center in the cloud or at headquarters, it is better handled directly at the edge.
Traditional processors aren’t suited for artificial intelligence, and AI-powered processes such as self-driving cars, virtual reality and remote surgery will require faster processing in the moment. Recently, though, a growing variety of sophisticated AI systems on a chip have entered the market, with Apple and Intel leading the way and Google and Amazon rumored to be developing similar systems. As more data becomes available, these chips can house more intelligence natively on the device, and the more intelligence on the device, the less need for a constant connection.
On its own, 5G promises data uploads and downloads up to 100 times faster than existing 4G connectivity. But 5G technology is still in its early phases and requires additional infrastructure buildout to truly come to fruition. The Federal Communications Commission is working to provide additional radio spectrum for 5G buildout. As 5G infrastructure develops, it enables new capabilities for bandwidth-heavy processes, such as AI-enabled and IoT-enabled processes that often require response times in milliseconds (consider self-driving cars, remote surgery or augmented reality experiences).
This is one prong of a theme we noted in our discussion of IoT trends to watch in 2020, where a rising tide lifts all boats: AI and IoT require 5G networks to power them for data speed and network reliability and security.
But as Jessica Groopman, industry analyst and founding partner at Kaleido Insights, noted, these trends are maturing in tandem and need to reach critical mass both individually and collectively.
“It’s not whack-a-mole exactly — maybe a rising tide lifts all boats might be a better metaphor,” she said. “Ubiquitous, low-latency connectivity via 5G will enable things we’re still piloting and dreaming about on the blockchain side.”
ComScore predicts that, by 2020, roughly half of all searches will be carried out through voice rather than text. A recent PwC report on the impact of voice-activated devices on consumer behavior found that 72% of respondents had used a voice assistant, and eMarketer forecasts that nearly 100 million smartphone users will use voice assistants in 2020.
“Conversational AI is a big deal; it is rolling out in the enterprise in customer service and internal sales training, for example,” said Aditya Kaul, an analyst with Tractica. At the same time, Kaul cautioned that there is some overhyping of technologies like chatbots that are less sophisticated than conversational AI.
“These chatbots are not AI in the real sense,” Kaul explained. “They use rules, where the conversation is scripted, and if you stray from the script, they break down. They are starting to use deep learning. The challenge is context and reasoning. That is where a lot of work is going on.”
The average cost per cyberattack in 2019 was $4.6 million, up from $3 million in 2018, according to a Radware report. As attacks proliferate and become more sophisticated, human defenses alone are no match.
According to Capgemini’s “Reinventing Cybersecurity with Artificial Intelligence” report, more than 60% of IT pros say they can’t effectively identify a data breach without the use of AI technologies. Forty-eight percent of organizations say their budgets for AI-enabled cybersecurity technologies will increase by an average of nearly 30% in 2020. And 73% of enterprises are testing use cases for AI in cybersecurity across their organizations today, with network security leading all categories.
Despite these aggressive projections concerning AI in cybersecurity, the dearth of skills among professionals is a limiting factor.
“There is a sheer lack of global cybersecurity experts who have the necessary knowledge and skills to work with AI and machine learning-based security algorithms,” wrote Nathan McKinley in “The Promise and Challenges of AI and Machine Learning for Cybersecurity.”
Often a lightning rod in policy circles and among workers alike, intelligent automation brings the specter of job loss as the capacities of AI surpass human ability. Gartner’s prediction was that while 1.8 million jobs would be eliminated, automation would create 2.3 million more. Some studies have been less promising, though. An Oxford Economics study estimated that, for lower-skilled workers, the impact of robotics could be immense, with 20 million manufacturing jobs potentially displaced by 2030.
The reality is that automation is coming. A 2018 McKinsey report notes that intelligent automation requires reskilling of the workforce.
“While we believe there will be enough work to go around (barring extreme scenarios),” the report noted, “society will need to grapple with significant workforce transitions. Workers will need to acquire new skills and adapt to the increasingly capable machines alongside them in the workplace.”
AI benefits from large volumes of data; these data sets help train algorithms to learn. Data privacy regulations, such as the recent General Data Protection Regulation (GDPR), center on the minimization, transparency and deletion of customer data. Edge computing architecture and “federated learning” can play a role here by decentralizing data: data can be used for localized processing on a device without ever being stored in a centralized repository.
“It will keep responsible and ethical AI at the forefront of everyone’s mind,” said Kathleen Walch, principal analyst at Cognilytica.
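To make the federated learning idea concrete, here is a minimal sketch of federated averaging. Everything in it (the linear model, the made-up device data, the variable names) is illustrative; the point is simply that each device trains locally and only model weights, never raw data, leave the device.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=50):
    """One device's local training: plain gradient descent on linear regression."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three devices, each holding private data drawn from the same true model.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(5):
    # Each device computes an update on its own data; a server averages
    # the resulting weights. No raw (X, y) pairs are ever centralized.
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # approaches [2.0, -1.0] without pooling raw data
```

Real systems such as those used for mobile keyboard prediction layer encryption and secure aggregation on top of this basic averaging loop.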
A blockchain is a decentralized network of computers that records and stores data to display a chronological series of events in a transparent and immutable ledger system.
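The immutability comes from chaining: each block stores a hash of the previous block, so rewriting any past record breaks every later link. A toy sketch (illustrative only; real blockchains add consensus, signatures and distribution across many computers):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain; each block records the previous block's hash.
chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, data in enumerate(["shipment received", "shipment inspected"], start=1):
    chain.append({"index": i, "data": data, "prev": block_hash(chain[-1])})

def is_valid(chain):
    # The chain is valid only if every stored "prev" hash still matches.
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

print(is_valid(chain))          # True for the untampered chain
chain[1]["data"] = "tampered"   # rewrite history...
print(is_valid(chain))          # ...and validation fails
```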
In 2019, blockchain was hyped and then failed to meet expectations (owing in part to its sister technology, the digital currency bitcoin). But 2020 may see expectations for blockchain become more tempered. Many real-world use cases for blockchain have gathered steam, such as tracking and tracing food in the supply chain, tracking cargo in shipping, and “health wallets” to manage a patient’s data. One company has even developed AI-enabled tools to govern blockchains.
Image recognition is helping doctors identify disease earlier.
“In health care, it’s a huge deal,” Kaul said. “We can now spot cancer cells in CAT and PET scans. We can identify many conditions. AI is all over the place in health care.”
Wearable technology and data analytics are bringing us closer to an era of tailored health care. Known as precision medicine, this approach lets practitioners develop a health care plan for a patient based on his or her genetic history, location, environmental factors, lifestyle and habits. AI can also use data to predict a patient’s status over time or predictively recommend care based on genetic factors.
At the same time, precision medicine faces a few obstacles. One key area for AI-enabled diagnosis to overcome: data silos. Many sources of medical information, such as electronic medical records, could reside on different platforms that don’t communicate with one another. Getting data consolidated and integrated is a key issue for practitioners to have a comprehensive view of a patient’s health record. Another key issue is data privacy. Regulations like GDPR require that data be stored for finite amounts of time, whereas AI algorithms benefit from large swaths of data.
AI algorithms involve built-in assumptions that can reflect bias, and these biases can in turn drive certain outcomes, such as bias in housing lending that disadvantages minorities. With health data proliferating as wearables come online, and as facial and fingerprint recognition become standard fare for identity verification, data bias in AI-enabled processes becomes more critical to safeguard against.
“The technology is opaque,” noted Mildred Cho, associate director of the Stanford University Biomedical Ethics program, in an article on data ethics.
According to a recent KPMG report, companies invest in people and technology rather than in governance or control frameworks, with only about 25-30% having invested in mechanisms to develop greater trust and transparency.
While larger companies may always opt to cultivate in-house AI capabilities, smaller shops can find AI expertise in managed AI services.
Some companies will find it far easier to use a model that’s already been built and trained with data, even for a different industry. This new “model as a service” form of AI will enable companies that don’t have well-established AI initiatives in-house to use others’ expertise and circumvent the need to reinvent the wheel. A company in the auto industry, for example, could use a “well-developed image recognition model for another industry and extend through transfer learning” to adapt the model to images of cars, said Ron Schmelzer, a principal analyst at Cognilytica. Building on pre-existing models with transfer learning brings efficiency and cost-effectiveness by shortening training time.
“We’re seeing a shift from a production-centric approach of building models to using and consuming models,” Schmelzer said.
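The transfer-learning pattern Schmelzer describes can be sketched in a few lines. The “pretrained” weights and data below are random stand-ins for illustration; the pattern is what matters: the feature extractor borrowed from the source model stays frozen, and only a small new head is trained for the target task.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained feature extractor; a real model would reuse
# layers trained on a large image corpus from the source industry.
pretrained_W = rng.normal(size=(64, 16))

def extract_features(x):
    # Frozen layer borrowed from the source model; never updated below.
    return np.tanh(x @ pretrained_W)

# Small labeled target dataset (e.g. 64-dim encodings of car images).
X = rng.normal(size=(200, 64))
true_head = rng.normal(size=16)
y = (extract_features(X) @ true_head > 0).astype(float)

# Fine-tuning step: logistic regression on the frozen features,
# training only the 16 head weights instead of the whole network.
F = extract_features(X)
head = np.zeros(16)
for _ in range(500):
    p = 1 / (1 + np.exp(-F @ head))        # predicted probabilities
    head -= 0.1 * F.T @ (p - y) / len(y)   # gradient step on the head only

accuracy = np.mean((F @ head > 0) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

Because only the small head is trained, the target company needs far less data and compute than training the full model from scratch, which is the efficiency gain Schmelzer points to.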
This affords channel partners an opportunity to become experts in industries and specific business processes fueled by AI, such as AI-enabled contract interpretation services for law firms. It may require a shift, though, as many channel partners will try to sell AI to the technologists within a company rather than sell to lines of business.
While models as a service enable companies to use others’ AI expertise without reinventing the wheel, they can also pose problems. How often the model is updated and which data was used to train it may be black boxes for the consuming organization, which can be problematic.
“If you didn’t build that model, you don’t have visibility into how it was trained,” said Kathleen Walch of Cognilytica. If you use satellite images to check roof damage on a house, you need to know, for example, that the training images were taken on sunny days. On a cloudy day, the model may not be accurate, Walch said.
As voice commands and facial recognition software become commonplace, the reach of artificial intelligence (AI) is apparent — possibly even ubiquitous. Still, while digitization has enhanced human experience in many ways, it also introduces a host of new concerns, such as data privacy, data bias and ethics, and the disruptive impact of AI on human work. Enterprises and channel partners alike have to evaluate these caveats as they consider deploying AI technologies.
Overall, though, people remain hopeful about the prospects for AI and its impact on human experience. Despite the downsides, 63% of respondents to a recent Pew Research Center survey are hopeful that most individuals will be better off in 2030, while 37% said people will not be better off.
In what follows, we explore some of the key themes in artificial intelligence and machine learning for 2020.
We sat down with several experts to discuss the 2020 AI trends to watch. Ron Schmelzer and Kathleen Walch, principal analysts at Cognilytica, and Aditya Kaul, a research director at Tractica, weighed in.