AI in Business Raises Ethical Questions
How a channel company brings artificial intelligence’s social impact into the business conversation.
Earlier this month, a workforce rebellion over artificial intelligence (AI) erupted in the hallways of one of Silicon Valley’s biggest companies. No, it wasn’t humans against machines. It was angry employees railing against management for letting the Pentagon tap Google’s AI technology to potentially improve drone-strike targeting.
The Pentagon’s program, called Project Maven, has become a flashpoint for the ethical use of AI.
Companies, too, have to weigh the advantages of AI against its cultural implications. AI can transform businesses and industries, make customers’ lives far more convenient, improve worker productivity, and boost the bottom line in ways never seen before. It can also strike at the nervous system of a society (freedoms, privacy and legal rights), causing blowback from employees and customers alike.
Enter the channel, a mediator of sorts.
Katrin Zimmermann, managing director at TLGG, a boutique digital consultancy, is one such mediator. Part of Omnicom Group, TLGG advises governments and big brands, such as Mercedes-Benz, Bayer, BMW, Nestlé, Lufthansa and SAP, on how to navigate AI’s turbulent waters.
“We help companies make the most out of the opportunities of digital transformation and minimize the challenges,” Zimmermann says. “Also, generally, we help them understand technology’s impact on society.”
AI touches practically every conversation with every client, and it’s up to TLGG to help companies understand the cultural impact in order to get ahead of the risks. The idea is for companies to embrace AI without igniting the wrath of customers and employees; otherwise, they may have to backtrack as Google did. After the rebellion, Google pledged not to renew the Pentagon contract, and CEO Sundar Pichai shared the company’s AI principles and practices.
Pichai wrote in a blog post: “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”
Channel Futures caught up with Zimmermann to get her take on Project Maven, what she’s seeing in the market, and how channel partners can help steer companies in the right direction on their AI journeys.
Channel Futures: Where do you stand on the great AI debate?
Katrin Zimmermann: Some top-notch experts and researchers will say AI is the greatest thing to happen to humanity. Then we look at the reality of things, what’s actually happening. In China at the moment, facial recognition is being leveraged to make people instantly pay a fine when they cross against a red light. Depending on how governments use this type of technology, we can see significant limitations on what I personally see as Western freedoms: people’s ability to express themselves and to live the way they choose.
CF: What do you think about the Pentagon’s Project Maven and Google employee pushback? Were you surprised?
KZ: Not at all. This has been happening in China in testing phases for quite some time. The cultural norms and rules that govern China’s population are, most of the time, not as infused with Western morals and ethics. The leeway that private companies in China have, together with the Chinese government, to test, evaluate and refine is significant and relevant.
We, as a Western society, have to discuss the ifs and hows of AI not only for our own societies but also on a global scale. We cannot assume that geographical borders will shield us from the impact. The implications for human beings are truly global.
It’s interesting and logical that the U.S. government would aim to do that in order to stay competitive. Potentially, many other governments will go in that direction. Whether it is the right thing to do is an ongoing discussion that goes beyond my personal abilities.
CF: How do you help a client chart a sensible path with AI?
KZ: We’ve been working with clients to actively engage with and leverage these types of technologies. We help them understand that they have to enable society, meaning consumers, to understand the implications of AI through information. These implications might include predicting outcomes for human beings and taking decision-making away from them.
We believe that the better society understands the positive and negative implications, the better it can inform potential policy. But we need a true, neutral and sound understanding of the implications for every individual at the touch points where humans interact with AI.
CF: At its I/O developer conference this spring, Google demoed AI calling a hair salon to make an appointment. The AI sounded like a person and fooled the real person on the other end of the line. Given your take on an informed society, what do you think about this?
KZ: I think you have to look at it from two perspectives.
It’s very appealing, as it creates a lot of convenience. But it actually makes me shiver. We’re seeing a global need for human interaction and human touch in a world that’s becoming more technology-enabled. The need for humans to have human interaction, rather than replacing it with technology, is something I understand and see.
Then, obviously, in times of transition into these kinds of opportunities, and to allow human beings to be more aware and understanding, I personally always prefer that these kinds of intelligent solutions be flagged as intelligent solutions. Don’t fake human interaction.
Also, we have to understand that whoever creates AI at the moment does so with personal bias. They bring this personal bias with them when they’re laying the foundation. AI can learn by itself, but some part of that foundation carries human bias. I often question whether that is the world we want to live in.