Ex-OpenAI Board Members: More Guardrails Needed for Responsible AI

Former OpenAI board member Helen Toner broke her silence on why she and other directors briefly fired CEO Sam Altman in November.

James Anderson, Senior News Editor

May 30, 2024

Former OpenAI board member Helen Toner questioned Sam Altman's prioritization of responsible AI (Photo: jamesonwu1972/Shutterstock)

Multiple former board members of artificial intelligence behemoth OpenAI – including those who temporarily ousted CEO Sam Altman – have publicly raised concerns about the company's commitment to ensuring responsible AI.

Former OpenAI board members Helen Toner and Tasha McCauley penned an article for the Economist on May 26 expressing concern about how profit incentives are shaping AI companies' behavior. The authors, who both resigned from the board in late 2023, argued that OpenAI's "experiment in self-governance" is insufficient to align the company with the public good without proper regulatory frameworks.

"OpenAI was founded as a bold experiment to develop increasingly capable AI while prioritizing the public good over profits. Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice," Toner and McCauley wrote.

Toner served on the OpenAI board of directors from 2021 until late 2023. She was part of the nucleus of directors who moved to fire Altman in November. After backlash from Microsoft, OpenAI's largest investor, and a letter signed by nearly all OpenAI employees, Altman returned as CEO after four days, and Toner, McCauley and Ilya Sutskever left the board.

While the board's initial explanation for ousting Altman was that he had not been "consistently candid" in his communications with directors, Toner revealed further reasons related to Altman's character and behavior in a recent interview on The TED AI Show podcast.


Toner said Altman's inconsistent communication included not telling the board that OpenAI was about to release ChatGPT in November 2022; she and her fellow board members learned about the launch on Twitter, she said. Altman also did not disclose to the board that he owned the OpenAI Startup Fund, the company's venture capital arm, according to Toner. (OpenAI ended Altman's ownership of the fund in April.)

A breakdown in trust had already occurred leading up to Altman's firing, she said.

"That's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just helping a CEO to raise more money," Toner said in the podcast. "Not trusting the word of the CEO who is your main conduit to the company – your main source of information about the company – is totally impossible."

Furthermore, Toner said, multiple OpenAI executives came forward to report negative behavior they had experienced from Altman. Many had been afraid to share their stories, she said.

"Telling us how they couldn't trust him, [telling us] about the toxic atmosphere he was creating," Toner said. "They used the phrase 'psychological abuse.'"


Toner said board members kept their plans closely concealed out of fear of how Altman would react. That secrecy, she said, is a key reason his firing appeared so sudden.

All of this came as OpenAI was inching closer and closer to achieving artificial general intelligence (AGI), which the company's charter defines as "highly autonomous systems that outperform humans at most economically valuable work."

Six months later, OpenAI is moving forward at a fast clip under Altman's leadership. The company on Wednesday made news for its deal with Apple to embed ChatGPT technology into Apple products. At the same time, OpenAI has seen an exodus of the executives and researchers who were working to understand the risks of increasingly powerful AI. News reports surfaced on May 17 that OpenAI's "superalignment" team, formed in 2023 to study the long-term risks of advanced AI, had dissolved.

Does the drama at OpenAI have implications for business customers and the channel partners that serve them? Sources speaking to Channel Futures think so.

"A company and its leadership's commitment to ethical AI development influences and impacts the trust and reliability end users place in that company's products, and I think OpenAI is no different," said John Triano, a conversational AI expert who has worked for 8x8, Five9 and now Auraya Systems.

Triano said the disbanding of OpenAI's superalignment team reflects challenges felt across the AI market.

"The industry is moving at an unprecedented rate, as are the people working in and leading the industry. The demands of the market to create compelling cutting edge AI technology is forcing companies to get out ahead of their skis in my opinion," Triano told Channel Futures. "Seems products being 'beneficial' may be outweighing 'safe.'"

Bret Taylor and Larry Summers, who joined the OpenAI board after Toner and McCauley's departures, published a rebuttal in the Economist on Thursday. They cited a review by the law firm WilmerHale, which reportedly "rejected the idea that any kind of AI safety concern necessitated Mr. Altman’s replacement."

In the slideshow above, Channel Futures recaps Toner's allegations against Altman and recent developments in OpenAI's AI guardrails, and discusses the trickle-down effects on the channel.


About the Author

James Anderson

Senior News Editor, Channel Futures

James Anderson is a senior news editor for Channel Futures. He interned with Informa while working toward his degree in journalism from Arizona State University, then joined the company after graduating. He writes about SD-WAN, telecom and cablecos, technology services distributors and carriers. He has served as a moderator for multiple panels at Channel Partners events.
