OpenAI CEO, Elon Musk and a Channel Partner Talk Generative AI Opportunities, Pitfalls
OpenAI CEO Sam Altman testified before the U.S. Senate on Tuesday, the same day Elon Musk ranted to CNBC about OpenAI.
![OpenAI CEO on future of AI by tech giants](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt0946454a682c83ac/6523f99f08f32f4057436c81/AI-Future.jpg?width=700&auto=webp&quality=80&disable=upscale)
metamorworks/Shutterstock
OpenAI CEO Sam Altman’s hearing and Elon Musk’s CNBC comments come about two months after an open letter from the Future of Life Institute went live.
The letter, which gathered more than 27,000 signatures, called on all AI labs to “immediately pause” for at least six months the training of any AI system more powerful than GPT-4.
The signers cited the spread of misinformation by AI language models, the impact of artificial intelligence on human jobs and the possibility of AI someday “replacing” humans.
“Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter read.
That pause would allow for providers and independent experts to develop and share safety protocols, according to the letter.
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter stated. “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal.”
Musk told CNBC’s David Faber that he signed the document despite seeing it as “futile.”
“I have recommended that we pause. Did I think there would be a pause? Absolutely not,” Musk said.
Sam Altman joined IBM chief privacy and trust officer Christina Montgomery and NYU professor Gary Marcus for the Senate hearing.
During the hearing, Altman laid out three elements of government regulation that he would like to see enacted.
First, he recommended the formation of a new government agency that would license and de-license AI efforts that go above a certain threshold.
Second, Altman said safety standards should exist to prohibit AI models that can self-replicate and self-exfiltrate.
Third, he said independent audits should be in place at AI providers to make sure AI models meet the stated safety thresholds.
Media outlets and senators themselves have painted Altman’s hearing as a corrective to mistakes regulators and tech leaders have made in the past. One Democratic senator described Altman’s request for government regulation as a “historic” step. The New York Times described him as “calm and unruffled” and “boyish,” taking pains to emphasize the “friendly” tone of the hearing.
In particular, the hearing often turned back to the topic of social media. Regulators openly admitted that they missed the boat on mitigating problems like the spread of misinformation as well as harmful material for children on social media platforms.
“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” Connecticut Senator Richard Blumenthal said. “Congress failed to meet the moment on social media.”
The hearing raised the challenge of lawmakers trying to regulate a technology that most of them don’t understand.
“Even those who built generative AI that we’re all trying to legislate don’t understand how it works,” said Bradley Shimmin, Omdia chief analyst for AI and data analytics. “So how can a governing body that doesn’t even understand what the people are telling them govern it?”
Despite the widespread praise for Altman for requesting regulation, Shimmin encouraged readers to read the CEO’s comments through an OpenAI lens. Namely, that companies like OpenAI and Alphabet/Google have a competitive interest in increased regulation for their open source competitors.
“He earnestly thinks it needs to be regulated. And I very much agree with him. But the regulation in closing it up and not letting anyone have access to it, I think is possibly more damaging in the long-term, simply because when you have a race to the bottom with a couple of companies, it rarely ends well,” Shimmin told Channel Futures.
OpenAI’s name has made it the target of many jokes, as the company has evolved from its open source identity to one that has closed off its methodology and sources to the world.
“It does seem weird that something can be a nonprofit, open source and somehow transform itself into a for-profit, closed source,” said Elon Musk, who said he contributed around $50 million to OpenAI with a mind for backing a competitor to Google’s DeepMind group.
“… Let’s say you funded an organization to save the Amazon rain forest, and instead they became a lumber company, and chopped down the forest and sold it for money. And you’d be therefore like, ‘Oh, wait a second, that’s the exact opposite of what I gave the money for. Is that legal?'” said Musk.
AI policy researcher Sarah Myers West told The New York Times that Altman’s suggestions for regulation need to go further. For example, she said more needs to be done in limiting how generative AI uses biometric data.
“It’s such an irony seeing a posture about the concern of harms by people who are rapidly releasing into commercial use the system responsible for those very harms,” she said.
Meta in February released its LLaMA large language model to researchers, and its weights quickly spread across the open-source community. That gave a huge boost to open-source development and put pressure on companies like Google and OpenAI.
“It costs $100 to fine tune on Hugging Face and gets the same results as [Google, OpenAI and others] are giving them with this behemoth that they’ve created behind closed doors. That’s a cause for worry for a company like that,” Shimmin told Channel Futures.
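Shimmin’s point about cheap fine-tuning rests on parameter-efficient techniques such as LoRA, which train a small low-rank adapter on top of a frozen open model instead of updating every weight. A back-of-the-envelope sketch illustrates why the cost gap is so large (the dimensions and rank below are hypothetical, chosen for illustration, and are not figures from Omdia or the article):

```python
# Back-of-the-envelope: why fine-tuning an open model is cheap.
# LoRA freezes a full d x d weight matrix W and trains only a low-rank
# update B @ A, where B is d x r and A is r x d, with r much smaller than d.

def full_params(d: int) -> int:
    """Trainable parameters when updating one d x d weight matrix directly."""
    return d * d

def lora_params(d: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter on the same matrix."""
    return 2 * d * r  # B contributes d*r entries, A contributes r*d entries

d, r = 4096, 8            # a typical hidden size and a common adapter rank
full = full_params(d)     # 16,777,216 parameters
lora = lora_params(d, r)  # 65,536 parameters
print(f"full: {full:,}  lora: {lora:,}  reduction: {full // lora}x")
```

For this illustrative matrix, the adapter trains 256 times fewer parameters than a full update, which is the kind of arithmetic behind claims that open-model fine-tuning costs orders of magnitude less than training behind closed doors.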
SemiAnalysis on May 4 published an internal memo from Google that raised concerns about the competition closed-source providers face. The document noted that although Google and the DeepMind group it acquired in 2014 have striven to compete with OpenAI, a “third faction” is creeping up on them.
“We Have No Moat, And Neither Does OpenAI,” reads the memo’s title.
That moat, Shimmin said, is closed-source software.
“In open source, you’re paving over the moat. People can come in and look at your code and [say], ‘What is that doing?’ Closed-source is saying, ‘We’re a castle. We have a drawbridge. You can get to our software through this API, which is the drawbridge. But if you try to storm the wall and see what’s going on outside of that we’re gonna throw hot buckets of tar on you,'” Shimmin said in an on-the-fly expansion of the moat metaphor.
Shimmin said the greater danger stems from researchers lacking access to the technology these providers use.
“I think we as a species are in much greater danger from any sort of ill outcomes from that technology, than if it’s just in the ivory tower behind closed doors. So we have a chance to understand it if we have open source technologies, and the researchers can actually afford to start working,” he said.
Altman fielded questions from senators about the economic impact of generative AI. Those senators expressed concern that AI is going to take away human jobs.
Altman, who said he views history as an ongoing technological revolution, said he expects a “significant impact on jobs” as other technology shifts have caused in the past.
“GPT-4 will entirely automate away some jobs, and it will create new ones that we believe will be much better,” he told senators.
He noted, however, that the public needs to understand ChatGPT as a tool rather than as a “creature.”
“It’s a tool; people have a great deal of control over how they use it. … GPT-4 and other systems like it are good at doing tasks, not jobs.”
Musk in his wide-ranging interview with David Faber called into question OpenAI’s relationship with Microsoft, suggesting that Microsoft could be calling the shots for the AI developer.
“I do worry that Microsoft actually may be more in control than, say, the leadership team at OpenAI realizes. I mean, Microsoft, as part of Microsoft’s investment, they have rights to all of the software, all of the model weights and everything necessary to run the inference system,” Musk said.
Microsoft CEO Satya Nadella later countered Musk’s claim.
“Look, while I have a lot of respect for Elon and all that he does, I’d just say that’s factually not correct,” Nadella told CNBC. “OpenAI is very grounded in their mission of being controlled by a nonprofit board. We have a noncontrolling interest in it, we have a great commercial partnership in it.”
Shimmin said the nuance lies somewhere between Musk’s and Nadella’s assertions. While OpenAI remains its own company, Shimmin said, Microsoft is also an investor.
“They’ve hitched their wagon to this company, just as Alphabet has done with DeepMind. They can dictate exactly what happens or doesn’t happen,” he said.
He agreed that the $10 billion investment from Microsoft doesn’t prevent OpenAI from taking research in the direction it wants. However, Shimmin said questions of packaging and licensing are another matter.
“OpenAI is literally using Azure backend services. The cost offset from that is something that has to be borne by OpenAI,” he said.
Musk, who co-founded OpenAI and sat on its board until 2018, in the interview with CNBC’s David Faber credited himself with creating the company. That credit stems not just from his investment, which he said amounted to about $50 million, but also from the vision he said he provided.
“It wouldn’t exist without me,” he said.
In addition to claiming credit for the “OpenAI” name (and its emphasis on open source), Musk said he played an instrumental role in recruiting co-founder and chief scientist Ilya Sutskever.
Shimmin noted that the success of OpenAI looks less like a “single Thomas Edison moment” and more like “lots of people standing on the shoulders of a lot of giants.”
“It’s like Al Gore with the internet perhaps,” Shimmin told Channel Futures. “It’s hard sometimes to really say who is the owner of something that is the product of a lot of research. And a lot of that research that actually made OpenAI’s ChatGPT possible didn’t actually happen within the confines of OpenAI.”
Musk admitted in the interview that he considers himself an “idiot” for not acquiring a level of governance in the company. But he said that at the time he questioned the viability of OpenAI against closed source rival Google.
“… In the beginning, I thought, look, this is probably a hopeless endeavor. How could we possibly compete with – how could OpenAI possibly compete with Google DeepMind. I mean, this seemed like an ant against an elephant – not a contest,” he said.
While people in various industries have questioned whether AI will ultimately replace them, many folks in the channel see the upside. Small MSPs and technology advisors in particular use AI tools as a sort of staff augmentation for aspects of their businesses where they can’t afford to hire.
Kyle Burt runs Catch Advisors, a cybersecurity-focused technology advisor out of Texas. He said his journey started at a basic level but ultimately touched more areas of his business.
“At first I was using AI in my communication stack; every call, every meeting, every interaction was being transcribed, analyzed for keywords, key moments and sentiment. Then I started using it everywhere,” he said.
For example, he used AI to rebrand his company to Catch Advisors and build a logo. He also turns to AI to build LinkedIn content, newsletters and media content.
In addition to marketing, Burt also taps into AI on the sales side.
“I’m using AI to load balance email domains for SMART email sending based on recipient’s domain, role and industry. [I’m] leveraging this to scale and get the attention of the right people at the right time,” he told Channel Futures.
On the financial side, he has used AI to analyze account information and visualize data.
Lastly, he has leveraged AI in operations. That includes building “legal templates, policies and cybersecurity frameworks, which can be part of a customer deliverable.” He has notably shared with other technology advisors examples of a master services agreement.
The topic of artificial intelligence recently came up in a discussion of tools technology services distributors (TSDs) are building to recommend vendors to their sales partners.
Select Communications CEO Jerry Goldman said implementing predictive AI makes the supplier recommendation process more efficient and builds trust with the partner community.
“If I can go to ChatGPT and I can put some information there to tell me which UCaaS supplier to choose based on my requirements, then how can we utilize that in the frame of our business to use tools like [Telarus] SolutionVue or [Avant] Pathfinder? Those tools are essentially doing that, but AI is going to quickly advance what can be done through those tools,” Goldman said in an interview about Telarus’ platform update.
As billionaire business leaders like Elon Musk and OpenAI CEO Sam Altman debate the best path forward for generative AI programs and their regulators, partners in the B2B technology channel are already making use of solutions like ChatGPT.
AI and ChatGPT have dominated dinner conversations over the last few months, and those topics are getting extra play in the news cycle this week. First, Sam Altman on Tuesday appeared before the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law. There he defended the steps OpenAI has taken to protect children online and mitigate ChatGPT’s spread of misinformation. He also called for the creation of a regulatory government agency to provide oversight of generative AI.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman told senators. “We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is and the work we have to do to mitigate that.”
Bradley Shimmin, Omdia chief analyst for AI and data analytics, pointed out that Altman was making an argument for more than just general regulations on generative AI programs like OpenAI’s ChatGPT, DALL-E and Google’s DeepMind. (Omdia and Channel Futures are both part of Informa Tech.) Shimmin said Altman was encouraging lawmakers to increase restrictions on open-source generative AI providers, which do not include OpenAI or Google. This hearing occurred less than two weeks after a leaked Google memo expressed the concern that neither Google nor OpenAI “have a moat” to fend off open-source competitors leveraging Meta’s LLaMA language model.
“What Altman was reiterating to Congress is, ‘This is not something that should be open in the wild, because people are going to do nefarious things with it. You need to ‘trust the experts,'” Shimmin told Channel Futures.
OpenAI launched in 2015 as an open-source, nonprofit project but in 2019 transitioned into a “capped-profit” company. Microsoft in January announced a $10 billion investment in OpenAI, which would reportedly ultimately give it a 49% stake in the organization.
Multi-company CEO Elon Musk was one of OpenAI’s co-founders, but he left the company’s board in 2018, citing his concerns over the increasing privatization of the company. Musk in an interview with CNBC’s David Faber on Tuesday lambasted OpenAI for its movement away from a nonprofit and open-source model and raised concerns about Microsoft’s controlling influence on OpenAI as an investor.
Channel Futures turned to Shimmin to help read between the lines on Altman and Musk’s comments and get a better understanding of the debate between open-source and closed-source generative AI models. In addition, Channel Futures heard from Kyle Burt, founder and chief technology advisor at partner firm Catch Advisors, about how he is using ChatGPT to automate multiple areas of his business.
Read all of the commentary and observations in the slides above.
Want to contact the author directly about this story? Have ideas for a follow-up article? Email James Anderson or connect with him on LinkedIn.