Malwarebytes Survey Shows Negative Consumer Sentiment Toward ChatGPT
The overall message is that people are "deeply uncomfortable" about ChatGPT and generative AI.
![Consumer Sentiment survey by Malwarebytes](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt39e98e66c141ce24/6523f1fee6c2cd0b641a4e25/Consumer-Sentiment.jpg?width=700&auto=webp&quality=80&disable=upscale)
Zurainy Zain/Shutterstock
Despite the avalanche of ChatGPT media coverage and online chatter, only 35% of respondents agreed with the statement, “I am familiar with ChatGPT,” according to the Malwarebytes survey. That’s significantly less than the 50% who disagreed.
Of those who said they were familiar with ChatGPT, 51% question whether AI tools can improve internet safety.
Malwarebytes’ Mark Stockley said the most surprising thing about the survey findings is the lack of optimism among people who say they are familiar with ChatGPT.
“The seven months since the launch of ChatGPT have seen an avalanche of press coverage and tweets about how generative AI is going to change everything, and all the amazing things it can do, alongside a slew of startups and established tech companies launching or announcing products based on ChatGPT-like bots,” he said. “The survey suggests that the excitement about the possibilities of ChatGPT is not universal. People are typically much more excited about changes they initiate rather than changes that are imposed upon them. I suspect the excitement about ChatGPT is felt largely by ‘disruptors’ who are excited by change, can initiate it and stand to profit from it, rather than those who will end up working in environments or jobs that are changed by it.”
The overall message from the survey is that people are “deeply uncomfortable” about ChatGPT and generative AI, Stockley said.
“They don’t trust it, don’t trust what it generates, and don’t see it as a force for good,” he said. “This is particularly interesting to me, working at a cybersecurity company that has been leveraging AI and ML for years to help improve efficiency, to identify malware and improve the overall performance of many technologies. I think the message for the industry is that we need to slow down, be thoughtful of how we use and develop this next iteration of AI, and also make sure we’re taking the time to educate.”
AI, particularly in the form of ML, has been around for many years and is used very successfully in cybersecurity applications like detecting malware by companies like Malwarebytes, Stockley said.
“This has occurred without fanfare, and without the kind of backlash and trepidation we are seeing against ChatGPT,” he said.
Meanwhile, we’ve yet to see really successful applications of generative AI in cybersecurity, Stockley said.
“And despite exhaustive hype, we have seen little if any interest in generative AI by cybercriminals,” he said. “The lesson for the channel is to not confuse the map with the territory. ChatGPT may be interesting, and it is probably part of all our futures, but it does not accurately represent the practical application and opportunity of AI as we see it today.”
In March, a raft of tech luminaries signed a letter that said, “We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” according to Malwarebytes. The letter “pulled no punches” on the “profound risks” posed by “AI systems with human-competitive intelligence.”
“The letter calls for the pause to be used to ‘jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,'” Stockley said.
A new Malwarebytes survey shows overall consumer sentiment toward ChatGPT is far from excitement, much less positive.
Malwarebytes conducted a pulse survey of its newsletter readers across the globe late last month via the Alchemer Survey platform. More than 1,400 people responded.
“The uncertainty around how ChatGPT will change our lives, and whether it will take our jobs, is compounded by the mysterious way in which it works,” said Mark Stockley, cybersecurity evangelist at Malwarebytes. “It is an unknown quantity to everyone, even its creators. Machine learning (ML) models like ChatGPT are ‘black boxes’ with emergent properties that appear suddenly and unexpectedly as the amount of computing power used to create them increases.”
Consumer Sentiment Centers on Trust
The survey findings indicate consumer sentiment toward ChatGPT revolves around trust. Only 10% of those surveyed agreed with the following statement: “I trust the information produced by ChatGPT,” while 63% disagreed.
Respondents expressed similar sentiment about accuracy, with only 12% agreeing with the statement, “the information produced by ChatGPT is accurate,” while more than half disagreed.
Beyond concerns around trust and accuracy, 81% of respondents believe ChatGPT could pose a safety or security risk, and 52% called for a pause on ChatGPT work to allow regulations to catch up. That echoes similar concerns voiced by tech luminaries earlier this year.
Scroll through our slideshow above for more about consumer sentiment toward ChatGPT.
Want to contact the author directly about this story? Have ideas for a follow-up article? Email Edward Gately or connect with him on LinkedIn.