How MSPs Can Minimize Generative AI Risk for Customers

Safeguards include training staff to spot bias, fact-checking content and adding gen AI-based security tools.

David Primor, Founder and CEO

July 19, 2024

4 Min Read

Generative artificial intelligence has become instrumental in driving innovation across many industries. While the technology offers real benefits, it also introduces real challenges.

Cybercriminals have begun to harness generative AI for their own ends, using it to create fake profiles that are difficult to distinguish from real ones. They then use these profiles for social engineering and phishing campaigns, as well as to impersonate brands and create sophisticated malware.

Internal Risks

What makes things even worse is that there are internal risks to using generative AI as well. If an organization isn't keeping careful tabs on how AI is used internally, the result can be privacy and cybersecurity problems for its customers. Tools like ChatGPT, used without careful oversight, could expose customers to identity theft, leaks of sensitive data and loss of intellectual property.

ChatGPT has skyrocketed in popularity, with close to 2 billion visits per month. According to McKinsey's 2023 AI report, 79% of organizations have had at least some exposure to generative AI. Departments across the organization are using it extensively: marketing, HR, sales, IT, operations, finance and executive leadership all feed prompts, queries and data into generative AI engines. The output they receive includes articles, marketing and social media content, emails, answers to customers' questions and general strategies.


All of this can be extremely helpful and time-saving, but it also carries potential issues such as:

  •  Compromised identities

  •  Loss of intellectual property

  •  Data breaches

  •  Data privacy violations

  •  Lawsuits due to plagiarism

The real issue is that the technology is emerging and changing faster than safeguards can be put in place to mitigate the security challenges it creates.

Security Safeguards Falling Behind Challenges

Areas of concern include the data fed into generative AI prompts, the outputs these engines produce and the use of third-party generative AI tools. When users input scripts, prompts and data into these engines, sensitive or confidential information may accidentally be included. Customers may not fully understand that the data they enter is sent to an external service.
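To make the input risk concrete, here is a minimal sketch of the kind of pre-submission filter a provider could put in front of an external gen AI service. The pattern list and the redact function are illustrative assumptions, not a reference implementation; a production deployment would lean on a mature DLP or PII-detection tool rather than a handful of regexes.

    import re

    # Illustrative PII patterns only; real coverage requires a proper DLP engine.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(prompt: str) -> str:
        # Replace anything matching a known pattern before the prompt
        # leaves the organization for an external gen AI service.
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
        return prompt

    # Example: redact("Email jane.doe@example.com the Q3 numbers")
    # -> "Email [REDACTED EMAIL] the Q3 numbers"

Even a crude filter like this makes the external boundary visible to employees, which is half the battle.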

According to a McKinsey report, only 21% of organizations using gen AI in the enterprise have established policies governing employee usage. Just 38% have taken steps to mitigate cybersecurity risks, and only 32% are addressing the potential for gen AI inaccuracy. These numbers alone show what an opportunity exists for providers to help customers navigate this new technology safely.

The inputs are an issue, but the outputs are also a problem. The answers that come back from ChatGPT or other generative AI engines can contain sensitive information, bias, plagiarized content or proprietary material. Writers, image owners and other creative professionals have already filed lawsuits over generative AI.

We as humans have biases, so there can be bias in generative AI answers as well. The bias can stem from how the question is framed or from other sources. Nor are all AI-generated answers correct. Employees need to know this before they start acting on data that came from generative AI, and content sourced from generative AI should always be fact-checked.

MSPs and MSSPs Can Benefit from Generative AI

While data breaches and the other risks above are real concerns, service providers should seize the opportunity. They can start by contacting existing customers to assess their current generative AI risks and provide a detailed plan for mitigating them.

When communicating with customers about AI, provide them with an understanding of the risks that this technology poses, a way to address the cybersecurity challenges and a set of best practices that can be implemented to ensure safe use of generative AI across the organization.

Some immediate actions that customers should take include:

  •  Educating and training employees.

  •  Implementing robust authentication protocols.

  •  Using secure and trusted tools (a minimal sketch follows this list).

  •  Regularly updating software.

  •  Securing sensitive data.

  •  Ensuring safe usage of generative AI outputs.
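As a simple illustration of the "secure and trusted tools" item, the sketch below shows an allowlist check a provider might enforce in a gateway script. The approved host names and the submit_prompt helper are assumptions for illustration; in practice this control usually lives at the network or proxy layer.

    from urllib.parse import urlparse

    # Hypothetical allowlist; which services count as "trusted" is a policy
    # decision each organization makes for itself.
    APPROVED_HOSTS = {"api.openai.com", "generativelanguage.googleapis.com"}

    def submit_prompt(url: str, prompt: str, user: str) -> None:
        host = urlparse(url).hostname
        if host not in APPROVED_HOSTS:
            raise PermissionError(f"{url} is not an approved gen AI service")
        # Record who sent what where, so usage can be audited later.
        print(f"AUDIT user={user} host={host} chars={len(prompt)}")
        # ... the actual HTTP call to the approved service would go here ...

Logging every request also builds the audit trail customers need when reviewing how generative AI is being used internally.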

Once a customer has a plan and puts the security tools and policies in place, a managed service provider can then help them avoid negative consequences. But customers cannot put risk-mitigation tactics in place if they're unaware of the risks involved. As a provider, assisting with the implementation of generative AI-based security tools can help clients avoid costly breaches and security issues.


About the Author

David Primor

Founder and CEO, Cynomi

David Primor is founder and CEO of Cynomi, a virtual CISO platform provider. A retired lieutenant colonel in the IDF's Unit 8200 and former technology director of Israel's cyber authority, David spent decades dealing with state-level cyber threats. He holds a bachelor's degree in electrical engineering from the Technion in Israel and completed his Ph.D. at CERN.
