The EU AI Act Is Here – Here's What It Means for Channel Partners
The EU AI Act, which came into force Thursday, is the first comprehensive AI regulation anywhere in the world. With it likely to be replicated in other regions, what will the cost be for channel partners?
![EU AI Act now in effect](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt6545b84b045d24fe/66aba04ab8081cb3d7c15472/AI_Regulation_EU_2024.jpg?width=700&auto=webp&quality=80&disable=upscale)
Ivan Marc/Shutterstock
Nick Isherwood, group CIO at Microsoft partner Advania, has welcomed the introduction of the EU AI Act. He said it brings a consistent framework for the use and supply of AI systems. In doing so, it offers “greater assurance for consumers that AI technology is assessed against a risk-based classification system before entering production.”
“The EU is focusing on promoting the ongoing innovation of AI technology whilst also ensuring it is safe, transparent and ethical to use," said Isherwood. "This will boost consumer trust and increase confidence that AI can provide tremendous benefits to society if used appropriately and governed by a set of standards that the whole industry can get behind.”
Chris Shaw, UKI&SA country channel manager at SaaS vendor AvePoint, warned that the cost of compliance with the EU AI Act could put smaller businesses at risk. However, MSPs can help them with data classification and compliance to avoid those costs.
“The EU AI Act classifies AI according to risk and limits usage accordingly, with the highest-risk activities receiving the most regulatory control," said Shaw. "Organisations with financial or health information would be at higher risk, but MSPs can help them to better organise, protect and manage large datasets to avoid penalties.”
Elsewhere, Denas Grybauskas, head of legal at web scraping company Oxylabs, believes that the EU AI Act may also be detrimental to innovation. He said organizations must be on their toes to avoid facing penalties in the millions for severe violations involving high-risk AI systems.
It is Oxylabs’ position that European tech companies are worried the new legislation has been rushed, with the EU not taking its consequences into consideration.
“As the AI Act comes into force, the main business challenge will be uncertainty in its first years,” said Grybauskas. “Various institutions, including the AI office, courts and other regulatory bodies, will need time to adjust their positions and interpret the letter of the law. During this period, businesses will have to operate in a partial unknown, lacking clear answers as to whether the compliance measures they put in place are solid enough.
“One business compliance risk that is not being discussed lies in the fact that the AI Act will affect not only firms that directly deal with AI technologies but the wider tech community. Currently, the AI Act lays down explicit requirements and limitations that target providers (for example, developers), deployers (users), importers and distributors of artificial intelligence systems and applications. However, some of these provisions might also bring indirect liability to the third parties participating in the AI supply chain, such as data collection companies.”
Grybauskas added that another compliance-related risk stems from the decision to grant some exemptions under the AI Act for systems based on free and open-source licenses.
“There is no consolidated, single definition of ‘open-source AI’ and it is unclear how the widely defined open-source model might be applied to AI. This situation has already resulted in companies falsely branding their systems as open-source AI for marketing purposes,” he added. “Without clear definitions, even bigger risks will manifest if businesses start abusing the term to win legal exemptions.”
On the flip side, Simon Fisher, senior advisory services consultant at Orange Cyberdefense, said the fear of regulation stifling development is an “age-old debate.” He noted: “Did we not have the same debates about PCI DSS and DORA?”
Fisher believes that the EU AI Act can help allay fears around the seemingly unchecked rise of AI.
“The fear that AI will intrude on people’s liberties has been countered by a strict risk-based approach and defined no-go areas for the technology, backed up by clear accountability and rules of engagement for developers," said Fisher.
“Until the Act, there has not been clarity on where AI will replace human interventions, with wild rumours and speculation now replaced by go and no-go areas. This will have a defining impact on technologies such as vulnerability detection, IM tools and audit services, where the Act states that AI ‘is not meant to replace or influence the previously completed human assessment without proper human review.’
“The final area is around transparency, and this is where I believe the Act has its day. The Act requires developers to show the working of their models and provides clear governance on how developers should proceed with room for downstream rebuttal and a mechanism to measure compliance with the Act," he said.
Finally, Fisher said the scope of the Act gives real pause for thought for providers of high-risk AI systems.
“The Act states that it applies to those who ‘intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country, and also third-country providers where the high-risk AI system’s output is used in the EU.’ I read this as a global regulation for those providing such services.”
Fisher said the Act “is by no means a panacea for AI as there are still grey areas and question marks around how it will be monitored and policed. However, it goes a long way to distilling the fears and rumours that have been accumulating in the industry to date.”
Tom Henson is managing director at UK MSP Emerge Digital. He said that despite Brexit, the EU AI Act will still affect UK-based partners who develop AI solutions for use in the EU.
“They’ll need to meet high standards for documentation, risk assessment and AI transparency, which could increase their costs and require specialist compliance teams or legal advice. Being quick to comply could give them a competitive edge in the EU market,” he said.
“Even for those not working directly with the EU, it's important to keep up with the Act, as it might shape global AI policies and future UK laws. Taking early steps towards compliance can benefit UK firms by showcasing their commitment to ethical AI, improving their reputation, and building trust with clients. Keeping ahead of these regulations can also open up new collaboration and market opportunities within the EU," said Henson.
“Some partners might be looking at this Act reluctantly, worrying about potential fines, but they should also be looking at it as an opportunity to set up for long-term success," he added.
Ensono cloud evangelist Gordon McKenna said the effectiveness of the EU AI Act is contingent on “the successful conversion of its overarching safety mandates into detailed, enforceable standards.”
He noted that companies that deploy AI within Europe will be subject to penalties of up to EUR 35 million ($37.7 million) or 7% of their global annual revenue for any violations.
“With this in mind, companies like Ensono that deploy AI solutions for companies that operate in the EU will have to invest in significant training for consultancy, engineering and product teams,” he said.
This will probably include mandatory compliance training, similar to that already in place for GDPR and HIPAA.
Ian Heath, UK channel and distribution lead at Dell Technologies, described the EU AI Act as “an important milestone in the regulation of Artificial Intelligence.”
He said: “We welcome efforts to make AI safe while balancing innovation globally. AI regulation should be agile, flexible, and collaborative to ensure it can keep pace with the rapid evolution of the technology.
“In addition to the EU’s efforts, there is a multitude of forums currently addressing the potential risks of AI, and Dell is engaged with global policy networks like the OECD through its AI Community.
“We look forward to working in partnership with policymakers and regulators in the months and years ahead.”
Matt Cloke, CTO at partner Endava, said one of the most notable aspects of the EU AI Act is its extraterritorial effect. In other words, the Act not only applies to AI systems developed within the EU but also to those offered to EU customers or affecting EU citizens, regardless of where the providers are located. This means that AI developers and providers outside of the EU must also adhere to these regulations if they wish to operate within the European market.
“For these companies, the EU AI Act offers both a challenge and an opportunity,” he said. “While the compliance requirements may initially seem daunting, they also present a chance to differentiate themselves by adopting best practices in AI governance. The emphasis on transparency and human oversight over the technology which this Act brings aligns with growing public and consumer expectations around ethical use.”
Mark Skelton, chief technology & strategy officer at cloud-based MSP Node4, said with the current rate of progress, “we are at high risk of AI overtaking us before we can control the use of it.”
He said governments across the world “should be collaborating to get a handle on the situation. Crucially, they need to agree on guardrails for AI use and enforce them to the same degree. Otherwise, we run the risk of having ‘AI havens,' like tax havens, where entrepreneurs will move or start up their businesses so that they can develop and use AI in ways that are restricted in other countries. This stifles innovation and growth within the mandated countries, whilst going against the objectives of the regulation.
“Unfortunately, a globally acknowledged regulation is unlikely to be enforced any time soon, if ever," he added. "Technology companies and individual businesses should be stepping up and enforcing their own guardrails to control the use of AI. This will enable the industry to foster investment and innovation in AI with the confidence that they are doing so in a safe, respectful and moral way. The cat is out of the bag and there is no stopping AI in its tracks now. But the sooner we decide how to use it safely, the sooner we can reap the benefits and plan for a future with AI on our side.”
ISMS.online is a SaaS vendor helping organizations with their data privacy and information security compliance. CEO Luke Dash said the EU AI Act will present both an element of risk for channel partners, alongside opportunity.
“There is risk involved in non-compliance, so it is essential that channel businesses understand their requirements under the EU AI Act. They need to ensure they know what risk category they fall into and ensure that all parties are compliant, particularly if you are a channel business delivering an AI service into a high-risk category, such as a company involved in critical national infrastructure. But in real terms this is just the start. As more and more AI regulations come into play, arguably it will get harder for businesses and channel partners to keep abreast of these new developments.
“However, on the flipside, there is opportunity," he added. "Channel partners can help companies understand their requirements, consult, recommend or even help build the necessary tools and processes for businesses to succeed in a newly regulated world. This could even result in a new revenue stream for channel businesses.”
Finally, Alastair Edwards is chief analyst at channel research firm Canalys, which is owned by Channel Futures’ parent company, Informa. He said that, of any recent technology development, AI is perhaps most in need of regulatory controls.
“There are of course accusations that it will stifle innovation and progress. Meta and Apple not releasing some of their AI products in the EU because of GDPR and DMA respectively are examples of that. But other regions and countries – including the U.S. – will need to follow the EU’s lead, so this is just a short-term gap in my view. By its nature, the Act has global implications, not just for any supplier into the EU, but any company using AI systems whose output is in the EU.”
In contrast, he said the regulation should actually help drive customer investment and adoption of AI.
“Many businesses are concerned about deploying AI because of a lack of understanding about the implications for data privacy and governance, and in the absence of clear guardrails,” he said. “The regulation helps define those protections and guardrails, giving businesses greater confidence in the AI technologies they use. In other regions without regulation, businesses may see greater risks in deploying AI.
“The biggest challenge is the complexity, extra administrative burden and cost that the EU AI Act will create for both suppliers and users of AI systems, as well as a lack of expertise that exists around compliance. The Act is highly ambitious, given that it covers so many parts of the supply chain. It’s not clear how the EU will apply or police the regulation, and it’s likely to need significant refining over time.
“This is where the channel can play a role in helping their customers to navigate through this complexity and deliver professional services focused on preparation and compliance. This is what happened with GDPR, with many partners building service practices after its launch. The issue for channel partners will be building these skills, but also making sure they themselves are compliant. In reality, not every partner is going to benefit," the analyst concluded.
The EU AI Act is designed to give businesses across the EU clear guidelines to support their ethical AI adoption.
The EU AI Act assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high risk are largely left unregulated.
But what does the introduction of the EU AI Act mean for channel partners? AI is a hot topic in the channel, with more partners developing use cases and deploying the technology internally. There will be costs involved, both for compliance and for building potential practices to serve customers.
To understand the impact, we spoke with vendors and channel partners themselves. Most have welcomed the new regulation. Others are worried that compliance with the EU AI Act could put businesses at risk. On the plus side, there is an opinion that the Act will help counter fears around the unchecked rise of AI.