New EU artificial intelligence law ‘risks stalling innovation’, says Oxylabs

European tech companies worry that the new legislation has been rushed through without the EU fully considering its consequences.

In March, the European Parliament adopted the Artificial Intelligence Act, the world’s first comprehensive horizontal legal framework for AI.

The intention of this groundbreaking act is to safeguard fundamental rights, democracy and environmental sustainability from the surge of high-risk AI.

Through the act, the EU is attempting to establish ethical AI use across Europe, but many think its introduction has been rushed under mounting pressure. Trustworthy AI is the aim, but the EU must also weigh the act’s impact on the technology industry.

Denas Grybauskas, head of legal at Oxylabs, said, “As the AI Act comes into force, the main business challenge will be uncertainty during its first years.

“Various institutions, including the AI Office, courts, and other regulatory bodies, will need time to adjust their positions and interpret the letter of the law.

“During this period, businesses will have to operate in a partial unknown, lacking clear answers as to whether the compliance measures they put in place are robust enough.

“One business compliance risk that is not being discussed lies in the fact that the AI Act will affect not only firms that directly deal with AI technologies but the wider tech community as well.

“Currently, the AI Act lays down explicit requirements and limitations that target providers (that is, developers), deployers (that is, users), importers, and distributors of artificial intelligence systems and applications.

“However, some of these provisions might also bring indirect liability to the third parties participating in the AI supply chain, such as data collection companies.”

Most AI systems today are based on machine learning models that require an abundance of training data to ensure that the model has adequate contextual understanding, is not outright biased, and does not hallucinate its outputs.

Today, AI developers are looking for ways to scrape as much publicly available web data as possible. Although the AI Act does not target data-as-a-service (DaaS) companies and web scraping providers, these firms might indirectly inherit certain ethical and legal obligations.
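For illustration, below is a minimal Python sketch of one routine step an ethical scraping pipeline might include: checking a site’s robots.txt before collecting a page. The domain and user-agent string are illustrative assumptions; this is a sketch of the general practice, not a description of Oxylabs’ or any provider’s actual tooling.

```python
# Minimal sketch: consult a site's robots.txt before fetching a page.
# The domain and user agent below are illustrative assumptions.
from urllib import robotparser

TARGET_URL = "https://example.com/articles/public-page"  # hypothetical page
USER_AGENT = "example-research-bot"  # hypothetical crawler name

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # download and parse the site's robots.txt rules

if parser.can_fetch(USER_AGENT, TARGET_URL):
    print("robots.txt permits this URL for this user agent; proceed.")
else:
    print("robots.txt disallows this URL; skip collection.")
```

Checks like this address only one narrow aspect of lawful collection; obligations around copyright and prohibited AI systems, as discussed below, go well beyond what any single technical control can verify.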

Grybauskas continued, “A prime example is EU-based web scraping companies, which will have to ensure they do not supply data to firms developing prohibited AI systems.

“If a company willingly cooperates with an AI firm that, under EU regulation, is breaking the law, such cooperation might bring legal liability.

“Moreover, web scraping providers will need to implement robust know-your-customer (KYC) procedures to ensure their infrastructure is used ethically and lawfully, verifying that an AI firm collects only the data it is permitted to collect, not copyright-protected information.

“Another broad compliance-related risk that I can foresee comes from the decision to grant some exemptions under the AI Act for systems based on free and open-source licences.

“There is no single, consolidated definition of ‘open-source AI’, and it is unclear how the broadly defined open-source model might be applied to AI.

“This situation has already resulted in companies falsely branding their systems as ‘open-source AI’ for marketing purposes. Without clear definitions, even bigger risks will manifest if businesses start abusing the term to win legal exemptions.”

Grybauskas concluded: “The AI Act has the potential to establish trust across the industry but may also be detrimental to innovation across the technology industry.

“Organisations must be on their toes, as they may face penalties in the millions for severe violations involving high-risk AI systems.”