
What to expect from the entry into force of the EU Artificial Intelligence Act




‘Euronews Next’ looks at what lies ahead in the coming months and years as businesses prepare to meet the requirements of the legislation.


The EU Artificial Intelligence Act goes into effect on Thursday and will apply to all artificial intelligence (AI) systems, whether already in use or still in development. It is considered the world’s first regulation that attempts to govern AI according to the risks it poses.

Lawmakers approved the act in March, but its publication in the Official Journal of the European Union in July set its entry into force in motion. The date of August 1 triggers a series of deadlines over the coming months and years for companies using AI in any capacity to familiarize themselves with, and comply with, the new legislation.

The AI Act evaluates companies based on risk

The EU AI Act assigns rules to each company using AI systems based on four levels of risk, which in turn determine the deadlines that apply to them. The four categories are: no risk, minimal risk, high risk, and banned AI systems.

The EU will ban certain practices entirely from February 2025. These include practices that manipulate a user’s decision-making or that expand facial recognition databases through internet scraping.

Other AI systems considered high risk, such as those that collect biometric data or are used for critical infrastructure or employment decisions, will have to meet the highest standards. Companies using them will have to disclose their AI training data sets and provide evidence of human oversight, among other requirements.

According to Thomas Regnier, spokesman for the European Commission, about 85% of AI companies fall into the third category, “minimal risk”, which requires very little regulation.

Between 3 and 6 months for companies to comply with the regulations

Heather Dawe, responsible AI director at consultancy UST, is already working with international clients to align their use of AI with the new law. Her clients “agree” with the law’s new requirements, she said, because they recognize that AI regulation is necessary.

According to Dawe, adapting to the new law can take between three and six months, depending on the size of the company and the importance of AI in its workflow.

Companies could consider creating internal AI governance councils, Dawe continued, bringing in legal, technology and security experts to carry out a full audit of the technologies they use and of how they must comply with the new law.

A company found not to comply with the AI Act by the various deadlines could face a fine of up to 7% of its global annual turnover, according to the Commission’s Regnier.

The Commission’s preparations

The Commission’s AI Office will monitor compliance with the rules relating to general-purpose AI models. Sixty members of the Commission’s internal staff will be reassigned to this office and another 80 external candidates will be hired in the next year, Regnier said.

An AI Board made up of high-level delegates from the 27 EU member states laid the groundwork for the law’s implementation at its first meeting in June, according to a press release.

The Board will work with the AI Office to ensure that the law is applied in a harmonized way across the EU, Regnier added. More than 700 companies have said they will sign an AI Pact, a commitment to comply with the law.

EU member states have until August 2025 to set up competent national authorities to oversee the application of the rules in their countries. The Commission is also preparing to step up its investment in AI, with an injection of 1 billion euros in 2024 and up to 20 billion euros by 2030.

“What you hear everywhere is that what the EU is doing is pure regulation (…) and that this will block innovation. This is not correct,” Regnier said. “The legislation is not there to push companies not to launch their systems, quite the opposite.”


For the Commission, one of the main challenges is regulating future AI technologies, Regnier said, but he believes the risk-based system means they can quickly regulate any new system.

More revisions needed

Risto Uuk, head of EU research at the Future of Life Institute, believes that the European Commission still needs to clarify the degree of risk of certain technologies.

For example, according to Uuk, using a drone to take photos around a water supply in need of repair “doesn’t seem like much of a risk”, despite falling into the legislation’s high-risk category.

“When you read it right now, it’s pretty general,” Uuk said. “We have this orientation at a more general level and that’s useful, because companies can then ask the question of whether a specific system is high risk.”


Uuk believes the Commission will be able to give more concrete answers as the law is implemented.

According to Uuk, the law could go further, imposing more restrictions and higher fines on big tech companies operating generative AI (GenAI) in the EU.

Models from big AI companies such as OpenAI and DeepMind are considered “general-purpose AI” and fall into the minimal risk category.

Companies developing general-purpose AI must demonstrate how they comply with copyright law, publish a summary of the data used for training, and show how they protect cybersecurity.


Other aspects that need improving concern human rights, according to European Digital Rights, a network of NGOs. “We regret that the final law contains several significant gaps on biometrics, policing and national security, and we call on legislators to close these gaps,” a spokesperson said in a statement provided to ‘Euronews Next’.




