WithSecure Comment: EU officials pass draft law to regulate AI and ban facial recognition systems

June 2023 by WithSecure™

News has emerged that EU officials have passed a draft law to regulate AI and ban facial recognition systems.

Andrew Patel, Senior Researcher at WithSecure Intelligence, offers the following insight:
“Some of the requirements set forth by the AI Act are going to be difficult to enforce. These include disclosing that content was generated by an AI and ensuring that models cannot generate illegal content. Many open-source generative models are now available to the public, and these models can be fine-tuned and repurposed by anyone with enough know-how. Large language models have already been customized by extremists into AI systems capable of enabling and amplifying online radicalisation*. Regulations probably aren’t going to be able to stop those efforts.”

Paul Brucciani, Cyber Security Advisor at WithSecure, agrees with Andrew, adding:
“Ensuring that LLMs can’t generate illegal content is a technical and legal minefield. Good luck to the developer who needs to interpret EU hate-speech laws in each jurisdiction.

I would like to make two points:
• Given the glacial pace at which bureaucracies typically move relative to the speed of technological change, the EU has acted with foresight and leopard-like agility on the matter of AI.
• The EU’s position is very different from the US regulatory position, and it will take years before a transatlantic agreement on regulating AI can be reached. This will affect AI investment in the EU.”

Proposed EU AI Act
The AI Act aims to provide a legal framework that is innovation-friendly, future-proof and resilient to disruption.
The act will classify AI systems based on their perceived risk and require the companies responsible for building the most impactful tools to disclose important data about safety, interpretability, performance, and cyber security. As with previous tech regulation pushed by the EU, such as GDPR, the AI Act will undoubtedly have a global effect on how tech companies do business. It is likely to come into force in 2025 or later.
A late provision reportedly added to the EU’s forthcoming AI Act would force companies like OpenAI to disclose their use of copyrighted training data. Already, a number of high-profile AI firms have been hit by copyright lawsuits.
Some of the AI world’s biggest players, like OpenAI, have avoided scrutiny by simply refusing to detail the data used to create their software. But legislation proposed in the EU to regulate AI could force companies to disclose this information, according to reports from Reuters and Euractiv.

AI Bill of Rights
On October 4, 2022, the White House published a Blueprint for an AI Bill of Rights (the “Blueprint”).
It identifies five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.
• Safe and Effective Systems
• Algorithmic Discrimination Protections
• Data Privacy
• Notice and Explanation
• Human Alternatives, Consideration, and Fallback

Unlike the EU’s planned AI Act, the Blueprint is non-binding, but it breaks ground by framing AI regulation as a civil-rights issue. It asserts that communities have the right to protection and redress against harms to the same extent that individuals do.
The choice of language and framing clearly positions it as a framework for understanding AI governance broadly as a civil-rights issue, one that deserves new and expanded protections under American law.

