Artificial Intelligence vs. GDPR: take advantage of the potential of AI in full compliance - Cohesity’s view

May 2023 by Cohesity

AI will be regulated separately in the EU under the proposed "AI Act", but GDPR and AI are already closely intertwined, as the recent ban in Italy shows, and as Google's decision late last week to delay the launch of its Bard AI chatbot in Europe over GDPR concerns confirms. The AI Act also has the power to make the use of AI safer by establishing controls and regulation.

The rapid growth of consumer applications powered by AI technology raises ever more questions, legal, technical, and ethical, because the technology lacks transparency: no one on the outside can see into this black box.

The overall picture remains blurred, but there are already clear examples showing that using AI on GDPR-relevant data creates legal risks. The statistics also show that European authorities continue to impose heavy penalties, particularly on prominent American technology companies: from May 2022 to May this year, fines totaling 1.1 billion euros were levied, and nine of the ten highest penalties were imposed on US-based organizations.

The EU wants to clarify the legal situation with its AI Act. The Europe-wide AI law cleared its first hurdle on May 11 and is scheduled to be passed in plenary in mid-June, but it will take until 2024 for the law to actually come into force, and only later, as the first cases are decided, will it become clear how it works in practice.

Nobody in the private sector can afford to wait until then. Companies and private individuals need clear guidance now if they want to exploit the great potential of this technology:

• Always think about compliance: While the risk depends on the data used, companies should seek the advice of a data protection expert before introducing AI technologies.
• Know your data: Companies and their employees need to know exactly what data they are feeding the AI and what value this data has for the company.
• Understand the content of the data: Predefined filters immediately pick compliance-relevant data, such as credit card numbers or other personal details, out of the data pool and mark it. Whatever the AI Act brings, Machine Learning (ML)- and AI-powered classification will be able to search for these additional attributes, give the company a degree of security for the future, and vastly enhance its compliance position.
• Control data flows: If the data is classified and categorized with the right characteristics, the underlying data management platform can automatically enforce rules without the data owner having to intervene (a minimal sketch of this classify-then-enforce pattern follows this list). Modern data management platforms control access to such data by encrypting it automatically and requiring users to authenticate through access controls and multi-factor authentication.
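To make the last two points concrete, here is a minimal, illustrative Python sketch of how predefined filters can tag compliance-relevant data and how simple rules can then be applied automatically to anything that is tagged. The patterns, tags, and policy names are assumptions for illustration only; they do not represent Cohesity's product, any specific data management platform, or a production-grade classifier.

```python
import re
from dataclasses import dataclass, field

# Hypothetical, simplified patterns for compliance-relevant data.
# Real classification engines (including ML-based classifiers) are far more robust.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

@dataclass
class Record:
    """A document or data object flowing into an AI pipeline."""
    record_id: str
    text: str
    tags: set = field(default_factory=set)       # classification labels
    policies: set = field(default_factory=set)   # enforcement actions

def classify(record: Record) -> Record:
    """Mark the record with a tag for every compliance-relevant pattern it contains."""
    for label, pattern in PATTERNS.items():
        if pattern.search(record.text):
            record.tags.add(label)
    return record

def enforce(record: Record) -> Record:
    """Attach policies automatically, based only on the classification tags."""
    if record.tags:
        record.policies.update({"encrypt-at-rest", "require-mfa", "restrict-access"})
    else:
        record.policies.add("standard-handling")
    return record

if __name__ == "__main__":
    samples = [
        Record("doc-001", "Invoice paid with card 4111 1111 1111 1111."),
        Record("doc-002", "Quarterly sales figures, no personal data."),
    ]
    for rec in samples:
        enforce(classify(rec))
        print(rec.record_id, sorted(rec.tags), sorted(rec.policies))
```

In practice, the regex filters would be complemented by ML-based classifiers, and the policy names would map to platform-level controls such as automatic encryption and multi-factor authentication, as described in the points above.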

For Mark Molyneux, CTO EMEA at Cohesity: "What is certain is that companies and their employees will face new tasks and obligations from a compliance perspective. And while a law takes time to be ratified, and even longer to be implemented as regulation, AI usage will continue to grow exponentially.

What is clear is that AI will disrupt the economy just as the internet did. Companies want their employees to use AI in innovative ways, and AI itself has the power to tame AI by examining the data and its content. This opens up many practical ways for companies to control their use of AI without fearing high risks and penalties."

