EU AI Act + Cybersecurity - Expert Insights with F5 EMEA CTO
October 2024 by Bart Salaets, F5 EMEA Field CTO
Following the news earlier this week about Big Tech's cybersecurity compliance pitfalls under the EU AI Act, Bart Salaets, F5 EMEA Field CTO, comments:
“Article 15 of the EU AI Act states that so-called high-risk AI systems must be designed and developed in a way that guarantees a high level of cybersecurity. It is good to see that tabs are being kept on big tech companies in particular to ensure this very important piece of legislation is respected.
As AI becomes more integrated into key parts of our lives, such as healthcare, finance and the public sector, the potential cybersecurity risks keep growing. Any organisation putting a high-risk AI system into production in the EU (and anywhere else in the world, for that matter) should be taking cybersecurity very seriously and ensuring that all relevant security precautions are in place. Protecting AI applications requires businesses to have a full arsenal of existing app security solutions, such as web application firewalls, distributed denial-of-service mitigation, bot defence and API security. The addition of new solutions that protect against newer LLM-specific attacks, some of which are explicitly referred to in the AI Act (e.g. prompt injection), is obviously also essential.
It is imperative that security is baked into the AI design phase from the very beginning, not bolted on as an afterthought. The EU AI Act has set new standards for AI safety within the Union, and in doing so it has also created an opportunity for cybersecurity innovation around AI, underlining the importance of protecting people in the age of AI.”
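To make the LLM-specific risks mentioned above more concrete, the sketch below shows a minimal heuristic screen for prompt-injection attempts. The patterns and function name are hypothetical and purely illustrative; real-world defences of the kind the comment describes rely on dedicated security products and model-based classifiers, not keyword lists.

```python
import re

# Hypothetical, illustrative patterns only -- production prompt-injection
# defences are far more sophisticated than a keyword list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard .* system prompt",
    r"you are now (in )?developer mode",
]

def looks_like_prompt_injection(user_input: str) -> bool:
    """Flag inputs that match known injection phrasings (sketch only)."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)
```

A screen like this would sit alongside, not replace, the WAF, DDoS, bot-defence and API-security layers listed above, filtering user input before it ever reaches the model.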