Mindgard launches AI Security Labs

February 2024 by Marc Jacob

Mindgard, the AI cybersecurity platform, has launched Mindgard’s AI Security Labs, a free online tool that lets engineers red team AI systems, including large language models (LLMs) such as ChatGPT, by evaluating their cyber risk. As well as de-risking a wide range of AI deployment scenarios, the tool marks a major advance in the cyber threat educational tools available to engineers.

As enterprises rapidly develop or adopt AI to gain competitive advantage, they are exposed to new attack vectors that conventional security tools cannot address. Even using so-called foundation models, such as ChatGPT, exposes them to risks as there have been no automated processes available until now to test the possible impacts of attacks.

Mindgard’s AI Security Labs lifts the lid on the exposure to ML attacks faced by model developers and user organisations alike. These risks mostly go undetected today because they are complex to identify and the specialised skills required are scarce. Current AI penetration tests – if they happen at all – require months of programming and testing by hard-to-find and highly expensive teams. Even when they do happen, any subsequent change to the AI stack, model or underlying data necessitates a completely new test. As a result, senior management is often completely unaware of the likely impact of any disruption.

Mindgard’s free AI Security Labs automates the threat discovery process, providing repeatable AI security testing and reliable risk assessment in minutes rather than months. It allows engineers to select from a range of attacks against popular AI models, datasets and frameworks to assess potential vulnerabilities. The results provide insight into the current “art of the possible” in AI attacks and the likelihood of evasion, IP theft, data leakage and model copying threats.

After the phenomenally fast adoption of LLMs following the launch of ChatGPT 3.5, a range of possible attacks on AI systems has started to emerge. Data poisoning has been observed, where chatbots are manipulated into swearing or producing anomalous results. Data extraction is another threat, in which an LLM reveals, for example, the sensitive data on which it was trained. And copying entire AI/ML models is increasingly common: Mindgard’s AI security researchers demonstrated this by copying ChatGPT 3.5 in two days at a cost of just $50.
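The model copying threat described above works by distillation: the attacker never sees the victim model’s internals, only its answers. The sketch below illustrates the principle on a toy scale; the victim, the query budget and the perceptron surrogate are all illustrative stand-ins, not Mindgard’s actual method.

```python
# Minimal sketch of black-box model copying ("model extraction") via distillation.
# victim_model stands in for a remote prediction API; its weights are hidden.
import random

random.seed(0)

def victim_model(x):
    # Secret decision rule the attacker cannot inspect.
    secret_w = [0.7, -0.4, 1.1]
    return 1 if sum(w * xi for w, xi in zip(secret_w, x)) > 0.2 else 0

# Step 1: query the victim on attacker-chosen inputs and record its labels.
# No access to the victim's training data is needed.
queries = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(500)]
labels = [victim_model(x) for x in queries]

# Step 2: fit a surrogate on the stolen labels -- here a simple perceptron,
# enough to mimic a linear decision rule.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(20):
    for x, y in zip(queries, labels):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        err = y - pred
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

def surrogate(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Measure how often the copy agrees with the victim on fresh inputs.
test_points = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
agreement = sum(surrogate(x) == victim_model(x) for x in test_points) / len(test_points)
print(f"surrogate/victim agreement: {agreement:.0%}")
```

Scaled up to thousands of API queries against a real LLM, the same loop is what makes copying attacks cheap: the cost is dominated by query fees, which is why a functional copy can be produced for tens of dollars.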

Mindgard’s AI Security Labs is available via a simple online sign-up. No payment or credit card information is required. Key benefits of Mindgard’s free AI cyber risk tool include:

• AI red teaming across more than 170 unique attack scenarios
• Cyber risk assessment of leading LLMs such as Mistral
• Demonstrations of jailbreaking, data leakage, evasion and model copying attacks
• Easy selection of the AI models, datasets and frameworks used in each attack scenario
• Detailed reports on AI cyber risk and attack success rates

As well as immediate sign-up from its website, Mindgard also plans to make its solution available on the Azure marketplace in the coming months, with Google Cloud Platform (GCP) and Amazon Web Services (AWS) to follow.
