Poisonous Mushrooms & EU AI Act 2024: A Journey of Safe AI
Simplifying the EU AI Act 2024 & 5 Steps to Get Started!
A recent incident involving autonomous robots mistakenly collecting poisonous mushrooms from farmland highlights the importance of safe and responsible AI use. How will the EU AI Act, approved in March 2024, prevent such risks and hazards from AI? Read on to find out more.
This blog post will serve as your guide to demystifying the EU AI Act. By the end, you'll understand its scope and how to prepare for compliance.
What You'll Learn:
The Scope of the EU AI Act
Getting Started with EU AI Act Compliance
Impact of Non-Compliance with the EU AI Act:
Existing businesses with non-compliant AI products face hefty fines of up to 7% of their global revenue under the EU AI Act. This could be a significant financial loss. Conversely, new businesses with compliant AI products may find easier entry into the lucrative European market.
What Is AI, as per the EU AI Act?
I think we all know AI, but do we understand AI?
The EU AI Act adopts a broad definition of AI, covering systems that use traditional machine learning, deep learning, and the latest generative AI.
Artificial intelligence (AI) refers to systems that can learn, reason, and act autonomously.
Unlike a simple Excel formula for auto-summation, AI systems analyze data, identify patterns, and make predictions based on their learnings. An example of AI could be a medical system that provides personalized advice to patients based on their health data.
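The contrast above can be made concrete with a toy sketch. Assume a hypothetical health screening task: a fixed Excel-style rule hard-codes its threshold, while an "AI-style" system derives the decision boundary from labeled data. The function names, threshold, and data below are all illustrative inventions, not part of the Act.

```python
# Toy contrast: fixed rule vs. a rule learned from data (hypothetical example).

def fixed_rule(bp):
    # Excel-style formula: the threshold is hard-coded and never adapts.
    return bp > 140

def learn_threshold(samples):
    # "AI-style": derive the decision boundary from labeled (bp, is_high) pairs.
    highs = [bp for bp, label in samples if label]
    lows = [bp for bp, label in samples if not label]
    # Midpoint between the two classes becomes the learned threshold.
    return (min(highs) + max(lows)) / 2

data = [(120, False), (130, False), (150, True), (160, True)]
threshold = learn_threshold(data)

def learned_rule(bp):
    # The decision now depends on what was seen in the data.
    return bp > threshold
```

With different training data, `learned_rule` would behave differently, while `fixed_rule` never changes; that data-driven adaptability is what pulls a system into the Act's definition of AI.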
EU AI Act in a Nutshell:
The EU AI Act aims to ensure the safe and ethical usage of AI products. It categorizes AI based on potential risks (product safety) and sets compliance requirements for each category.
The EU AI Act was first proposed in April 2021 and, after several iterations, the final draft was approved in March 2024.
It will be enforced in multiple phases, including:
6 months: Comply with the ban on prohibited AI.
12 months: Comply with the rules for foundation models such as GPT-4.
24 months: Comply with the requirements for high-risk AI products.
Risky AI Categories under the EU AI Act
The AI Act defines several risk-based AI product categories; the applicable rules and obligations depend on the category:
Prohibited AI (Article 5): These pose unacceptable risks, e.g., social scoring systems for mass surveillance.
High-Risk AI: Requires stricter compliance due to high safety risk, e.g., AI for recruitment or medical diagnosis.
Limited-Risk AI: Needs to be transparent and user-friendly, e.g., chatbots.
Minimal-Risk AI: Lower regulatory burden, but basic AI literacy is encouraged.
Example of High-Risk AI Systems
High-risk and limited-risk AI products are the major focus, as prohibited AI systems will be strictly banned in the European region under the Act.
Does Your AI-Powered Product Fall into the High-Risk or Limited-Risk Category?
Many systems that involve autonomous but potentially biased and life-threatening decision-making fall into the high-risk category, such as:
Biometric and facial recognition for tracking and surveillance
Autonomous Education (Ed Tech products using GenAI)
Employment (GenAI-powered HR products such as autonomous resume screening)
Medical AI (GenAI-powered medical assistants, robots, and so on)
Creditworthiness assessment
Insurance (Life & Health)
Social benefits distribution
While the EU website provides a draft version of a High-Risk Category Checker, it's important to remember this is a preliminary tool. For a final decision on your AI product's risk classification, consulting with legal and other relevant experts is crucial.
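The triage logic described above can be sketched as a naive first-pass screener. Everything here (the category sets, the `screen_risk` name, the use-case labels) is a hypothetical illustration mirroring the lists in this post; it is in no way a legal determination, which, as noted, requires expert review.

```python
# Hypothetical first-pass risk screener mirroring the categories above.
# NOT a legal classification -- consult legal experts for the final decision.

PROHIBITED = {"social_scoring", "mass_surveillance"}
HIGH_RISK = {"biometric_id", "education", "employment", "medical",
             "credit_scoring", "insurance", "social_benefits"}
LIMITED_RISK = {"chatbot"}

def screen_risk(use_cases):
    """Return the strictest EU AI Act category touched by any use case."""
    cases = set(use_cases)
    if cases & PROHIBITED:
        return "prohibited"
    if cases & HIGH_RISK:
        return "high"
    if cases & LIMITED_RISK:
        return "limited"
    return "minimal"
```

Note the ordering: the strictest category wins, so a chatbot that also does social scoring is screened as prohibited, not limited.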
5 Steps to Get EU AI Act Ready
Avoid Prohibited AI: Don't develop products that fall under Article 5.
Identify Risk Category: Assess which risk category your product falls into. This sounds easy but is the harder part: perform a quick self-assessment, then seek legal help to confirm it.
High-Risk? Act Now!
Fine-tuning your own model? Follow data acquisition and model training safety and transparency guidelines.
Deploying a pre-trained model? Test AI safety parameters such as accuracy, robustness, and IT security. Mitigate high- and critical-severity issues.
Detoxio AI helps enterprises test their GenAI against safety and robustness parameters by performing LLM red teaming on the target system.
Transparency is Key: Ensure your AI product is user-friendly and explains its outputs clearly.
Human Oversight: Build mechanisms for human intervention in high-risk AI systems.
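The robustness testing mentioned in step 3 can be illustrated with a minimal sketch: apply small, meaning-preserving perturbations to inputs and measure how often the model's decision stays unchanged. The `classify` callable, the character-level `perturb`, and the toy classifier below are all hypothetical stand-ins, not a real red-teaming suite.

```python
# Minimal robustness-check sketch: small perturbations should not flip
# a model's decision. All names here are illustrative assumptions.
import random

def perturb(text, rng):
    # Naive character-level noise as a stand-in for a real attack suite.
    chars = list(text)
    i = rng.randrange(len(chars))
    chars[i] = chars[i].swapcase()
    return "".join(chars)

def robustness_score(classify, inputs, trials=20, seed=0):
    """Fraction of perturbed inputs whose predicted label is unchanged."""
    rng = random.Random(seed)
    stable = total = 0
    for text in inputs:
        base = classify(text)
        for _ in range(trials):
            stable += classify(perturb(text, rng)) == base
            total += 1
    return stable / total

# Usage with a toy classifier that only looks at input length:
toy = lambda t: "long" if len(t) > 5 else "short"
score = robustness_score(toy, ["hello world", "hi"])
```

Real LLM red teaming swaps the naive `perturb` for curated adversarial prompts and the toy classifier for the deployed model, but the scoring idea is the same: quantify decision stability under hostile or noisy inputs.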
Putting the EU AI Act in Perspective
In a nutshell, the EU AI Act defines what unsafe and high-risk AI is, and other standards augment it with details on how to measure, test & mitigate, who is responsible, and so on.
Here are a few other Standards and Regulations worth studying:
Product Liability Directive (PLD): Establishes who is liable when a defective AI product causes harm.
ISO/IEC 42001:2023: Standard specifying requirements for an AI management system, covering AI safety and security best practices (paid standard).
NIST AI Risk Management Framework (RMF): US framework for managing AI risks (NIST AI RMF)
Conclusion:
Understanding and complying with the EU AI Act is crucial for businesses operating in the European market. By taking proactive steps, you can avoid hefty fines and ensure your AI products are developed responsibly. In the next parts 2 & 3 of this blog, we will go deep into How to Get EU AI Act Ready, including Key Concepts of AI Robustness, Reliability, and Adversarial Testing.
A bit about ourselves: we are Detoxio AI, and our mission is secure and reliable GenAI.
Our team of elite hackers and security engineers, with a proven track record of building industry-leading security products, has built a powerful solution. We leverage a massive database of 1 billion test prompts (10x more coverage than competitors!) across 40+ industries, covering over 1,000 potential attack tactics. This allows us to comprehensively identify and mitigate vulnerabilities in your GenAI models and apps.