Whether your organisation uses AI-driven chatbots for customer enquiries, develops predictive algorithms for credit risk, or deploys image recognition software for security purposes, the upcoming legal obligations of deploying certain artificial intelligence (AI) technologies under the EU’s landmark AI Act may significantly affect your data handling practices.
A team of experts in data protection services and AI governance has penned the following article to help businesses understand the requirements of the AI Act, and what will be required of your organisation in order to achieve compliance.
What is the AI Act?
The AI Act establishes a regulatory and legal framework for the deployment, development and use of AI systems within the EU. The legislation takes a risk-based approach, categorising AI systems according to their potential impact on safety, human rights and societal well-being. Some systems are banned entirely, while systems deemed ‘high-risk’ are subject to stricter requirements and assessments before deployment.
AI systems are categorised into different risk levels based on their potential impact, with the burden of compliance increasing in proportion to the risk. The three main categories are prohibited, high-risk and low-risk.
Prohibited systems are banned entirely, due to their unacceptable potential for negative consequences. High-risk systems are those with a significant impact on people’s safety, wellbeing and rights; they are permitted, but subject to stricter requirements. Low-risk systems pose minimal dangers and therefore carry fewer compliance obligations.
How will the AI Act and the GDPR work together?
“In many cases, the AI Act and the GDPR will complement each other”, comments one UK-based data protection specialist. “The AI Act is essentially a piece of product safety legislation designed to ensure the responsible and non-harmful deployment of AI systems. The GDPR is a principles-based law, protecting fundamental human privacy rights”.
Where Are We Now?
The AI Act was approved by the European Council on 21 May 2024, with a phased implementation schedule over two years, which has been designed to give organisations time to make necessary changes to their systems and practices.
The Act will apply to public and private organisations that develop, deploy or use AI systems within the EU’s single market. This includes companies, institutions, government bodies, research organisations and any other organisations involved in AI-related activities.
When will the AI Act apply?
The AI Act’s finalised text will be published in the Official Journal of the European Union, officially entering into force twenty days after publication – expected by late June or early July 2024. The majority of the new law’s provisions will then apply two years later, in 2026.
The EU Commission has also established the EU AI Office. From 16 June 2024, the AI Office will support the implementation of the AI Act across all Member States.
Timeline and Important Deadlines
The AI Act becomes law (expected late June to early July 2024)
6 months later
AI practices posing unacceptable risks to health and safety or fundamental human rights will be banned. The deadline for compliance on unacceptable-risk AI systems is, understandably, one of the first to be enforced, so organisations should evaluate their risk exposure in this area as soon as possible.
9 months later
The AI Office will finalise the codes of conduct to cover the obligations for developers and deployers of AI systems. These codes will provide voluntary guidelines for responsible AI development and use.
12 months later
The rules for providers of General Purpose AI (GPAI) will come into effect and organisations must align their practices with these new rules. “GPAI” refers to advanced AI systems capable of performing a wide range of tasks – ChatGPT being one such example. In addition, the first European Commission annual review of the list of prohibited AI applications will also take place 12 months after the AI Act enters into force.
18 months later
The European Commission will issue implementing acts for providers of high-risk AI systems. This means organisations using high-risk AI systems must follow a standard template to monitor those systems after deployment. The monitoring plan will help to ensure that any issues or risks are promptly identified and addressed.
24 months later
The remainder of the AI Act will apply, including regulations on high-risk AI systems listed in Annex III of the AI Act. These include systems related to biometrics, covering technologies such as fingerprint recognition, facial recognition, iris scanning and voice authentication.
36 months later
Regulations for high-risk AI systems stipulated in Annex I take effect.
Conclusion
The AI Act represents a substantial legislative shift for organisations that use artificial intelligence in the EU, and organisations must prepare for stringent requirements and assessments, especially for systems posing a high risk. As the AI landscape continues to develop, staying informed and adaptable will be key for businesses to continue harnessing AI’s potential while adhering to new legal obligations.