The European Union (EU) has been working on a new legal framework to regulate the development and use of artificial intelligence (AI) across the EU. The proposed legislation, the Artificial Intelligence Act (AI Act), focuses on ensuring that AI systems are trustworthy, respect human values and rights, and support the EU single market.
The AI Act introduces a risk-based approach that classifies AI systems into four categories, often pictured as a pyramid of risk: unacceptable, high-risk, limited-risk, and minimal-risk. Unacceptable AI systems are those that pose a clear threat to the safety, livelihoods, or rights of people, such as social scoring or mass surveillance. High-risk AI systems are those used in critical sectors such as healthcare, education, or law enforcement, where they have a significant impact on people's lives; examples include medical devices, recruitment tools, and facial recognition. Limited-risk AI systems are those that pose some risks to people's rights or expectations, such as chatbots, online advertising, or deepfakes [1]. Minimal-risk AI systems are those that pose no or negligible risks to people, such as video games, spam filters, or smart appliances.
The AI Act imposes different obligations and requirements on each category of AI system. Unacceptable AI systems are banned from being
developed, sold, or used in the EU. High-risk AI systems must comply with
strict rules on data quality, transparency, human oversight, accuracy,
security, and accountability. They must also undergo a conformity assessment
before being placed on the market or put into service. Limited-risk AI systems
must provide clear and adequate information to users about their nature,
purpose, and capabilities. Minimal-risk AI systems are subject to voluntary
codes of conduct and best practices.
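For readers who want to map the classification onto something they can query, the four tiers and their headline obligations can be summarised as a simple lookup table. The sketch below (in Python) is an illustrative paraphrase of this article, not of the legal text; the names RISK_TIERS and obligation_for are hypothetical and chosen for this example only.

# Illustrative sketch: the AI Act's risk pyramid as a lookup table.
# Tier names, examples, and obligations are paraphrased from this article,
# not quoted from the legal text; all identifiers here are hypothetical.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "mass surveillance"],
        "obligation": "banned from being developed, sold, or used in the EU",
    },
    "high-risk": {
        "examples": ["medical devices", "recruitment tools", "facial recognition"],
        "obligation": "strict rules on data quality, transparency, human oversight, "
                      "accuracy, security, and accountability, plus a conformity assessment",
    },
    "limited-risk": {
        "examples": ["chatbots", "online advertising", "deepfakes"],
        "obligation": "clear information to users about the system's nature, purpose, and capabilities",
    },
    "minimal-risk": {
        "examples": ["video games", "spam filters", "smart appliances"],
        "obligation": "voluntary codes of conduct and best practices",
    },
}

def obligation_for(tier: str) -> str:
    # Look up the paraphrased obligation for a given risk tier.
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("high-risk"))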
The AI Act also establishes a governance structure and a
cooperation mechanism for the implementation and enforcement of the rules. The
European Commission will be responsible for monitoring and updating the list of
high-risk AI systems and sectors, as well as adopting delegated and
implementing acts. The European AI Board will be an independent advisory body
that will provide guidance and recommendations to the Commission and the member
states. The national competent authorities will be responsible for supervising AI systems' compliance with the rules, imposing sanctions where necessary, and ensuring cross-border cooperation.
The AI Act is a landmark proposal that aims to make the EU a
global leader in trustworthy and human-centric AI. However, it also faces some
challenges and criticisms from various stakeholders, such as industry, civil
society, and other countries. The AI Act will need to balance the interests and concerns of these different actors, and to adapt to the fast-evolving nature of AI.
References
[1] https://www.weforum.org/agenda/2023/06/european-union-ai-act-explained/
[2] https://www.bbc.com/news/world-europe-67668469