Wednesday, 21 February 2024

The EU AI Act, the world's first comprehensive AI regulation, is approved

The European Union (EU) has been working on a new legal framework that aims to regulate the development and use of artificial intelligence (AI) in the EU. The proposed legislation, the Artificial Intelligence (AI) Act, focuses on ensuring that AI systems are trustworthy, respect human values and rights, and support the EU single market.

The AI Act introduces a risk-based approach to classify AI systems into four categories: unacceptable, high-risk, limited-risk, and minimal-risk. Unacceptable AI systems are those that pose a clear threat to the safety, livelihoods, or rights of people, such as social scoring or mass surveillance. High-risk AI systems are those that are used in critical sectors, such as healthcare, education, or law enforcement, and have a significant impact on people’s lives, such as medical devices, recruitment tools, or facial recognition. Limited-risk AI systems are those that pose some risks to people’s rights or expectations, such as chatbots, online advertising, or deepfakes. Minimal-risk AI systems are those that pose no or negligible risks to people, such as video games, spam filters, or smart appliances.

The AI Act imposes different obligations and requirements for each category of AI systems. Unacceptable AI systems are banned from being developed, sold, or used in the EU. High-risk AI systems must comply with strict rules on data quality, transparency, human oversight, accuracy, security, and accountability. They must also undergo a conformity assessment before being placed on the market or put into service. Limited-risk AI systems must provide clear and adequate information to users about their nature, purpose, and capabilities. Minimal-risk AI systems are subject to voluntary codes of conduct and best practices.

The AI Act also establishes a governance structure and a cooperation mechanism for the implementation and enforcement of the rules. The European Commission will be responsible for monitoring and updating the list of high-risk AI systems and sectors, as well as adopting delegated and implementing acts. The European AI Board will be an independent advisory body that will provide guidance and recommendations to the Commission and the member states. The national competent authorities will be in charge of supervising and sanctioning the compliance of AI systems with the rules, as well as ensuring cross-border cooperation.

The AI Act is a landmark proposal that aims to make the EU a global leader in trustworthy and human-centric AI. However, it also faces some challenges and criticisms from various stakeholders, such as industry, civil society, and other countries. The AI Act will need to balance the interests and concerns of different actors, as well as adapt to the fast-changing and evolving nature of AI.

The four categories are often illustrated as a pyramid of risk, with unacceptable-risk systems at the narrow top and minimal-risk systems at the broad base.



References

https://www.weforum.org/agenda/2023/06/european-union-ai-act-explained/

https://www.bbc.com/news/world-europe-67668469

https://cset.georgetown.edu/article/the-eu-ai-act-a-primer/

https://www.finextra.com/the-long-read/847/what-is-the-eu-ai-act-understanding-europes-first-regulation-on-artificial-intelligence

Thursday, 15 February 2024

Fundamentals of Generative AI

Generative AI is a branch of artificial intelligence that focuses on creating new content or data from scratch, such as images, text, music, or code. Generative AI models learn from existing data and use it to generate novel and realistic outputs that are not part of the original data. Some of the applications of generative AI include:

Image synthesis: Generative AI can create realistic images of faces, landscapes, animals, or objects that do not exist in the real world. 

Text generation: Generative AI can produce natural language texts on various topics, such as stories, poems, essays, or code. 

Music composition: Generative AI can compose original music in different genres, styles, and moods. 

Data augmentation: Generative AI can enhance or expand existing data sets by creating new samples that are similar but not identical to the original ones. This can help improve the performance and robustness of machine learning models. For example, generative AI can create new images of handwritten digits or new sentences of natural language.
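
As a toy illustration of the augmentation idea, the sketch below fits a simple probabilistic generative model (a Gaussian mixture) to a data set and then samples new, similar points. It assumes scikit-learn and NumPy are available, and the data set here is synthetic rather than a real one.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_real = rng.normal(loc=[0.0, 3.0], scale=0.5, size=(200, 2))  # stand-in for a real data set

# Fit a simple generative model to the data, then sample new, similar points.
gm = GaussianMixture(n_components=2, random_state=0).fit(X_real)
X_synthetic, _ = gm.sample(100)

# Combine real and synthetic samples into an augmented training set.
X_augmented = np.vstack([X_real, X_synthetic])
print(X_augmented.shape)  # (300, 2)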

The main challenge of generative AI is to ensure that the generated outputs are both diverse and realistic, meaning that they cover a wide range of possibilities and resemble the real data. To achieve this, generative AI models often use two types of techniques:

Probabilistic models: These are models that learn the probability distribution of the data and sample from it to generate new outputs. For example, variational autoencoders (VAEs) are probabilistic models that encode the data into a latent space and decode it back into the original space, adding some noise in the process to create variations.

Adversarial models: These are models that consist of two components: a generator and a discriminator. The generator tries to create outputs that fool the discriminator, while the discriminator tries to distinguish between real and fake outputs. The two components compete with each other and improve over time. For example, generative adversarial networks (GANs) are adversarial models that use neural networks as the generator and the discriminator.
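
To make the adversarial setup concrete, here is a minimal GAN training step sketched in PyTorch. The network sizes, learning rates, and dummy data are purely illustrative, not a production recipe.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to fake samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    fake_batch = generator(torch.randn(batch_size, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real_batch), torch.ones(batch_size, 1)) + \
             bce(discriminator(fake_batch.detach()), torch.zeros(batch_size, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake_batch), torch.ones(batch_size, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# One training step on a batch of dummy "real" data (illustration only).
print(train_step(torch.randn(32, data_dim)))

Each call updates the discriminator on real and generated samples, then updates the generator so that its outputs are more likely to be classified as real, which is the competition described above.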

Generative AI is a fascinating and rapidly evolving field of artificial intelligence that has many potential benefits and applications for society. However, it also poses some ethical and social risks, such as misuse, deception, or bias. Therefore, it is important to develop and use generative AI models responsibly and transparently, with respect for human values and rights. 

To learn more about the Fundamentals of Generative AI, Microsoft Learn has a great course.

It also covers the Azure OpenAI Service, Microsoft's cloud solution for deploying, customizing, and hosting large language models, and gives a brief overview of Copilot.
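
As a rough sketch of what calling the service looks like, the snippet below sends a chat request to a deployed model using the openai Python package (version 1 or later). The endpoint, key, API version, and deployment name are placeholders you would replace with your own.

from openai import AzureOpenAI

# Placeholders: substitute your own resource endpoint, key, and deployment name.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",  # illustrative; check the docs for the current version
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the name given to the model deployment in Azure
    messages=[{"role": "user", "content": "Explain generative AI in one sentence."}],
)

print(response.choices[0].message.content)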



Thursday, 1 February 2024

Data Toboggan - Purview in Microsoft Fabric

Excited to be speaking at Data Toboggan 

Event Date: 3rd February 2024 

Register now for free: https://bit.ly/DT24-Register

Agenda: https://bit.ly/DT24-Agenda

Abstract

Microsoft Fabric comes with Purview for data governance. What does that mean, and how can it help with managing your data estate? This session looks to connect the dots between the old and the new and explains which of the apps exist in Fabric.



Data Toboggan Winter Edition 2024

Please join us on Saturday for the #DataToboggan winter edition. We have 32 speakers across 3 tracks, including an AI track, with lots of fun and learning. We also have the amazing Knee-deep in Tech, not to be missed, and a keynote from Kim Manis.

Event Date: Saturday 3rd February 2024 

Register now for free: https://bit.ly/DT24-Register

Agenda: https://bit.ly/DT24-Agenda





#azuresynapse #microsoftfabric #synapseanalytics #AI #artificialintelligence #copilot #openai