Passionately curious about Data, Databases and Systems Complexity. Data is ubiquitous; the database universe is dichotomous (structured and unstructured), expanding and complex. Find my Database Research at SQLToolkit.co.uk. Microsoft Data Platform MVP

"The important thing is not to stop questioning. Curiosity has its own reason for existing" Einstein

Wednesday 21 February 2024

EU AI Act, the first extensive AI regulation globally, is approved

The European Union (EU) has been working on a new legal framework that aims to regulate the development and use of artificial intelligence (AI) in the EU. The proposed legislation, the Artificial Intelligence (AI) Act, focuses on ensuring that AI systems are trustworthy, respect human values and rights, and support the EU single market.

The AI Act introduces a risk-based approach to classify AI systems into four categories: unacceptable, high-risk, limited-risk, and minimal-risk. Unacceptable AI systems are those that pose a clear threat to the safety, livelihoods, or rights of people, such as social scoring or mass surveillance. High-risk AI systems are those that are used in critical sectors, such as healthcare, education, or law enforcement, and have a significant impact on people’s lives, such as medical devices, recruitment tools, or facial recognition. Limited-risk AI systems are those that pose some risks to people’s rights or expectations, such as chatbots, online advertising, or deepfakes. Minimal-risk AI systems are those that pose no or negligible risks to people, such as video games, spam filters, or smart appliances.

The AI Act imposes different obligations and requirements for each category of AI systems. Unacceptable AI systems are banned from being developed, sold, or used in the EU. High-risk AI systems must comply with strict rules on data quality, transparency, human oversight, accuracy, security, and accountability. They must also undergo a conformity assessment before being placed on the market or put into service. Limited-risk AI systems must provide clear and adequate information to users about their nature, purpose, and capabilities. Minimal-risk AI systems are subject to voluntary codes of conduct and best practices.
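The tier-to-obligation mapping above can be sketched as a simple lookup table. This is purely illustrative (the tier names and helper function are my own shorthand, not anything from the Act), and of course not legal guidance:

```python
# Minimal sketch of the AI Act's risk tiers and their headline obligations,
# as summarised in the paragraph above. Illustrative only, not legal advice.
AI_ACT_TIERS = {
    "unacceptable": "Banned from being developed, sold, or used in the EU",
    "high": "Strict rules on data quality, transparency, human oversight, "
            "accuracy, security, accountability; conformity assessment required",
    "limited": "Must give users clear information about nature, purpose, and capabilities",
    "minimal": "Voluntary codes of conduct and best practices",
}

def obligations_for(tier: str) -> str:
    """Look up the headline obligation for a risk tier (hypothetical helper)."""
    return AI_ACT_TIERS[tier.lower()]

print(obligations_for("high"))
```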

The AI Act also establishes a governance structure and a cooperation mechanism for the implementation and enforcement of the rules. The European Commission will be responsible for monitoring and updating the list of high-risk AI systems and sectors, as well as adopting delegated and implementing acts. The European AI Board will be an independent advisory body that will provide guidance and recommendations to the Commission and the member states. The national competent authorities will be in charge of supervising and sanctioning the compliance of AI systems with the rules, as well as ensuring cross-border cooperation.

The AI Act is a landmark proposal that aims to make the EU a global leader in trustworthy and human-centric AI. However, it also faces some challenges and criticisms from various stakeholders, such as industry, civil society, and other countries. The AI Act will need to balance the interests and concerns of different actors, as well as adapt to the fast-changing and evolving nature of AI.

There is a pyramid of risk: unacceptable-risk systems at the top, then high-risk and limited-risk systems, with minimal-risk systems at the base.

Thursday 15 February 2024

Fundamentals of Generative AI

Generative AI is a branch of artificial intelligence that focuses on creating new content or data from scratch, such as images, text, music, or code. Generative AI models learn from existing data and use it to generate novel and realistic outputs that are not part of the original data. Some of the applications of generative AI include:

Image synthesis: Generative AI can create realistic images of faces, landscapes, animals, or objects that do not exist in the real world. 

Text generation: Generative AI can produce natural language texts on various topics, such as stories, poems, essays, or code. 

Music composition: Generative AI can compose original music in different genres, styles, and moods. 

Data augmentation: Generative AI can enhance or expand existing data sets by creating new samples that are similar but not identical to the original ones. This can help improve the performance and robustness of machine learning models. For example, generative AI can create new images of handwritten digits or new sentences of natural language.
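The augmentation idea above can be shown with a toy example. Real generative augmentation uses trained models; this sketch just perturbs existing samples with small random noise to make similar-but-not-identical copies:

```python
# Toy data-augmentation sketch: create new samples that are similar but not
# identical to the originals by adding small random perturbations.
# (Trained generative models do this far better; this is only illustrative.)
import numpy as np

rng = np.random.default_rng(42)
originals = np.array([[0.1, 0.9],
                      [0.8, 0.2]])          # two toy feature vectors

# Broadcast noise over 10 copies of the original (2, 2) array -> (10, 2, 2)
augmented = originals + rng.normal(0.0, 0.05, size=(10, *originals.shape))

print(augmented.shape)  # 10 noisy variants of the original data set
```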

The main challenge of generative AI is to ensure that the generated outputs are both diverse and realistic, meaning that they cover a wide range of possibilities and resemble the real data. To achieve this, generative AI models often use two types of techniques:

Probabilistic models: These are models that learn the probability distribution of the data and sample from it to generate new outputs. For example, variational autoencoders (VAEs) are probabilistic models that encode the data into a latent space and decode it back into the original space, adding some noise in the process to create variations.

Adversarial models: These are models that consist of two components: a generator and a discriminator. The generator tries to create outputs that fool the discriminator, while the discriminator tries to distinguish between real and fake outputs. The two components compete with each other and improve over time. For example, generative adversarial networks (GANs) are adversarial models that use neural networks as the generator and the discriminator.
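The generator-versus-discriminator loop described above can be demonstrated with a tiny 1-D example. This is a conceptual sketch, not a practical GAN: the "generator" just shifts Gaussian noise by a learnable mean, and the "discriminator" is a logistic regression on a scalar, trained with hand-derived gradients:

```python
# Toy 1-D adversarial training loop: a generator shifts noise toward the
# real data distribution while a discriminator (logistic regression) tries
# to tell real samples from fake ones. Conceptual sketch only.
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 4.0           # "real" data: N(4, 1)
mu = 0.0                  # generator parameter: G(z) = z + mu
w, b = 0.0, 0.0           # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + mu

    # Discriminator step: minimise -log D(real) - log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    b -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: minimise -log D(fake), i.e. try to fool the discriminator
    fake = rng.normal(0.0, 1.0, batch) + mu
    d_fake = sigmoid(w * fake + b)
    mu -= lr * np.mean(-(1 - d_fake) * w)

print(f"generator mean after training: {mu:.2f} (real mean is {REAL_MEAN})")
```

As training proceeds, the generator's mean drifts toward the real distribution's mean, which is exactly the "compete and improve over time" dynamic the paragraph describes.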

Generative AI is a fascinating and rapidly evolving field of artificial intelligence that has many potential benefits and applications for society. However, it also poses some ethical and social risks, such as misuse, deception, or bias. Therefore, it is important to develop and use generative AI models responsibly and transparently, with respect for human values and rights. 

To learn more about the Fundamentals of Generative AI, Microsoft Learn has a great course.

It also covers the Azure OpenAI Service, Microsoft's cloud solution for deploying, customizing, and hosting large language models. There is a brief overview of Copilot.

Thursday 1 February 2024

Data Toboggan - Purview in Microsoft Fabric

Excited to be speaking at Data Toboggan 

Event Date: 3rd February 2024 

Register now for free: https://bit.ly/DT24-Register

Agenda: https://bit.ly/DT24-Agenda


Microsoft Fabric comes with Purview for data governance. What does that mean, and how can it help with managing your data estate? This session looks to connect the dots between the old and new and explains which of the apps exist in Fabric.

Data Toboggan Winter Edition 2024

Please join us on Saturday for the #DataToboggan winter edition. We have 32 speakers and 3 tracks, including an AI track. Lots of fun and learning. We also have the amazing Knee-deep in Tech, not to be missed, and a keynote from Kim Manis.

Event Date: Saturday 3rd February 2024 

Register now for free: https://bit.ly/DT24-Register

Agenda: https://bit.ly/DT24-Agenda

#azuresynapse #microsoftfabric #synapseanalytics #AI #artificialintelligence #copilot #openai 

Wednesday 31 January 2024

Generative AI framework for Government

The UK government has released version 1.0 of its generative AI framework, created by the Central Digital and Data Office. This is public sector guidance with a focus on Large Language Models (LLMs).

The framework outlines ten principles:

Principle 1: You know what generative AI is and what its limitations are

Principle 2: You use generative AI lawfully, ethically and responsibly

Principle 3: You know how to keep generative AI tools secure

Principle 4: You have meaningful human control at the right stage

Principle 5: You understand how to manage the full generative AI lifecycle

Principle 6: You use the right tool for the job

Principle 7: You are open and collaborative

Principle 8: You work with commercial colleagues from the start

Principle 9: You have the skills and expertise that you need to build and use generative AI

Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place

It defines generative AI as a form of AI, 'a broad field which aims to use computers to emulate the products of human intelligence or to build capabilities which go beyond human intelligence'.

It then shows how public LLMs fit within the generative AI field.

The framework has lots of information to draw on, from advocating for lawful, ethical, and responsible usage to addressing the challenges of accuracy, bias, and environmental impact. Transparency and human control are paramount going forward. You can read more here.

Tuesday 23 January 2024

Purview in Microsoft Fabric

I am pleased to be speaking at Data Toboggan Winter Edition 2024 on Saturday 3rd February.
Register now for free: https://bit.ly/DT24-Register

My Abstract

Microsoft Fabric comes with Purview for data governance. What does that mean, and how can it help with managing your data estate? This session looks to connect the dots between the old and new and explains which of the apps exist in Fabric.


Wednesday 10 January 2024

AI Builder Prompting Guide

Microsoft has released the AI Builder Prompting Guide. This is another useful tool to help with prompt engineering.

The guide explains 'Prompts are how you build custom generative AI capabilities in Power Platform, like summarizing a body of text, drafting a response, or categorizing an incoming email. Think of prompts as a way of building custom GPT functions using only natural language'.

This comprehensive guide details the art of prompt engineering, providing insights on how to create effective prompts to maximize AI capabilities. 

There are six common uses for prompts: 
  • Classifying text
  • Analyzing sentiment
  • Rewriting content
  • Summarizing information
  • Extracting information
  • Drafting a response
From summarization to sentiment analysis, this guide helps explain prompts and strategies. It also covers chain-of-thought prompting and how to lead AI step by step through reasoning processes. 
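The six common uses above, plus chain-of-thought prompting, can be illustrated with plain prompt templates. These are my own example wordings, not taken from the guide, and how you send them to a model (AI Builder, Azure OpenAI, or otherwise) depends on your platform:

```python
# Illustrative prompt templates for the six common uses of prompts.
# Plain strings only; the wordings are hypothetical examples.
PROMPTS = {
    "classification": "Classify the following support ticket as Billing, Technical, or Other:\n{text}",
    "sentiment": "What is the sentiment (positive, negative, or neutral) of this review?\n{text}",
    "rewrite": "Rewrite the following paragraph in a formal tone:\n{text}",
    "summarize": "Summarize the following document in three bullet points:\n{text}",
    "extract": "Extract all dates and invoice numbers from this email:\n{text}",
    "draft": "Draft a polite reply to this customer email:\n{text}",
}

# Chain-of-thought prompting: ask the model to reason step by step
# before giving its final answer.
COT_PROMPT = (
    "A customer was charged twice for one order. "
    "Think through the refund policy step by step, "
    "then state the correct refund amount.\n{text}"
)

prompt = PROMPTS["sentiment"].format(text="The new dashboard is fantastic!")
print(prompt)
```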

What to do when writing prompts:

The guide also covers the important discussions that need to be had around responsible AI.