Welcome

Passionately curious about Data, Databases and Systems Complexity. Data is ubiquitous, the database universe is dichotomous (structured and unstructured), expanding and complex. Find my Database Research at SQLToolkit.co.uk. Microsoft Data Platform MVP

"The important thing is not to stop questioning. Curiosity has its own reason for existing" Einstein



Friday, 1 May 2026

Operationalising Responsible AI: What Microsoft’s Approach Reveals

Responsible AI has become one of those phrases that organisations like to reference but rarely operationalise. It appears in strategy decks, risk registers, and conference panels, yet the practical mechanisms that make it real are often missing.  

Microsoft’s recent article on its internal responsible‑AI approach is useful not because it offers something radically new, but because it demonstrates what it looks like when a large organisation treats responsible AI as a discipline rather than a marketing narrative.

Below are the core lessons worth thinking about, especially if you’re trying to move your organisation from aspiration to implementation.

1. Responsible AI is an organisational discipline, not a technical feature

The most important message is also the simplest: responsible AI only works when it is treated as a governing framework that shapes how AI is designed, deployed, and monitored. This is not a “nice to have”. It is not a late‑stage review. It is not a compliance tick‑box. It is a structural commitment that defines how decisions are made, how risks are surfaced, and how accountability is distributed. If your organisation is still treating responsible AI as a technical add‑on, it will not scale safely.

2. A central authority is essential for coherence

Microsoft’s Office of Responsible AI functions as a single point of truth. It sets policy, interprets standards, and ensures that teams are aligned.  This matters because without a central authority, governance fragments. Different teams make different assumptions. Risk becomes inconsistent. Decisions become harder to audit. A central function does not need to be large, but it does need to be authoritative. It needs the mandate to say “no”, “not yet”, or “not like this”.

3. Distributed oversight is the only scalable model

A central team cannot carry the entire burden. Microsoft’s model, a senior council supported by a network of responsible‑AI champions, is the only realistic way to scale oversight across a complex organisation. This mirrors how other disciplines have matured:
- data protection officers and privacy champions  
- security teams supported by local security leads  
- governance functions with embedded practitioners  

The pattern is consistent: central clarity, distributed execution. If you want responsible AI to work, you need people embedded in delivery teams who understand the risks and know how to escalate them.

4. A unified workflow is the backbone of responsible AI operations

One of the most practical elements of Microsoft’s approach is its internal workflow tool. Every AI project is logged, assessed, and reviewed through a single structured process. This creates:  
- traceability  
- auditability  
- consistent risk categorisation  
- clear escalation routes  
- visibility across the portfolio  

Most organisations underestimate how much risk comes from fragmentation. If you don’t know what AI systems exist, you can’t govern them. A unified workflow is not optional. It is foundational.
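
To make that concrete, here is a minimal sketch (in Python) of the kind of record a unified workflow might keep for each AI project. The fields and names are illustrative assumptions, not Microsoft’s internal schema.

```python
# Illustrative sketch of a single AI-project registry entry. All field
# names are assumptions, not Microsoft's internal schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # triggers deeper review before release


@dataclass
class AIProjectRecord:
    project_id: str
    owner: str
    description: str
    risk_tier: RiskTier
    data_sources: list[str] = field(default_factory=list)
    review_notes: list[str] = field(default_factory=list)  # audit trail
    escalated_to: str | None = None  # named escalation route, if any
    last_assessed: date | None = None


# One registry for the whole portfolio is what gives traceability,
# auditability, and visibility across every AI system that exists.
registry: dict[str, AIProjectRecord] = {}
```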

5. Culture and process design matter more than tooling

The article makes a point that resonates strongly with anyone who has worked in governance: the tools support the work, but they do not define it. If you don’t have:
- clear expectations  
- shared language  
- leadership commitment  
- a culture that values scrutiny  

no tool will save you. Responsible AI succeeds when the organisation behaves as if it matters — not when it installs a dashboard.

There are some actionable steps organisations can take to build their own responsible AI capability. These are the practical takeaways that any organisation can adopt immediately.

1. Start with a written standard
Define what “good” looks like. Set mandatory requirements. Clarify what triggers deeper review. This becomes your anchor.
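
As an illustration, the standard can be encoded so that “what triggers deeper review” is explicit rather than tribal knowledge. Every requirement and trigger name below is hypothetical.

```python
# Illustrative sketch: encoding a written standard so review triggers
# are explicit. All names are hypothetical, not from any published standard.
RESPONSIBLE_AI_STANDARD = {
    "mandatory_requirements": [
        "impact_assessment_completed",
        "data_sources_declared",
        "human_oversight_defined",
    ],
    "deeper_review_triggers": [
        "processes_special_category_data",
        "automated_decision_affects_individuals",
    ],
}


def needs_deeper_review(declared_flags: set[str]) -> bool:
    """True if any declared flag matches a deeper-review trigger."""
    return bool(declared_flags & set(RESPONSIBLE_AI_STANDARD["deeper_review_triggers"]))
```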

2. Build a network of responsible AI practitioners. Identify people with the right instincts: governance‑minded, risk‑aware, delivery‑literate. Train them and empower them.

3. Design the assessment process before you build tooling. Clarify the workflow:  
- What must every project declare?  
- Who reviews what?  
- How are risks escalated?  

Only then should you build or buy tools.
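
A rough sketch of what that routing might look like, written before any tooling exists; the roles and risk signals here are assumptions for illustration only.

```python
# Sketch of the declare/review/escalate flow, designed ahead of tooling.
def route_assessment(declaration: dict) -> str:
    """Decide who reviews a project from its declared risk signals."""
    if declaration.get("high_impact_decisions") or declaration.get("sensitive_data"):
        return "escalate: senior responsible-AI council"
    if declaration.get("customer_facing"):
        return "review: embedded responsible-AI champion"
    return "review: standard team-level checklist"


# Every project declares the same minimal facts, so routing stays consistent.
print(route_assessment({"customer_facing": True}))
# -> review: embedded responsible-AI champion
```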

4. Integrate responsible AI checkpoints into delivery. Move away from late‑stage reviews. Embed assessments into initiation, design, and release readiness.

5. Treat bias detection and data quality as non‑negotiable. Bias is rarely intentional; it is inherited. Build structured checks into your evaluation pipeline.
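
As one minimal example of a structured check, selection rates per group can be compared against the common “four‑fifths” heuristic. This sketch assumes binary outcomes and a single grouping attribute; real evaluation pipelines go much further.

```python
# Minimal structured bias check: positive-outcome rate per group.
from collections import defaultdict


def selection_rates(outcomes: list[int], groups: list[str]) -> dict[str, float]:
    """Positive-outcome rate per group; large gaps flag inherited bias."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}


rates = selection_rates([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
# The four-fifths rule is one common heuristic; the threshold is a policy choice.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("bias flag:", rates)
```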

6. Assign responsibility for monitoring regulatory change. Someone needs to track global AI regulation and translate it into internal practice. This prevents compliance surprises.

7. Use the open resources already available
Microsoft’s Responsible AI Toolbox, Human‑AI Experience guidance, and impact‑assessment templates provide a strong foundation. Use them to accelerate maturity.

Responsible AI is not about slowing innovation. It is about enabling it safely, predictably, and sustainably.  The organisations that will thrive in the next decade are those that treat responsible AI as a discipline with structure, clarity, and accountability, rather than a slogan.

Read more here.

Thursday, 30 April 2026

Data Governance explained

I had a fun‑packed day in Manchester a few weeks ago talking on my favourite topics: Data Governance, AI Governance and Microsoft Purview. Watch my recording here to help you get started.



Sunday, 26 April 2026

Inside Microsoft’s Responsible AI Framework: What Matters for Data Governance

Microsoft’s updated Responsible AI framework represents a significant evolution in how organisations are expected to approach AI oversight. While the principles themselves (fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability) are familiar, the operational expectations behind them have deepened. This isn’t a philosophical document; it’s a practical guide for embedding responsibility into the lifecycle of AI systems.

For data governance leaders, the most important shift is the emphasis on traceability. The framework makes it clear that organisations must be able to explain how data flows into models, how those models behave, and how decisions are made. This requires robust lineage, versioning, and monitoring. Without these, transparency becomes impossible.
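
As a sketch of what the minimum viable lineage record might look like (the field names here are my own, for illustration):

```python
# Minimal lineage record that makes "how did data flow into this model?"
# answerable. Field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class LineageEdge:
    source_dataset: str   # a governed table or feature set
    dataset_version: str  # pin the exact version the model consumed
    model_name: str
    model_version: str
    recorded_at: datetime


edge = LineageEdge(
    source_dataset="sales.customer_features",
    dataset_version="2026-04-01",
    model_name="churn_predictor",
    model_version="1.4.2",
    recorded_at=datetime.now(timezone.utc),
)
# Stored centrally, edges like this turn transparency questions into queries.
```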

Another critical element is human oversight. The framework reinforces that AI should augment, not replace, human judgement. This means governance must ensure that humans remain in the loop for high‑impact decisions, and that they have the context needed to interpret model outputs. Oversight is not a checkbox; it is a design requirement.

The framework also highlights the importance of data quality and representativeness. Poor data leads to poor models, and poor models lead to poor outcomes. Governance must ensure that training data is accurate, relevant, and free from harmful bias. This is where stewardship, classification, and quality controls become essential.

Finally, the framework calls for ongoing monitoring, not one‑time validation. Models evolve, data changes, and risks shift. Governance must be continuous, adaptive, and embedded into operational workflows.

Tracing my career journey through my blog

I was looking at my blog stats this morning and was really interested to see the geographical spread. I started writing my blog in 2011 and it has been read by 1.18 million. I wanted to record all the technical tips and technology advancements which were useful to me and might be of use to others. I started writing on SQL Server, and the blog has migrated with me throughout my career: through architecture, my PhD research, and, over the last few years, mostly Data Governance, Microsoft Purview, AI Governance and Microsoft Fabric.


I asked Copilot to share some interesting thoughts about my journey for my blog and here is what it thought.

Your blog reveals a journey defined by intellectual curiosity, data‑driven leadership, and a distinctive narrative voice. Here are five evidence‑based, genuinely interesting aspects of that journey.

1. You frame data leadership as a process of unlearning, not just learning

Your post “What data leaders must unlearn to lead in the Age of AI” argues that modern leadership requires shedding outdated assumptions, such as believing governance slows innovation or that documentation equals understanding. This reframing positions you as a thought leader challenging entrenched industry norms. 

2. You consistently connect research, industry practice, and philosophical curiosity

Your blog tagline “Chaos, complexity, curiosity and database systems” captures a rare blend: deep technical expertise paired with a reflective, almost philosophical lens on data systems. This fusion shapes your writing style and differentiates your professional voice. 

3. You document the shift from AI experimentation to AI industrialisation with governance at the centre

In your coverage of the Gartner Data & Analytics Summit, you highlight how governance has moved from a compliance checkbox to the engine of AI ROI. This shows your role as an interpreter of industry change, translating large‑scale trends into practical insights for practitioners. 

4. Your journey is grounded in both academic achievement and community leadership

Across external references, you are consistently described as a Microsoft Data Platform MVP, a PhD researcher recognised with the AOUG Will Swann Award, and a founder/organiser of Data Toboggan. This positions your blog as the narrative thread connecting your academic, professional, and community contributions. 

5. Your posts reveal a long‑standing commitment to making governance practical, accessible, and embedded

Whether discussing AI oversight, lineage, behavioural metadata, or Purview governance models, your writing emphasises practical implementation over theory. You repeatedly advocate for governance that is embedded, automated, and literacy‑driven, showing a consistent philosophy across years of posts. 





Wednesday, 22 April 2026

SQLBits 2026 Day 1

SQLBits in Wales is happening this week. We have held the conference at the ICC before, so it is all very familiar. The keynote was introduced by Simon Sabin before it moved into an in-depth session on the future of Microsoft One SQL: from on-premises to Azure and into Microsoft Fabric. It delved into the unified, AI-ready relational database that powers modernization and next-gen AI apps. SQL delivers consistency, performance, and some innovative features. The keynote speakers were Bob Ward, Anna Hoffman, Priya Sathy and Shiva Gurumurthy.

Data and AI are changing the world, and data is the fuel that powers AI. Microsoft SQL is one consistent SQL for the era of AI. It is enterprise ready and has evolved over the decades into an industry-leading, scalable, dynamic platform with high availability and best-in-class price performance.

The keynote delved into migrate and modernize, the need for cloud-native AI apps, and the need for unified data platforms.

Highlights of new features:

- Azure SQL Server Managed Instance GA
- SQL Server 2025 on Azure Virtual Machine GA
- Azure Accelerate for Databases announced: aka.ms/modernizedatabases
- Azure SQL Database Hyperscale GA: you pay only for cores and storage, with no license fee
- Mirroring from Microsoft SQL to Fabric GA
- SQL Database in Fabric GA
- Database Hub in Fabric announced, with fleet management, observability and database agents

The depth and breadth of SQL Server has grown substantially over the years, and it supports many engine types for holistic use: graph, vector, columnar, document, spatial, key-value, hierarchical, in-memory and ledger.

Many sessions today delved into SQL migrations in various forms. There was a fun session comparing Databricks and Fabric. There are many differences, and business needs and in-house technology-stack skills often influence the choice of technology.

The Azure SQL Database Hyperscale session explained that Hyperscale is about the architecture design, not the engine. It is a truly distributed, cloud-native architecture with boundless storage that grows automatically, paired with two-speed elastic compute, and it uses SQL Server instances as caches.

More sessions for day 2 tomorrow. 




Saturday, 11 April 2026

GRAICE Foundation Training Principles of Responsible AI Governance

I am pleased to share that I have completed the GRAICE Foundation Training Principles of Responsible AI Governance and am certified for foundational competency in GRAICE, Humanity's Operating System for AI.

GRAICE is a robust governance operating system geared toward instilling confidence and accountability in AI systems on a global scale. It has six foundational values, seven operational pillars and a three-tier assurance model.



Wednesday, 8 April 2026

Operationalising Responsible AI: What Microsoft Purview Actually Enables and How to Use It Well

The conversation around Responsible AI is accelerating, but many organisations still struggle with the same practical gap: How do we turn principles into operational behaviour inside real systems?  
Frameworks like GRAICE™ and Microsoft’s Responsible AI Standard set the expectations, but they don’t tell you how to wire those expectations into your data estate.

This is where Microsoft Purview plays a meaningful, but often misunderstood, role. Purview is not an end‑to‑end Responsible AI lifecycle platform. It doesn’t manage model development, evaluation, or fairness testing. What it does provide is the governance and security foundation that ensures AI systems interact with enterprise data safely, consistently, and in line with organisational policy.

Below are three actionable ways organisations can use Purview to strengthen Responsible AI practice without overstating its scope.

1. Use Purview to establish data boundaries for AI systems
AI systems are only as responsible as the data they can see. Purview’s classification, sensitivity labels, and access policies give organisations the ability to:

- identify sensitive or regulated data  
- prevent AI systems (including Copilot and internal agents) from accessing inappropriate content  
- enforce information barriers and least‑privilege access  
- ensure data minimisation by design  

Why this matters:  
GRAICE™ and Microsoft’s RAI Standard both emphasise data minimisation, privacy, and controlled access. Purview doesn’t enforce RAI principles directly — but it does enforce the data boundaries those principles depend on.

Action:  
Map your AI use cases to Purview sensitivity labels and access policies. Treat this as a precondition for deploying any AI capability.
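
One way to make that mapping executable is sketched below. This is a hypothetical policy check, not a Purview API; the label taxonomy follows a common default, and the use-case ids and ceilings are assumptions.

```python
# Hypothetical sketch: declare the maximum sensitivity label each AI use
# case may touch, then check access against it. Not a Purview API.
LABEL_ORDER = ["Public", "General", "Confidential", "Highly Confidential"]

AI_USE_CASE_CEILINGS = {
    "support-copilot": "General",
    "finance-agent": "Confidential",
}


def access_allowed(use_case: str, document_label: str) -> bool:
    """Deny by default; allow only labels at or below the declared ceiling."""
    ceiling = AI_USE_CASE_CEILINGS.get(use_case)
    if ceiling is None:
        return False
    return LABEL_ORDER.index(document_label) <= LABEL_ORDER.index(ceiling)


assert access_allowed("support-copilot", "General")
assert not access_allowed("support-copilot", "Confidential")
```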

2. Use Purview’s lineage and scanning to understand AI‑related data risk
Purview lineage is often misunderstood as “AI lifecycle traceability”. It isn’t.  
But it is a powerful mechanism for:

- understanding where sensitive data originates  
- seeing how data flows across systems AI may interact with  
- identifying shadow data sources that could introduce risk  
- supporting DSPM (Data Security Posture Management) for AI workloads  

Why this matters:  
Responsible AI requires organisations to understand the provenance, quality, and risk profile of the data AI systems rely on. Purview provides visibility into the data estate, not the model estate — and that visibility is essential for any RAI programme.

Action:  
Enable automated scanning and lineage for all data sources used by AI applications. Use lineage to identify high‑risk flows before enabling AI access.
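
A first step is simply enumerating what Purview already knows about. The sketch below uses the azure-purview-scanning SDK to list registered data sources; verify the method names against the current SDK documentation before relying on them, as this library evolves.

```python
# Sketch: enumerate registered data sources with azure-purview-scanning.
# Replace <account-name>; check method names against current SDK docs.
from azure.identity import DefaultAzureCredential
from azure.purview.scanning import PurviewScanningClient

client = PurviewScanningClient(
    endpoint="https://<account-name>.scan.purview.azure.com",
    credential=DefaultAzureCredential(),
)

# Any source an AI workload reads that is missing from this inventory is
# a shadow data source and a lineage blind spot.
for source in client.data_sources.list_all():
    print(source["name"], source["kind"])
```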

3. Use Purview’s AI usage governance to monitor and control how AI behaves with your data
The newest Purview capabilities focus on AI usage governance — including Copilot and internal AI agents. This includes:

- monitoring AI interactions with sensitive data  
- detecting risky prompts or behaviours  
- applying data‑loss prevention controls to AI usage  
- generating audit trails for compliance and oversight  

Why this matters:  
Responsible AI is not just about how models are built — it’s about how they are used. Purview provides the observability and guardrails needed to ensure AI systems behave safely in production.

Action:  
Enable Purview’s AI usage governance features for all enterprise AI tools. Treat AI usage logs as part of your RAI assurance evidence.
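
As a sketch of the assurance side, exported AI interaction logs can be screened for risky prompts. This is hypothetical post-processing of a log export, not a Purview API, and the event field names are assumptions about the export format.

```python
# Hypothetical sketch: screen exported AI interaction logs (a JSON array)
# for risky prompts, feeding RAI assurance evidence. Not a Purview API.
import json

RISKY_MARKERS = {"password", "credit card", "access key"}


def flag_risky_interactions(log_path: str) -> list[dict]:
    """Return AI interactions whose prompts mention sensitive markers."""
    with open(log_path) as f:
        events = json.load(f)
    return [
        event for event in events
        if any(marker in event.get("prompt", "").lower() for marker in RISKY_MARKERS)
    ]


# Flagged events become audit-trail entries for compliance review.
```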

In summary, Purview does not operationalise Responsible AI on its own, and it shouldn’t be positioned as a lifecycle governance platform.
What it does provide is the data governance, security, and AI‑usage oversight that Responsible AI frameworks rely on.

If you use Purview to:

1. Set data boundaries for AI  
2. Understand data risk and provenance  
3. Monitor and govern AI usage  

you create the conditions in which Responsible AI can actually function.