Welcome

Passionately curious about Data, Databases and Systems Complexity. Data is ubiquitous, the database universe is dichotomous (structured and unstructured), expanding and complex. Find my Database Research at SQLToolkit.co.uk. Microsoft Data Platform MVP

"The important thing is not to stop questioning. Curiosity has its own reason for existing" Einstein



Sunday, 26 April 2026

Inside Microsoft’s Responsible AI Framework: What Matters for Data Governance

Microsoft’s updated Responsible AI framework represents a significant evolution in how organisations are expected to approach AI oversight. While the principles themselves (fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability) are familiar, the operational expectations behind them have deepened. This isn’t a philosophical document; it’s a practical guide for embedding responsibility into the lifecycle of AI systems.

For data governance leaders, the most important shift is the emphasis on traceability. The framework makes it clear that organisations must be able to explain how data flows into models, how those models behave, and how decisions are made. This requires robust lineage, versioning, and monitoring. Without these, transparency becomes impossible.

Another critical element is human oversight. The framework reinforces that AI should augment, not replace, human judgement. This means governance must ensure that humans remain in the loop for high‑impact decisions, and that they have the context needed to interpret model outputs. Oversight is not a checkbox; it is a design requirement.

The framework also highlights the importance of data quality and representativeness. Poor data leads to poor models, and poor models lead to poor outcomes. Governance must ensure that training data is accurate, relevant, and free from harmful bias. This is where stewardship, classification, and quality controls become essential.

Finally, the framework calls for ongoing monitoring, not one‑time validation. Models evolve, data changes, and risks shift. Governance must be continuous, adaptive, and embedded into operational workflows.

Tracing my career journey through my blog

I was looking at my blog stats this morning and was really interested to see the geographical spread. I started writing my blog in 2011, and it has now been read by 1.18 million. I wanted to record the technical tips and technology advancements that were useful to me and might be of use to others. I started writing on SQL Server, and the blog has migrated with me throughout my career through architecture and my PhD research; over the last few years I have mostly been writing on Data Governance, Microsoft Purview, AI Governance and Microsoft Fabric.


I asked Copilot to share some interesting thoughts about my journey for my blog and here is what it thought.

Your blog reveals a journey defined by intellectual curiosity, data‑driven leadership, and a distinctive narrative voice. Here are five evidence‑based, genuinely interesting aspects of that journey.

1. You frame data leadership as a process of unlearning, not just learning

Your post “What data leaders must unlearn to lead in the Age of AI” argues that modern leadership requires shedding outdated assumptions, such as believing governance slows innovation or that documentation equals understanding. This reframing positions you as a thought leader challenging entrenched industry norms. 

2. You consistently connect research, industry practice, and philosophical curiosity

Your blog tagline “Chaos, complexity, curiosity and database systems” captures a rare blend: deep technical expertise paired with a reflective, almost philosophical lens on data systems. This fusion shapes your writing style and differentiates your professional voice. 

3. You document the shift from AI experimentation to AI industrialisation with governance at the centre

In your coverage of the Gartner Data & Analytics Summit, you highlight how governance has moved from a compliance checkbox to the engine of AI ROI. This shows your role as an interpreter of industry change, translating large‑scale trends into practical insights for practitioners. 

4. Your journey is grounded in both academic achievement and community leadership

Across external references, you are consistently described as a Microsoft Data Platform MVP, a PhD researcher recognised with the AOUG Will Swann Award, and a founder/organiser of Data Toboggan. This positions your blog as the narrative thread connecting your academic, professional, and community contributions. 

5. Your posts reveal a long‑standing commitment to making governance practical, accessible, and embedded

Whether discussing AI oversight, lineage, behavioural metadata, or Purview governance models, your writing emphasises practical implementation over theory. You repeatedly advocate for governance that is embedded, automated, and literacy‑driven, showing a consistent philosophy across years of posts. 





Wednesday, 22 April 2026

SQLBits 2026 Day 1

SQLBits in Wales is happening this week. We have held the conference at the ICC before, so it is all very familiar. The keynote was introduced by Simon Sabin before moving into an in-depth session on the future of Microsoft One SQL, from on-premises to Azure and into Microsoft Fabric. It delved into the unified, AI-ready relational database that powers modernization and next-gen AI apps. SQL delivers consistency, performance, and some innovative features. The keynote speakers were Bob Ward, Anna Hoffman, Priya Sathy and Shiva Gurumurthy.

Data and AI are changing the world; data is the fuel that powers AI. Microsoft SQL is one consistent SQL for the era of AI. It is enterprise ready, and has evolved over the decades into an industry-leading, scalable, dynamic platform with high availability and best-in-class price performance.

The keynote delved into migrate and modernize, the need for cloud-native AI apps, and the need for unified data platforms.

Highlights of new features:

- Azure SQL Server Managed Instance GA
- SQL Server 2025 on Azure Virtual Machine GA
- Azure Accelerate for Databases announced (aka.ms/modernizedatabases)
- Azure SQL Database Hyperscale GA: you pay only for cores and storage, with no license fee
- Mirroring from Microsoft SQL to Fabric GA
- SQL Database in Fabric GA

Database Hub in Fabric was announced, with fleet management, observability and database agents.

The depth and breadth of SQL Server has grown substantially over the years, and it supports many engine types for holistic use: graph, vector, columnar, document, spatial, key-value, hierarchical, in-memory and ledger.

Many sessions today delved into SQL migrations in various forms. There was a fun session comparing Databricks and Fabric. There are many differences, and business needs, together with in-house technology-stack skills, often influence the choice of technology.

The Azure SQL Database Hyperscale session explained that Hyperscale is about the architecture design, not the engine. It is a truly distributed, cloud-native architecture with boundless storage that grows automatically and elastic compute that scales at two speeds. It uses SQL Server instances as caches.

More sessions for day 2 tomorrow. 




Wednesday, 8 April 2026

Operationalising Responsible AI: What Microsoft Purview Actually Enables and How to Use It Well

The conversation around Responsible AI is accelerating, but many organisations still struggle with the same practical gap: how do we turn principles into operational behaviour inside real systems?
Frameworks like GRAICE™ and Microsoft’s Responsible AI Standard set the expectations, but they don’t tell you how to wire those expectations into your data estate.

This is where Microsoft Purview plays a meaningful, but often misunderstood, role. Purview is not an end‑to‑end Responsible AI lifecycle platform. It doesn’t manage model development, evaluation, or fairness testing. What it does provide is the governance and security foundation that ensures AI systems interact with enterprise data safely, consistently, and in line with organisational policy.

Below are three actionable ways organisations can use Purview to strengthen Responsible AI practice without overstating its scope.

1. Use Purview to establish data boundaries for AI systems
AI systems are only as responsible as the data they can see. Purview’s classification, sensitivity labels, and access policies give organisations the ability to:

- identify sensitive or regulated data  
- prevent AI systems (including Copilot and internal agents) from accessing inappropriate content  
- enforce information barriers and least‑privilege access  
- ensure data minimisation by design  

Why this matters:  
GRAICE™ and Microsoft’s RAI Standard both emphasise data minimisation, privacy, and controlled access. Purview doesn’t enforce RAI principles directly — but it does enforce the data boundaries those principles depend on.

Action:  
Map your AI use cases to Purview sensitivity labels and access policies. Treat this as a precondition for deploying any AI capability.
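That mapping exercise can be sketched in code. The sketch below is purely illustrative: the label names mirror common sensitivity-label tiers, and the use cases and ceilings are hypothetical, not real Purview objects or APIs. The idea is that every AI use case gets an explicit ceiling, and anything unmapped is denied by default.

```python
# Illustrative sketch (hypothetical labels and use cases): each AI use case
# is mapped to the highest sensitivity label it may read, and access requests
# are checked against that ceiling before the capability is enabled.

# Sensitivity labels ordered from least to most sensitive.
LABEL_ORDER = ["Public", "General", "Confidential", "Highly Confidential"]

# Hypothetical mapping: the maximum label each AI use case may access.
USE_CASE_CEILING = {
    "marketing-copilot": "General",
    "hr-assistant": "Confidential",
    "finance-agent": "Confidential",
}

def is_access_allowed(use_case: str, data_label: str) -> bool:
    """Allow access only if the data's label is at or below the use case's ceiling."""
    ceiling = USE_CASE_CEILING.get(use_case)
    if ceiling is None:
        return False  # unmapped use cases are denied by default (least privilege)
    return LABEL_ORDER.index(data_label) <= LABEL_ORDER.index(ceiling)

print(is_access_allowed("marketing-copilot", "Confidential"))  # False
print(is_access_allowed("hr-assistant", "General"))            # True
```

The deny-by-default branch is the important design choice: a use case that hasn't been through the mapping exercise shouldn't see any data at all.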

2. Use Purview’s lineage and scanning to understand AI‑related data risk
Purview lineage is often misunderstood as “AI lifecycle traceability”. It isn’t.  
But it is a powerful mechanism for:

- understanding where sensitive data originates  
- seeing how data flows across systems AI may interact with  
- identifying shadow data sources that could introduce risk  
- supporting DSPM (Data Security Posture Management) for AI workloads  

Why this matters:  
Responsible AI requires organisations to understand the provenance, quality, and risk profile of the data AI systems rely on. Purview provides visibility into the data estate, not the model estate — and that visibility is essential for any RAI programme.

Action:  
Enable automated scanning and lineage for all data sources used by AI applications. Use lineage to identify high‑risk flows before enabling AI access.
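The "identify high-risk flows" step is essentially an upstream walk over the lineage graph. The sketch below is a minimal, hypothetical model of that walk: the asset names and classifications are invented, and a real implementation would read lineage and scan results from Purview rather than from dictionaries.

```python
from collections import deque

# Hypothetical lineage graph: each asset maps to its upstream sources.
LINEAGE = {
    "ai_training_set": ["crm_export", "web_logs"],
    "crm_export": ["crm_db"],
    "web_logs": [],
    "crm_db": [],
}

# Hypothetical classifications discovered by scanning.
SENSITIVE = {"crm_db": "Personal Data"}

def upstream_risks(asset: str) -> dict:
    """Walk lineage upstream and collect any sensitive sources feeding the asset."""
    risks, seen, queue = {}, {asset}, deque([asset])
    while queue:
        current = queue.popleft()
        for parent in LINEAGE.get(current, []):
            if parent in seen:
                continue
            seen.add(parent)
            if parent in SENSITIVE:
                risks[parent] = SENSITIVE[parent]
            queue.append(parent)
    return risks

print(upstream_risks("ai_training_set"))  # {'crm_db': 'Personal Data'}
```

Run against a candidate AI dataset, a non-empty result means sensitive data reaches the AI workload indirectly, which is exactly the kind of flow to review before enabling access.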

3. Use Purview’s AI usage governance to monitor and control how AI behaves with your data
The newest Purview capabilities focus on AI usage governance — including Copilot and internal AI agents. This includes:

- monitoring AI interactions with sensitive data  
- detecting risky prompts or behaviours  
- applying data‑loss prevention controls to AI usage  
- generating audit trails for compliance and oversight  

Why this matters:  
Responsible AI is not just about how models are built — it’s about how they are used. Purview provides the observability and guardrails needed to ensure AI systems behave safely in production.

Action:  
Enable Purview’s AI usage governance features for all enterprise AI tools. Treat AI usage logs as part of your RAI assurance evidence.
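Treating usage logs as assurance evidence usually starts with filtering interactions that touched sensitive data. The sketch below uses an invented, simplified log shape (not the real audit export schema) to show the filtering step that would feed an RAI evidence pack.

```python
# Hypothetical AI interaction log entries, loosely shaped like an audit export.
AI_AUDIT_LOG = [
    {"user": "alice", "tool": "copilot", "labels": ["General"]},
    {"user": "bob", "tool": "copilot", "labels": ["Highly Confidential"]},
    {"user": "carol", "tool": "sales-agent", "labels": []},
]

# Labels whose appearance in an interaction warrants review.
RISKY_LABELS = {"Confidential", "Highly Confidential"}

def flag_risky_interactions(log):
    """Keep only interactions that touched data carrying a risky label."""
    return [entry for entry in log if RISKY_LABELS.intersection(entry["labels"])]

flagged = flag_risky_interactions(AI_AUDIT_LOG)
print([entry["user"] for entry in flagged])  # ['bob']
```

In practice the flagged subset, not the raw log, is what oversight reviewers and auditors need to see.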

In summary, Purview does not operationalise Responsible AI on its own, and it shouldn’t be positioned as a lifecycle governance platform.
What it does provide is the data governance, security, and AI‑usage oversight that Responsible AI frameworks rely on.

If you use Purview to:

1. Set data boundaries for AI  
2. Understand data risk and provenance  
3. Monitor and govern AI usage  

you create the conditions in which Responsible AI can actually function.


Why Data Catalogues Fail (And How Purview Is Quietly Fixing the Industry’s Blind Spots)

Most data catalogues fail for a simple reason: they assume that documentation alone creates understanding. It doesn’t. A catalogue full of stale metadata, incomplete lineage, and inconsistent tagging is worse than useless: it creates a false sense of confidence. Many organisations have learned this the hard way, investing heavily in catalogues that quickly became digital graveyards.

Purview succeeds where others fail because it treats the catalogue as part of a governance ecosystem, not a standalone tool. Lineage, classification, access policies, and data maps are not optional extras. They are the core of the experience. This integrated approach ensures that metadata is accurate, automated, and actionable.

Another blind spot Purview addresses is operational relevance. Traditional catalogues focus on documentation, whereas Purview focuses on control. It doesn’t just describe data; it also governs it. This shift from passive to active metadata is what makes Purview viable at enterprise scale.

Purview also excels in hybrid and multi‑cloud environments, where many catalogues struggle. Its connectors, scanning capabilities, and policy enforcement mechanisms are designed for real‑world estates, not idealised architectures.

Purview is integrated with Fabric, which positions it as the governance backbone of the Microsoft ecosystem. As organisations consolidate their data platforms, Purview becomes the source of truth that ties everything together.



Saturday, 4 April 2026

GCRAI and the Rise of GRAICE™: A New Global Framework for Responsible AI Governance

The global conversation around responsible AI has been dominated for years by national strategies, corporate principles, and academic frameworks. But the launch of the Global Council for Responsible AI (GCRAI) and its GRAICE™ framework marks a shift toward something far more ambitious: a unified, cross‑sector, cross‑industry operating system for AI governance. Unlike many initiatives that focus on high‑level ethics, GCRAI positions itself as a mechanism for operationalising responsibility at scale. It’s an attempt to move responsible AI from aspiration to enforceable practice.

What makes GCRAI notable is its global footprint. With representation across dozens of countries and a network of ambassadors, it aims to create a governance ecosystem that transcends borders and industries. This matters because AI risk is not localised. Models trained in one region influence decisions in another. Data flows across jurisdictions. And the consequences of AI misuse rarely stay within organisational boundaries. A global framework is not just desirable, it is necessary.

The GRAICE™ framework, unveiled at Davos, is positioned as “humanity’s operating system for AI.” While the branding is bold, the intent is clear: create a standard that is actionable, measurable, and adaptable. GRAICE™ focuses on transparency, security, accountability, and human‑centric design. But what sets it apart is its emphasis on measurable compliance. Many frameworks articulate principles; GRAICE™ attempts to define behaviours. It seeks to bridge the gap between what organisations say about AI and what they actually do.

Running alongside GCRAI is the G.R.A.C.E. Global Council for AI, which articulates a complementary set of principles centred on human‑centred AI. Their pillars emphasise mission, vision, and the balance between technology, ethics, and humanity. While still evolving, the G.R.A.C.E. principles reinforce the idea that responsible AI is not just a technical discipline but a societal one. They highlight the need for AI systems that enhance human capability rather than diminish it, and for governance that protects people as much as it protects organisations.

Together, GCRAI and G.R.A.C.E. represent a growing recognition that responsible AI cannot be solved by isolated efforts. Organisations need frameworks that are interoperable, globally recognised, and grounded in real‑world practice. They need standards that can be implemented, audited, and adapted as technology evolves. And they need governance models that reflect the complexity of modern AI systems: systems that learn continuously, behave unpredictably, and operate across boundaries.

For data and AI leaders, the emergence of GRAICE™ is a signal. The era of voluntary, principle‑only responsible AI is ending. The next phase is about operationalisation, measurement, and accountability. Whether organisations adopt GRAICE™ directly or use it as a benchmark, its influence will shape how responsible AI is defined, governed, and enforced in the years ahead. This is not just another framework but part of a global shift toward responsible AI as a shared, enforceable standard.

G.R.A.C.E. is
GROUNDED
RESPONSIBLE
AUTHENTIC
COMPASSION
ETHICAL

Every decision involving AI should align with moral truth, respect for life, and integrity of purpose through moral alignment.




https://www.graceglobalcouncil.com/
https://gcrai.ai/

Wednesday, 1 April 2026

How Responsible AI frameworks shape the future of AI Governance

The responsible AI landscape is shifting fast. Organisations are no longer looking for a single framework to rule them all; they’re looking for interoperability, clarity, and practical pathways to operational maturity. Two frameworks are increasingly shaping that conversation: GRAICE™, the new global framework from the Global Council for Responsible AI (GCRAI), and Microsoft’s Responsible AI Standard, one of the most established engineering‑level governance standards in the industry.

These frameworks are often discussed in the same breath, but they operate at different layers of the governance stack. Understanding that distinction is essential — because it’s precisely what makes them complementary rather than competitive.

GRAICE™: A Global Meta‑Framework for Cross‑Sector Alignment

GRAICE™ is designed as a global, cross‑sector framework. Its purpose is not to replace organisational or vendor standards, but to provide:

- a shared global vocabulary for responsible AI  
- a principles‑level structure that governments, industry, academia, and civil society can align to  
- a meta‑framework that organisations can map their internal standards against  
- a societal‑level lens that sits above implementation detail  

GRAICE™ is intentionally broad. It sets direction, coherence, and expectations at a global level — the “north star” rather than the engineering manual.

Microsoft’s Responsible AI Standard: Operational Discipline for Real Systems

Microsoft’s Responsible AI Standard sits at a different layer: the practical, engineering‑focused layer where teams build, evaluate, deploy, and monitor AI systems.

It provides:

- detailed lifecycle requirements  
- controls for data, evaluation, transparency, and oversight  
- guidance for product teams and engineering functions  
- mechanisms for translating principles into day‑to‑day practice  

Where GRAICE™ is global and principle‑driven, Microsoft’s standard is specific, actionable, and operational.

Complementary by Design

This is the critical point:  
GRAICE™ does not replace Microsoft’s Responsible AI Standard — or any other organisational framework.

Instead, the two frameworks operate in a layered model:

- GRAICE™ → global alignment, societal expectations, cross‑sector coherence  
- Microsoft RAI Standard → engineering discipline, implementation controls, operational maturity  

Together, they create a governance ecosystem that is:

- globally relevant  
- locally actionable  
- technically grounded  
- aligned with societal expectations  

This layered approach reflects where responsible AI is heading: ecosystems of interoperable frameworks, not a single universal standard.

Where They Converge

Despite their different scopes, both frameworks reinforce core responsible AI expectations:

- transparency as a foundation for trust  
- accountability and human oversight  
- continuous monitoring of evolving systems  
- responsible AI as an ongoing operational commitment  

These shared foundations show a field moving toward coherence, even when frameworks serve different purposes.

The Real Opportunity: Use Them Together

For organisations, the value lies in the combination:

- GRAICE™ provides the global direction and cross‑sector alignment.  
- Microsoft’s Responsible AI Standard provides the operational machinery to implement responsible AI in real systems.  

Using both gives organisations a governance model that is both strategically aligned and practically executable — exactly what mature AI governance requires.