Welcome

Passionately curious about Data, Databases and Systems Complexity. Data is ubiquitous; the database universe is dichotomous (structured and unstructured), expanding and complex. Find my Database Research at SQLToolkit.co.uk. Microsoft Data Platform MVP

"The important thing is not to stop questioning. Curiosity has its own reason for existing." Einstein



Sunday, 26 April 2026

Tracing my career journey through my blog

I was looking at my blog stats this morning and was really interested to see the geographical spread. I started writing my blog in 2011 and it has had 1.18 million reads. I wanted to record all the technical tips and technology advancements which were useful to me and might be of use to others. I started writing on SQL Server, and the blog has evolved with me throughout my career: through architecture, my PhD research and, over the last few years, mostly Data Governance, Microsoft Purview, AI Governance and Microsoft Fabric.


I asked Copilot to share some interesting thoughts about my journey for my blog and here is what it thought.

Your blog reveals a journey defined by intellectual curiosity, data‑driven leadership, and a distinctive narrative voice. Here are five evidence‑based, genuinely interesting aspects of that journey.

1. You frame data leadership as a process of unlearning, not just learning

Your post “What data leaders must unlearn to lead in the Age of AI” argues that modern leadership requires shedding outdated assumptions, such as believing governance slows innovation or that documentation equals understanding. This reframing positions you as a thought leader challenging entrenched industry norms. 

2. You consistently connect research, industry practice, and philosophical curiosity

Your blog tagline “Chaos, complexity, curiosity and database systems” captures a rare blend: deep technical expertise paired with a reflective, almost philosophical lens on data systems. This fusion shapes your writing style and differentiates your professional voice. 

3. You document the shift from AI experimentation to AI industrialisation with governance at the centre

In your coverage of the Gartner Data & Analytics Summit, you highlight how governance has moved from a compliance checkbox to the engine of AI ROI. This shows your role as an interpreter of industry change, translating large‑scale trends into practical insights for practitioners. 

4. Your journey is grounded in both academic achievement and community leadership

Across external references, you are consistently described as a Microsoft Data Platform MVP, a PhD researcher recognised with the AOUG Will Swann Award, and a founder/organiser of Data Toboggan. This positions your blog as the narrative thread connecting your academic, professional, and community contributions. 

5. Your posts reveal a long‑standing commitment to making governance practical, accessible, and embedded

Whether discussing AI oversight, lineage, behavioural metadata, or Purview governance models, your writing emphasises practical implementation over theory. You repeatedly advocate for governance that is embedded, automated, and literacy‑driven, showing a consistent philosophy across years of posts. 





Wednesday, 22 April 2026

SQLBits 2026 Day 1


SQLBits in Wales is happening this week. We have held the conference at the ICC before, so it is all very familiar. The keynote was introduced by Simon Sabin before it moved into an in-depth session on the future of Microsoft One SQL, from on-premises to Azure and into Microsoft Fabric. It delved into the unified, AI-ready relational database that powers modernization and next-gen AI apps. SQL delivers consistency, performance, and some innovative features. The keynote speakers were Bob Ward, Anna Hoffman, Priya Sathy and Shiva Gurumurthy.

Data and AI are changing the world; data is the fuel that powers AI. Microsoft SQL is one consistent SQL for the era of AI. It is enterprise ready and has evolved over the decades into an industry-leading, scalable, dynamic platform with high availability and best-in-class price-performance.

The keynote delved into migrate and modernize, the need for cloud-native AI apps and the need for unified data platforms.

Highlights of new features 

Azure SQL Server Managed Instance GA
SQL Server 2025 on Azure Virtual Machine GA
Azure Accelerate for Databases announced
aka.ms/modernizedatabases

Azure SQL Database Hyperscale GA: you pay only for cores and storage, with no licence fee.

Mirroring from Microsoft SQL to Fabric GA
SQL Database in Fabric GA

Database Hub in Fabric was announced, with fleet management, observability and database agents.

The depth and breadth of SQL Server has grown substantially over the years, and it supports many engine types for holistic use: graph, vector, columnar, document, spatial, key-value, hierarchical, in-memory and ledger.

Many sessions today delved into SQL migrations in various forms. There was a fun session comparing Databricks and Fabric. There are many differences, and business needs and in-house skills across the technology stack often influence the choice of technology.

The Azure SQL Hyperscale session explained that Hyperscale is about the architecture design, not the engine. It is a truly distributed, cloud-native architecture with boundless storage that grows automatically and elastic compute at two speeds, using SQL Server instances as caches.

More sessions for day 2 tomorrow. 




Wednesday, 8 April 2026

Why Data Catalogues Fail (And How Purview Is Quietly Fixing the Industry’s Blind Spots)

Most data catalogues fail for a simple reason: they assume that documentation alone creates understanding. It doesn't. A catalogue full of stale metadata, incomplete lineage, and inconsistent tagging is worse than useless: it creates a false sense of confidence. Many organisations have learned this the hard way, investing heavily in catalogues that quickly became digital graveyards.

Purview succeeds where others fail because it treats the catalogue as part of a governance ecosystem, not a standalone tool. Lineage, classification, access policies, and data maps are not optional extras. They are the core of the experience. This integrated approach ensures that metadata is accurate, automated, and actionable.

Another blind spot Purview addresses is operational relevance. Traditional catalogues focus on documentation, whereas Purview focuses on control. It doesn't just describe data; it governs it. This shift from passive to active metadata is what makes Purview viable at enterprise scale.

Purview also excels in hybrid and multi‑cloud environments, where many catalogues struggle. Its connectors, scanning capabilities, and policy enforcement mechanisms are designed for real‑world estates, not idealised architectures.

Purview is integrated with Fabric, which positions it as the governance backbone of the Microsoft ecosystem. As organisations consolidate their data platforms, Purview becomes the source of truth that ties everything together.



Saturday, 4 April 2026

GCRAI and the Rise of GRAICE™: A New Global Framework for Responsible AI Governance

The global conversation around responsible AI has been dominated for years by national strategies, corporate principles, and academic frameworks. But the launch of the Global Council for Responsible AI (GCRAI) and its GRAICE™ framework marks a shift toward something far more ambitious: a unified, cross‑sector, cross‑industry operating system for AI governance. Unlike many initiatives that focus on high‑level ethics, GCRAI positions itself as a mechanism for operationalising responsibility at scale. It’s an attempt to move responsible AI from aspiration to enforceable practice.

What makes GCRAI notable is its global footprint. With representation across dozens of countries and a network of ambassadors, it aims to create a governance ecosystem that transcends borders and industries. This matters because AI risk is not localised. Models trained in one region influence decisions in another. Data flows across jurisdictions. And the consequences of AI misuse rarely stay within organisational boundaries. A global framework is not just desirable, it is necessary.

The GRAICE™ framework, unveiled at Davos, is positioned as “humanity’s operating system for AI.” While the branding is bold, the intent is clear: create a standard that is actionable, measurable, and adaptable. GRAICE™ focuses on transparency, security, accountability, and human‑centric design. But what sets it apart is its emphasis on measurable compliance. Many frameworks articulate principles; GRAICE™ attempts to define behaviours. It seeks to bridge the gap between what organisations say about AI and what they actually do.

Running alongside GCRAI is the G.R.A.C.E. Global Council for AI, which articulates a complementary set of principles centred on human‑centred AI. Its pillars emphasise mission, vision, and the balance between technology, ethics, and humanity. While still evolving, the G.R.A.C.E. principles reinforce the idea that responsible AI is not just a technical discipline but a societal one. They highlight the need for AI systems that enhance human capability rather than diminish it, and for governance that protects people as much as it protects organisations.

Together, GCRAI and G.R.A.C.E. represent a growing recognition that responsible AI cannot be solved by isolated efforts. Organisations need frameworks that are interoperable, globally recognised, and grounded in real‑world practice. They need standards that can be implemented, audited, and adapted as technology evolves. And they need governance models that reflect the complexity of modern AI systems: systems that learn continuously, behave unpredictably, and operate across boundaries.

For data and AI leaders, the emergence of GRAICE™ is a signal. The era of voluntary, principle‑only responsible AI is ending. The next phase is about operationalisation, measurement, and accountability. Whether organisations adopt GRAICE™ directly or use it as a benchmark, its influence will shape how responsible AI is defined, governed, and enforced in the years ahead. This is not just another framework but part of a global shift toward responsible AI as a shared, enforceable standard.

G.R.A.C.E. is
GROUNDED
RESPONSIBLE
AUTHENTIC
COMPASSION
ETHICAL

Every decision involving AI should align with moral truth, respect for life, and integrity of purpose through moral alignment.




https://www.graceglobalcouncil.com/
https://gcrai.ai/

Wednesday, 1 April 2026

How GRAICE™ and Microsoft’s Responsible AI Standard Shape the Future of AI Governance

The responsible AI landscape is shifting fast. Organisations are no longer looking for a single framework to rule them all; they’re looking for interoperability, clarity, and practical pathways to operational maturity. Two frameworks are increasingly shaping that conversation: GRAICE™, the new global framework from the Global Council for Responsible AI (GCRAI), and Microsoft’s Responsible AI Standard, one of the most established engineering‑level governance standards in the industry.

These frameworks are often discussed in the same breath, but they operate at different layers of the governance stack. Understanding that distinction is essential — because it’s precisely what makes them complementary rather than competitive.

GRAICE™: A Global Meta‑Framework for Cross‑Sector Alignment

GRAICE™ is designed as a global, cross‑sector framework. Its purpose is not to replace organisational or vendor standards, but to provide:

- a shared global vocabulary for responsible AI  
- a principles‑level structure that governments, industry, academia, and civil society can align to  
- a meta‑framework that organisations can map their internal standards against  
- a societal‑level lens that sits above implementation detail  

GRAICE™ is intentionally broad. It sets direction, coherence, and expectations at a global level — the “north star” rather than the engineering manual.

Microsoft’s Responsible AI Standard: Operational Discipline for Real Systems

Microsoft’s Responsible AI Standard sits at a different layer: the practical, engineering‑focused layer where teams build, evaluate, deploy, and monitor AI systems.

It provides:

- detailed lifecycle requirements  
- controls for data, evaluation, transparency, and oversight  
- guidance for product teams and engineering functions  
- mechanisms for translating principles into day‑to‑day practice  

Where GRAICE™ is global and principle‑driven, Microsoft’s standard is specific, actionable, and operational.

Complementary by Design

This is the critical point:  
GRAICE™ does not replace Microsoft’s Responsible AI Standard — or any other organisational framework.

Instead, the two frameworks operate in a layered model:

- GRAICE™ → global alignment, societal expectations, cross‑sector coherence  
- Microsoft RAI Standard → engineering discipline, implementation controls, operational maturity  

Together, they create a governance ecosystem that is:

- globally relevant  
- locally actionable  
- technically grounded  
- aligned with societal expectations  

This layered approach reflects where responsible AI is heading: ecosystems of interoperable frameworks, not a single universal standard.

Where They Converge

Despite their different scopes, both frameworks reinforce core responsible AI expectations:

- transparency as a foundation for trust  
- accountability and human oversight  
- continuous monitoring of evolving systems  
- responsible AI as an ongoing operational commitment  

These shared foundations show a field moving toward coherence, even when frameworks serve different purposes.

The Real Opportunity: Use Them Together

For organisations, the value lies in the combination:

- GRAICE™ provides the global direction and cross‑sector alignment.  
- Microsoft’s Responsible AI Standard provides the operational machinery to implement responsible AI in real systems.  

Using both gives organisations a governance model that is both strategically aligned and practically executable — exactly what mature AI governance requires.


Saturday, 28 March 2026

Series Index Summary: Data Governance, Purview, and Responsible AI

This four‑month series explores the shifting landscape of data governance, Microsoft Purview, and Responsible AI at a moment when organizations are being forced to rethink how they manage, understand, and trust their data. Across the posts, the series traces a clear arc: from the maturing of governance in 2025, through the practical realities of Purview adoption, to the cultural and architectural shifts required to lead in the age of AI.

The December posts set the stage by examining why governance finally became a strategic priority, how Purview’s quieter updates are reshaping the platform, and why AI risks making organizations intellectually complacent without strong data foundations. These pieces frame governance not as bureaucracy, but as the mechanism that makes innovation safe.

The January posts move deeper into strategy and Responsible AI, exploring the predictions shaping 2026, the operational implications of Microsoft's updated Responsible AI framework, and the evolution of Purview's classification engine. The AI Is Making Us Dumber series continues here, highlighting the risks of over‑automation and the importance of maintaining human understanding.

February shifts into technical depth and organizational reality, covering SQL Server's new direction, the strategic value of metadata, and a detailed breakdown of Purview's February feature updates. The month closes with reflections on why organizations struggle to operationalize policy and how governance must adapt to keep pace with rapidly learning AI systems.

March brings the series to a forward‑looking conclusion. It introduces the concept of contextual governance, examines the architectural convergence of Fabric and Purview, and challenges data leaders to unlearn outdated assumptions. These posts emphasize that leadership in the AI era requires adaptability, transparency, and a willingness to rethink long‑held beliefs.

Together, these posts form a cohesive narrative about where data governance is heading, what Purview is becoming, and how organizations can navigate the accelerating complexity of AI‑driven data estates. I wanted to add clarity in a landscape full of noise and to show that governance is no longer optional, but foundational.

Wednesday, 25 March 2026

Unifying the Data Estate for the Next AI Frontier: FabCon Keynote

The Atlanta FabCon keynote was delivered last Wednesday by Amir Netz (CTO and Technical Fellow), Arun Ulag (President, Azure Data) and Shireesh Thota (Corporate Vice President, Azure Databases). It was recorded and you can watch it here.

Session Abstract

As organizations race to deploy generative and agentic AI, the biggest challenge they face is not models, it’s their data estate. Join Microsoft engineering leadership to learn how Microsoft’s databases can be unified through Microsoft Fabric and OneLake, creating a single, governed foundation for analytics, AI, and intelligent agents. Discover why this shift represents a fundamental change in how modern data platforms are built, managed, and scaled for the next AI frontier.

A summary of the announcements.