Welcome

Passionately curious about Data, Databases and Systems Complexity. Data is ubiquitous, the database universe is dichotomous (structured and unstructured), expanding and complex. Find my Database Research at SQLToolkit.co.uk. Microsoft Data Platform MVP

"The important thing is not to stop questioning. Curiosity has its own reason for existing" Einstein



Wednesday, 8 April 2026

Why Data Catalogues Fail (And How Purview Is Quietly Fixing the Industry’s Blind Spots)

Most data catalogues fail for a simple reason: they assume that documentation alone creates understanding. It doesn’t. A catalogue full of stale metadata, incomplete lineage, and inconsistent tagging is worse than useless: it creates a false sense of confidence. Many organisations have learned this the hard way, investing heavily in catalogues that quickly became digital graveyards.

Purview succeeds where others fail because it treats the catalogue as part of a governance ecosystem, not a standalone tool. Lineage, classification, access policies, and data maps are not optional extras. They are the core of the experience. This integrated approach ensures that metadata is accurate, automated, and actionable.

Another blind spot Purview addresses is operational relevance. Traditional catalogues focus on documentation, whereas Purview focuses on control. It doesn’t just describe data; it also governs it. This shift from passive to active metadata is what makes Purview viable at enterprise scale.
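To make the passive-versus-active distinction concrete, here is a minimal, hypothetical Python sketch (not Purview’s actual API; all names are invented for illustration): a classification that merely names data is passive, while one that carries an enforcement rule is active.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: "active" metadata attaches an enforcement
# rule to a classification rather than just describing the data.
@dataclass
class Classification:
    name: str
    blocks_export: bool = False  # a passive catalogue would stop at `name`

@dataclass
class Dataset:
    name: str
    classifications: list = field(default_factory=list)

def can_export(dataset: Dataset) -> bool:
    """Deny export if any attached classification carries an export block."""
    return not any(c.blocks_export for c in dataset.classifications)

customers = Dataset("dbo.Customers", [Classification("PII", blocks_export=True)])
sales = Dataset("dbo.SalesSummary", [Classification("Public")])

print(can_export(customers))  # False: the classification actively governs
print(can_export(sales))      # True
```

The point of the sketch is the coupling: the same metadata object that documents the data also drives the policy decision, so the two cannot drift apart.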

Purview also excels in hybrid and multi‑cloud environments, where many catalogues struggle. Its connectors, scanning capabilities, and policy enforcement mechanisms are designed for real‑world estates, not idealised architectures.

Purview’s integration with Fabric positions it as the governance backbone of the Microsoft ecosystem. As organisations consolidate their data platforms, Purview becomes the source of truth that ties everything together.



Saturday, 4 April 2026

GCRAI and the Rise of GRAICE™: A New Global Framework for Responsible AI Governance

The global conversation around responsible AI has been dominated for years by national strategies, corporate principles, and academic frameworks. But the launch of the Global Council for Responsible AI (GCRAI) and its GRAICE™ framework marks a shift toward something far more ambitious: a unified, cross‑sector, cross‑industry operating system for AI governance. Unlike many initiatives that focus on high‑level ethics, GCRAI positions itself as a mechanism for operationalising responsibility at scale. It’s an attempt to move responsible AI from aspiration to enforceable practice.

What makes GCRAI notable is its global footprint. With representation across dozens of countries and a network of ambassadors, it aims to create a governance ecosystem that transcends borders and industries. This matters because AI risk is not localised. Models trained in one region influence decisions in another. Data flows across jurisdictions. And the consequences of AI misuse rarely stay within organisational boundaries. A global framework is not just desirable, it is necessary.

The GRAICE™ framework, unveiled at Davos, is positioned as “humanity’s operating system for AI.” While the branding is bold, the intent is clear: create a standard that is actionable, measurable, and adaptable. GRAICE™ focuses on transparency, security, accountability, and human‑centric design. But what sets it apart is its emphasis on measurable compliance. Many frameworks articulate principles; GRAICE™ attempts to define behaviours. It seeks to bridge the gap between what organisations say about AI and what they actually do.

Running alongside GCRAI is the G.R.A.C.E. Global Council for AI, which articulates a complementary set of principles centred on human‑centred AI. Their pillars emphasise mission, vision, and the balance between technology, ethics, and humanity. While still evolving, the G.R.A.C.E. principles reinforce the idea that responsible AI is not just a technical discipline but a societal one. They highlight the need for AI systems that enhance human capability rather than diminish it, and for governance that protects people as much as it protects organisations.

Together, GCRAI and G.R.A.C.E. represent a growing recognition that responsible AI cannot be solved by isolated efforts. Organisations need frameworks that are interoperable, globally recognised, and grounded in real‑world practice. They need standards that can be implemented, audited, and adapted as technology evolves. And they need governance models that reflect the complexity of modern AI systems: systems that learn continuously, behave unpredictably, and operate across boundaries.

For data and AI leaders, the emergence of GRAICE™ is a signal. The era of voluntary, principle‑only responsible AI is ending. The next phase is about operationalisation, measurement, and accountability. Whether organisations adopt GRAICE™ directly or use it as a benchmark, its influence will shape how responsible AI is defined, governed, and enforced in the years ahead. This is not just another framework but part of a global shift toward responsible AI as a shared, enforceable standard.

G.R.A.C.E. is
GROUNDED
RESPONSIBLE
AUTHENTIC
COMPASSION
ETHICAL

Every decision involving AI should align with moral truth, respect for life, and integrity of purpose.




https://www.graceglobalcouncil.com/
https://gcrai.ai/

Wednesday, 1 April 2026

How GRAICE™ and Microsoft’s Responsible AI Standard Shape the Future of AI Governance

As the responsible AI landscape matures, two frameworks are emerging as influential anchors in how organisations think about AI governance: the newly launched GRAICE™ framework from the Global Council for Responsible AI (GCRAI), and Microsoft’s long‑established Responsible AI Standard. Each framework reflects a different lineage, a different worldview, and a different set of priorities. Yet both are converging on the same fundamental truth: responsible AI is no longer a philosophical debate but an operational discipline.

GRAICE™ enters the scene with global ambition. It positions itself as a unifying operating system for responsible AI, designed to be adopted across governments, enterprises, and civil society. Its principles emphasise human‑centricity, societal impact, and global accountability. The tone is intentionally broad because the problems it aims to address (cross‑border data flows, global AI risk, societal trust) cannot be solved by any single organisation or nation. GRAICE™ is built for the world stage.

Microsoft’s Responsible AI Standard, by contrast, is built for practitioners. It is grounded in engineering realities: data sourcing, model evaluation, transparency requirements, human oversight, and lifecycle monitoring. It is not trying to govern the world; it is trying to govern systems. Its strength lies in its specificity. It tells teams what to do, how to do it, and how to measure whether they have done it well. It is a framework forged in the crucible of product development.

The contrast between the two frameworks is striking. GRAICE™ is expansive, values‑driven, and globally oriented. Microsoft’s standard is precise, operational, and system‑oriented. One speaks the language of societal responsibility; the other speaks the language of engineering discipline. Yet this contrast is exactly what makes the comparison so valuable. Together, they represent the two halves of responsible AI: the why and the how.

Where the frameworks converge is equally important. Both insist on transparency as a prerequisite for trust. Both emphasise accountability, not as a slogan, but as a requirement for human oversight. Both recognise that AI systems evolve and therefore require continuous monitoring. And both acknowledge that responsible AI is not a one‑off certification but an ongoing commitment. These shared foundations signal a broader alignment across the industry where responsible AI is becoming standardised, measurable, and expected.

The real opportunity lies in how organisations combine the two. GRAICE™ provides the global context: the societal lens, the ethical north star, the cross‑sector alignment. Microsoft’s Responsible AI Standard provides the operational machinery: the processes, controls, and engineering practices that turn principles into behaviour. When used together, they create a governance model that is both globally relevant and locally actionable.

This is where the future of responsible AI is heading. Not toward a single universal framework, but toward an ecosystem of complementary standards that reinforce one another. GRAICE™ sets the direction; Microsoft’s standard provides the path. Organisations that embrace both will be better equipped to build AI systems that are trustworthy, transparent, and aligned with human values, not just in theory, but in practice.




Saturday, 28 March 2026

Series Index Summary: Data Governance, Purview, and Responsible AI

This four‑month series explores the shifting landscape of data governance, Microsoft Purview, and Responsible AI at a moment when organizations are being forced to rethink how they manage, understand, and trust their data. Across the posts, the series traces a clear arc: from the maturing of governance in 2025, through the practical realities of Purview adoption, to the cultural and architectural shifts required to lead in the age of AI.

The December posts set the stage by examining why governance finally became a strategic priority, how Purview’s quieter updates are reshaping the platform, and why AI risks making organizations intellectually complacent without strong data foundations. These pieces frame governance not as bureaucracy, but as the mechanism that makes innovation safe.

January moves deeper into strategy and Responsible AI. It explores the predictions shaping 2026, the operational implications of Microsoft’s updated Responsible AI framework, and the evolution of Purview’s classification engine. The AI Is Making Us Dumber series continues here, highlighting the risks of over‑automation and the importance of maintaining human understanding.

February shifts into technical depth and organizational reality. It covers SQL Server’s new direction, the strategic value of metadata, and a detailed breakdown of Purview’s February feature updates. The month closes with reflections on why organizations struggle to operationalize policy and how governance must adapt to keep pace with rapidly learning AI systems.

March brings the series to a forward‑looking conclusion. It introduces the concept of contextual governance, examines the architectural convergence of Fabric and Purview, and challenges data leaders to unlearn outdated assumptions. These posts emphasize that leadership in the AI era requires adaptability, transparency, and a willingness to rethink long‑held beliefs.

Together, these posts form a cohesive narrative about where data governance is heading, what Purview is becoming, and how organizations can navigate the accelerating complexity of AI‑driven data estates. I wanted to add clarity in a landscape full of noise and to show that governance is no longer optional, but foundational.

Wednesday, 25 March 2026

Unifying the Data Estate for the Next AI Frontier: FabCon Keynote

The Atlanta FabCon keynote was delivered last Wednesday by Amir Netz (CTO and Technical Fellow), Arun Ulag (President, Azure Data), and Shireesh Thota (Corporate Vice President, Azure Databases). It was recorded. You can watch it here.

Session Abstract

As organizations race to deploy generative and agentic AI, the biggest challenge they face is not models, it’s their data estate. Join Microsoft engineering leadership to learn how Microsoft’s databases can be unified through Microsoft Fabric and OneLake, creating a single, governed foundation for analytics, AI, and intelligent agents. Discover why this shift represents a fundamental change in how modern data platforms are built, managed, and scaled for the next AI frontier.

A summary of the announcements.





Sunday, 22 March 2026

What Data Leaders Must Unlearn to Lead in the Age of AI

The hardest part of leading in the AI era isn’t learning new skills; it is unlearning old assumptions. Many of the beliefs that shaped data leadership over the past decade no longer apply. The pace of change, the complexity of modern estates, and the unpredictability of AI systems demand a different mindset. Leaders must be willing to let go of outdated models of control, certainty, and hierarchy.

One of the first assumptions to unlearn is that governance slows innovation. In reality, governance accelerates innovation by reducing risk, increasing clarity, and enabling responsible experimentation. When governance is embedded rather than imposed, it becomes a catalyst rather than a constraint. Leaders who cling to the old narrative will find themselves outpaced by those who embrace governance as a strategic enabler.

Another assumption to unlearn is that documentation equals understanding. In the AI era, understanding comes from lineage, monitoring, and behavioural metadata, not static documents. Leaders must shift from documenting after the fact to embedding governance into the system itself. This requires investment in tooling, automation, and literacy.

Leaders must also unlearn the idea that AI systems can be trusted without oversight. AI is probabilistic, not deterministic. It requires continuous monitoring, not one‑time validation. The organisations that thrive will be those that treat AI as a dynamic system requiring ongoing governance, not a product that can be finished.
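As an illustration of what continuous monitoring means in practice, here is a hedged, generic Python sketch (not tied to any product): it flags drift when a rolling window of model outputs moves away from an agreed baseline, rather than validating the model once and walking away.

```python
from collections import deque
from statistics import fmean

# Illustrative sketch: treat an AI system's outputs as a stream to be
# checked continuously, not as a one-time validation gate. The baseline
# and tolerance values are assumptions chosen for the example.
class DriftMonitor:
    def __init__(self, baseline_mean: float, tolerance: float, window: int = 100):
        self.baseline = baseline_mean
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # only the latest `window` outputs

    def observe(self, score: float) -> bool:
        """Record a model output; return True if the rolling mean has drifted."""
        self.recent.append(score)
        return abs(fmean(self.recent) - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_mean=0.50, tolerance=0.10, window=5)
alerts = [monitor.observe(s) for s in [0.52, 0.48, 0.70, 0.75, 0.80]]
print(alerts[-1])  # True once the window mean moves past the tolerance
```

The same pattern scales from a toy score stream to production telemetry: the governance question is not "did the model pass?" but "is it still behaving as expected right now?"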

Finally, leaders must unlearn the belief that expertise is static. In the AI era, expertise evolves. The best leaders will be those who remain curious, adaptable, and willing to challenge their own assumptions. Unlearning is not a weakness but a leadership skill.



Friday, 20 March 2026

Navigate AI on Your Data & Analytics Journey to Value - Gartner 2026

The Gartner Data & Analytics Summit (March 9–11, 2026, in Orlando) marked a significant shift from AI experimentation to AI industrialization. My post focuses on how governance is no longer a check-the-box activity but the literal engine for AI ROI.

Here are some collated highlights that interested me.

1. The Core Keynote: Beyond the Hype to ROI

Analysts Adam Ronthal and Georgia O’Callaghan opened the summit by challenging the "move fast and break things" mentality. They argued that while AI is accelerating, success belongs to those who take a thoughtful approach to both speed and direction.

Gartner emphasized that AI adoption follows an S-curve: a slow start, rapid acceleration, then stabilization. We are currently on the steep upward slope. Organizations that don't integrate governance now will face expensive catch-up efforts that turn AI from an asset into a liability.

Gartner categorized firms into three types: AI-First (aggressive), AI-Opportunistic (fast followers), and AI-Cautious (waiting for stability). They noted that regardless of the path, doing nothing is no longer an option.

2. Data Governance: The Move to Adaptive & Autonomous

A major takeaway was that traditional, manual data governance is dead. It cannot keep up with the volume and velocity of AI-driven data.

Gartner introduced the concept of Outcome-Based Governance. Instead of governing all data equally, teams should focus on high-value data products that directly impact AI outcomes.

A new AI-Ready Data Framework focuses on three pillars:

   Alignment: Ensuring data semantics and lineage are clear.

   Qualification: Continuous data quality validation for model training.

   Governance: Enforcing policies during the AI lifecycle.
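A minimal sketch of what the Qualification pillar could look like in code. The column names, thresholds, and `qualify` helper below are my own assumptions for illustration, not part of Gartner's framework.

```python
# Illustrative sketch of the "Qualification" pillar: run a lightweight
# quality gate over training rows before they reach a model.
def qualify(rows: list[dict], required: list[str], max_null_rate: float = 0.05) -> dict:
    """Return per-column null rates and an overall pass/fail verdict."""
    total = len(rows)
    null_rates = {
        col: sum(1 for r in rows if r.get(col) in (None, "")) / total
        for col in required
    }
    return {
        "null_rates": null_rates,
        "qualified": all(rate <= max_null_rate for rate in null_rates.values()),
    }

rows = [
    {"customer_id": 1, "region": "EMEA"},
    {"customer_id": 2, "region": None},
    {"customer_id": 3, "region": "APAC"},
]
report = qualify(rows, required=["customer_id", "region"])
print(report["qualified"])  # False: 1 of 3 region values is null (33% > 5%)
```

The "continuous" part is simply running a gate like this on every refresh of the training data, rather than once at project kick-off.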

The Rise of Governance Agents: A top 2026 prediction is that D&A leaders will begin using Data Governance Agents to automate the negotiation and orchestration of data pipelines.

3. AI Governance: Bridging the Trust Gap

The summit highlighted a looming crisis where 60% of organizations are predicted to fail at realizing AI value due to poor integration between data and AI governance.

Gartner warned against Registry-First Governance. Simply listing your AI models in a spreadsheet isn't enough. They called for Continuous Code-to-Cloud Visibility, where governance monitors data as it flows through APIs and AI agents in real time.

A buzzword at the conference was the Unified Context Layer. To govern AI effectively, you need a layer that connects business meaning to raw data. This allows AI agents to act reliably because they understand the why and how, not just the what.
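A hypothetical sketch of what such a context layer could look like: a thin mapping that resolves a business term to the physical column, definition, owner, and policy an agent needs before acting. Every name below is invented for illustration.

```python
# Hypothetical "unified context layer": business meaning mapped to raw
# data, so an agent gets the why and how, not just the what.
CONTEXT = {
    "annual recurring revenue": {
        "column": "finance.dim_contracts.arr_gbp",
        "definition": "Sum of active subscription value, annualised.",
        "owner": "finance-data@contoso.example",
        "policy": "internal-only",
    },
}

def resolve(term: str) -> dict:
    """Return the governed context behind a business term, or fail loudly."""
    context = CONTEXT.get(term.lower())
    if context is None:
        raise KeyError(f"No governed context for term: {term!r}")
    return context

ctx = resolve("Annual Recurring Revenue")
print(ctx["column"])  # finance.dim_contracts.arr_gbp
```

The failure mode matters as much as the lookup: an agent asked about an ungoverned term gets an explicit error instead of guessing, which is the reliability property the Unified Context Layer is meant to provide.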

Gartner predicts spending on AI governance platforms will reach $492M in 2026, doubling to $1B by 2030, as companies realize that compliance is a trust dividend rather than a tax.

4. Responsible AI: Ethics as an Operational Metric

Responsible AI (RAI) moved from a philosophical discussion to a technical requirement.

Gartner warned that critical failures in managing synthetic data (used to train models when real data is scarce) are a major risk to AI governance. Without metadata tracking the lineage of synthetic data, models risk hallucination loops.

The keynote suggested that data organizations are being reshaped into fusion teams where humans and AI agents work together. Responsible AI here means defining clear boundaries of AI involvement in decision-making.

As we move toward Agentic AI (autonomous agents that can take actions), Gartner highlighted the need for explicit transparency capabilities: the ability to audit why an agent made a specific decision in real time.

In summary: by 2027, organizations that emphasize AI literacy for executives will achieve 20% higher financial performance than those that do not (Gartner, March 2026). In 2026, AI strategy and data strategy have become inseparable: you cannot scale the former without governing the latter.