Chaos, complexity, curiosity and database systems. A place where research meets industry
Welcome
"The important thing is not to stop questioning. Curiosity has its own reason for existing" Einstein
Wednesday, 8 April 2026
Why Data Catalogues Fail (And How Purview Is Quietly Fixing the Industry’s Blind Spots)
Saturday, 4 April 2026
GCRAI and the Rise of GRAICE™: A New Global Framework for Responsible AI Governance
Wednesday, 1 April 2026
How GRAICE™ and Microsoft’s Responsible AI Standard Shape the Future of AI Governance
Saturday, 28 March 2026
Series Index Summary: Data Governance, Purview, and Responsible AI
Wednesday, 25 March 2026
Unifying the Data Estate for the Next AI Frontier: FabCon Keynote
The Atlanta FabCon keynote was delivered last Wednesday by Amir Netz (CTO and Technical Fellow), Arun Ulag (President, Azure Data), and Shireesh Thota (Corporate Vice President, Azure Databases). It was recorded, and you can watch it here.
Session Abstract
As organizations race to deploy generative and agentic AI, the biggest challenge they face is not models, it’s their data estate. Join Microsoft engineering leadership to learn how Microsoft’s databases can be unified through Microsoft Fabric and OneLake, creating a single, governed foundation for analytics, AI, and intelligent agents. Discover why this shift represents a fundamental change in how modern data platforms are built, managed, and scaled for the next AI frontier.
A summary of the announcements.
Sunday, 22 March 2026
What Data Leaders Must Unlearn to Lead in the Age of AI
Friday, 20 March 2026
Navigate AI on Your Data & Analytics Journey to Value - Gartner 2026
Here are some collated highlights that interested me.
1. The Core Keynote: Beyond the Hype to ROI
Analysts Adam Ronthal and Georgia O’Callaghan opened the summit by challenging the "move fast and break things" mentality. They argued that while AI is accelerating, success belongs to those who find a thoughtful approach to both speed and direction.
Gartner emphasized that AI adoption follows an S-curve: a slow start, rapid acceleration, then stabilization. We are currently on the steep upward slope. Organizations that don't integrate governance now will face expensive catch-up efforts that turn AI from an asset into a liability.
Gartner categorized firms into three types: AI-First (aggressive), AI-Opportunistic (fast followers), and AI-Cautious (waiting for stability). They noted that regardless of the path, doing nothing is no longer an option.
2. Data Governance: The Move to Adaptive & Autonomous
A major takeaway was that traditional, manual data governance is dead. It cannot keep up with the volume and velocity of AI-driven data.
Gartner introduced the concept of Outcome-Based Governance. Instead of governing all data equally, teams should focus on high-value data products that directly impact AI outcomes.
A new AI-Ready Data Framework focuses on three pillars:
Alignment: Ensuring data semantics and lineage are clear.
Qualification: Continuous data quality validation for model training.
Governance: Enforcing policies during the AI lifecycle.
The Rise of Governance Agents: A top 2026 prediction is that D&A leaders will begin using Data Governance Agents to automate the negotiation and orchestration of data pipelines.
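The Qualification pillar in particular lends itself to automation. Here is a minimal sketch of what a continuous quality gate for training data could look like; the column rules, threshold, and function names are my own illustration, not anything Gartner or a specific product prescribes.

```python
from typing import Callable

# Illustrative quality rules: each maps a column name to a pass/fail predicate.
RULES: dict[str, Callable] = {
    "customer_id": lambda v: v is not None and v != "",
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
}

def qualify(rows: list[dict], threshold: float = 0.95) -> bool:
    """Return True only if every ruled column passes the quality threshold."""
    for column, rule in RULES.items():
        passed = sum(1 for row in rows if rule(row.get(column)))
        if passed / len(rows) < threshold:
            return False  # block this batch from reaching model training
    return True

batch = [
    {"customer_id": "C1", "age": 34},
    {"customer_id": "C2", "age": 29},
]
print(qualify(batch))  # every rule passes, so the batch qualifies
```

The point of the sketch is that qualification is a gate that runs on every batch, not a one-off audit, which is exactly the kind of repetitive negotiation a governance agent could take over.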
3. AI Governance: Bridging the Trust Gap
The summit highlighted a looming crisis where 60% of organizations are predicted to fail at realizing AI value due to poor integration between data and AI governance.
Gartner warned against Registry-First Governance. Simply listing your AI models in a spreadsheet isn't enough. They called for Continuous Code-to-Cloud Visibility, where governance monitors data as it flows through APIs and AI agents in real-time.
A buzzword at the conference was the Unified Context Layer. To govern AI effectively, you need a layer that connects business meaning to raw data. This allows AI agents to act reliably because they understand the why and how, not just the what.
Gartner predicts spending on AI governance platforms will reach $492M in 2026, doubling to $1B by 2030, as companies realize that compliance is a trust dividend rather than a tax.
4. Responsible AI: Ethics as an Operational Metric
Responsible AI (RAI) moved from a philosophical discussion to a technical requirement.
Gartner warned that critical failures in managing synthetic data (used to train models when real data is scarce) are a major risk to AI governance. Without metadata tracking the lineage of synthetic data, models risk hallucination loops.
The keynote suggested that data organizations are being reshaped into fusion teams where humans and AI agents work together. Responsible AI here means defining clear boundaries of AI involvement in decision-making.
As we move toward Agentic AI (autonomous agents that can take actions), Gartner highlighted the need for explicit transparency capabilities with the ability to audit why an agent made a specific decision in real-time.
In summary: by 2027, organizations that emphasize AI literacy for executives are predicted to achieve 20% higher financial performance than those that do not (Gartner, March 2026). In 2026, AI strategy and data strategy have become inseparable, and you cannot scale the former without governing the latter.
Safeguarding the AI Frontier with Microsoft Purview & Fabric Innovations
Wednesday, 18 March 2026
FabCon and SQLCon 2026
- Fabric IQ brings together live business data.
- Work IQ pulls in productivity signals.
- Foundry IQ captures institutional knowledge.
Sunday, 15 March 2026
Metadata Is Not Optional: The Strategic Value Organisations Still Undervalue
Metadata has always been the unglamorous backbone of data governance, but in 2026 it becomes a strategic asset. AI systems depend on it, automation relies on it, and governance collapses without it. Yet many organisations still treat metadata as an afterthought, something to be documented later, if at all. This mindset is becoming increasingly untenable.
Metadata, following DAMA-DMBOK principles, is 'data about data': the descriptive information that defines, structures, and gives context to data so it can be understood, managed, and used effectively.
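To make the definition concrete, here is a minimal sketch of descriptive metadata for a single dataset. The field names and the readiness check are illustrative only, not a formal DAMA-DMBOK schema.

```python
# Illustrative descriptive metadata for one dataset.
# Field names are an example, not a formal DAMA-DMBOK schema.
customer_orders_metadata = {
    "name": "customer_orders",
    "description": "One row per confirmed customer order",
    "owner": "sales-data-team",
    "sensitivity": "confidential",                      # drives access policy
    "lineage": ["crm.orders_raw", "finance.payments"],  # upstream sources
    "schema": {
        "order_id": "string, primary key",
        "order_date": "date, UTC",
        "total_amount": "decimal, GBP",
    },
    "last_validated": "2026-03-01",
}

def is_ai_ready(meta: dict) -> bool:
    """Without these fields the data cannot be understood, governed, or trusted."""
    required = {"description", "owner", "sensitivity", "lineage", "schema"}
    return required.issubset(meta)

print(is_ai_ready(customer_orders_metadata))  # this record carries all required context
```

Documented this way, the metadata is machine-readable, which is precisely what AI systems and automation depend on.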
Inspirational STEM 1958-1968
From the moment I first understood the meaning of my mum Joan Holt’s school motto, “Be strong and very courageous”, I realised it wasn’t just a phrase she carried; it was a quiet force that shaped her life. Long before women in STEM were recognised or encouraged in the way they are today, she worked in a world of theoretical physics, numerical analysis, and early computing with a determination that still leaves me in awe. Hers was not the loud, celebrated courage of someone who set out to break barriers, but the steady, purposeful courage of someone who simply refused to accept that those barriers applied to her. With International Women’s Day last week, we celebrated the women who paved the way, and that reminds me that one of those pioneers was my mum.
Her career reads like a living history of British computing. Her first days were spent working on IBM mainframes and analysing data, when computers filled whole rooms and printouts were the size of phone books, before moving on to programming the Ferranti Mk1. She drafted manuals for the Elliott 503 and 4100, and solved problems with nothing but symbolic assembly code. She lived through the evolution of technology as few people did. She worked in rooms where magnetic tapes towered over her, where data meant punched cards and where a single mistake meant repunching a deck. She navigated machines that shook themselves off desks, deciphered the results of calculations that once took over 8 months, and wrote documentation that bridged engineers and the future operators. She was often the only woman in the room, one of only a handful among thousands of men, at just nineteen. She simply worked hard, proving herself indispensable through intelligence, persistence, and grace.
Today, on Mother’s Day, I think not only of the extraordinary work she did, but of the extraordinary woman she was. A role model who taught me that courage can be quiet, curiosity can be powerful, and that you can shape the world even if you never stand in the spotlight. While the world now celebrates women in STEM more visibly than ever, she lived those values when the path was far tougher and the recognition far thinner. Her achievements may sit in old manuals, early programs, and memories of rooms filled with tapes and valves, but her legacy is alive in me. I am proud beyond words to have been her daughter, and prouder still to share her personal story to help inspire future generations.
Saturday, 14 March 2026
Fabric, Purview, and the New Shape of Enterprise Data Architecture
Sunday, 8 March 2026
Purview and OneLake Govern tab change
The Purview Hub insights in Fabric have now moved to the OneLake catalog’s Govern tab. The change brings governance closer to where the data actually lives, rather than leaving it in a parallel experience that was never as helpful as it could have been. In the Govern tab, you now see the same posture summaries, recommended actions, and learning resources that were in Purview Hub, but framed within Fabric’s unified governance model. It is a cleaner, more coherent way of surfacing core information about the health of your data estate.
Functionally, the Govern tab now gives you a consolidated view of governance status, recommended actions, sensitivity and endorsement insights, and links into deeper governance tooling. You can drill into items that need attention, track improvements over time, and understand how your organisation is using Fabric’s governance features. The experience also ties directly into the OneLake catalog, so governance isn’t an afterthought. It is embedded in the same place you explore, classify, and manage data assets.
Microsoft hasn’t yet published a formal retirement date for the Purview Hub, but Fabric is now presenting a single, coherent story about how organisations should understand and manage their data estate.
You can learn more about it here.
Friday, 6 March 2026
Why Strategic Leaders are Pivoting to Contextual Governance
For decades, data governance has been treated as a static discipline with a set of rigid policies laid out in formal frameworks and applied uniformly across the enterprise. But in an era defined by decentralized architectures and the breakneck speed of AI adoption, this one-size-fits-all approach is inefficient and has the potential to increase business risk.
The mismatch between static governance and dynamic data estates is the primary reason why many digital transformation projects stall. It is time to move toward Contextual Governance.
The Governance Friction Paradox
Traditional governance models fail because they are binary. They treat data as a fixed asset rather than a fluid utility. This creates a paradox:
Over-governance: Smothering low-risk innovation with unnecessary red tape.
Under-governance: Missing the subtle, high-risk nuances of how data is actually used in the wild.
Static rules rely on metadata labels that are often outdated the moment they are applied. Contextual governance, however, shifts the focus from what the data is to how the data is behaving.
What is Contextual Governance?
Contextual governance is a move from policing to orchestration. It is an adaptive framework that evaluates risk in real-time based on the intersection of three pillars:
The Actor: Who is accessing the data, and what is their historical behaviour?
The Environment: Where is the data flowing? Is it a sandboxed R&D environment or a customer-facing LLM?
The Intent: Is the data being used for a routine report, or is it being fed into a model that could leak proprietary logic?
The Strategic Shift: We are moving from asking, Is this data protected? to asking, Is this data protected enough for this specific moment?
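The three pillars can be sketched as a tiny adaptive policy evaluator. Everything here is a hypothetical illustration of the idea, assuming invented risk weights and thresholds; it is not a real product's policy engine.

```python
from dataclasses import dataclass

# Hypothetical risk weights for environments and intents -- illustrative only.
ENV_RISK = {"sandbox": 0.1, "internal_report": 0.4, "customer_facing_llm": 0.9}
INTENT_RISK = {"routine_report": 0.2, "model_training": 0.7}

@dataclass
class AccessRequest:
    actor_trust: float  # The Actor: 0.0 (unknown) .. 1.0 (long, clean history)
    environment: str    # The Environment: where the data is flowing
    intent: str         # The Intent: what the data will be used for

def decide(request: AccessRequest, sensitivity: float) -> str:
    """Combine the three pillars into one contextual risk score for this moment."""
    risk = sensitivity * max(
        ENV_RISK.get(request.environment, 1.0),   # unknown context = worst case
        INTENT_RISK.get(request.intent, 1.0),
    ) * (1.0 - 0.5 * request.actor_trust)
    if risk < 0.2:
        return "allow"                 # automate the easy permission
    if risk < 0.6:
        return "allow_with_monitoring"
    return "escalate"                  # this specific moment needs human review

req = AccessRequest(actor_trust=0.9, environment="sandbox", intent="routine_report")
print(decide(req, sensitivity=0.8))  # low-risk moment: allowed automatically
```

Note that the same dataset (same sensitivity) gets different decisions depending on actor, environment, and intent, which is the whole point of governing the moment rather than the label.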
Beyond Compliance: The Competitive Edge
For the C-suite and strategic leads, contextual governance isn't just a compliance checkbox. It is a performance multiplier.
Agility at Scale: By automating low-risk permissions and tightening controls where they matter, risk is reduced and the bottlenecks that frustrate engineering and data science teams are removed.
AI Readiness: AI systems don't live in a vacuum. A model that is safe in a localized test may become dangerous when exposed to real-world edge cases. Contextual governance provides the guardrails necessary to deploy AI with confidence.
Intelligent Foundations: This shift forces a higher standard for metadata and lineage. You are not just mapping data; you are mapping the value stream of the entire organization.
The Path Forward
Transitioning to this model requires more than new software; it requires a cultural pivot. We must stop viewing governance as rigid control and start treating it as a central intelligent system of the enterprise.
The future of data doesn't belong to those with the thickest rulebooks. It belongs to those who can govern at the speed of business.
Monday, 2 March 2026
Purview Announcements Round‑Up for February Features
Saturday, 28 February 2026
Microsoft Purview Data Governance – Interactive Experience
This interactive Storylane provides a guided walkthrough of Microsoft Purview Data Governance, showcasing how organizations can establish clarity, accountability, and trust across their data estate using a modern, federated approach. The experience demonstrates how Purview brings together data discovery, governance domains, data products, access workflows, data quality, and estate health into a single, coherent governance platform.
The walkthrough highlights how business and technical users can discover and understand data through the Unified Catalog, using familiar business language, lineage, and context rather than low‑level technical metadata. It shows how data is organized into business domains and data products, helping teams govern data at scale while maintaining clear ownership and accountability.
The Storylane also illustrates Purview’s end‑to‑end governance lifecycle — from requesting access to governed data, through to defining critical data elements, managing data quality rules, and monitoring data estate health. A key theme throughout is Purview’s federated governance model, enabling central oversight while empowering domain teams to own and manage their data within agreed standards and controls.
Overall, the experience positions Microsoft Purview as a system of record for data governance, supporting organizations as they move toward cloud, analytics, and AI by ensuring data is discoverable, trusted, compliant, and ready for reuse.
Watch the video here
https://purviewdatagovernance.storylane.io/share/kfdnjhua9hlh
Friday, 27 February 2026
The Governance Gap and Why Organisations Still Struggle to Operationalise Policy
Thursday, 26 February 2026
AI Is Making Us Dumber, Part III: When Models Learn Faster Than Organisations Do
Tuesday, 3 February 2026
SQL Server’s Next Chapter: What the New Release Signals for Enterprise Data Estates
Friday, 30 January 2026
Data Toboggan Winter Edition 2026
It is that time of year again: Data Toboggan is running another 12-hour conference, with 3 tracks and speakers from around the world. There are some amazing sessions to learn from, and the conference is free to attend as usual.
I am giving a topical lightning talk in The Chalet on Data Literacy: The Human Advantage in an AI World.
AI is accelerating decision‑making across organisations, but it’s also accelerating how quickly mistakes can scale. This session explores how data literacy keeps humans in the loop, prevents over‑reliance on AI, and strengthens judgment, context, and critical thinking. Attendees will see real examples of AI hallucinations, learn how provenance and triangulation protect against bad outputs, and understand why cognitive skills weaken when tasks are automated. They will leave with a practical checklist for questioning AI outputs, a clear view of the risks of low data literacy, and a framework for building teams that use AI responsibly, confidently, and intelligently.
We have our usual Piste Maps with the agenda.
Wednesday, 28 January 2026
World Economic Forum 2026 in Davos Global Council for Responsible AI
At the 56th World Economic Forum 2026 in Davos, from 19–23 January 2026, the Global Council for Responsible AI officially unveiled GRAICE™ (Global Responsible AI Compliance & Ethics). It is designed as humanity’s operating system for AI. Introduced to global leaders and policymakers, GRAICE moves Responsible AI from principle to practice, integrating ethics, governance, compliance, and human-centric design into a unified, scalable framework.
The framework is an integrated system rather than a collection of policies, designed to be simple and repeatable.
- Foundational values establish non-negotiable ethical and human-centred boundaries
- Seven pillars translate values into operational requirements
- Assurance tiers verify that requirements are met with evidence
- Governance structures assign accountability and decision authority
The foundational values are:
- Human dignity and autonomy
- Accountability and governance
- Fairness and justice
- Transparency and explainability
- Reliability and security
- Inclusivity and social benefits
And the seven pillars for responsible AI define what responsible AI must achieve in practice:
- Ethical leadership
- Purpose-driven innovation
- Human-centric use
- Responsible implementation
- AI literacy and workforce readiness
- Data governance and integrity


