Welcome

Passionately curious about Data, Databases and Systems Complexity. Data is ubiquitous; the database universe is dichotomous (structured and unstructured), expanding and complex. Find my Database Research at SQLToolkit.co.uk. Microsoft Data Platform MVP.

"The important thing is not to stop questioning. Curiosity has its own reason for existing" Einstein



Friday, 1 May 2026

Operationalising Responsible AI: What Microsoft’s Approach Reveals

Responsible AI has become one of those phrases that organisations like to reference but rarely operationalise. It appears in strategy decks, risk registers, and conference panels, yet the practical mechanisms that make it real are often missing.  

Microsoft’s recent article on its internal responsible‑AI approach is useful not because it offers something radically new, but because it demonstrates what it looks like when a large organisation treats responsible AI as a discipline rather than a marketing narrative.

Below are the core lessons worth thinking about, especially if you’re trying to move your organisation from aspiration to implementation.

1. Responsible AI is an organisational discipline, not a technical feature

The most important message is also the simplest: responsible AI only works when it is treated as a governing framework that shapes how AI is designed, deployed, and monitored. This is not a “nice to have”. It is not a late‑stage review. It is not a compliance tick‑box. It is a structural commitment that defines how decisions are made, how risks are surfaced, and how accountability is distributed. If an organisation is still treating responsible AI as a technical add‑on, it will not scale safely.

2. A central authority is essential for coherence

Microsoft’s Office of Responsible AI functions as a single point of truth. It sets policy, interprets standards, and ensures that teams are aligned.  This matters because without a central authority, governance fragments. Different teams make different assumptions. Risk becomes inconsistent. Decisions become harder to audit. A central function does not need to be large, but it does need to be authoritative. It needs the mandate to say “no”, “not yet”, or “not like this”.

3. Distributed oversight is the only scalable model

A central team cannot carry the entire burden. Microsoft’s model, a senior council supported by a network of responsible‑AI champions, is the only realistic way to scale oversight across a complex organisation. This mirrors how other disciplines have matured:
- data protection officers and privacy champions  
- security teams supported by local security leads  
- governance functions with embedded practitioners  

The pattern is consistent: central clarity with distributed execution. If you want responsible AI to work, you need people embedded in delivery teams who understand the risks and know how to escalate them.

4. A unified workflow is the backbone of responsible AI operations

One of the most practical elements of Microsoft’s approach is its internal workflow tool. Every AI project is logged, assessed, and reviewed through a single structured process. This creates:  
- traceability  
- auditability  
- consistent risk categorisation  
- clear escalation routes  
- visibility across the portfolio  

Most organisations underestimate how much risk comes from fragmentation. If you don’t know what AI systems exist, you can’t govern them. A unified workflow is not optional. It is foundational.
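
To make that concrete, here is a minimal sketch of what a unified intake record and registry could look like in code. The field names, risk tiers and escalation rule are illustrative assumptions, not Microsoft’s internal workflow schema.

```python
# Minimal sketch of a unified AI project intake record and registry.
# Field names, risk tiers and escalation rules are illustrative assumptions,
# not Microsoft's internal workflow schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"          # triggers review by the responsible-AI council


@dataclass
class AIProjectRecord:
    name: str
    owner: str
    use_case: str
    data_sensitivity: str  # e.g. "public", "confidential", "highly confidential"
    risk_tier: RiskTier
    registered_on: date = field(default_factory=date.today)
    review_notes: list[str] = field(default_factory=list)

    def needs_escalation(self) -> bool:
        """High-risk projects, or any project touching highly confidential data,
        are routed to the central responsible-AI function."""
        return (self.risk_tier is RiskTier.HIGH
                or self.data_sensitivity == "highly confidential")


# Every project goes through the same registry, which gives portfolio-wide visibility.
registry: list[AIProjectRecord] = [
    AIProjectRecord("hr-screening-copilot", "hr-team", "CV triage",
                    "highly confidential", RiskTier.MEDIUM),
]
for record in registry:
    if record.needs_escalation():
        print(f"Escalate {record.name} to the responsible-AI council")
```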

5. Culture and process design matter more than tooling

The article makes a point that resonates strongly with anyone who has worked in governance: the tools support the work, but they do not define it. If you don’t have:
- clear expectations  
- shared language  
- leadership commitment  
- a culture that values scrutiny  

no tool will save you. Responsible AI succeeds when the organisation behaves as if it matters — not when it installs a dashboard.

There are some actionable steps organisations can take to build their own responsible AI capability. These are the practical takeaways that any organisation can adopt immediately.

1. Start with a written standard
Define what “good” looks like. Set mandatory requirements. Clarify what triggers deeper review. This becomes your anchor.

2. Build a network of responsible AI practitioners. Identify people with the right instincts: governance‑minded, risk‑aware, delivery‑literate. Train them and empower them.

3. Design the assessment process before you build tooling. Clarify the workflow:  
- What must every project declare?  
- Who reviews what?  
- How are risks escalated?  

Only then should you build or buy tools.

4. Integrate responsible AI checkpoints into delivery. Move away from late‑stage reviews. Embed assessments into initiation, design, and release readiness.

5. Treat bias detection and data quality as non‑negotiable. Bias is rarely intentional; it is inherited. Build structured checks into your evaluation pipeline.
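
As a concrete illustration of a structured check, the sketch below computes a simple demographic parity gap on a batch of model decisions. The column names and the 10‑point threshold are assumptions for illustration; a real evaluation pipeline would use your own outcome data and fairness metrics.

```python
# Minimal sketch of a structured bias check: demographic parity gap across groups.
# Column names and the 10-percentage-point threshold are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Approval (positive-outcome) rate per group.
rates = decisions.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
if parity_gap > 0.10:          # flag for review; the threshold is policy-specific
    print("Gap exceeds threshold - route to bias review before release")
```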

6. Assign responsibility for monitoring regulatory change. Someone needs to track global AI regulation and translate it into internal practice. This prevents compliance surprises.

7. Use the open resources already available
Microsoft’s Responsible AI Toolbox, Human‑AI Experience guidance, and impact‑assessment templates provide a strong foundation. Use them to accelerate maturity.

Responsible AI is not about slowing innovation. It is about enabling it safely, predictably, and sustainably.  The organisations that will thrive in the next decade are those that treat responsible AI as a discipline with structure, clarity, and accountability, rather than a slogan.

Read more here.

Thursday, 30 April 2026

Data Governance explained

I had a fun‑packed day in Manchester a few weeks ago talking on my favourite topics: Data Governance, AI Governance and Microsoft Purview. Watch my recording here to help you get started.



Sunday, 26 April 2026

Inside Microsoft’s Responsible AI Framework: What Matters for Data Governance

Microsoft’s updated Responsible AI framework represents a significant evolution in how organisations are expected to approach AI oversight. While the principles themselves (fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability) are familiar, the operational expectations behind them have deepened. This isn’t a philosophical document; it’s a practical guide for embedding responsibility into the lifecycle of AI systems.

For data governance leaders, the most important shift is the emphasis on traceability. The framework makes it clear that organisations must be able to explain how data flows into models, how those models behave, and how decisions are made. This requires robust lineage, versioning, and monitoring. Without these, transparency becomes impossible.

Another critical element is human oversight. The framework reinforces that AI should augment, not replace, human judgement. This means governance must ensure that humans remain in the loop for high‑impact decisions, and that they have the context needed to interpret model outputs. Oversight is not a checkbox; it is a design requirement.

The framework also highlights the importance of data quality and representativeness. Poor data leads to poor models, and poor models lead to poor outcomes. Governance must ensure that training data is accurate, relevant, and free from harmful bias. This is where stewardship, classification, and quality controls become essential.

Finally, the framework calls for ongoing monitoring, not one‑time validation. Models evolve, data changes, and risks shift. Governance must be continuous, adaptive, and embedded into operational workflows.

Tracing my career journey through my blog

I was looking at my blog stats this morning and was really interested to see the geographical spread. I started writing my blog in 2011 and it has been read 1.18 million times. I wanted to record the technical tips and technology advancements that were useful to me and might be of use to others. I started writing on SQL Server and the blog has migrated with me throughout my career, through architecture and my PhD research, and over the last few years I have been mostly writing on Data Governance, Microsoft Purview, AI Governance and Microsoft Fabric.


I asked Copilot to share some interesting thoughts about my journey for my blog and here is what it thought.

Your blog reveals a journey defined by intellectual curiosity, data‑driven leadership, and a distinctive narrative voice. Here are five evidence‑based, genuinely interesting aspects of that journey.

1. You frame data leadership as a process of unlearning, not just learning

Your post “What data leaders must unlearn to lead in the Age of AI” argues that modern leadership requires shedding outdated assumptions, such as believing governance slows innovation or that documentation equals understanding. This reframing positions you as a thought leader challenging entrenched industry norms. 

2. You consistently connect research, industry practice, and philosophical curiosity

Your blog tagline “Chaos, complexity, curiosity and database systems” captures a rare blend: deep technical expertise paired with a reflective, almost philosophical lens on data systems. This fusion shapes your writing style and differentiates your professional voice. 

3. You document the shift from AI experimentation to AI industrialisation with governance at the centre

In your coverage of the Gartner Data & Analytics Summit, you highlight how governance has moved from a compliance checkbox to the engine of AI ROI. This shows your role as an interpreter of industry change, translating large‑scale trends into practical insights for practitioners. 

4. Your journey is grounded in both academic achievement and community leadership

Across external references, you are consistently described as a Microsoft Data Platform MVP, a PhD researcher recognised with the AOUG Will Swann Award, and a founder/organiser of Data Toboggan. This positions your blog as the narrative thread connecting your academic, professional, and community contributions. 

5. Your posts reveal a long‑standing commitment to making governance practical, accessible, and embedded

Whether discussing AI oversight, lineage, behavioural metadata, or Purview governance models, your writing emphasises practical implementation over theory. You repeatedly advocate for governance that is embedded, automated, and literacy‑driven, showing a consistent philosophy across years of posts. 





Wednesday, 22 April 2026

SQLBits 2026 Day 1

SQLBits in Wales is happening this week. We have held the conference at the ICC before, so it is all very familiar. The keynote was introduced by Simon Sabin before it moved into an in-depth session on the future of Microsoft One SQL, from on-premises to Azure and into Microsoft Fabric. It delved into the unified, AI-ready relational database that powers modernization and next-gen AI apps. SQL delivers consistency, performance, and some innovative features. The keynote speakers were Bob Ward, Anna Hoffman, Priya Sathy and Shiva Gurumurthy.

Data and AI are changing the world, and data is the fuel that powers AI. Microsoft SQL is one consistent SQL for the era of AI. It is enterprise ready and has evolved over the decades into an industry-leading, scalable, dynamic platform with high availability and best-in-class price performance.

The keynote delved into migrate-and-modernize scenarios, the need for cloud-native AI apps and the need for unified data platforms.

Highlights of new features:

- Azure SQL Server Managed Instance GA
- SQL Server 2025 on Azure Virtual Machine GA
- Azure Accelerate for Databases announced: aka.ms/modernizedatabases
- Azure SQL Database Hyperscale GA: you only pay for cores and storage, with no license fee
- Mirroring from Microsoft SQL to Fabric GA
- SQL Database in Fabric GA
- Database Hub in Fabric announced, with fleet management, observability and database agents

The depth and breadth of SQL Server have grown substantially over the years, and it now supports many engine types for holistic use: graph, vector, columnar, document, spatial, key-value, hierarchical, in-memory and ledger.
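
As a taste of one of those engines, here is a rough sketch of using the vector capability from Python. It assumes the VECTOR type and VECTOR_DISTANCE syntax previewed for SQL Server 2025 / Azure SQL; the connection string, table and embeddings are placeholders, so check the current documentation before relying on it.

```python
# Rough sketch of the vector engine in SQL Server 2025 / Azure SQL, driven from Python.
# The VECTOR type and VECTOR_DISTANCE function follow the previewed syntax; the
# connection string, table and data are hypothetical - verify against current docs.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
                      "DATABASE=demo;Trusted_Connection=yes;")  # placeholder connection
conn.autocommit = True
cur = conn.cursor()

cur.execute("CREATE TABLE dbo.docs (id INT PRIMARY KEY, body NVARCHAR(MAX), embedding VECTOR(3));")
cur.execute("INSERT INTO dbo.docs VALUES (1, N'governance note', CAST('[0.1, 0.9, 0.0]' AS VECTOR(3)));")

# Nearest-neighbour style lookup: a smaller cosine distance means more similar.
cur.execute("""
    SELECT TOP (5) id, body,
           VECTOR_DISTANCE('cosine', embedding, CAST('[0.2, 0.8, 0.1]' AS VECTOR(3))) AS dist
    FROM dbo.docs
    ORDER BY dist;
""")
for row in cur.fetchall():
    print(row.id, row.body, row.dist)
```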

Many sessions today delved into SQL migrations in various forms. There was a fun session comparing Databricks and Fabric. There are many differences, and business needs and in-house technology-stack skills often influence the choice of technology.

The Azure SQL Database Hyperscale session explained that Hyperscale is about the architecture design, not the engine. It is a truly distributed, cloud-native architecture with boundless storage that grows automatically and elastic compute at two speeds. It uses SQL Server instances as caches.

More sessions for day 2 tomorrow. 




Saturday, 11 April 2026

GRAICE Foundation Training Principles of Responsible AI Governance

I am pleased to share that I have completed the GRAICE Foundation Training on the Principles of Responsible AI Governance and am certified for foundational competency in GRAICE, Humanity's Operating System for AI.

GRAICE is a robust governance operating system geared toward instilling confidence and accountability in AI systems on a global scale. It has 6 foundational values, 7 operational pillars and a 3-tier assurance model.



Wednesday, 8 April 2026

Operationalising Responsible AI: What Microsoft Purview Actually Enables and How to Use It Well

The conversation around Responsible AI is accelerating, but many organisations still struggle with the same practical gap: How do we turn principles into operational behaviour inside real systems?  
Frameworks like GRAICE™ and Microsoft’s Responsible AI Standard set the expectations,  but they don’t tell you how to wire those expectations into your data estate.

This is where Microsoft Purview plays a meaningful, but often misunderstood, role. Purview is not an end‑to‑end Responsible AI lifecycle platform. It doesn’t manage model development, evaluation, or fairness testing. What it does provide is the governance and security foundation that ensures AI systems interact with enterprise data safely, consistently, and in line with organisational policy.

Below are three actionable ways organisations can use Purview to strengthen Responsible AI practice without overstating its scope.

1. Use Purview to establish data boundaries for AI systems
AI systems are only as responsible as the data they can see. Purview’s classification, sensitivity labels, and access policies give organisations the ability to:

- identify sensitive or regulated data  
- prevent AI systems (including Copilot and internal agents) from accessing inappropriate content  
- enforce information barriers and least‑privilege access  
- ensure data minimisation by design  

Why this matters:  
GRAICE™ and Microsoft’s RAI Standard both emphasise data minimisation, privacy, and controlled access. Purview doesn’t enforce RAI principles directly — but it does enforce the data boundaries those principles depend on.

Action:  
Map your AI use cases to Purview sensitivity labels and access policies. Treat this as a precondition for deploying any AI capability.
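
A lightweight way to start that mapping is to capture it as a reviewable artefact. The sketch below is illustrative only: the label names, use cases and allow‑list are assumptions, and actual enforcement happens through Purview sensitivity labels, DLP and access policies rather than application code.

```python
# Minimal sketch of mapping AI use cases to data boundaries before deployment.
# Label names, use cases and the allow-list are illustrative assumptions; enforcement
# itself happens in Purview (sensitivity labels, DLP and access policies), not here.
ALLOWED_LABELS_BY_USE_CASE = {
    "customer-support-copilot": {"Public", "General"},
    "finance-forecasting-agent": {"Public", "General", "Confidential"},
}

def can_ground_on(use_case: str, label: str) -> bool:
    """Return True if data with this sensitivity label may ground the given AI use case."""
    return label in ALLOWED_LABELS_BY_USE_CASE.get(use_case, set())

# Precondition check during design review, before any Purview policy is configured.
assert not can_ground_on("customer-support-copilot", "Highly Confidential")
print(can_ground_on("finance-forecasting-agent", "Confidential"))  # True
```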

2. Use Purview’s lineage and scanning to understand AI‑related data risk
Purview lineage is often misunderstood as “AI lifecycle traceability”. It isn’t.  
But it is a powerful mechanism for:

- understanding where sensitive data originates  
- seeing how data flows across systems AI may interact with  
- identifying shadow data sources that could introduce risk  
- supporting DSPM (Data Security Posture Management) for AI workloads  

Why this matters:  
Responsible AI requires organisations to understand the provenance, quality, and risk profile of the data AI systems rely on. Purview provides visibility into the data estate, not the model estate — and that visibility is essential for any RAI programme.

Action:  
Enable automated scanning and lineage for all data sources used by AI applications. Use lineage to identify high‑risk flows before enabling AI access.
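
For those who want to explore lineage programmatically, here is a rough sketch that pulls lineage for a single asset from the Purview Data Map REST API (the Atlas‑style endpoint). The account name and asset GUID are placeholders, and the endpoint and parameters should be verified against the current Purview API reference.

```python
# Rough sketch: pulling lineage for one asset from the Purview Data Map REST API
# (Atlas-style endpoint). Account name and asset GUID are placeholders, and the
# endpoint/parameters should be verified against the current Purview API reference.
import requests
from azure.identity import DefaultAzureCredential

ACCOUNT = "contoso-purview"                          # hypothetical account name
ASSET_GUID = "00000000-0000-0000-0000-000000000000"  # hypothetical asset GUID

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token
resp = requests.get(
    f"https://{ACCOUNT}.purview.azure.com/catalog/api/atlas/v2/lineage/{ASSET_GUID}",
    params={"direction": "BOTH", "depth": 3},
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
lineage = resp.json()

# Walk upstream/downstream entities to spot sensitive sources feeding AI workloads.
for guid, entity in lineage.get("guidEntityMap", {}).items():
    print(guid, entity.get("typeName"), entity.get("displayText"))
```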

3. Use Purview’s AI usage governance to monitor and control how AI behaves with your data
The newest Purview capabilities focus on AI usage governance — including Copilot and internal AI agents. This includes:

- monitoring AI interactions with sensitive data  
- detecting risky prompts or behaviours  
- applying data‑loss prevention controls to AI usage  
- generating audit trails for compliance and oversight  

Why this matters:  
Responsible AI is not just about how models are built — it’s about how they are used. Purview provides the observability and guardrails needed to ensure AI systems behave safely in production.

Action:  
Enable Purview’s AI usage governance features for all enterprise AI tools. Treat AI usage logs as part of your RAI assurance evidence.
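
As a simple illustration of treating usage logs as assurance evidence, the sketch below triages an exported batch of AI‑usage audit records. The record structure is a hypothetical export format, not Purview’s schema; the real signals come from Purview’s audit and DSPM‑for‑AI reporting.

```python
# Illustrative sketch only: triaging exported AI-usage audit records as RAI evidence.
# The record structure below is a hypothetical export format, not Purview's schema.
import json

SENSITIVE_LABELS = {"Confidential", "Highly Confidential"}

with open("ai_usage_audit_export.json") as f:   # hypothetical export file
    records = json.load(f)

flagged = [
    r for r in records
    if r.get("app") == "Copilot" and r.get("sensitivity_label") in SENSITIVE_LABELS
]

for r in flagged:
    print(r["timestamp"], r["user"], r["sensitivity_label"], r.get("action"))
print(f"{len(flagged)} interactions with sensitive data to review this period")
```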

In summary, Purview does not operationalise Responsible AI on its own — and it shouldn’t be positioned as a lifecycle governance platform.  
What it does provide is the data governance, security, and AI‑usage oversight that Responsible AI frameworks rely on.

If you use Purview to:

1. Set data boundaries for AI  
2. Understand data risk and provenance  
3. Monitor and govern AI usage  

you create the conditions in which Responsible AI can actually function.


Why Data Catalogues Fail (And How Purview Is Quietly Fixing the Industry’s Blind Spots)

Most data catalogues fail for a simple reason: they assume that documentation alone creates understanding. It doesn’t. A catalogue full of stale metadata, incomplete lineage, and inconsistent tagging is worse than useless: it creates a false sense of confidence. Many organisations have learned this the hard way, investing heavily in catalogues that quickly became digital graveyards.

Purview succeeds where others fail because it treats the catalogue as part of a governance ecosystem, not a standalone tool. Lineage, classification, access policies, and data maps are not optional extras. They are the core of the experience. This integrated approach ensures that metadata is accurate, automated, and actionable.

Another blind spot Purview addresses is operational relevance. Traditional catalogues focus on documentation, whereas Purview focuses on control. It doesn’t just describe data; it also governs it. This shift from passive to active metadata is what makes Purview viable at enterprise scale.

Purview also excels in hybrid and multi‑cloud environments, where many catalogues struggle. Its connectors, scanning capabilities, and policy enforcement mechanisms are designed for real‑world estates, not idealised architectures.

Purview is integrated with Fabric, which positions it as the governance backbone of the Microsoft ecosystem. As organisations consolidate their data platforms, Purview becomes the source of truth that ties everything together.



Saturday, 4 April 2026

GCRAI and the Rise of GRAICE™: A New Global Framework for Responsible AI Governance

The global conversation around responsible AI has been dominated for years by national strategies, corporate principles, and academic frameworks. But the launch of the Global Council for Responsible AI (GCRAI) and its GRAICE™ framework marks a shift toward something far more ambitious: a unified, cross‑sector, cross‑industry operating system for AI governance. Unlike many initiatives that focus on high‑level ethics, GCRAI positions itself as a mechanism for operationalising responsibility at scale. It’s an attempt to move responsible AI from aspiration to enforceable practice.

What makes GCRAI notable is its global footprint. With representation across dozens of countries and a network of ambassadors, it aims to create a governance ecosystem that transcends borders and industries. This matters because AI risk is not localised. Models trained in one region influence decisions in another. Data flows across jurisdictions. And the consequences of AI misuse rarely stay within organisational boundaries. A global framework is not just desirable, it is necessary.

The GRAICE™ framework, unveiled at Davos, is positioned as “humanity’s operating system for AI.” While the branding is bold, the intent is clear: create a standard that is actionable, measurable, and adaptable. GRAICE™ focuses on transparency, security, accountability, and human‑centric design. But what sets it apart is its emphasis on measurable compliance. Many frameworks articulate principles; GRAICE™ attempts to define behaviours. It seeks to bridge the gap between what organisations say about AI and what they actually do.

Running alongside GCRAI is the G.R.A.C.E. Global Council for AI, which articulates a complementary set of principles centred on human‑centred AI. Their pillars emphasise mission, vision, and the balance between technology, ethics, and humanity. While still evolving, the G.R.A.C.E. principles reinforce the idea that responsible AI is not just a technical discipline but a societal one. They highlight the need for AI systems that enhance human capability rather than diminish it, and for governance that protects people as much as it protects organisations.

Together, GCRAI and G.R.A.C.E. represent a growing recognition that responsible AI cannot be solved by isolated efforts. Organisations need frameworks that are interoperable, globally recognised, and grounded in real‑world practice. They need standards that can be implemented, audited, and adapted as technology evolves. And they need governance models that reflect the complexity of modern AI systems: systems that learn continuously, behave unpredictably, and operate across boundaries.

For data and AI leaders, the emergence of GRAICE™ is a signal. The era of voluntary, principle‑only responsible AI is ending. The next phase is about operationalisation, measurement, and accountability. Whether organisations adopt GRAICE™ directly or use it as a benchmark, its influence will shape how responsible AI is defined, governed, and enforced in the years ahead. This is not just another framework but part of a global shift toward responsible AI as a shared, enforceable standard.

G.R.A.C.E. is
GROUNDED
RESPONSIBLE
AUTHENTIC
COMPASSION
ETHICAL

Every decision involving AI should align with moral truth, respect for life, and integrity of purpose through moral alignment.




https://www.graceglobalcouncil.com/
https://gcrai.ai/

Wednesday, 1 April 2026

How Responsible AI frameworks shape the future of AI Governance

The responsible AI landscape is shifting fast. Organisations are no longer looking for a single framework to rule them all; they’re looking for interoperability, clarity, and practical pathways to operational maturity. Two frameworks are increasingly shaping that conversation: GRAICE™, the new global framework from the Global Council for Responsible AI (GCRAI), and Microsoft’s Responsible AI Standard, one of the most established engineering‑level governance standards in the industry.

These frameworks are often discussed in the same breath, but they operate at different layers of the governance stack. Understanding that distinction is essential — because it’s precisely what makes them complementary rather than competitive.

GRAICE™: A Global Meta‑Framework for Cross‑Sector Alignment

GRAICE™ is designed as a global, cross‑sector framework. Its purpose is not to replace organisational or vendor standards, but to provide:

- a shared global vocabulary for responsible AI  
- a principles‑level structure that governments, industry, academia, and civil society can align to  
- a meta‑framework that organisations can map their internal standards against  
- a societal‑level lens that sits above implementation detail  

GRAICE™ is intentionally broad. It sets direction, coherence, and expectations at a global level — the “north star” rather than the engineering manual.

Microsoft’s Responsible AI Standard: Operational Discipline for Real Systems

Microsoft’s Responsible AI Standard sits at a different layer: the practical, engineering‑focused layer where teams build, evaluate, deploy, and monitor AI systems.

It provides:

- detailed lifecycle requirements  
- controls for data, evaluation, transparency, and oversight  
- guidance for product teams and engineering functions  
- mechanisms for translating principles into day‑to‑day practice  

Where GRAICE™ is global and principle‑driven, Microsoft’s standard is specific, actionable, and operational.

Complementary by Design

This is the critical point:  
GRAICE™ does not replace Microsoft’s Responsible AI Standard — or any other organisational framework.

Instead, the two frameworks operate in a layered model:

- GRAICE™ → global alignment, societal expectations, cross‑sector coherence  
- Microsoft RAI Standard → engineering discipline, implementation controls, operational maturity  

Together, they create a governance ecosystem that is:

- globally relevant  
- locally actionable  
- technically grounded  
- aligned with societal expectations  

This layered approach reflects where responsible AI is heading: ecosystems of interoperable frameworks, not a single universal standard.

Where They Converge

Despite their different scopes, both frameworks reinforce core responsible AI expectations:

- transparency as a foundation for trust  
- accountability and human oversight  
- continuous monitoring of evolving systems  
- responsible AI as an ongoing operational commitment  

These shared foundations show a field moving toward coherence, even when frameworks serve different purposes.

The Real Opportunity: Use Them Together

For organisations, the value lies in the combination:

- GRAICE™ provides the global direction and cross‑sector alignment.  
- Microsoft’s Responsible AI Standard provides the operational machinery to implement responsible AI in real systems.  

Using both gives organisations a governance model that is both strategically aligned and practically executable — exactly what mature AI governance requires.


Saturday, 28 March 2026

Series Index Summary: Data Governance, Purview, and Responsible AI

This four‑month series explores the shifting landscape of data governance, Microsoft Purview, and Responsible AI at a moment when organizations are being forced to rethink how they manage, understand, and trust their data. Across the posts, the series traces a clear arc: from the maturing of governance in 2025, through the practical realities of Purview adoption, to the cultural and architectural shifts required to lead in the age of AI.

The December posts set the stage by examining why governance finally became a strategic priority, how Purview’s quieter updates are reshaping the platform, and why AI risks making organizations intellectually complacent without strong data foundations. These pieces frame governance not as bureaucracy, but as the mechanism that makes innovation safe.

January moves deeper into strategy and Responsible AI. It explores the predictions shaping 2026, the operational implications of Microsoft’s updated Responsible AI framework, and the evolution of Purview’s classification engine. The AI Is Making Us Dumber series continues here, highlighting the risks of over‑automation and the importance of maintaining human understanding.

February shifts into technical depth and organizational reality. It covers SQL Server’s new direction, the strategic value of metadata, and a detailed breakdown of Purview’s February feature updates. The month closes with reflections on why organizations struggle to operationalize policy and how governance must adapt to keep pace with rapidly learning AI systems.

March brings the series to a forward‑looking conclusion. It introduces the concept of contextual governance, examines the architectural convergence of Fabric and Purview, and challenges data leaders to unlearn outdated assumptions. These posts emphasize that leadership in the AI era requires adaptability, transparency, and a willingness to rethink long‑held beliefs.

Together, these posts form a cohesive narrative about where data governance is heading, what Purview is becoming, and how organizations can navigate the accelerating complexity of AI‑driven data estates. I wanted to add clarity in a landscape full of noise and understand that governance is no longer optional, but foundational.

Wednesday, 25 March 2026

Unifying the Data Estate for the next AI Frontier Fabcon Keynote

The Atlanta FabCon keynote was delivered last Wednesday by Amir Netz (CTO and Technical Fellow), Arun Ulag (President, Azure Data) and Shireesh Thota (Corporate Vice President, Azure Databases). It was recorded and you can watch it here.

Session Abstract

As organizations race to deploy generative and agentic AI, the biggest challenge they face is not models, it’s their data estate. Join Microsoft engineering leadership to learn how Microsoft’s databases can be unified through Microsoft Fabric and OneLake, creating a single, governed foundation for analytics, AI, and intelligent agents. Discover why this shift represents a fundamental change in how modern data platforms are built, managed, and scaled for the next AI frontier.

A summary of the announcements.





Sunday, 22 March 2026

What Data Leaders Must Unlearn to Lead in the Age of AI

The hardest part of leading in the AI era isn’t learning new skills; it is unlearning old assumptions. Many of the beliefs that shaped data leadership over the past decade no longer apply. The pace of change, the complexity of modern estates, and the unpredictability of AI systems demand a different mindset. Leaders must be willing to let go of outdated models of control, certainty, and hierarchy.

One of the first assumptions to unlearn is that governance slows innovation. In reality, governance accelerates innovation by reducing risk, increasing clarity, and enabling responsible experimentation. When governance is embedded rather than imposed, it becomes a catalyst rather than a constraint. Leaders who cling to the old narrative will find themselves outpaced by those who embrace governance as a strategic enabler.

Another assumption to unlearn is that documentation equals understanding. In the AI era, understanding comes from lineage, monitoring, and behavioural metadata, not static documents. Leaders must shift from documenting after the fact to embedding governance into the system itself. This requires investment in tooling, automation, and literacy.

Leaders must also unlearn the idea that AI systems can be trusted without oversight. AI is probabilistic, not deterministic. It requires continuous monitoring, not one‑time validation. The organisations that thrive will be those that treat AI as a dynamic system requiring ongoing governance, not a product that can be finished.

Finally, leaders must unlearn the belief that expertise is static. In the AI era, expertise evolves. The best leaders will be those who remain curious, adaptable, and willing to challenge their own assumptions. Unlearning is not a weakness but a leadership skill.



Friday, 20 March 2026

Navigate AI on Your Data & Analytics Journey to Value - Gartner 2026

The Gartner Data & Analytics Summit (March 9–11, 2026, in Orlando) marked a significant shift from AI experimentation to AI industrialization. My post focuses on how governance is no longer a check-the-box activity but the literal engine for AI ROI.

Here are some collated highlights that interested me.

1. The Core Keynote: Beyond the Hype to ROI

Analysts Adam Ronthal and Georgia O’Callaghan opened the summit by challenging the "move fast and break things" mentality. They argued that while AI is accelerating, success belongs to those who find a thoughtful approach to speed and direction.

Gartner emphasized that AI adoption follows an S-curve: a slow start, rapid acceleration, then stabilization. We are currently on the steep upward slope. Organizations that don't integrate governance now will face expensive catch-up efforts that turn AI from an asset into a liability.

Gartner categorized firms into three types: AI-First (aggressive), AI-Opportunistic (fast followers), and AI-Cautious (waiting for stability). They noted that regardless of the path, doing nothing is no longer an option.

2. Data Governance: The Move to Adaptive & Autonomous

A major takeaway was that traditional, manual data governance is dead. It cannot keep up with the volume and velocity of AI-driven data.

Gartner introduced the concept of Outcome-Based Governance. Instead of governing all data equally, teams should focus on high-value data products that directly impact AI outcomes.

A new AI-Ready Data Framework focuses on three pillars:

- Alignment: Ensuring data semantics and lineage are clear.
- Qualification: Continuous data quality validation for model training.
- Governance: Enforcing policies during the AI lifecycle.

The Rise of Governance Agents: A top 2026 prediction is that D&A leaders will begin using Data Governance Agents to automate the negotiation and orchestration of data pipelines.

3. AI Governance: Bridging the Trust Gap

The summit highlighted a looming crisis where 60% of organizations are predicted to fail at realizing AI value due to poor integration between data and AI governance.

Gartner warned against Registry-First Governance. Simply listing your AI models in a spreadsheet isn't enough. They called for Continuous Code-to-Cloud Visibility, where governance monitors data as it flows through APIs and AI agents in real-time.

A buzzword at the conference was the Unified Context Layer. To govern AI effectively, you need a layer that connects business meaning to raw data. This allows AI agents to act reliably because they understand the why and how, not just the what.

Gartner predicts spending on AI governance platforms will reach $492M in 2026, doubling to $1B by 2030, as companies realize that compliance is a trust dividend rather than a tax.

4. Responsible AI: Ethics as an Operational Metric

Responsible AI (RAI) moved from a philosophical discussion to a technical requirement.

Gartner warned that critical failures in managing synthetic data (used to train models when real data is scarce) are a major risk to AI governance. Without metadata tracking the lineage of synthetic data, models risk hallucination loops.

The keynote suggested that data organizations are being reshaped into fusion teams where humans and AI agents work together. Responsible AI here means defining clear boundaries of AI involvement in decision-making.

As we move toward Agentic AI (autonomous agents that can take actions), Gartner highlighted the need for explicit transparency capabilities with the ability to audit why an agent made a specific decision in real-time.

In summary, Gartner predicts that by 2027, organizations that emphasize AI literacy for executives will achieve 20% higher financial performance than those that do not (Gartner, March 2026). In 2026, AI strategy and data strategy have become inseparable: you cannot scale the former without governing the latter.

Safeguarding the AI Frontier with Microsoft Purview & Fabric Innovations

The speed of AI transformation is accelerating. However, for many organizations, that speed is throttled by a critical concern: Data Governance. At the Microsoft Fabric Community Conference this week, Microsoft unveiled a suite of innovations designed to bridge the gap between rapid AI adoption and robust data security. By deepening the integration between Microsoft Purview and Microsoft Fabric, they are providing a secure-by-design foundation for the AI era.
Here is a breakdown of the major announcements.

Data Security: From Protection to Prevention

In an AI-driven world, data oversharing is a primary risk. Microsoft is addressing this by extending Purview’s sophisticated security controls directly into the Fabric ecosystem.

Information Protection Policies: Security admins can now define policies in Purview that automatically enforce access permissions based on sensitivity labels. If a file is labeled Highly Confidential, Fabric respects those boundaries automatically.
 
Data Loss Prevention (DLP) for Fabric is now in preview. DLP policies can identify sensitive information (like SSNs or credit card numbers) as it is uploaded to Fabric. This allows for automatic risk remediation, preventing data leaks before they happen.
 
Trusted Workspace Access allows for secure connections to data sources behind firewalls, ensuring that OneLake remains a secure environment even when pulling from complex network topologies.

Risk Management: Visibility into the Human Element

Governance isn't just about locking down files; it is about understanding user behavior.
 
Purview Insider Risk Management integration is one of the most significant announcements for Fabric. It allows organizations to detect, investigate, and act on potentially malicious or inadvertent activities, such as mass data downloads or unauthorized sharing, directly within the Fabric environment.

Data Discovery & Curation: The Reimagined Governance Experience

Microsoft is moving away from the policing model of governance toward an enabling model. They’ve introduced a reimagined governance experience that is business-friendly and federated.

Unified Catalog & Metadata: Fabric’s built-in metadata and lineage are now seamlessly reflected in Purview. This gives a single pane of glass view across your entire multi-cloud estate, making it easier for users to find the data they need while ensuring it meets quality standards.

Item Tagging is a new feature allowing users to add tags to Fabric items. This significantly enhances discoverability and encourages the reuse of high-quality data assets across the organization.

AI Readiness: Building on a Trusted Foundation

The ultimate goal of these updates is AI Transformation. You cannot have a reliable Copilot if it is grounded in unverified or ungoverned data. By automating discovery, classification, and protection, Microsoft Purview ensures that the data fueling your AI models is:
- Accurate: Through better curation and quality checks.
- Compliant: Adhering to regional and industry regulations.
- Secure: Only accessible to the right people (and the right AI agents).

Final Thoughts

The announcements from FabCon 2026 signal a shift. Data governance is no longer a hurdle to be cleared. It is the engine that allows AI to run safely. For those of us managing data estates, the tighter synergy between Purview and Fabric offers a clear roadmap to innovate with confidence.


Wednesday, 18 March 2026

FabCon and SQLCon 2026

It is the third FabCon event and the first ever SQLCon in Atlanta this week. This week Microsoft didn't just make a small change; they made a larger shift. The convergence of where data lives and what data does has been the holy grail of database management, and the biggest hurdle to AI isn't the model itself; it is data ingest quality and the fragmentation of the estate. The announcements from the conference help address the current chaos and complexity that exists. This is my take on the key shifts that will matter most when navigating the database landscape.

The Single Pane of Glass Arrives in the Database Hub
For years, we have managed our estates in silos: Azure SQL over here, Cosmos DB over there, and SQL Server on-premises (hopefully via Azure Arc) somewhere else. The new Database Hub in Microsoft Fabric (now in early access) is a game-changer for governance. It provides a unified view to explore and optimize the entire estate. But the real interest is the agent-assisted management, using intelligent agents to reason over signals and explain why something changed. It keeps the human in the loop but removes the manual drudgery.

Microsoft IQ: The Semantic Layer for AI

One of the biggest announcements is how Fabric is becoming the intelligence layer for the enterprise.
  •  Fabric IQ brings together live business data.
  •  Work IQ pulls in productivity signals.
  •  Foundry IQ captures institutional knowledge.
This is critical because AI agents are only as good as the context they have. By creating a unified semantic meaning, we are finally moving away from hunting for data and toward activating data.

OneLake is Closing the Gap on Silos
The OneLake vision continues to expand with more native mirroring capabilities (SharePoint lists and Dremio are now in preview; Oracle and SAP Datasphere are GA).
The standout for me, however, is Shortcut transformations. The ability to shape data, like converting Excel to Delta tables, automatically as it connects to OneLake is a massive win for data quality. We know that without good quality data at the start, the AI journey hits a wall. These automated gatekeepers help ensure the lake doesn't become a swamp.

Mission Critical Apps with connected SQL and Fabric
With SQL Server 2025 growing faster than any previous version, the integration with Fabric is no longer a maybe. The announcements focused heavily on a converged platform that unifies transactional and analytical data.
For developers, the new Migration Assistant for SQL databases (using AI to resolve compatibility issues via DACPACs) is a pragmatic approach to modernization.

Beyond the hype, it is easy to get lost in agentic AI buzzwords. But looking at the technical roadmap from FabCon/SQLCon, the focus is clearly on Usability, Empowerment, and Security.

We are moving toward a world where the database takes a more active place in the business. Whether we are managing a legacy SQL estate or building a greenfield Fabric environment, the wall between our operational databases and analytics is getting lower.

 You can read more about the announcements: 


Sunday, 15 March 2026

Metadata Is Not Optional: The Strategic Value Organisations Still Undervalue

Metadata has always been the unglamorous backbone of data governance, but in 2026 it becomes a strategic asset. AI systems depend on it, automation relies on it, and governance collapses without it. Yet many organisations still treat metadata as an afterthought, something to be documented later, if at all. This mindset is becoming increasingly untenable.

Metadata, based on DAMA-DMBOK principles, is 'data about data': the descriptive information that defines, structures, and gives context to data so it can be understood, managed, and used effectively.

The rise of AI has exposed the consequences of weak metadata practices. When organisations cannot explain where data came from, how it has changed, or who has access to it, they cannot trust the outputs of their models. Metadata is the connective tissue that links data to meaning, context, and accountability. Without it, even the most sophisticated AI systems become brittle.

Metadata also plays a critical role in operational efficiency. Automated classification, lineage, and policy enforcement all depend on accurate metadata. When metadata is missing or inconsistent, governance becomes manual, slow, and error‑prone. When metadata is rich and reliable, governance becomes scalable.

The organisations that succeed in 2026 will be those that treat metadata as a first‑class citizen. This means investing in automation, stewardship, and tooling that captures metadata at the point of creation and not months later. Metadata is not optional. It is the foundation of trustworthy data and Responsible AI.
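
Here is a minimal sketch of what capturing metadata at the point of creation can look like in practice. The field names follow the DAMA‑style idea of descriptive context and are illustrative assumptions, not a specific catalogue schema.

```python
# Minimal sketch of capturing metadata at the point of creation rather than later.
# Field names are illustrative assumptions, not a specific catalogue schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DatasetMetadata:
    name: str
    owner: str
    source_system: str
    classification: str        # e.g. "Confidential"
    description: str
    created_at: str

def write_with_metadata(records: list[dict], meta: DatasetMetadata, path: str) -> None:
    """Write the data and its descriptive metadata side by side, in the same step."""
    with open(path, "w") as data_file:
        json.dump(records, data_file)
    with open(path + ".meta.json", "w") as meta_file:
        json.dump(asdict(meta), meta_file, indent=2)

write_with_metadata(
    [{"customer_id": 1, "region": "UK"}],
    DatasetMetadata("customers_uk", "data-office", "crm", "Confidential",
                    "UK customer master extract",
                    datetime.now(timezone.utc).isoformat()),
    "customers_uk.json",
)
```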

Inspirational STEM 1958-1968

From the moment I first understood the meaning of my mum Joan Holt’s school motto, “Be strong and very courageous”, I realised it wasn’t just a phrase she carried; it was a quiet force that shaped her life. Long before women in STEM were recognised or encouraged in the way they are today, she worked in a world of theoretical physics, numerical analysis, and early computing with a determination that still leaves me in awe. Hers was not the loud, celebrated courage of someone who set out to break barriers, but the steady, purposeful courage of someone who simply refused to accept that those barriers applied to her. With it being International Women’s Day last week, we celebrated the women who paved the way, and it reminds me that one of those pioneers was my mum.

Her career reads like a living history of British computing. Her first days took her from working on IBM mainframes and analysing data, when computers filled whole rooms and printouts were the size of phone books, to programming the Ferranti Mk1. She drafted manuals for the Elliott 503 and 4100, and solved problems with nothing but symbolic assembly code. She lived through the evolution of technology as few people did. She worked in rooms where magnetic tapes towered over her, where data meant punched cards and where a single mistake meant repunching a deck. She navigated machines that shook themselves off desks, deciphered the results of calculations that once took over 8 months, and wrote documentation that bridged engineers and the future operators. She was often the only woman in the room, one of only a handful among thousands of men, at just nineteen. She simply worked hard, proving herself indispensable through intelligence, persistence, and grace.

Today, on Mother’s Day, I think not only of the extraordinary work she did, but of the extraordinary woman she was. A role model who taught me that courage can be quiet, curiosity can be powerful, and that you can shape the world even if you never stand in the spotlight. While the world now celebrates women in STEM more visibly than ever, she lived those values when the path was far tougher and the recognition far thinner. Her achievements may sit in old manuals, early programs, and memories of rooms filled with tapes and valves, but her legacy is alive in me. I am proud beyond words to have been her daughter, and prouder still to share her personal story to help inspire future generations.





Saturday, 14 March 2026

Fabric, Purview, and the New Shape of Enterprise Data Architecture

Fabric has reshaped the Microsoft data landscape by unifying analytics, engineering, and storage into a single experience. But unification alone does not create coherence. The real transformation happens when Fabric is paired with Purview. Together, they form an architecture where data movement, governance, and analytics operate as one system rather than disconnected components.

This convergence matters because modern data estates are too complex to govern manually. Data flows across pipelines, notebooks, semantic models, and AI workloads. Without integrated governance, organisations end up with pockets of visibility rather than a complete picture. Purview provides the lineage, classification, and policy enforcement that Fabric alone cannot deliver.

One of the most powerful aspects of this integration is the alignment between data products and governance. Fabric encourages teams to think in terms of data products: curated, reusable assets with clear ownership. Purview reinforces this by providing the metadata, stewardship, and controls that make data products trustworthy. Governance becomes part of the product lifecycle, not an afterthought.

This new architecture also supports hybrid and multi‑cloud realities. Many organisations are not all‑in on Fabric, nor should they be. Purview’s ability to govern across environments ensures that Fabric becomes a strategic hub rather than a silo. The result is an architecture that is unified but not monolithic: it remains flexible.

As organisations modernise their estates, the combination of Fabric and Purview will become the default pattern. It is not just a technical alignment; it is a governance‑first architecture for the AI era.

It is FABCON and SQLCON in Atlanta March 16 - 20, 2026. 



Sunday, 8 March 2026

Purview and OneLake Govern tab change

The Purview Hub insights in Fabric have now moved to the OneLake catalog’s Govern tab. The change helps bring governance closer to where the data actually lives, rather than leaving it in a parallel experience that was never as helpful as it could have been. In the Govern tab, you now see the same posture summaries, recommended actions, and learning resources that were in Purview Hub, but framed within Fabric’s unified governance model. It is a cleaner, more coherent way of surfacing core information about the health of your data estate.

Functionally, the Govern tab now gives you a consolidated view of governance status, recommended actions, sensitivity and endorsement insights, and links into deeper governance tooling. You can drill into items that need attention, track improvements over time, and understand how your organisation is using Fabric’s governance features. The experience also ties directly into the OneLake catalog, so governance isn’t an afterthought. It is embedded in the same place you explore, classify, and manage data assets.

Microsoft hasn’t yet published a formal retirement date for the Purview Hub, but Fabric is now presenting a single, coherent story about how organisations should understand and manage their data estate.






You can learn more about it here.

Friday, 6 March 2026

Why Strategic Leaders are Pivoting to Contextual Governance

For decades, data governance has been treated as a static discipline with a set of rigid policies laid out in formal frameworks and applied uniformly across the enterprise. But in an era defined by decentralized architectures and the breakneck speed of AI adoption, this one-size-fits-all approach is inefficient and has the potential to increase business risk.

The mismatch between static governance and dynamic data estates is the primary reason why many digital transformation projects stall. It is time to move toward Contextual Governance.

The Governance Friction Paradox

Traditional governance models fail because they are binary. They treat data as a fixed asset rather than a fluid utility. This creates a paradox:

  • Over-governance: Smothering low-risk innovation with unnecessary red tape.

  • Under-governance: Missing the subtle, high-risk nuances of how data is actually used in the wild.

Static rules rely on metadata labels that are often outdated the moment they are applied. Contextual governance, however, shifts the focus from what the data is to how the data is behaving.

What is Contextual Governance?

Contextual governance is a move from policing to orchestration. It is an adaptive framework that evaluates risk in real-time based on the intersection of three pillars:

  1. The Actor: Who is accessing the data, and what is their historical behaviour?

  2. The Environment: Where is the data flowing? Is it a sandboxed R&D environment or a customer-facing LLM?

  3. The Intent: Is the data being used for a routine report, or is it being fed into a model that could leak proprietary logic?

The Strategic Shift: We are moving from asking, Is this data protected? to asking, Is this data protected enough for this specific moment?
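
To show what that shift can look like in practice, here is a minimal sketch of a contextual risk decision across the three pillars. The scores, weights and threshold are illustrative assumptions; a real implementation would draw these signals from live telemetry and policy.

```python
# Minimal sketch of a contextual, real-time risk decision across the three pillars
# (actor, environment, intent). Scores, weights and the threshold are illustrative
# assumptions; a real implementation would draw these signals from live telemetry.
from dataclasses import dataclass

@dataclass
class AccessContext:
    actor_risk: float        # 0 = trusted history, 1 = anomalous behaviour
    environment_risk: float  # 0 = sandboxed R&D, 1 = customer-facing LLM
    intent_risk: float       # 0 = routine report, 1 = feeds proprietary logic to a model

def decide(ctx: AccessContext, threshold: float = 0.6) -> str:
    """Blend the three pillars into a single decision for this specific moment."""
    score = 0.3 * ctx.actor_risk + 0.3 * ctx.environment_risk + 0.4 * ctx.intent_risk
    if score < threshold:
        return "allow"
    return "step-up review"   # tighten controls only where the context warrants it

print(decide(AccessContext(actor_risk=0.1, environment_risk=0.2, intent_risk=0.3)))  # allow
print(decide(AccessContext(actor_risk=0.7, environment_risk=0.9, intent_risk=0.8)))  # step-up review
```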

Beyond Compliance: The Competitive Edge

For the C-suite and strategic leads, contextual governance isn't just a compliance checkbox. It is a performance multiplier.

  • Agility at Scale: By automating the easy permissions and tightening controls where needed, risk is reduced and the bottlenecks that frustrate engineering and data science teams are removed.

  • AI Readiness: AI systems don't live in a vacuum. A model that is safe in a localized test may become dangerous when exposed to real-world edge cases. Contextual governance provides the guardrails necessary to deploy AI with confidence.

  • Intelligent Foundations: This shift forces a higher standard for metadata and lineage. You are not just mapping data; you are mapping the value stream of the entire organization.

The Path Forward

Transitioning to this model requires more than new software; it requires a cultural pivot. We must change how we view governance: from firm control to a central, intelligent system of the enterprise.

The future of data doesn't belong to those with the thickest rulebooks. It belongs to those who can govern at the speed of business.




Monday, 2 March 2026

Purview Announcements Round‑Up for February Features

February’s Purview updates delivered a series of enhancements that, while not flashy, significantly strengthen the platform’s ability to support enterprise‑scale governance. These updates reflect a clear pattern: Microsoft is investing in the operational realities of governance, not just the conceptual frameworks. The result is a more mature, more capable, and more integrated Purview experience.

One of the standout improvements is the expansion of lineage depth and clarity. Complex estates often struggle with lineage that is either too shallow to be useful or too dense to interpret. The new enhancements strike a better balance, offering more detail without overwhelming users. This is particularly valuable for organisations trying to trace data through Fabric pipelines or hybrid architectures.

Another important update is the introduction of more granular policy controls. Organisations increasingly need to apply nuanced rules across different environments, data types, and sensitivity levels. The new controls make it easier to enforce policies that reflect real‑world complexity rather than idealised models. This is governance that adapts to context, not governance that forces the business to adapt to the tool.

Purview also continues to strengthen its integration with Fabric. As more organisations consolidate their analytics and data engineering workloads into Fabric, the need for seamless governance becomes critical. The February updates improve the consistency of metadata flow between the platforms, reducing gaps and improving trust.

Finally, the enhancements to scanning and classification performance will make a noticeable difference for large estates. Faster scans, more accurate tagging, and improved connector reliability all contribute to a smoother governance experience.

Saturday, 28 February 2026

Microsoft Purview Data Governance – Interactive Experience

This interactive Storylane provides a guided walkthrough of Microsoft Purview Data Governance, showcasing how organizations can establish clarity, accountability, and trust across their data estate using a modern, federated approach. The experience demonstrates how Purview brings together data discovery, governance domains, data products, access workflows, data quality, and estate health into a single, coherent governance platform.

The walkthrough highlights how business and technical users can discover and understand data through the Unified Catalog, using familiar business language, lineage, and context rather than low‑level technical metadata. It shows how data is organized into business domains and data products, helping teams govern data at scale while maintaining clear ownership and accountability.

The Storylane also illustrates Purview’s end‑to‑end governance lifecycle — from requesting access to governed data, through to defining critical data elements, managing data quality rules, and monitoring data estate health. A key theme throughout is Purview’s federated governance model, enabling central oversight while empowering domain teams to own and manage their data within agreed standards and controls.

Overall, the experience positions Microsoft Purview as a system of record for data governance, supporting organizations as they move toward cloud, analytics, and AI by ensuring data is discoverable, trusted, compliant, and ready for reuse.



Watch the video here

https://purviewdatagovernance.storylane.io/share/kfdnjhua9hlh

Friday, 27 February 2026

The Governance Gap and Why Organisations Still Struggle to Operationalise Policy

Most organisations don’t have a policy problem; they have an operationalisation problem. Policies exist, but they’re not enforced, monitored, or embedded into workflows. Governance becomes a theoretical exercise rather than a practical one. Teams know what they should do, but the mechanisms to ensure they actually do it are missing.

This gap often emerges because governance is treated as documentation rather than behaviour. Policies are written in isolation, disconnected from the systems and processes they’re meant to govern. Without automation, policies rely on human discipline, and human discipline is inconsistent at best.

Microsoft Purview helps close this gap by making policy enforcement automatic and auditable. When classification, lineage, and access controls are integrated, policies become part of the system rather than an external expectation. This shifts governance from aspiration to execution.

But technology alone isn’t enough. Organisations need stewardship, accountability, and a culture that treats governance as part of delivery, not a hurdle to clear. Operationalising policy requires alignment across teams, clarity of ownership, and a commitment to continuous improvement.

The governance gap is not inevitable. It’s a symptom of misalignment. When organisations align policy, technology, and behaviour, governance becomes a strategic enabler rather than a compliance burden.