Welcome

Passionately curious about Data, Databases and Systems Complexity. Data is ubiquitous; the database universe is dichotomous (structured and unstructured), expanding and complex. Find my Database Research at SQLToolkit.co.uk. Microsoft Data Platform MVP

"The important thing is not to stop questioning. Curiosity has its own reason for existing" Einstein



Saturday, 9 May 2026

How Data Governance Frameworks Converge

From DAMA to ISO to the EU AI Act, it is important to understand how data governance frameworks converge and how Microsoft Purview operationalises them. Organizations rarely struggle because they lack frameworks. They struggle because frameworks remain theoretical while data, AI and regulation operate at scale.

DAMA‑DMBOK, ISO data governance standards, and the EU AI Act all address the same core problem from different angles:

  • DAMA defines what good data management looks like
  • ISO defines how governance should be assured and audited
  • The AI Act defines where governance becomes legally mandatory

Understanding where these overlap and how tooling like Microsoft Purview can operationalise them is now essential for any organization deploying analytics, automation, or AI in production.

DAMA‑DMBOK: The authoritative body of knowledge

DAMA‑DMBOK is a vendor‑neutral reference framework that defines data management as an enterprise capability, with Data Governance at its core. It establishes what must exist, without prescribing technology. [dama.org]

Key DAMA governance expectations

  • Ownership and accountability for data assets
  • Enterprise metadata and lineage
  • Data quality management
  • Security, privacy, and ethical data use
  • Stewardship and domain governance

Critically, DAMA positions metadata, lineage, and quality as foundational: the same elements now required by AI regulation and ISO assurance.

ISO standards: Governing data as an accountable asset

ISO standards translate governance principles into assurable controls.

Key standards relevant to data & AI governance

  • ISO/IEC 38505‑1: Governance of data within IT governance
  • ISO 8000: Data quality management
  • ISO/IEC 25642: Data collaboration and controlled data reuse

ISO explicitly frames data as a managed, governed organizational asset whose handling should balance value, risk, and compliance.

Where DAMA explains what to govern, ISO defines:

  • Who is accountable
  • How governance is monitored
  • How conformance is evidenced

This distinction becomes critical for regulatory audits.

The EU AI Act: Where governance becomes mandatory

The EU AI Act, particularly Article 10, legally mandates data governance for high‑risk AI systems. 

Article 10 explicitly requires:

  • Documented data sources and provenance
  • Training, validation, and test data quality controls
  • Bias detection and mitigation
  • Dataset representativeness and contextual relevance
  • Ongoing governance across the AI lifecycle

In effect, the AI Act codifies long‑standing DAMA and ISO principles into law. Non‑compliance now carries legal, financial, and reputational risk.

The EU AI Act is also being updated: EU leaders have agreed to amendments, and it is hoped the official regulation will be passed before 2 August 2026. A delay to the enforcement date for high-risk AI systems has also been shared, from 2 August 2026 to 2 December 2027 for AI systems listed in Annex III, and to 2 August 2028 for AI systems covered by Annex I.

Where the frameworks align

Governance Concern              | DAMA‑DMBOK | ISO          | EU AI Act
Data ownership & accountability | ✔          | ✔            | ✔
Metadata & lineage              | ✔          | ✔            | ✔ (Article 10)
Data quality management         | ✔          | ✔ (ISO 8000) | ✔
Bias & ethical use              | Emerging   | Partial      | ✔ Explicit
Audit & assurance               | Indirect   | ✔ Core       | ✔ Mandatory
Lifecycle governance            | ✔          | ✔            | ✔

This convergence means organizations no longer need separate governance programs; they need one operating model that satisfies all three.

Where Microsoft Purview fits

Microsoft Purview does not replace DAMA, ISO, or the AI Act. It operationalises them.

Purview provides:

  • Metadata capture and lineage at scale
  • Policy‑driven classification and protection
  • Evidence‑based compliance reporting
  • Continuous monitoring across data and AI usage

This allows governance teams to move from declared compliance to demonstrable control. DAMA tells you what good looks like. ISO tells auditors how you prove it. The AI Act tells regulators what you must do. The future of data governance is not choosing between these; it is designing one governance model that satisfies all three.

Friday, 8 May 2026

Data governance explained: Tools, pitfalls and how to get it right

Here are the key action points for establishing a successful data governance strategy, based on the video "Data governance explained: Tools, pitfalls and how to get it right".

Align with Business Strategy
  • Start with the "Why": Identify the specific business needs, such as compliance (GDPR, AI Act), fundraising, or medical records [04:18].
  • Treat Data as an Asset: Data governance must flow from the top of the organization down to the bottom, rather than being treated as a side project [02:21].
  • Focus on Use Cases: Define how data governance will help the business specifically, whether it's for better reporting, compliance, or making costly business decisions [03:33].

Establish Roles and Ownership
  • Identify Data Owners: Determine who is responsible for your "critical business assets." You cannot govern everything at once, so prioritize the most important data [10:24].
  • Appoint Data Stewards: Assign individuals to manage the day-to-day operational quality of the data within their specific departments (e.g., Finance, Product, Service teams) [10:38].
  • Create a Governance Model: Set up a central "meeting place" or committee where people from different parts of the business can talk and manage data issues together [09:44].

Build the Foundation (Before Tooling)
  • Develop a Business Glossary: Create clear, shared definitions for terms. Different departments often use the same words to mean different things, leading to confusion [05:18].
  • Assess Data Quality: Be honest about the current state of your data. Talk to team members to identify which data sets are trusted and which are "bad" [10:49].
  • Avoid "Boiling the Ocean": Don't try to govern all 100+ tools at once. Start small with 3-5 business-critical assets and scale from there [12:32].

Implement the Right Tools
  • Automate to Scale: Use tools like Microsoft Purview to scan data sets and automate processes that are too large to handle manually [02:54].
  • Bridge the Gap: Ensure the technical team (who deploys the tool) and the business users (who use the data) are not disconnected. The tool is only effective if the business processes are mapped into it [03:01].
  • Leverage Frameworks: Use established frameworks (like the CDMC framework) to guide your rollout and ensure you are meeting industry standards [11:49].

Foster a Data Culture
  • Prioritize Data Literacy: Invest in training so that employees understand the importance of data and how to manage it as part of their daily operations [08:19].
  • Be Proactive: Move away from "reactive" governance (only fixing things when they break or for audits) toward a proactive culture where governance is embedded in every project, especially AI [08:39].
Watch the full video here:




Sunday, 3 May 2026

Microsoft Purview: a Unified Platform

Modern organizations no longer struggle with a lack of data; they struggle with a lack of control, visibility, and trust in that data. Data now spans SaaS platforms, cloud analytics services, collaboration tools, AI systems, and on‑prem environments. At the same time, regulatory pressure, security risk, and AI‑driven data reuse continue to increase.

Microsoft Purview addresses this challenge by providing a single, integrated data governance, security, and compliance control plane across the enterprise. Rather than deploying disconnected tools for cataloguing, classification, protection, policy enforcement, investigation, and audit, Purview enables organizations to manage the entire data lifecycle consistently, from discovery and understanding, through protection and monitoring, to legal and regulatory response.

From an executive perspective, the value of Purview is not its individual features, but its ability to:

  • Reduce risk through centralised visibility
  • Enable scale through automation and policy‑driven controls
  • Support innovation and AI adoption without losing governance
  • Provide defensible evidence for regulators, auditors, and boards

Purview thus allows organizations to move faster with data, safely, using native tooling already embedded across Microsoft 365, Azure, Fabric, and the broader cloud estate. I wanted to share the current state of the tools, as there have been many changes over the last couple of years.

Microsoft Purview – Data Governance Tools

The purpose is to understand, trust, and responsibly reuse data across the enterprise. Microsoft Purview’s data governance capabilities focus on metadata, not the data itself. They provide a federated governance model that enables central standards while allowing data ownership to remain close to the business. These are core tools required for AI success.

Data Map

The Data Map scans and inventories data assets across Azure, Microsoft 365, on‑premises systems, and supported multi‑cloud platforms. It captures technical metadata, classifications, and relationships without copying underlying data. From a technical standpoint, the Data Map:

  • Maintains a continuously updated inventory of data assets
  • Supports automated classification during scan operations
  • Acts as the backbone for lineage, catalog, and insight services
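
As a rough illustration of how the Data Map can be driven programmatically, the sketch below triggers a run of an existing scan over the scanning data plane. It is a minimal sketch only: the account, data source and scan names are placeholders, and the endpoint path and api-version are assumptions that should be checked against the current Purview REST API reference.

```python
# Minimal sketch: trigger a run of an existing Purview scan definition.
# Account, data source and scan names are placeholders; the endpoint path and
# api-version are assumptions to verify against the Purview scanning API docs.
import uuid
import requests
from azure.identity import DefaultAzureCredential

PURVIEW_ACCOUNT = "contoso-purview"      # hypothetical Purview account name
DATA_SOURCE = "AdventureWorksSQL"        # hypothetical registered data source
SCAN_NAME = "WeeklyFullScan"             # hypothetical scan definition

# Token for the Purview data plane (works with managed identity, CLI login, etc.)
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

run_id = str(uuid.uuid4())
url = (
    f"https://{PURVIEW_ACCOUNT}.purview.azure.com/scan/datasources/"
    f"{DATA_SOURCE}/scans/{SCAN_NAME}/runs/{run_id}"
)

response = requests.put(
    url,
    params={"api-version": "2022-02-01-preview", "scanLevel": "Full"},  # assumed version
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()
print("Scan run submitted:", run_id)
```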

Unified Catalog

The Unified Catalog is the business‑facing layer of Purview data governance. It allows users to search, understand, and request access to data using business language rather than technical system names. Key technical capabilities include:

  • Metadata curation and endorsement workflows
  • Business glossary alignment
  • Ownership and stewardship assignment
  • Data quality and health indicators

The catalog does not grant data access itself; it integrates with platform security controls to ensure governance without breaking separation of duties.
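
To show what searching in business language can look like in practice, here is a minimal sketch against the catalog search endpoint. The path, api-version, keywords and response fields are assumptions to verify against the current Purview data plane documentation.

```python
# Minimal sketch: search the catalog with business terms rather than system names.
# Endpoint path, api-version and response shape are assumptions to confirm in the
# current Purview documentation.
import requests
from azure.identity import DefaultAzureCredential

PURVIEW_ACCOUNT = "contoso-purview"  # hypothetical account name

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

response = requests.post(
    f"https://{PURVIEW_ACCOUNT}.purview.azure.com/datamap/api/search/query",
    params={"api-version": "2023-09-01"},               # assumed version
    headers={"Authorization": f"Bearer {token}"},
    json={"keywords": "customer churn", "limit": 10},   # business-language query
    timeout=30,
)
response.raise_for_status()

# Print the matching assets with their fully qualified technical names.
for asset in response.json().get("value", []):
    print(asset.get("name"), "->", asset.get("qualifiedName"))
```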

Data Lineage

Purview lineage provides end‑to‑end visibility of data flows, showing how data moves from source systems through transformations to consumption layers such as analytics or AI models. Technically, this supports:

  • Impact analysis for change management
  • Root‑cause analysis for data quality issues
  • Explainability for analytics and AI outcomes
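
To make impact analysis tangible, the sketch below walks the lineage graph for a single asset using Purview's Apache Atlas-compatible lineage endpoint. The account name and asset GUID are placeholders (the GUID would normally come from a catalog search), and the path and parameters are assumptions to confirm in the documentation.

```python
# Minimal sketch: retrieve upstream/downstream lineage for one catalog asset.
# The GUID, endpoint path and parameters are illustrative assumptions.
import requests
from azure.identity import DefaultAzureCredential

PURVIEW_ACCOUNT = "contoso-purview"                    # hypothetical account name
ASSET_GUID = "00000000-0000-0000-0000-000000000000"    # placeholder GUID from a search result

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

lineage_url = (
    f"https://{PURVIEW_ACCOUNT}.purview.azure.com/catalog/api/atlas/v2/"
    f"lineage/{ASSET_GUID}"
)
response = requests.get(
    lineage_url,
    params={"direction": "BOTH", "depth": 3},  # walk three hops upstream and downstream
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()

lineage = response.json()
# guidEntityMap describes each node; relations describe the edges between them.
for relation in lineage.get("relations", []):
    print(relation.get("fromEntityId"), "->", relation.get("toEntityId"))
```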

Microsoft Purview – Data Security Tools

The purpose is to protect sensitive data dynamically, wherever it lives or moves. Microsoft Purview data security solutions are designed around the principle that data protection must follow the data, not rely solely on perimeter security.

Information Protection

Information Protection enables classification and protection through sensitivity labels that persist with the data. From a technical perspective:

  • Labels can trigger encryption, access restrictions, and visual markings
  • Labels are consistently enforced across Microsoft 365 services
  • Labels integrate downstream with DLP, Insider Risk, and eDiscovery

Sensitivity labels act as the policy anchor for most Purview controls.

Data Loss Prevention (DLP)

Purview DLP enforces policy‑based controls to prevent accidental or intentional leakage of sensitive data across:

  • Email and collaboration tools
  • Endpoints and browsers
  • Cloud applications and AI experiences

DLP evaluates content, user context, and activity in real time to determine policy actions.

Insider Risk Management

This capability correlates user behaviour, activity signals, and data sensitivity to identify potential internal risks. Technically, it:

  • Analyses sequences of risky actions rather than single events
  • Integrates with Information Protection and DLP signals
  • Supports adaptive policy enforcement

Data Security Posture Management (DSPM)

DSPM provides aggregated, AI‑driven visibility into data risk across the estate, including traditional workloads and AI applications. It enables:

  • Discovery of unknown or unmanaged sensitive data
  • Policy coverage gap analysis
  • Prioritised remediation recommendations

Microsoft Purview – Data Compliance Tools

The purpose is to meet legal, regulatory, and internal policy obligations with defensible controls. Purview’s compliance capabilities focus on evidence, monitoring, and response, rather than prevention alone.

Compliance Manager

Compliance Manager maps regulatory requirements (e.g. GDPR, ISO, industry standards) to technical and organizational controls. From a technical view:

  • Controls link to implemented Purview configurations
  • Evidence can be centrally tracked and reported
  • Progress scoring supports audit readiness

Audit

The unified audit log captures user and admin activities across Microsoft services, providing the foundation for investigations and compliance reporting. It supports:

  • Forensic investigation
  • Long‑term retention of activity records
  • Correlation with security and compliance incidents
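
For completeness, one common programmatic route to audit activity is the Office 365 Management Activity API; the sketch below lists audit content for a time window and prints the records. It assumes an app registration with the required permissions and an already started Audit.General subscription; tenant and app identifiers are placeholders, and the endpoints should be verified against the Microsoft documentation.

```python
# Minimal sketch: pull unified audit activity via the Office 365 Management
# Activity API. Assumes an app registration with the required API permissions
# and that the Audit.General subscription has already been started.
import requests
from azure.identity import ClientSecretCredential

TENANT_ID = "<tenant-guid>"        # placeholders for an app registration
CLIENT_ID = "<app-id>"
CLIENT_SECRET = "<secret>"

credential = ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
token = credential.get_token("https://manage.office.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

base = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"

# List available content blobs for a time window (typically limited to 24 hours).
listing = requests.get(
    f"{base}/subscriptions/content",
    params={
        "contentType": "Audit.General",
        "startTime": "2026-05-01T00:00:00",
        "endTime": "2026-05-01T23:59:59",
    },
    headers=headers,
    timeout=30,
)
listing.raise_for_status()

# Each blob is a batch of audit records; fetch and inspect them.
for blob in listing.json():
    records = requests.get(blob["contentUri"], headers=headers, timeout=30).json()
    for record in records:
        print(record.get("CreationTime"), record.get("Operation"), record.get("UserId"))
```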

eDiscovery (Standard & Premium)

eDiscovery enables legal teams to identify, preserve, collect, and review data associated with legal or internal investigations. Technically, it integrates:

  • Sensitivity labels and retention policies
  • Advanced search and review workflows
  • Role‑based access for legal operations

Records & Data Lifecycle Management

These tools manage data retention, deletion, and record declaration based on business, legal, and regulatory requirements. They ensure:

  • Defensible retention policies
  • Automated disposition
  • Reduced data sprawl and risk surface

Microsoft Purview is a data control framework that underpins modern analytics, AI, and digital transformation initiatives. When implemented correctly, Purview allows organizations to:

  • Govern data without slowing delivery
  • Secure data without blocking productivity
  • Prove compliance without manual evidence gathering

That combination of visibility, control, and defensibility at scale is why organizations choose an integrated platform rather than isolated tools. Microsoft documentation and architecture descriptions can be found at learn.microsoft.com.









Friday, 1 May 2026

Operationalising Responsible AI: What Microsoft’s Approach Reveals

Responsible AI has become one of those phrases that organisations like to reference but rarely operationalise. It appears in strategy decks, risk registers, and conference panels, yet the practical mechanisms that make it real are often missing.  

Microsoft’s recent article on its internal responsible‑AI approach is useful not because it offers something radically new, but because it demonstrates what it looks like when a large organisation treats responsible AI as a discipline rather than a marketing narrative.

Below are the core lessons worth thinking about, especially if you’re trying to move your organisation from aspiration to implementation.

1. Responsible AI is an organisational discipline, not a technical feature

The most important message is also the simplest: responsible AI only works when it is treated as a governing framework that shapes how AI is designed, deployed, and monitored. This is not a “nice to have”. It is not a late‑stage review. It is not a compliance tick‑box. It is a structural commitment that defines how decisions are made, how risks are surfaced, and how accountability is distributed. If your organisation is still treating responsible AI as a technical add‑on, it will not scale safely.

2. A central authority is essential for coherence

Microsoft’s Office of Responsible AI functions as a single point of truth. It sets policy, interprets standards, and ensures that teams are aligned.  This matters because without a central authority, governance fragments. Different teams make different assumptions. Risk becomes inconsistent. Decisions become harder to audit. A central function does not need to be large, but it does need to be authoritative. It needs the mandate to say “no”, “not yet”, or “not like this”.

3. Distributed oversight is the only scalable model

A central team cannot carry the entire burden. Microsoft’s model, a senior council supported by a network of responsible‑AI champions, is the only realistic way to scale oversight across a complex organisation. This mirrors how other disciplines have matured:  
- data protection officers and privacy champions  
- security teams supported by local security leads  
- governance functions with embedded practitioners  

The pattern is consistent: central clarity with distributed execution. If you want responsible AI to work, you need people embedded in delivery teams who understand the risks and know how to escalate them.

4. A unified workflow is the backbone of responsible AI operations

One of the most practical elements of Microsoft’s approach is its internal workflow tool. Every AI project is logged, assessed, and reviewed through a single structured process. This creates:  
- traceability  
- auditability  
- consistent risk categorisation  
- clear escalation routes  
- visibility across the portfolio  

Most organisations underestimate how much risk comes from fragmentation. If you don’t know what AI systems exist, you can’t govern them. A unified workflow is not optional. It is foundational.

5. Culture and process design matter more than tooling

The article makes a point that resonates strongly with anyone who has worked in governance: the tools support the work, but they do not define it. If you don’t have:  
- clear expectations  
- shared language  
- leadership commitment  
- a culture that values scrutiny  

no tool will save you. Responsible AI succeeds when the organisation behaves as if it matters — not when it installs a dashboard.

There are some actionable steps organisations can take to build their own responsible AI capability. These are the practical takeaways that any organisation can adopt immediately.

1. Start with a written standard
Define what “good” looks like. Set mandatory requirements. Clarify what triggers deeper review. This becomes your anchor.

2. Build a network of responsible AI practitioners. Identify people with the right instincts: governance‑minded, risk‑aware, delivery‑literate. Train them and empower them.

3. Design the assessment process before you build tooling. Clarify the workflow:  
- What must every project declare?  
- Who reviews what?  
- How are risks escalated?  

Only then should you build or buy tools.

4. Integrate responsible AI checkpoints into delivery. Move away from late‑stage reviews. Embed assessments into initiation, design, and release readiness.

5. Treat bias detection and data quality as non‑negotiable. Bias is rarely intentional; it is inherited. Build structured checks into your evaluation pipeline.

6. Assign responsibility for monitoring regulatory change. Someone needs to track global AI regulation and translate it into internal practice. This prevents compliance surprises.

7. Use the open resources already available
Microsoft’s Responsible AI Toolbox, Human‑AI Experience guidance, and impact‑assessment templates provide a strong foundation. Use them to accelerate maturity.

Responsible AI is not about slowing innovation. It is about enabling it safely, predictably, and sustainably.  The organisations that will thrive in the next decade are those that treat responsible AI as a discipline with structure, clarity, and accountability, rather than a slogan.

Read more here.

Thursday, 30 April 2026

Data Governance explained

I had a fun-packed day in Manchester a few weeks ago talking on my favourite topics: Data Governance, AI Governance and Microsoft Purview. Watch my recording here to help you get started.



Sunday, 26 April 2026

Inside Microsoft’s Responsible AI Framework: What Matters for Data Governance

Microsoft’s updated Responsible AI framework represents a significant evolution in how organisations are expected to approach AI oversight. While the principles themselves (fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability) are familiar, the operational expectations behind them have deepened. This isn’t a philosophical document; it’s a practical guide for embedding responsibility into the lifecycle of AI systems.

For data governance leaders, the most important shift is the emphasis on traceability. The framework makes it clear that organisations must be able to explain how data flows into models, how those models behave, and how decisions are made. This requires robust lineage, versioning, and monitoring. Without these, transparency becomes impossible.

Another critical element is human oversight. The framework reinforces that AI should augment, not replace, human judgement. This means governance must ensure that humans remain in the loop for high‑impact decisions, and that they have the context needed to interpret model outputs. Oversight is not a checkbox; it is a design requirement.

The framework also highlights the importance of data quality and representativeness. Poor data leads to poor models, and poor models lead to poor outcomes. Governance must ensure that training data is accurate, relevant, and free from harmful bias. This is where stewardship, classification, and quality controls become essential.

Finally, the framework calls for ongoing monitoring, not one‑time validation. Models evolve, data changes, and risks shift. Governance must be continuous, adaptive, and embedded into operational workflows.

Tracing my career journey through my blog

I was looking at my blog stats this morning and was really interested to see the geographical spread. I started writing my blog in 2011 and it has been read by 1.18m. I wanted to record all the technical tips I found and technology advancements which were useful to me and might be of use to help others. I started writing on SQL Server and the blog has migrated with me throughout my career through architecture, my PhD research and over the last few years I have been mostly writing on Data Governance, Microsoft Purview, AI Governance and Microsoft Fabric. 


I asked Copilot to share some interesting thoughts about my journey for my blog and here is what it thought.

Your blog reveals a journey defined by intellectual curiosity, data‑driven leadership, and a distinctive narrative voice. Here are five evidence‑based, genuinely interesting aspects of that journey.

1. You frame data leadership as a process of unlearning, not just learning

Your post “What data leaders must unlearn to lead in the Age of AI” argues that modern leadership requires shedding outdated assumptions, such as believing governance slows innovation or that documentation equals understanding. This reframing positions you as a thought leader challenging entrenched industry norms. 

2. You consistently connect research, industry practice, and philosophical curiosity

Your blog tagline “Chaos, complexity, curiosity and database systems” captures a rare blend: deep technical expertise paired with a reflective, almost philosophical lens on data systems. This fusion shapes your writing style and differentiates your professional voice. 

3. You document the shift from AI experimentation to AI industrialisation with governance at the centre

In your coverage of the Gartner Data & Analytics Summit, you highlight how governance has moved from a compliance checkbox to the engine of AI ROI. This shows your role as an interpreter of industry change, translating large‑scale trends into practical insights for practitioners. 

4. Your journey is grounded in both academic achievement and community leadership

Across external references, you are consistently described as a Microsoft Data Platform MVP, a PhD researcher recognised with the AOUG Will Swann Award, and a founder/organiser of Data Toboggan. This positions your blog as the narrative thread connecting your academic, professional, and community contributions. 

5. Your posts reveal a long‑standing commitment to making governance practical, accessible, and embedded

Whether discussing AI oversight, lineage, behavioural metadata, or Purview governance models, your writing emphasises practical implementation over theory. You repeatedly advocate for governance that is embedded, automated, and literacy‑driven, showing a consistent philosophy across years of posts. 





Wednesday, 22 April 2026

SQLBits 2026 Day 1

SQLBits in Wales is happening this week. We have held the conference at the ICC before, so it is all very familiar. The keynote was introduced by Simon Sabin before it moved into an in-depth session on the future of Microsoft One SQL, from on-premises to Azure and into Microsoft Fabric. It delved into the unified, AI-ready relational database that powers modernization and next-gen AI apps. SQL delivers consistency, performance, and some innovative features. The keynote speakers were Bob Ward, Anna Hoffman, Priya Sathy and Shiva Gurumurthy.

Data and AI are changing the world, and data is the fuel that powers AI. Microsoft SQL is one consistent SQL for the era of AI. It is enterprise ready and has evolved over the decades into an industry-leading, scalable, dynamic platform with high availability and best-in-class price performance.

The keynote delved into migrate and modernize, the need for cloud-native AI apps, and the need for unified data platforms.

Highlights of new features:

  • Azure SQL Server Managed Instance GA
  • SQL Server 2025 on Azure Virtual Machine GA
  • Azure Accelerate for Databases announced: aka.ms/modernizedatabases
  • Azure SQL Database Hyperscale GA: you only pay for cores and storage, with no license fee
  • Mirroring from Microsoft SQL to Fabric GA
  • SQL Database in Fabric GA
  • Database Hub in Fabric announced, with fleet management, observability and database agents

The depth and breadth of SQL Server has grown substantially over the years, and it supports many engine types for holistic use: graph, vector, columnar, document, spatial, key-value, hierarchical, in-memory and ledger.

Many sessions today delved into SQL migrations in various forms. There was a fun session talking about Databricks vs Fabric. There are many differences, and business needs and in-house technology stack skills often influence the choice of technology.

The Azure SQL Database Hyperscale session talked about Hyperscale, which is about the architecture design, not the engine. It is a truly distributed, cloud-native architecture with boundless storage that grows automatically and elastic compute at two speeds. It uses SQL Server as caches.

More sessions for day 2 tomorrow. 




Saturday, 11 April 2026

GRAICE Foundation Training Principles of Responsible AI Governance

I am pleased to share that I have completed the GRAICE Foundation Training, Principles of Responsible AI Governance, and am certified for foundational competency in GRAICE, Humanity's Operating System for AI.

GRAICE is a robust governance operating system geared toward instilling confidence and accountability in AI systems on a global scale. It has 6 foundational values, 7 operational pillars and a 3-tier assurance model.



Wednesday, 8 April 2026

Operationalising Responsible AI: What Microsoft Purview Actually Enables and How to Use It Well

The conversation around Responsible AI is accelerating, but many organisations still struggle with the same practical gap: How do we turn principles into operational behaviour inside real systems?  
Frameworks like GRAICE™ and Microsoft’s Responsible AI Standard set the expectations,  but they don’t tell you how to wire those expectations into your data estate.

This is where Microsoft Purview plays a meaningful, but often misunderstood, role. Purview is not an end‑to‑end Responsible AI lifecycle platform. It doesn’t manage model development, evaluation, or fairness testing. What it does provide is the governance and security foundation that ensures AI systems interact with enterprise data safely, consistently, and in line with organisational policy.

Below are three actionable ways organisations can use Purview to strengthen Responsible AI practice without overstating its scope.

1. Use Purview to establish data boundaries for AI systems
AI systems are only as responsible as the data they can see. Purview’s classification, sensitivity labels, and access policies give organisations the ability to:

- identify sensitive or regulated data  
- prevent AI systems (including Copilot and internal agents) from accessing inappropriate content  
- enforce information barriers and least‑privilege access  
- ensure data minimisation by design  

Why this matters:  
GRAICE™ and Microsoft’s RAI Standard both emphasise data minimisation, privacy, and controlled access. Purview doesn’t enforce RAI principles directly — but it does enforce the data boundaries those principles depend on.

Action:  
Map your AI use cases to Purview sensitivity labels and access policies. Treat this as a precondition for deploying any AI capability.

2. Use Purview’s lineage and scanning to understand AI‑related data risk
Purview lineage is often misunderstood as “AI lifecycle traceability”. It isn’t.  
But it is a powerful mechanism for:

- understanding where sensitive data originates  
- seeing how data flows across systems AI may interact with  
- identifying shadow data sources that could introduce risk  
- supporting DSPM (Data Security Posture Management) for AI workloads  

Why this matters:  
Responsible AI requires organisations to understand the provenance, quality, and risk profile of the data AI systems rely on. Purview provides visibility into the data estate, not the model estate — and that visibility is essential for any RAI programme.

Action:  
Enable automated scanning and lineage for all data sources used by AI applications. Use lineage to identify high‑risk flows before enabling AI access.

3. Use Purview’s AI usage governance to monitor and control how AI behaves with your data
The newest Purview capabilities focus on AI usage governance — including Copilot and internal AI agents. This includes:

- monitoring AI interactions with sensitive data  
- detecting risky prompts or behaviours  
- applying data‑loss prevention controls to AI usage  
- generating audit trails for compliance and oversight  

Why this matters:  
Responsible AI is not just about how models are built — it’s about how they are used. Purview provides the observability and guardrails needed to ensure AI systems behave safely in production.

Action:  
Enable Purview’s AI usage governance features for all enterprise AI tools. Treat AI usage logs as part of your RAI assurance evidence.

In summary Purview does not operationalise Responsible AI on its own — and it shouldn’t be positioned as a lifecycle governance platform.  
What it does provide is the data governance, security, and AI‑usage oversight that Responsible AI frameworks rely on.

If you use Purview to:

1. Set data boundaries for AI  
2. Understand data risk and provenance  
3. Monitor and govern AI usage  

you create the conditions in which Responsible AI can actually function.


Why Data Catalogues Fail (And How Purview Is Quietly Fixing the Industry’s Blind Spots)

Most data catalogues fail for a simple reason: they assume that documentation alone creates understanding. It doesn’t. A catalogue full of stale metadata, incomplete lineage, and inconsistent tagging is worse than useless; it creates a false sense of confidence. Many organisations have learned this the hard way, investing heavily in catalogues that quickly became digital graveyards.

Purview succeeds where others fail because it treats the catalogue as part of a governance ecosystem, not a standalone tool. Lineage, classification, access policies, and data maps are not optional extras. They are the core of the experience. This integrated approach ensures that metadata is accurate, automated, and actionable.

Another blind spot Purview addresses is operational relevance. Traditional catalogues focus on documentation, whereas Purview focuses on control. It doesn’t just describe data; it also governs it. This shift from passive to active metadata is what makes Purview viable at enterprise scale.

Purview also excels in hybrid and multi‑cloud environments, where many catalogues struggle. Its connectors, scanning capabilities, and policy enforcement mechanisms are designed for real‑world estates, not idealised architectures.

Purview is integrated with Fabric, which positions it as the governance backbone of the Microsoft ecosystem. As organisations consolidate their data platforms, Purview becomes the source of truth that ties everything together.



Saturday, 4 April 2026

GCRAI and the Rise of GRAICE™: A New Global Framework for Responsible AI Governance

The global conversation around responsible AI has been dominated for years by national strategies, corporate principles, and academic frameworks. But the launch of the Global Council for Responsible AI (GCRAI) and its GRAICE™ framework marks a shift toward something far more ambitious: a unified, cross‑sector, cross‑industry operating system for AI governance. Unlike many initiatives that focus on high‑level ethics, GCRAI positions itself as a mechanism for operationalising responsibility at scale. It’s an attempt to move responsible AI from aspiration to enforceable practice.

What makes GCRAI notable is its global footprint. With representation across dozens of countries and a network of ambassadors, it aims to create a governance ecosystem that transcends borders and industries. This matters because AI risk is not localised. Models trained in one region influence decisions in another. Data flows across jurisdictions. And the consequences of AI misuse rarely stay within organisational boundaries. A global framework is not just desirable, it is necessary.

The GRAICE™ framework, unveiled at Davos, is positioned as “humanity’s operating system for AI.” While the branding is bold, the intent is clear: create a standard that is actionable, measurable, and adaptable. GRAICE™ focuses on transparency, security, accountability, and human‑centric design. But what sets it apart is its emphasis on measurable compliance. Many frameworks articulate principles; GRAICE™ attempts to define behaviours. It seeks to bridge the gap between what organisations say about AI and what they actually do.

Running alongside GCRAI is the G.R.A.C.E. Global Council for AI, which articulates a complementary set of principles centred on human‑centred AI. Their pillars emphasise mission, vision, and the balance between technology, ethics, and humanity. While still evolving, the G.R.A.C.E. principles reinforce the idea that responsible AI is not just a technical discipline but a societal one. They highlight the need for AI systems that enhance human capability rather than diminish it, and for governance that protects people as much as it protects organisations.

Together, GCRAI and G.R.A.C.E. represent a growing recognition that responsible AI cannot be solved by isolated efforts. Organisations need frameworks that are interoperable, globally recognised, and grounded in real‑world practice. They need standards that can be implemented, audited, and adapted as technology evolves. And they need governance models that reflect the complexity of modern AI systems: systems that learn continuously, behave unpredictably, and operate across boundaries.

For data and AI leaders, the emergence of GRAICE™ is a signal. The era of voluntary, principle‑only responsible AI is ending. The next phase is about operationalisation, measurement, and accountability. Whether organisations adopt GRAICE™ directly or use it as a benchmark, its influence will shape how responsible AI is defined, governed, and enforced in the years ahead. This is not just another framework but part of a global shift toward responsible AI as a shared, enforceable standard.

G.R.A.C.E. is
GROUNDED
RESPONSIBLE
AUTHENTIC
COMPASSION
ETHICAL

Every decision involving AI should align with moral truth, respect for life, and integrity of purpose through moral alignment.




https://www.graceglobalcouncil.com/
https://gcrai.ai/

Wednesday, 1 April 2026

How Responsible AI frameworks shape the future of AI Governance

The responsible AI landscape is shifting fast. Organisations are no longer looking for a single framework to rule them all; they’re looking for interoperability, clarity, and practical pathways to operational maturity. Two frameworks are increasingly shaping that conversation: GRAICE™, the new global framework from the Global Council for Responsible AI (GCRAI), and Microsoft’s Responsible AI Standard, one of the most established engineering‑level governance standards in the industry.

These frameworks are often discussed in the same breath, but they operate at different layers of the governance stack. Understanding that distinction is essential — because it’s precisely what makes them complementary rather than competitive.

GRAICE™: A Global Meta‑Framework for Cross‑Sector Alignment

GRAICE™ is designed as a global, cross‑sector framework. Its purpose is not to replace organisational or vendor standards, but to provide:

- a shared global vocabulary for responsible AI  
- a principles‑level structure that governments, industry, academia, and civil society can align to  
- a meta‑framework that organisations can map their internal standards against  
- a societal‑level lens that sits above implementation detail  

GRAICE™ is intentionally broad. It sets direction, coherence, and expectations at a global level — the “north star” rather than the engineering manual.

Microsoft’s Responsible AI Standard: Operational Discipline for Real Systems

Microsoft’s Responsible AI Standard sits at a different layer: the practical, engineering‑focused layer where teams build, evaluate, deploy, and monitor AI systems.

It provides:

- detailed lifecycle requirements  
- controls for data, evaluation, transparency, and oversight  
- guidance for product teams and engineering functions  
- mechanisms for translating principles into day‑to‑day practice  

Where GRAICE™ is global and principle‑driven, Microsoft’s standard is specific, actionable, and operational.

Complementary by Design

This is the critical point:  
GRAICE™ does not replace Microsoft’s Responsible AI Standard — or any other organisational framework.

Instead, the two frameworks operate in a layered model:

- GRAICE™ → global alignment, societal expectations, cross‑sector coherence  
- Microsoft RAI Standard → engineering discipline, implementation controls, operational maturity  

Together, they create a governance ecosystem that is:

- globally relevant  
- locally actionable  
- technically grounded  
- aligned with societal expectations  

This layered approach reflects where responsible AI is heading: ecosystems of interoperable frameworks, not a single universal standard.

Where They Converge

Despite their different scopes, both frameworks reinforce core responsible AI expectations:

- transparency as a foundation for trust  
- accountability and human oversight  
- continuous monitoring of evolving systems  
- responsible AI as an ongoing operational commitment  

These shared foundations show a field moving toward coherence, even when frameworks serve different purposes.

The Real Opportunity: Use Them Together

For organisations, the value lies in the combination:

- GRAICE™ provides the global direction and cross‑sector alignment.  
- Microsoft’s Responsible AI Standard provides the operational machinery to implement responsible AI in real systems.  

Using both gives organisations a governance model that is both strategically aligned and practically executable — exactly what mature AI governance requires.


Saturday, 28 March 2026

Series Index Summary: Data Governance, Purview, and Responsible AI

This four‑month series explores the shifting landscape of data governance, Microsoft Purview, and Responsible AI at a moment when organizations are being forced to rethink how they manage, understand, and trust their data. Across the posts, the series traces a clear arc: from the maturing of governance in 2025, through the practical realities of Purview adoption, to the cultural and architectural shifts required to lead in the age of AI.

The December posts set the stage by examining why governance finally became a strategic priority, how Purview’s quieter updates are reshaping the platform, and why AI risks making organizations intellectually complacent without strong data foundations. These pieces frame governance not as bureaucracy, but as the mechanism that makes innovation safe.

The January posts move deeper into strategy and Responsible AI. They explore the predictions shaping 2026, the operational implications of Microsoft’s updated Responsible AI framework, and the evolution of Purview’s classification engine. The AI Is Making Us Dumber series continues here, highlighting the risks of over‑automation and the importance of maintaining human understanding.

The February posts shift into technical depth and organizational reality. They cover SQL Server’s new direction, the strategic value of metadata, and a detailed breakdown of Purview’s February feature updates. The month closes with reflections on why organizations struggle to operationalize policy and how governance must adapt to keep pace with rapidly learning AI systems.

March brings the series to a forward‑looking conclusion. It introduces the concept of contextual governance, examines the architectural convergence of Fabric and Purview, and challenges data leaders to unlearn outdated assumptions. These posts emphasize that leadership in the AI era requires adaptability, transparency, and a willingness to rethink long‑held beliefs.

Together, these posts form a cohesive narrative about where data governance is heading, what Purview is becoming, and how organizations can navigate the accelerating complexity of AI‑driven data estates. I wanted to add clarity in a landscape full of noise and understand that governance is no longer optional, but foundational.

Wednesday, 25 March 2026

Unifying the Data Estate for the next AI Frontier: FabCon Keynote

The Atlanta FabCon keynote was delivered last Wednesday by Amir Netz (CTO and Technical Fellow), Arun Ulag (President, Azure Data) and Shireesh Thota (Corporate Vice President, Azure Databases). It was recorded. You can watch it here.

Session Abstract

As organizations race to deploy generative and agentic AI, the biggest challenge they face is not models, it’s their data estate. Join Microsoft engineering leadership to learn how Microsoft’s databases can be unified through Microsoft Fabric and OneLake, creating a single, governed foundation for analytics, AI, and intelligent agents. Discover why this shift represents a fundamental change in how modern data platforms are built, managed, and scaled for the next AI frontier.

A summary of the announcements.





Sunday, 22 March 2026

What Data Leaders Must Unlearn to Lead in the Age of AI

The hardest part of leading in the AI era isn’t learning new skills, it is unlearning old assumptions. Many of the beliefs that shaped data leadership over the past decade no longer apply. The pace of change, the complexity of modern estates, and the unpredictability of AI systems demand a different mindset. Leaders must be willing to let go of outdated models of control, certainty, and hierarchy.

One of the first assumptions to unlearn is that governance slows innovation. In reality, governance accelerates innovation by reducing risk, increasing clarity, and enabling responsible experimentation. When governance is embedded rather than imposed, it becomes a catalyst rather than a constraint. Leaders who cling to the old narrative will find themselves outpaced by those who embrace governance as a strategic enabler.

Another assumption to unlearn is that documentation equals understanding. In the AI era, understanding comes from lineage, monitoring, and behavioural metadata, not static documents. Leaders must shift from documenting after the fact to embedding governance into the system itself. This requires investment in tooling, automation, and literacy.

Leaders must also unlearn the idea that AI systems can be trusted without oversight. AI is probabilistic, not deterministic. It requires continuous monitoring, not one‑time validation. The organisations that thrive will be those that treat AI as a dynamic system requiring ongoing governance, not a product that can be finished.

Finally, leaders must unlearn the belief that expertise is static. In the AI era, expertise evolves. The best leaders will be those who remain curious, adaptable, and willing to challenge their own assumptions. Unlearning is not a weakness but a leadership skill.



Friday, 20 March 2026

Navigate AI on Your Data & Analytics Journey to Value - Gartner 2026

The Gartner Data & Analytics Summit (March 9–11, 2026, in Orlando) marked a significant shift from AI experimentation to AI industrialization. My post focuses on how governance is no longer a check-the-box activity but the literal engine for AI ROI.

​Here are some collated highlights that interested me.

1. The Core Keynote: Beyond the Hype to ROI

Analysts Adam Ronthal and Georgia O’Callaghan opened the summit by challenging the "move fast and break things" mentality. They argued that while AI is accelerating, success belongs to those who find a thoughtful approach to speed and direction.

Gartner emphasized that AI adoption follows an S-curve: a slow start, rapid acceleration, then stabilization. We are currently on the steep upward slope. Organizations that don't integrate governance now will face expensive catch-up efforts that turn AI from an asset into a liability.

Gartner categorized firms into three types: AI-First (aggressive), AI-Opportunistic (fast followers), and AI-Cautious (waiting for stability). They noted that regardless of the path, doing nothing is no longer an option.

2. Data Governance: The Move to Adaptive & Autonomous

A major takeaway was that traditional, manual data governance is dead. It cannot keep up with the volume and velocity of AI-driven data.

Gartner introduced the concept of Outcome-Based Governance. Instead of governing all data equally, teams should focus on high-value data products that directly impact AI outcomes.

A new AI-Ready Data Framework focuses on three pillars:

  • Alignment: Ensuring data semantics and lineage are clear.
  • Qualification: Continuous data quality validation for model training.
  • Governance: Enforcing policies during the AI lifecycle.

The Rise of Governance Agents: A top 2026 prediction is that D&A leaders will begin using Data Governance Agents to automate the negotiation and orchestration of data pipelines.

3. AI Governance: Bridging the Trust Gap

The summit highlighted a looming crisis where 60% of organizations are predicted to fail at realizing AI value due to poor integration between data and AI governance.

Gartner warned against Registry-First Governance. Simply listing your AI models in a spreadsheet isn't enough. They called for Continuous Code-to-Cloud Visibility, where governance monitors data as it flows through APIs and AI agents in real-time.

A buzzword at the conference was the Unified Context Layer. To govern AI effectively, you need a layer that connects business meaning to raw data. This allows AI agents to act reliably because they understand the why and how, not just the what.

Gartner predicts spending on AI governance platforms will reach $492M in 2026, doubling to $1B by 2030, as companies realize that compliance is a trust dividend rather than a tax.

4. Responsible AI: Ethics as an Operational Metric

Responsible AI (RAI) moved from a philosophical discussion to a technical requirement.

Gartner warned that critical failures in managing synthetic data (used to train models when real data is scarce) are a major risk to AI governance. Without metadata tracking the lineage of synthetic data, models risk hallucination loops.

The keynote suggested that data organizations are being reshaped into fusion teams where humans and AI agents work together. Responsible AI here means defining clear boundaries of AI involvement in decision-making.

As we move toward Agentic AI (autonomous agents that can take actions), Gartner highlighted the need for explicit transparency capabilities with the ability to audit why an agent made a specific decision in real-time.

In summary, by 2027 organizations that emphasize AI literacy for executives will achieve 20% higher financial performance than those that do not (Gartner, March 2026). In 2026, AI strategy and data strategy have become inseparable, and you cannot scale the former without governing the latter.

Safeguarding the AI Frontier with Microsoft Purview & Fabric Innovations

The speed of AI transformation is accelerating. However, for many organizations, that speed is throttled by a critical concern: Data Governance. At the Microsoft Fabric Community Conference this week, Microsoft unveiled a suite of innovations designed to bridge the gap between rapid AI adoption and robust data security. By deepening the integration between Microsoft Purview and Microsoft Fabric, they are providing a secure-by-design foundation for the AI era.
Here is a breakdown of the major announcements.

Data Security: From Protection to Prevention

In an AI-driven world, data oversharing is a primary risk. Microsoft is addressing this by extending Purview’s sophisticated security controls directly into the Fabric ecosystem.

Information Protection Policies: Security admins can now define policies in Purview that automatically enforce access permissions based on sensitivity labels. If a file is labeled Highly Confidential, Fabric respects those boundaries automatically.
 
Data Loss Prevention (DLP) for Fabric is now in preview: DLP policies can identify sensitive information (like SSNs or credit card numbers) as it is uploaded to Fabric. This allows for automatic risk remediation, preventing data leaks before they happen.
 
Trusted Workspace Access allows for secure connections to data sources behind firewalls, ensuring that OneLake remains a secure environment even when pulling from complex network topologies.

Risk Management: Visibility into the Human Element

Governance isn't just about locking down files; it is about understanding user behavior.
 
Purview Insider Risk Management is one of the most significant announcements integrating with Fabric. This allows organizations to detect, investigate, and act on potentially malicious or inadvertent activities, such as mass data downloads or unauthorized sharing directly within the Fabric environment.

Data Discovery & Curation: The Reimagined Governance Experience

Microsoft is moving away from the policing model of governance toward an enabling model. They’ve introduced a reimagined governance experience that is business-friendly and federated.

Unified Catalog & Metadata: Fabric’s built-in metadata and lineage are now seamlessly reflected in Purview. This gives a single pane of glass view across your entire multi-cloud estate, making it easier for users to find the data they need while ensuring it meets quality standards.

Item Tagging is a new feature allowing users to add tags to Fabric items. This significantly enhances discoverability and encourages the reuse of high-quality data assets across the organization.

AI Readiness: Building on a Trusted Foundation

The ultimate goal of these updates is AI Transformation. You cannot have a reliable Copilot if it is grounded in unverified or ungoverned data. By automating discovery, classification, and protection, Microsoft Purview ensures that the data fueling your AI models is:
  • Accurate: Through better curation and quality checks.
  • Compliant: Adhering to regional and industry regulations.
  • Secure: Only accessible to the right people (and the right AI agents).

Final Thoughts

The announcements from FabCon 2026 signal a shift. Data governance is no longer a hurdle to be cleared. It is the engine that allows AI to run safely. For those of us managing data estates, the tighter synergy between Purview and Fabric offers a clear roadmap to innovate with confidence.


Wednesday, 18 March 2026

FabCon and SQLCon 2026

It is the third FabCon event and the first ever SQLCon in Atlanta this week. Microsoft didn't just make a small change; they made a larger shift. The convergence of where data lives and what data does has been the holy grail of database management, and the biggest hurdle to AI isn't the model itself; it is the data ingest quality and the fragmentation of the estate. The announcements from the conference help address the current chaos and complexity that exists. This is my take on the key shifts that will matter most when navigating the database landscape.

The Single Pane of Glass Arrives in the Database Hub
For years, we have managed our estates in silos: Azure SQL over here, Cosmos DB over there, and SQL Server on-premises (hopefully via Azure Arc) somewhere else. The new Database Hub in Microsoft Fabric (now in early access) is a game-changer for governance. It provides a unified view to explore and optimize the entire estate. But the real interest is the agent-assisted management: using intelligent agents to reason over signals and explain why something changed. It keeps the human in the loop but removes the manual drudgery.















Microsoft IQ: The Semantic Layer for AI

One of the biggest announcements is how Fabric is becoming the intelligence layer for the enterprise.
  •  Fabric IQ brings together live business data.
  •  Work IQ pulls in productivity signals.
  •  Foundry IQ captures institutional knowledge.
This is critical because AI agents are only as good as the context they have. By creating a unified semantic meaning, we are finally moving away from hunting for data and toward activating data.

OneLake is Closing the Gap on Silos
The OneLake vision continues to expand with more native mirroring capabilities (SharePoint lists and Dremio are now in preview; Oracle and SAP Datasphere are GA).
The standout for me, however, is Shortcut transformations. The ability to shape data, like converting Excel to Delta tables, automatically as it connects to OneLake is a massive win for data quality. We know that without good quality data at the start, the AI journey hits a wall. These automated gatekeepers help ensure the lake doesn't become a swamp.

Mission Critical Apps with connected SQL and Fabric
With SQL Server 2025 growing faster than any previous version, the integration with Fabric is no longer a maybe. The announcements focused heavily on a converged platform that unifies transactional and analytical data.
For developers, the new Migration Assistant for SQL databases (using AI to resolve compatibility issues via DACPACs) is a pragmatic approach to modernization.

Beyond the hype, it is easy to get lost in the agentic AI buzzwords. But looking at the technical roadmap from FabCon/SQLCon, the focus is clearly on Usability, Empowerment, and Security.

We are moving toward a world where the database takes a more active place in the business. Whether we are managing a legacy SQL estate or building a greenfield Fabric environment, the wall between our operational databases and analytics is coming down.

 You can read more about the announcements: 


Sunday, 15 March 2026

Metadata Is Not Optional: The Strategic Value Organisations Still Undervalue

Metadata has always been the unglamorous backbone of data governance, but in 2026 it becomes a strategic asset. AI systems depend on it, automation relies on it, and governance collapses without it. Yet many organisations still treat metadata as an afterthought, something to be documented later, if at all. This mindset is becoming increasingly untenable.

Metadata based on DAMA-DMBOK principles is 'data about data', meaning the descriptive information that defines, structures, and gives context to data so it can be understood, managed, and used effectively.

The rise of AI has exposed the consequences of weak metadata practices. When organisations cannot explain where data came from, how it has changed, or who has access to it, they cannot trust the outputs of their models. Metadata is the connective tissue that links data to meaning, context, and accountability. Without it, even the most sophisticated AI systems become brittle.

Metadata also plays a critical role in operational efficiency. Automated classification, lineage, and policy enforcement all depend on accurate metadata. When metadata is missing or inconsistent, governance becomes manual, slow, and error‑prone. When metadata is rich and reliable, governance becomes scalable.

The organisations that succeed in 2026 will be those that treat metadata as a first‑class citizen. This means investing in automation, stewardship, and tooling that captures metadata at the point of creation and not months later. Metadata is not optional. It is the foundation of trustworthy data and Responsible AI.