Friday, 30 January 2026

Data Toboggan Winter Edition 2026

It is that time of year again: Data Toboggan is running another 12-hour conference, with 3 tracks and speakers from around the world. There are some amazing sessions to learn from. The conference is free to attend, as usual.

I am giving a lightning talk in The Chalet on a topical subject: Data Literacy: The Human Advantage in an AI World.

AI is accelerating decision‑making across organisations, but it’s also accelerating how quickly mistakes can scale. This session explores how data literacy keeps humans in the loop, prevents over‑reliance on AI, and strengthens judgment, context, and critical thinking. Attendees will see real examples of AI hallucinations, learn how provenance and triangulation protect against bad outputs, and understand why cognitive skills weaken when tasks are automated. They will leave with a practical checklist for questioning AI outputs, a clear view of the risks of low data literacy, and a framework for building teams that use AI responsibly, confidently, and intelligently.



We have our usual Piste Maps with the agenda.






Wednesday, 28 January 2026

World Economic Forum 2026 in Davos Global Council for Responsible AI

At the 56th World Economic Forum 2026 in Davos, held 19–23 January 2026, the Global Council for Responsible AI officially unveiled GRAICE™ (Global Responsible AI Compliance & Ethics). It is designed as humanity’s operating system for AI. Introduced to global leaders and policymakers, GRAICE moves Responsible AI from principle to practice, integrating ethics, governance, compliance, and human-centric design into a unified, scalable framework.

The framework is an integrated system that is simple and repeatable, rather than a collection of policies.

  • Foundational values establish non-negotiable ethical and human-centred boundaries
  • Seven pillars translate values into operational requirements
  • Assurance tiers verify that requirements are met with evidence
  • Governance structures assign accountability and decision authority

The six foundational values are:

  • Human dignity and autonomy
  • Accountability and governance 
  • Fairness and justice
  • Transparency and explainability
  • Reliability and security
  • Inclusivity and social benefits

And the seven pillars for responsible AI define what responsible AI must achieve in practice:

  • Ethical leadership
  • Purpose-driven innovation
  • Human-centric use
  • Responsible implementation
  • AI literacy and workforce readiness
  • Data governance and integrity



Thursday, 22 January 2026

AI Is Making Us Dumber, Part II: When Automation Replaces Understanding

The second wave of AI dependency is more subtle than the first. It is not about hallucinations or bias; it is about the erosion of organisational understanding. As AI tools become more capable, teams increasingly rely on them to summarise, interpret, and decide. Over time, this creates a dangerous dynamic: people stop interrogating the underlying data and start accepting outputs at face value.

This shift is particularly risky in environments where data quality is inconsistent or poorly governed. When teams don’t understand the lineage, context, or limitations of the data feeding their models, they lose the ability to challenge results. AI becomes a black box, and decisions become detached from reality.

Governance is the antidote. By enforcing lineage, quality checks, and human‑in‑the‑loop review, organisations ensure that automation enhances rather than replaces understanding. Governance creates the conditions for informed oversight, not blind trust.

The goal is not to reduce AI usage; it is to elevate human capability alongside it. AI should accelerate insight, not diminish expertise. When governance is strong, AI becomes a partner. When governance is weak, AI becomes a crutch.



Monday, 5 January 2026

The Governance Reset: Five Data Strategy Predictions for 2026

Every January brings a wave of predictions, but 2026 feels different. The pace of change in data and AI has outstripped the pace of organisational adaptation, and leaders are beginning to recognise that their existing strategies are no longer fit for purpose. The old model of annual planning cycles, static governance frameworks, and siloed ownership simply cannot keep up with the velocity of modern data estates. This year will force a reset.

Continuous Governance
The first major shift will be toward continuous governance. Organisations can no longer rely on periodic reviews or manual controls. Governance must operate at the speed of data creation, not the speed of committee meetings. Automated lineage, dynamic classification, and policy‑driven access will become baseline expectations rather than advanced capabilities.
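To make the idea of automated lineage concrete, here is a minimal sketch of capturing lineage as data moves, using a simple in-memory graph. The names (`record_step`, `upstream_of`, and the dataset names) are illustrative assumptions, not any particular catalogue tool's API.

```python
from collections import defaultdict

# Each dataset maps to the set of datasets it was derived from.
lineage = defaultdict(set)

def record_step(inputs: list[str], output: str) -> None:
    """Record that `output` was derived from `inputs`."""
    lineage[output].update(inputs)

def upstream_of(dataset: str) -> set[str]:
    """Walk the graph to find every transitive upstream dependency."""
    seen = set()
    stack = [dataset]
    while stack:
        for parent in lineage[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Illustrative pipeline steps:
record_step(["raw_orders"], "clean_orders")
record_step(["clean_orders", "customers"], "sales_report")
print(upstream_of("sales_report"))  # {'raw_orders', 'clean_orders', 'customers'}
```

The point is that lineage is recorded as a side effect of each pipeline step, at the speed of data creation, rather than documented after the fact in a committee review.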

Clarity in Data Management
Second, we’ll see a rise in data contracts as a mechanism for aligning producers and consumers. Contracts bring clarity to ownership, quality expectations, and change management. They also reduce friction between teams by making responsibilities explicit. This is governance embedded into delivery, not bolted on afterward.

AI‑driven Metadata Enrichment
Third, AI‑driven metadata enrichment will become essential. Manual documentation has never scaled, and 2026 will be the year organisations finally stop pretending it can. Automated tagging, relationship inference, and behavioural metadata will fill the gaps humans never had time to address.

Cross‑functional Stewardship will Mature
Fourth, cross‑functional stewardship will mature. Governance will no longer sit with a single team; it will be distributed across product, engineering, analytics, and compliance. This shift will require cultural change, but it’s the only sustainable model.

Embrace Adaptive Policies
Finally, organisations will embrace adaptive policies and rules that adjust based on context, sensitivity, and risk. Static rules cannot govern dynamic estates. Adaptive governance will become the new normal.
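An adaptive policy can be sketched as a decision function that takes context into account rather than returning a fixed allow/deny. The sensitivity levels, risk thresholds, and function name below are illustrative assumptions, not a reference implementation.

```python
def access_decision(sensitivity: str, risk_score: float) -> str:
    """Decide access based on data sensitivity and request risk.

    sensitivity: one of "public", "internal", "confidential"
    risk_score: contextual risk of this request, 0.0 (low) to 1.0 (high)
    """
    levels = {"public": 0, "internal": 1, "confidential": 2}
    level = levels[sensitivity]
    if level == 0:
        return "allow"                                   # public data: always open
    if level == 1:
        return "allow" if risk_score < 0.7 else "review"  # internal: escalate on risk
    return "allow" if risk_score < 0.3 else "deny"        # confidential: strict

print(access_decision("internal", 0.2))       # allow
print(access_decision("confidential", 0.8))   # deny
```

A static rule would give the same answer regardless of context; here the same dataset can be open to a low-risk request and closed to a high-risk one, which is the essence of governing a dynamic estate.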