
October 14, 2025

The AI Safety Crisis: Why Your GDPR and Cybersecurity Defences Won’t Stop the Next Generation of AI

GAEA Talks Podcast Summary: Dr. Paul Dongha, Head of Responsible AI & AI Strategy at NatWest Banking Group

In the latest episode of GAEA Talks, we dived into the most urgent issue facing technology today: AI safety and ethics. Guest Dr. Paul Dongha, Head of Responsible AI & AI Strategy for NatWest Banking Group and a pioneering technologist since the 80s, delivered a sobering wake-up call, asserting that the current pace of AI development has created a crisis that our traditional safety nets are simply not built to handle.

“We’ve Got a Big Problem”: Why AI is Different

Dr. Dongha’s return to AI, after a 20-year career hiatus in the City, was prompted by the explosive growth of AI everywhere. He began exploring AI ethics and quickly realised the immense scope of the challenge.

His core message is one of urgency: AI safety and ethical AI are “super important,” yet many organisations don’t truly appreciate the risks involved.

The Flaws in Current Corporate Defences

The systems we currently rely on to protect against technological harm are proving insufficient against the new wave of AI, particularly the systems that have emerged post-ChatGPT.

  • Cybersecurity and Privacy: Our existing cybersecurity defences and privacy safeguards, such as GDPR, were not built for this new type of technology and are not containing it.
  • Ethical Risks: The assumption that ethical risks were contained with older systems no longer holds true in the AI era.

The Two Biggest Threats: Automation Bias and Power Centralisation

Dr. Dongha highlighted two major concerns that pose threats to both organisations and “wider citizenship”:

  1. Loss of Critical Thinking (Automation Bias): As AI systems become more complex and autonomous, they lead to a dangerous loss of critical thinking in humans – a phenomenon also known as automation bias. This over-reliance can compromise human judgment.
  2. Power Centralisation: Dr. Dongha fears that the immense power and deep pockets of some of the biggest tech companies will allow them to harness AI for their own gain, rather than for the benefit of society.

The host further drove this point home by referencing the devastating Post Office Horizon IT scandal, noting that even today, the legal system struggles to recognise that an IT system can be wrong – a problem amplified by the speed and scale of modern AI.

The Path to Responsible AI

For Dr. Dongha, the only way forward is to dedicate time and resources to working with companies to ensure that when they build and deploy AI systems, they do so safely. This must involve actively mitigating risks and paying dedicated attention to ethical issues like unfairness and bias.

He concludes that because there will be no end to people building AI systems, the focus must shift to building robust, ethical, and safe frameworks that address the nuance of how AI is constructed and deployed.

Essential Reading for the AI Era

Dr. Dongha co-authored the essential new book, Governing the Machine: How to navigate the risks of AI and unlock its true potential, which provides practical guidance for organisations navigating this complex landscape.

The book is available for pre-order now and launches on October 23rd in hardback, audio, and e-book formats. Don’t wait for the next crisis – start building your Responsible AI strategy today.

Order from your preferred book seller.

Image: Governing the Machine book cover

Listen to the full episode of GAEA Talks with Dr. Paul Dongha for a comprehensive look at the AI Safety Crisis.