1 August 2025

UK Cabinet Office Publishes New AI Guidance And Toolkits To Support Safe And Effective AI Adoption

Gowling WLG


The UK Cabinet Office has released two major resources to support organisations in the responsible deployment of generative AI (GenAI) tools:

  • The people factor: a human-centred approach to scaling AI tools - a guide to driving adoption and embedding GenAI tools in everyday work; and
  • Mitigating hidden AI risks - a toolkit for identifying and managing the less visible behavioural and organisational risks that arise from how people use AI tools.

These publications draw on the UK Government's experience developing and scaling its in-house GenAI tool, Assist, across more than 200 public sector organisations. Together, they offer a practical, evidence-based framework for organisations seeking to adopt AI responsibly and maximise its benefits while managing behavioural, organisational, and ethical risks.

In this insight, we highlight how these frameworks can help organisations manage the everyday challenges of using generative AI tools, from building confidence and trust among teams to spotting risks early and ensuring AI tools are used appropriately. We also look at how we support clients in applying this guidance through straightforward, practical steps.

Why this matters and how we can help

As organisations increasingly adopt generative AI tools, the focus has often been on technical safeguards and headline risks. However, the Cabinet Office's new toolkit highlights a critical blind spot: the behavioural and organisational risks that emerge not from malicious intent, but from everyday use by well-meaning professionals. These "hidden" risks range from overreliance on AI outputs to erosion of human oversight, which can quietly undermine productivity, trust, and fairness if left unaddressed.

This matters because AI implementation is not just a technical challenge - it's a human one. As the new guide makes clear, successful AI adoption (as with any digital transformation or change programme) depends on cultural readiness, inclusive design, thoughtful risk management and continuous support. Without these, even the most advanced tools risk going unused, being misused, or causing harm. We highly recommend reviewing these toolkits and considering how their frameworks can be adapted to your organisation's AI strategy.

As legal advisors with a deep understanding of both AI governance and the commercial realities of AI development, we help clients navigate this complexity by translating the Cabinet Office's frameworks into actionable governance strategies. We advise on how to embed safeguards into AI deployment, assess organisational readiness, and ensure compliance with ethical and legal standards. Whether you're implementing AI tools, reviewing internal policies, or managing risk across teams, we can support you in applying these toolkits to build a responsible and resilient AI strategy.

The people factor: a human-centred approach to scaling AI tools

This guide introduces the Adopt-Sustain-Optimise (ASO) framework, designed to help organisations embed AI tools into daily workflows and foster safe, high-quality use. It emphasises that AI adoption is not just a technical challenge: it requires cultural, behavioural, and organisational change.

Key components of the ASO framework include:

  • Adopt: Drive initial uptake through targeted communications, onboarding, and leadership engagement.
  • Sustain: Build habits and embed AI into routine tasks through user support, feedback loops, and training.
  • Optimise: Ensure safe and effective use by identifying risks, tailoring training, and measuring impact.

The guide also highlights the importance of leadership in modelling responsible AI use and fostering trust, and provides practical tools such as user journey maps, adoption metrics, and evaluation strategies.

Notably, the Cabinet Office reports a 70% adoption rate of Assist across government, with a 180% increase in training completion and a 23% improvement in user confidence. It attributes these results to the application of the ASO framework.

The mitigating hidden AI risks toolkit

This toolkit focuses on identifying and managing the less visible but potentially high-impact risks that arise from how people interact with AI tools, referred to as "hidden risks". While much attention has been given to headline-grabbing AI risks, such as deepfakes or algorithmic bias, the toolkit focuses on subtler, systemic risks that often go unnoticed until they escalate. These include issues like overreliance on AI outputs, task-tool mismatches, reduced job satisfaction, and erosion of human oversight or ethical standards.

The framework identifies six categories of 'hidden' risks:

  1. Quality assurance: Risks from unverified or poor-quality AI outputs.
  2. Task-tool mismatch: Using AI tools for unsuitable tasks.
  3. Perceptions, emotions and signalling: How AI adoption affects morale, trust, and organisational culture.
  4. Workflow and organisational challenges: Barriers to adoption, skill gaps, and unintended workload increases.
  5. Ethics: Risks of reinforcing bias or undermining legal and ethical standards.
  6. Human connection and technological overreliance: Loss of human interaction and critical skills.

The toolkit provides practical guidance, including prompt questions, mitigation strategies, and a step-by-step framework for surfacing and managing risks. It is designed for teams implementing AI tools, particularly those with direct user interaction, and is applicable across both public and private sectors.

Organisations are encouraged to adopt a proactive, human-centred approach to AI governance, anticipating risks before they materialise and embedding safeguards throughout the AI lifecycle.

Read the original article on GowlingWLG.com

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
