5 August 2025

America's AI Action Plan Emphasizes Governance And Risk Management To Promote The Secure And Safe Adoption Of AI Tools

Goodwin Procter LLP


On July 23, 2025, the Trump Administration released its AI Action Plan ("the Plan"), a long-anticipated roadmap for the federal government's approach to AI governance that presents a number of implications for businesses globally. While Goodwin has covered the Plan and its three pillars in depth here, this Alert focuses on how the Plan aims to promote rapid adoption of AI tools supported by strong governance and risk management practices, especially those related to safety and security.

The AI Action Plan Both Calls for and Constitutes AI Governance and Risk Management

The Plan is organized around three core pillars (Innovation, Infrastructure, and International Leadership) and contains many of the key components that would comprise a governance and risk management framework for AI adoption. It recommends policy actions to manage the government's AI approach (governance1), while also identifying key risks and recommending policy actions to mitigate them (risk management2). In this Alert, we highlight the components of the AI Action Plan most likely to be relevant for companies, consolidated in the manner we would expect to see in a governance and risk management framework. While innovation and speed are touted as the name of the game, the path forward is based on core risk management principles.

The AI Action Plan, through its pillars, forms a layered governance structure, blending regulatory updates, procurement policy, and diplomacy to accelerate AI development while addressing systemic risks across technical, institutional, and global domains.

Enable AI Adoption – How? Governance and Risk Management over Regulatory Bottlenecks

The Plan emphasizes a governance and risk management approach to AI safety, trust, and security over a strict regulatory mechanism to address AI risks and harms. As Pillar 1 of the Plan states (with emphasis added): "Today, the bottleneck to harnessing AI's full potential is not necessarily the availability of models, tools, or applications. Rather, it is the limited and slow adoption of AI, particularly within large, established organizations. Many of America's most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards. A coordinated Federal effort would be beneficial in establishing a dynamic, 'try-first' culture for AI across American industry."

Accelerating AI Adoption

The Plan builds upon the White House Office of Management and Budget's earlier governance framework for federal agency AI use, including OMB Memorandum M-25-21 on managing risks in agency AI deployments and OMB Memorandum M-25-22 on procurement and oversight of federal AI systems, which we've previously covered here. Businesses – especially those working with or selling into the federal ecosystem – should prepare for enhanced expectations around transparency, vendor accountability, and AI risk controls.

1. Oversight – Chief AI Officer Council

A new Chief AI Officer Council (CAIOC) will coordinate AI efforts across federal agencies, creating and implementing new governance frameworks. This signals a shift toward centralized oversight within the federal government, with downstream implications for federal contractors and vendors expected to align with emerging safety, fairness, and transparency benchmarks.

2. Identification of Key Applications for AI and AI Strategy

The Plan identifies priority AI applications in areas like scientific research3 and national security, which will receive early regulatory attention and infrastructure investment. Clients in adjacent sectors – especially defense, biotech, and data services – should expect increased scrutiny amidst an increase in partnership opportunities.

3. Identification of Infrastructure Need and AI tools

The Plan emphasizes the need for:

  • Compute resources, including the development of a healthy financial market for compute, with potential impact on financial services businesses.
  • High-quality datasets, as poor data governance undermines performance, product, and compliance.
  • Open-source and open-weight models, which promote innovation but raise traceability and misuse concerns.

4. Assess Risks

Federal efforts will focus on risks such as:

  • Interpretability: opaque AI systems causing mistrust, biased outcomes, and unintended harmful decisions due to lack of clear explanations
  • Cybersecurity: adversarial attacks, data breaches, deepfakes, and compromise of critical AI infrastructure disrupting essential services
  • National Security: misuse of AI by hostile actors for cyber warfare, espionage, and destabilizing technologies
  • Job Displacement: workforce disruption, economic inequality, and challenges from AI-driven automation of human tasks

To mitigate these risks, the Plan proposes implementation of an AI evaluations ecosystem guided by NIST and its Center for AI Standards and Innovation (CAISI), stronger IP protections, and new initiatives, including enforcement of the TAKE IT DOWN Act (passed on May 19, 2025) to combat synthetic media such as deepfakes – which may carry important implications for tech, media, political, and other businesses across industries. It also reinforces AI workforce upskilling, with planned programs to "expand AI literacy and skills development, continuously evaluate AI's impact on the labor market, and pilot new innovations to rapidly retrain and help workers thrive in an AI-driven economy." Notably, the Plan also calls for the development of high-security data centers that meet federal cybersecurity standards, support sensitive workloads, and are resistant to the most determined and capable nation-state actors.

5. Watch the Supply Chain (Vendor Risk Management)

Procurement reform is central to the Plan. Agencies must now evaluate AI vendors on safety, neutrality, and transparency – not just cost and performance. This elevates vendor risk management and will likely drive updates to contracting standards and compliance expectations across industries. Beyond direct acquisition, the Plan emphasizes securing the entire AI value chain, from semiconductor fabrication and compute resources, to data center infrastructure, to software supply chains. Agencies are instructed to streamline permitting for critical facilities, increase domestic chip production, and strengthen supply chain visibility for essential hardware and services. These infrastructure efforts aim to reduce dependence on foreign suppliers and ensure reliable access to trusted computing and tooling.

Safety and Security

The Plan places significant emphasis on securing AI systems from attack and misuse. It also highlights the need to protect critical infrastructure, promote "secure by design" AI technologies, and develop tailored AI incident response frameworks. In addition, the Plan includes biosecurity screening for models that could contribute to the development of biological threats, extending cybersecurity principles to AI risks. Together with the administration's executive order on cybersecurity released in June (which Goodwin has covered here), the Plan demonstrates an awareness of the critical risks and harms that can result when AI is adopted without proper management of cybersecurity and safety considerations.

Conclusion

Overall, the AI Action Plan serves as an example of AI governance and risk management in the context of governing a nation. While the application of governance and risk management in the private sector should be tailored to an organization's needs, the AI Action Plan (as well as governance measures coming out of other jurisdictions such as the EU) illustrates that such a framework supports accelerated innovation, and that attention to security goes hand in hand with prudent AI adoption.

Footnotes

1 Governance: the systems, processes, and structures by which organizations or groups make and implement decisions, exercise authority, and are held accountable for achieving their objectives.

2 Risk Management: the systematic process of identifying, assessing, and controlling risks, which are potential events or situations that could negatively or positively impact the achievement of objectives.

3 Goodwin's inaugural AI & Drug Discovery Symposium took place on June 16, 2025, with a session dedicated to AI governance and security that is available here.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
