ARTICLE
16 July 2025

Texas Enacts Responsible AI Governance Act: What Companies Need To Know

BB
Baker Botts LLP

Contributor

Baker Botts is a leading global law firm. The foundation for our differentiated client support rests on our deep business acumen and technical experience built over decades of focused leadership in our sectors and practices. For more information, please visit bakerbotts.com.
Governor Greg Abbott signed HB 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), into law on June 22, 2025.
United States Technology

Executive Summary

Governor Greg Abbott signed HB 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), into law on June 22, 2025. With its passage, Texas becomes the third state to adopt a comprehensive AI law. Notably, the enacted version represents a dramatic evolution from the original December 2020 proposal, which would have imposed sweeping EU AI Act-style requirements. The final law instead focuses on prohibiting specific harmful AI practices through an intent-based liability framework, reflecting Texas's effort to balance innovation with consumer protection. While enshrining several familiar principles and provisions, TRAIGA diverges from other AI laws in emphasizing intentional misconduct over impact-based liability. It also includes several safe harbors and preempts local AI regulation. The Act takes effect January 1, 2026, giving companies approximately 6 months to prepare and establish compliance programs.

Key Highlights:

  • Broad applicability to AI "developers" and "deployers" conducting business in Texas;
  • Prohibition of AI systems intended for discrimination, constitutional rights violations, or harmful behavior manipulation;
  • Creation of a 36-month regulatory sandbox for AI testing;
  • Exclusive enforcement authority vested in the Texas Attorney General; and
  • Substantial civil penalties for violations with some safe harbor provisions.

Scope and Applicability

The Act applies to any person that "promotes, advertises, or conducts business" in Texas, offers products or services to Texas residents, or "develops or deploys" an AI system in the state. Developers are entities that create AI systems, which are "offered, sold, leased, or otherwise provided in Texas," whereas deployers put "an AI system into service or use in the state."

An "artificial intelligence system" under TRAIGA is "any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs including content, decisions, predictions, or recommendations, that can influence physical or virtual environments." This broad, technology-neutral definition encompasses generative AI, agentic AI, recommender systems, biometric technologies, and a range of other artificial intelligence applications.

Core Regulatory Framework

Prohibited Practices

TRAIGA establishes clear prohibitions against AI systems designed with specific harmful intents:

  • Behavioral Manipulation: Organizations cannot develop or deploy AI systems "in a manner that intentionally aims to incite or encourage a person to (i) commit physical self-harm, including suicide, (ii) harm another person; or (iii) engage in criminal activity."
  • Constitutional Rights Violations: The law prohibits AI systems developed with the sole intent of infringing upon constitutional rights.
  • Discriminatory AI: Systems cannot be developed or deployed with intent to unlawfully discriminate against protected classes. Critically, the law requires proof of discriminatory intent rather than mere disparate impact, providing businesses with greater legal certainty.
  • Exploitation of Minors: AI systems cannot be developed with the sole intent of creating, distributing, or facilitating child sexual abuse material or explicit deepfake content involving minors.

Intent-Based Liability Standard

TRAIGA's most significant innovation lies in its intent-based liability framework. Unlike impact-focused regulations that create strict liability for discriminatory outcomes, Texas requires proof of intentional misconduct. This approach provides businesses with clearer compliance guidelines while maintaining consumer protections against deliberate abuse. However, this framework creates practical documentation imperatives: while TRAIGA doesn't explicitly mandate extensive record-keeping, proving lack of discriminatory or harmful intent effectively requires organizations to maintain detailed documentation of AI system purposes, design decisions, and intended use cases. Companies should consider documenting their legitimate business purposes for AI systems, testing protocols that demonstrate efforts to prevent prohibited uses, and clear policies restricting system deployment to lawful purposes. This documentation becomes critical evidence in defending against enforcement actions, as the absence of such records could make it difficult to refute allegations of improper intent.

Government Entity Requirements

For governmental entities, TRAIGA imposes additional obligations:

  • Mandatory disclosure to consumers (defined as Texas residents acting in an individual or household context and not a commercial or employment context) that they are interacting with an AI system before or at the point of interaction;
  • Prohibition on AI systems that categorize individuals to assign a "social score" that could lead to detrimental treatment; and
  • Restrictions on using AI to uniquely identify persons via biometric data obtained from publicly available sources without consent.

Innovation-Friendly Provisions

Safe Harbor Protections

The law provides multiple safe harbor provisions for organizations demonstrating good faith compliance efforts. Organizations may not be found liable if they:

  • Discover violations through internal testing, including adversarial testing and red team exercises;
  • Substantially comply with the NIST AI Risk Management Framework or other recognized standards;
  • Are in compliance with state agency guidelines; or
  • Experience third-party misuse of their AI systems.

Regulatory Sandbox Program

TRAIGA establishes a first-in-the-nation state AI regulatory sandbox. The sandbox is administered by the Department of Information Resources and allows approved participants to test innovative AI applications for 36 months without obtaining any applicable standard state licenses. The application process requires potential participants to describe their AI systems, outlining their benefits, risks, and approach to risk assessment and mitigation. Certain laws and regulations are waived or suspended for sandbox participants, and during the testing period, neither the Attorney General nor state agencies may pursue punitive action for violations. However, TRAIGA's core prohibitions are not waived and must be adhered to during the testing period. Participants will submit quarterly performance reports during their time in the sandbox, and these reports will include feedback from consumers and other stakeholders testing the AI systems.

Healthcare and Biometric Privacy

Healthcare providers face enhanced obligations under both TRAIGA and companion legislation SB 1188, creating a comprehensive regulatory framework for AI in healthcare settings. Under TRAIGA, healthcare providers must clearly disclose AI system use in treatment contexts, ensuring patient awareness and informed consent for AI-assisted medical decisions. SB 1188 adds specific requirements including mandatory provider review of all AI-generated medical records according to Texas Medical Board standards, restrictions on physical offshoring of electronic medical records, and detailed patient notification procedures.

TRAIGA also creates new amendments to Texas's Capture or Use of Biometric Identifiers (CUBI). The Act clarifies that individuals do not consent to biometric capture merely because media containing their identifiers is publicly available unless they made it public themselves. TRAIGA also amends CUBI to allow the use of biometric identifiers to develop AI systems, provided the systems do not uniquely identify individuals.

TRIAGA carves out another CUBI exemption for the use of biometrics in AI systems deployed for the purposes of security, fraud detection, or the prevention of illegal activity. These systems, however, are not exempt from TRIAGA's statutory requirements.

Enforcement and Penalties

Texas' Attorney General holds exclusive enforcement authority over TRAIGA, and before pursuing enforcement action, the Attorney General must provide notice and allow a 60-day cure period for violations. The Act provides no private right of action and expressly nullifies any city or county ordinances regulating AI, aiming to prevent a local patchwork.

Penalty Structure

Civil penalties follow a tiered structure:

  • Curable violations (but not cured or those that breach statements submitted to the Attorney General that the violation has been cured): $10,000 to $12,000 per violation
  • Uncurable violations: $80,000 to $200,000 per violation
  • Continuing violations (or those that continue beyond the cure period without a submitted statement to the Attorney General): $2,000 to $40,000 per day

How to Prepare

Given the January 1, 2026, effective date, organizations that develop or deploy AI should begin compliance preparations now:

Immediate Actions

  1. Inventory AI Systems: Conduct a comprehensive audit of AI systems developed or deployed in Texas;
  2. Risk Assessment: Stratify AI use cases by risk level and potential TRAIGA implications; and
  3. Policy Development: Establish internal AI governance policies aligned with TRAIGA requirements and industry best practices.

Medium-Term Preparations

  1. Compliance Framework: Align enterprise-wide AI governance with the NIST AI Risk Management Framework;
  2. Testing Protocols: Establish adversarial testing and "red team" testing procedures;
  3. Documentation Systems: Create system and process documentation for defensible responses to regulatory inquiries;
  4. Incident Response: Develop procedures for addressing potential violations;
  5. Employee Training and Internal Awareness: Implement targeted training programs for teams involved in AI development, deployment, or oversight, with a focus on TRAIGA's core requirements, documentation practices, and internal reporting procedures; and
  6. Third-Party Vendor Management: Strengthen third-party risk management by updating vendor policies, contracts, and due diligence processes to ensure external partners understand and align with TRAIGA obligations, even though organizations are not liable for third-party misuse.

Ongoing Monitoring

  1. Attorney General Guidance: Watch for Texas Attorney General implementation guidance and enforcement priorities;
  2. Harmonization: Coordinate TRAIGA compliance with other state and international AI regulations for comprehensive governance; and
  3. Sandbox Participation: Evaluate opportunities to participate in Texas's regulatory sandbox for high-risk or novel AI use cases.

TRAIGA represents a thoughtful approach to AI regulation that balances innovation with consumer protection. The law's intent-based liability standards, comprehensive safe harbor provisions, and regulatory sandbox lay the foundation for a more business-friendly environment than many alternative regulatory frameworks. However, the substantial penalties underscore the importance of proactive compliance efforts. Organizations that proactively embrace TRAIGA's requirements will be well-positioned for the evolving regulatory landscape while maintaining competitive advantages in AI innovation.

Comparison Chart of Major AI Regulatory Regimes

Jurisdiction Texas (TRAIGA) Colorado AI Act Utah (AI Policy Act) California (CCPA ADMT) European Union (AI Act)
Implementation Date January 1, 2026 February 1, 2026 May 1, 2024 August 1, 2025 Phased (2024-2027)
Regulatory Philosophy Intent-based liability with innovation focus Impact-based algorithmic discrimination prevention Minimal disclosure requirements Consumer privacy rights for automated decision-making Comprehensive risk-tiered approach
Covered Entities Developers and deployers Developers and deployers Limited scope entities Businesses processing personal data via automated systems Providers, deployers, importers, distributors
Liability Standard Requires proof of discriminatory intent Algorithmic discrimination (impact-focused) Transparency violations only Consumer privacy violations, opt-out right violations Fundamental rights impact assessments
Innovation Incentives 36-month regulatory sandbox with legal immunity None specified AI development promotion Small business revenue thresholds ($25M) Member state sandboxes required by 2026
Enforcement Authority Texas AG exclusive Colorado AG exclusive State agencies CPPA + private actions Tiered penalties up to €35M or 7% revenue
Safe Harbor Provisions Extensive (testing, NIST compliance, third-party misuse) Limited Minimal compliance requirements Honoring consumer opt-out requests Conformity assessments and standards compliance
Government Use Restrictions Social scoring ban, biometric limitations, disclosure requirements Not Specified Limited provisions Not specified Prohibited and high-risk classifications
Private Right of Action No No No Yes (statutory damages) No (administrative enforcement)
Cure Period 60-day notice and cure To be determined Not applicable 30-day notice and cure No general cure provision

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Mondaq uses cookies on this website. By using our website you agree to our use of cookies as set out in our Privacy Policy.

Learn More