
AI Compliance Checklist March 2026: Monthly Changes

A practical AI compliance checklist capturing every regulatory change from March 2026. Covers EU AI Act deadlines, FTC guidance, and action items.

Digital Applied Team
March 26, 2026
10 min read
  • Regulatory changes in March: 14
  • New US state AI bills: 3
  • Countries with active AI law: 47
  • Next major deadline: July 2026

Key Takeaways

  • EU AI Act GPAI transparency obligations are now enforced: March 2026 marks the first month in which GPAI model providers face active enforcement of transparency and technical documentation obligations under the EU AI Act. Organizations deploying general-purpose AI models in EU markets must have model documentation packages ready for regulatory review, not just in preparation.
  • FTC clarified AI endorsement disclosure rules with new examples: The FTC published updated guidance on AI-generated endorsements in March 2026, adding concrete examples of disclosures that do and do not meet the standard. Any business using AI-generated testimonials, reviews, or influencer-style content must update their disclosure practices to match the new examples.
  • Three US states passed new AI bills in March with imminent effective dates: Texas, Georgia, and Minnesota each advanced AI legislation in March 2026 with effective dates as early as July 2026. The Texas bill includes risk assessment requirements for high-risk AI use cases that will affect HR, credit, and insurance applications. Georgia's bill focuses on government AI transparency.
  • NIST AI RMF 1.1 released with updated measurement guidance: The National Institute of Standards and Technology released version 1.1 of its AI Risk Management Framework in March 2026. The update adds new MEASURE function guidance that organizations using federal contracts will need to align with. Private-sector organizations should treat this as the emerging baseline for AI governance documentation.

AI regulation moved faster in the first quarter of 2026 than in any comparable period since the EU AI Act was adopted. March alone brought enforcement activations, new FTC guidance, three US state bills, sector-specific rulemaking in financial services and healthcare, and an updated version of the NIST AI RMF. For compliance teams, legal counsel, and business leaders, tracking every change manually across jurisdictions is no longer feasible without a structured monthly review process.

This monthly compliance checklist consolidates every meaningful AI regulatory development from March 2026 into a single, actionable document. It is organized by jurisdiction and topic, with specific action items for each change. For broader context on the trajectory of US and EU AI regulation, see our analysis of EU AI Act 2026 compliance requirements for US businesses and the federal versus state AI regulation debate. Those pieces provide the strategic framework; this checklist provides the monthly operational update.

How to Use This Monthly Checklist

This checklist is structured in order of urgency and jurisdictional scope. EU AI Act items come first because they have the broadest applicability to any organization with EU market exposure. US federal items follow, then state-level changes, then sector-specific updates. Each section ends with specific action items rated by priority.

Priority 1: Immediate

Changes with existing enforcement, active deadlines within 30 days, or where non-compliance creates immediate customer or regulatory exposure. Act this month.

Priority 2: Near-Term

Changes with effective dates in the next 60 to 90 days, or where early preparation provides a competitive or compliance advantage. Plan and begin implementation.

Priority 3: Monitor

Proposed rules, bills in progress, or guidance documents that are not yet final. Track developments and flag for next month's review. No action required yet.

EU AI Act: March 2026 Updates

The EU AI Act continues its phased implementation schedule. March 2026 is the month when GPAI model transparency obligations shifted from transitional expectation to active enforcement. Organizations that have been in preparation mode need to confirm their documentation is ready for actual regulatory review.

Priority 1 — GPAI Transparency Obligations Active

As of March 1, 2026, GPAI model providers must maintain technical documentation packages and make them available to the European AI Office upon request. This applies to any organization providing a general-purpose AI model that is integrated into products or services offered in EU markets.

Required actions:

  • Confirm technical documentation package is complete and current for each GPAI model in production use
  • Verify documentation meets the European AI Office evaluation guidelines published March 2026
  • Assign a named responsible party for documentation maintenance and regulatory liaison
  • Establish a response protocol for documentation requests with a target response time under 48 hours

Priority 2 — High-Risk AI Conformity Assessment Deadline: July 2026

Organizations deploying high-risk AI systems in the EU must complete conformity assessments before July 2026. High-risk categories include AI used in employment decisions, credit scoring, educational access, law enforcement, and critical infrastructure management. March is the point at which organizations without an assessment plan are falling behind schedule.

Required actions:

  • Complete AI system inventory to identify which deployments fall under high-risk categories
  • Engage a notified body for third-party conformity assessment if required for your risk category
  • Begin post-market monitoring system setup for any high-risk AI system already in deployment

Priority 3 — AI Literacy Obligation Guidance Draft Published

The European Commission published a draft guidance document in March 2026 on how organizations should implement the AI Act's Article 4 AI literacy obligation for staff deploying AI systems. The guidance is in consultation through May 2026. No action required yet, but organizations should review the draft to inform training program planning for Q3 2026 implementation.

FTC and Federal US Developments

The Federal Trade Commission remained the most active federal AI enforcement body in March 2026. In the absence of comprehensive federal AI legislation, the FTC is expanding its application of existing Section 5 authority and specific rules on endorsements, deceptive practices, and data security to cover AI-specific behaviors.

Priority 1 — FTC Updated AI Endorsement Disclosure Examples

The FTC published updated guidance on March 12, 2026, adding concrete examples to its AI-generated endorsement and testimonial disclosure requirements. The guidance clarifies that disclosures must be clear, conspicuous, and placed near the AI-generated content, not in footnotes, terms of service, or general website disclaimers. Any business using AI to generate customer-facing endorsements, testimonials, or review content must update disclosure practices immediately.

Required actions:

  • Audit all customer-facing AI-generated content and verify disclosure placement meets the new proximity standard
  • Update content management workflows to require disclosure flags when AI tools are used to draft testimonials or reviews
  • Review agency and contractor agreements to confirm disclosure obligations flow down to external content producers using AI

Priority 2 — NIST AI RMF 1.1 Released

NIST released AI Risk Management Framework version 1.1 on March 18, 2026. The primary update is expanded MEASURE function guidance covering performance metric selection, bias and fairness evaluation methodologies, and ongoing monitoring cadence recommendations. Federal contractors with AI components in their contracts are expected to align with RMF 1.1 in new contract cycles. Private sector organizations should treat 1.1 as the emerging documentation baseline for AI governance.

Required actions:

  • Review MEASURE function updates and compare against current AI monitoring practices for gaps
  • Update AI governance documentation to reference RMF 1.1 if currently referencing version 1.0
  • Confirm federal contract AI requirements with procurement contacts if relevant to your business

Priority 3 — Senate AI Committee Advanced Two Bills

The Senate Commerce Committee advanced the AI Accountability Act and the Algorithmic Transparency for Consumers Act in March 2026. Neither bill is expected to reach a floor vote before Q3 2026 at the earliest, and both face significant House opposition. Monitor for committee amendments in April. No action required until bills advance further.

State-Level AI Laws: March Updates

State AI legislation continued its rapid expansion in March 2026. The patchwork nature of US state AI law is creating increasing compliance complexity for national businesses. For a detailed analysis of the federal preemption debate, see our examination of federal versus state AI regulation and congressional preemption. This section covers only the changes that occurred in March.

Priority 2 — Texas AI Risk Assessment Bill Passed

Texas HB 1709 passed the Texas House on March 20, 2026 and moves to the Senate. The bill requires organizations using AI in consequential decisions — defined to include employment, credit, housing, insurance, and healthcare — to conduct and document risk assessments before deployment. Effective date if signed: September 1, 2026. Texas is one of the ten largest US markets, and most national businesses will be subject to this bill if it passes the Senate unchanged.

Preparation actions:

  • Identify all AI systems used in consequential decisions affecting Texas residents
  • Begin documenting the risk assessment methodology you will apply to each system — do not wait for the Senate vote

Priority 3 — Georgia AI Government Transparency Act Passed

Georgia SB 392 passed both chambers in March 2026. The bill requires Georgia government agencies to publicly disclose which AI systems they use and for what purposes. This is a government-facing transparency bill that does not create direct private sector obligations. However, businesses that sell AI systems to Georgia government agencies will face new disclosure requirements from their government clients. Monitor for vendor contract implications.

Priority 3 — Minnesota AI Hiring Transparency Bill Advanced

Minnesota SF 2791 advanced out of committee in March 2026. The bill requires employers using AI in hiring decisions affecting Minnesota residents to disclose AI use to candidates and provide a human review option, similar to Illinois' existing AEDT law. If enacted, the bill would take effect in January 2027. Begin reviewing hiring AI tools now if you have significant Minnesota employee or candidate populations.

Sector-Specific Compliance Changes

Several regulatory bodies with sector-specific authority over AI published guidance or rules in March 2026. Financial services and healthcare saw the most significant activity, continuing a pattern of sector regulators moving faster than horizontal AI legislation.

Priority 1 — Financial Services: OCC AI Model Risk Management Update

The Office of the Comptroller of the Currency issued updated model risk management guidance (OCC Bulletin 2026-8) in March clarifying how supervisory expectations from SR 11-7 apply to AI and machine learning models in credit underwriting, fraud detection, and customer service. Banks and financial institutions are now expected to bring large language models used in customer-facing applications within model risk management scope, not just quantitative models.

Required actions for financial institutions:

  • Expand model risk management inventory to include all LLM deployments in customer-facing or decision-support roles
  • Apply model validation requirements including challenger models and ongoing performance monitoring to LLM systems
  • Update model governance policies to explicitly address generative AI and provide these to examiners proactively

Priority 2 — Healthcare: HHS AI in Clinical Decision-Making Guidance

The Department of Health and Human Services Office for Civil Rights published guidance in March 2026 clarifying that AI systems used in clinical decision-making are subject to Section 1557 non-discrimination obligations. Healthcare providers and covered entities using AI for diagnostic support, treatment recommendations, or resource allocation must evaluate these systems for discriminatory impact by October 2026.

Required actions for healthcare organizations:

  • Inventory all AI systems in clinical or care management workflows and flag any with protected class inputs
  • Request disparate impact documentation from AI vendors providing clinical decision-support tools

Technical Compliance Action Items

Regulatory changes in March 2026 translate into specific technical requirements for engineering and product teams. These are the implementation tasks that compliance documentation requires but that legal teams often struggle to specify in technical detail.

Model Documentation
  • Training data sources and date ranges documented
  • Known limitations and failure modes catalogued
  • Performance metrics by demographic group available
  • Model versioning with change log maintained
  • Third-party audit trail accessible if applicable
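The documentation fields above lend themselves to an automated completeness gate in a review or release pipeline. A minimal sketch in Python follows; the field names are illustrative placeholders mirroring the checklist, not a prescribed schema, and the audit-trail item is omitted because the checklist marks it conditional:

```python
# Required keys mirror the documentation checklist; names are illustrative.
REQUIRED_DOC_FIELDS = {
    "training_data_sources",  # sources and date ranges
    "known_limitations",      # catalogued limitations and failure modes
    "metrics_by_group",       # performance metrics by demographic group
    "version_changelog",      # model versioning with change log
}

def missing_doc_fields(doc: dict) -> set[str]:
    """Return required documentation fields that are absent or empty."""
    return {key for key in REQUIRED_DOC_FIELDS if not doc.get(key)}
```

A release gate can then refuse to promote any model whose `missing_doc_fields` result is non-empty.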

User-Facing Controls
  • AI-generated content clearly labeled at point of display
  • Human review option available for consequential AI decisions
  • Opt-out mechanisms functional and clearly communicated
  • Data subject access request process covers AI-inferred data
  • Explanation capability available for individual AI decisions

Monitoring and Alerting
  • Drift detection running on production model outputs
  • Bias metric alerts configured with defined thresholds
  • Incident response procedure defined for model failures
  • Logging sufficient for post-incident investigation
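Drift detection on production outputs can start with a simple distributional comparison between a reference window and recent traffic. Below is a hedged sketch using the Population Stability Index, one common drift heuristic; the bin count and the usual alert thresholds (roughly 0.1 for moderate and 0.25 for significant drift) are industry conventions, not regulatory requirements:

```python
import math

def psi(reference: list[float], production: list[float], bins: int = 10) -> float:
    """Population Stability Index between two score samples.
    Larger values mean the production distribution has drifted
    further from the reference distribution."""
    lo = min(min(reference), min(production))
    hi = max(max(reference), max(production))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def frac(sample: list[float], b: int) -> float:
        count = sum(1 for x in sample if lo + b * width <= x < lo + (b + 1) * width)
        if b == bins - 1:  # include the top edge in the last bin
            count += sum(1 for x in sample if x == hi)
        return max(count / len(sample), 1e-6)  # avoid log(0)

    return sum(
        (frac(production, b) - frac(reference, b))
        * math.log(frac(production, b) / frac(reference, b))
        for b in range(bins)
    )
```

An alerting job would compute `psi` on a schedule and notify the named system owner when the value crosses the threshold the team has agreed on.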

Vendor Management
  • Data processing agreements current for all AI vendor contracts
  • AI vendor's EU AI Act compliance status documented
  • Sub-processor list current and customer-facing if required
  • Vendor termination rights exercisable if compliance failures occur

For organizations that need to translate these technical requirements into a complete AI governance program, our AI and digital transformation services include compliance readiness assessments that map current AI deployments against the EU AI Act, NIST AI RMF, and applicable US sector regulations.

Documentation and Governance Checklist

Documentation failures are the most common finding in AI compliance reviews. Organizations frequently have reasonable practices in place but cannot demonstrate them because written records are incomplete, outdated, or siloed across teams. This checklist covers the documentation that March 2026 regulatory changes specifically call for.

AI System Inventory (Required for EU AI Act, NIST RMF)

An AI system inventory is the foundation of any compliance program. Every AI tool used in the organization — whether developed in-house, purchased as SaaS, or accessed via API — should appear in a centralized register with key details.

Minimum fields per AI system entry

  • System name and version
  • Business purpose and use case
  • Risk classification (high-risk, limited-risk, minimal-risk)
  • Data inputs including any personal data categories
  • Geographic deployment scope
  • Vendor or in-house origin
  • Responsible owner (named individual)
  • Last review date and next scheduled review
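The minimum fields above map naturally onto a typed record in whatever system holds the register. A sketch in Python, where the class and field names are hypothetical choices that mirror the checklist:

```python
from dataclasses import dataclass
from datetime import date

# A hypothetical register entry; fields mirror the checklist above.
@dataclass
class AISystemEntry:
    name: str
    version: str
    purpose: str           # business purpose and use case
    risk_class: str        # "high-risk", "limited-risk", or "minimal-risk"
    data_inputs: list[str] # including any personal data categories
    regions: list[str]     # geographic deployment scope
    origin: str            # vendor name or "in-house"
    owner: str             # named responsible individual
    last_review: date
    next_review: date

    def review_overdue(self, today: date) -> bool:
        return today > self.next_review

def overdue_entries(register: list[AISystemEntry], today: date) -> list[str]:
    """Names of systems whose scheduled review date has passed."""
    return [entry.name for entry in register if entry.review_overdue(today)]
```

Keeping the register in a structured form like this makes the "last review date" field auditable rather than decorative: a scheduled job can surface overdue entries automatically.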

Risk Assessment Records (Required for Texas HB 1709, EU AI Act High-Risk)

For any AI system in a consequential decision category, document a risk assessment that addresses: the nature and probability of potential harms, the severity of those harms for affected individuals, the population affected and any disproportionate impact on protected groups, and the mitigating controls in place. This document must exist before deployment, not after a complaint or enforcement inquiry.

  • Risk assessment completed before deployment for all consequential AI systems
  • Annual review cycle scheduled with ownership assigned
  • Risk assessment triggers defined (significant model updates, new use cases, user complaints)
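The trigger rules in the last bullet can be encoded directly, so a governance job can flag systems that are due for reassessment rather than relying on memory. A minimal sketch; the event names are illustrative, not a legal standard:

```python
# Trigger events mirror the bullet list above; names are illustrative.
REASSESSMENT_TRIGGERS = {"major_model_update", "new_use_case", "user_complaint"}

def reassessment_required(events: set[str], last_review_days_ago: int,
                          annual_cycle_days: int = 365) -> bool:
    """A new risk assessment is due if any trigger event occurred
    or the annual review cycle has elapsed."""
    return bool(events & REASSESSMENT_TRIGGERS) or last_review_days_ago >= annual_cycle_days
```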

March 2026 Summary Scorecard

This section summarizes the compliance status across the major regulatory frameworks based on March 2026 developments. Use it as a quick reference for board reporting, quarterly compliance reviews, or investor due diligence documentation.

  • EU AI Act — GPAI Obligations: Active enforcement
  • EU AI Act — High-Risk Conformity: Deadline July 2026
  • FTC — AI Endorsement Disclosures: Active; guidance updated March 12
  • NIST AI RMF 1.1: Released; align now
  • Texas HB 1709 — AI Risk Assessments: Passed House; Senate pending
  • OCC AI Model Risk (Financial Services): Active; OCC Bulletin 2026-8
  • Federal AI Legislation: Monitor; no floor vote expected in Q2

The compliance landscape in March 2026 is characterized by active EU enforcement combined with a still-fragmented US state law environment. Organizations with EU market exposure face the most immediate and specific obligations. US-only businesses face a growing patchwork of state laws with an accelerating legislative calendar. The strategic question — whether federal preemption will simplify this or entrench complexity — remains unresolved for 2026. For a detailed analysis of that question and what it means for long-term AI compliance investment, see our article on federal versus state AI regulation.
