Checklist · AI Contract Review · 47 Items · 2026

Enterprise AI Procurement Checklist

AI contracts are being signed at speed — often by procurement and IT teams applying traditional software contract review frameworks to fundamentally different commercial structures. AI pricing is consumption-based, opaque, and escalating. AI data rights clauses are contractually aggressive in ways most legal teams have not seen before. This 47-item checklist is the review framework that enterprise organisations need before signing any AI software agreement in 2026.

  • 47 Contract Review Items
  • 8 AI Contract Risk Categories
  • 60+ AI Deals Underpinning This Framework
  • $890M AI Contract Value Reviewed

The Eight AI Contract Risk Categories

Every enterprise AI contract contains clauses across these eight risk categories. Most traditional software contract review processes address fewer than three of them. This checklist ensures all eight are reviewed before signature.

Category 1: Pricing & Consumption Architecture

AI pricing structures — tokens, API calls, compute units, seats with AI add-ons — are fundamentally different from per-user or per-processor software licensing. The checklist covers: how vendor pricing meters are defined contractually; whether consumption caps are included; how overage is priced; whether price escalation applies to consumption rates as well as base fees; and whether consumption baseline estimates are contractually binding on the vendor. Gaps on these five items alone have driven budget overruns of 40% or more in AI deployments we have reviewed.
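The consumption arithmetic described above can be sketched as a simple projection. All figures below — base fee, included allowance, overage rate, usage — are hypothetical illustrations, not vendor pricing:

```python
# Illustrative AI consumption cost projection (all rates are hypothetical).
def project_annual_spend(base_fee, included_units, overage_rate,
                         projected_units, cap_units=None):
    """Return (total_cost, overage_cost) for one contract year.

    base_fee        flat subscription fee covering the included allowance
    included_units  metered units (tokens, calls, credits) bundled in
    overage_rate    price per unit beyond the included allowance
    cap_units       negotiated consumption cap, if any (None = uncapped)
    """
    usage = projected_units if cap_units is None else min(projected_units, cap_units)
    overage_units = max(0, usage - included_units)
    overage_cost = overage_units * overage_rate
    return base_fee + overage_cost, overage_cost

# Uncapped contract, with production usage running at 4x the included allowance
total, over = project_annual_spend(
    base_fee=500_000, included_units=10_000_000,
    overage_rate=0.06, projected_units=40_000_000)

# Same usage with a negotiated consumption cap at 110% of the allowance
capped_total, _ = project_annual_spend(
    base_fee=500_000, included_units=10_000_000,
    overage_rate=0.06, projected_units=40_000_000, cap_units=11_000_000)
```

With these assumed numbers, uncapped overage turns a $500K base fee into $2.3M of annual spend, while the 110% cap holds it to $560K — the pattern the cap and overage items in this category are designed to surface before signature.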

Category 2: Data Rights & Training Consent

AI vendors require access to your data to provide the service. Many AI contracts contain clauses — sometimes buried in service terms rather than the master agreement — that permit the vendor to use customer data for model training, fine-tuning, or quality improvement. The checklist covers: explicit data use prohibition language; training consent opt-out provisions; data residency commitments for AI processing; data deletion on termination; and whether AI-generated outputs are subject to IP claims by the vendor. Three of the highest-risk items in the entire checklist fall in this category.

Category 3: AI Liability & Indemnification

Standard software liability caps are typically set at 12 months of fees. AI contracts increasingly attempt to limit liability further — or exclude it entirely — for AI output errors, hallucinations, or decisions made using AI-generated recommendations. The checklist covers: AI output liability exclusion clauses; whether indemnification covers third-party claims arising from AI decisions; intellectual property infringement coverage for AI-generated content; and whether regulatory compliance liability for AI-assisted processes is addressed. Vendors including Microsoft, OpenAI, Salesforce, and ServiceNow have all introduced AI-specific liability exclusion language in 2024–2025.

Category 4: SLA Standards for AI Availability & Accuracy

Traditional software SLAs define availability and response time. AI SLAs need to address model accuracy, output quality, and the vendor's obligations when AI capability degrades — particularly as models are retrained or replaced. The checklist covers: whether availability SLAs cover AI model availability separately from platform availability; accuracy or quality commitments; model version stability provisions (preventing vendor unilateral model replacement); performance regression notification obligations; and credits for AI performance below committed thresholds. Most current AI contracts offer no SLA protection on model quality whatsoever.

Category 5: Vendor Lock-In & Portability

AI platform dependencies are more severe than traditional software lock-in. Data, fine-tuned models, prompt libraries, and integration workflows built on one vendor's AI infrastructure are rarely portable. The checklist covers: data export rights and format specifications; fine-tuned model portability (whether models trained on your data can be extracted); API compatibility commitments; transition assistance obligations; and whether the vendor is permitted to sunset capabilities during the contract term without compensation. The lock-in assessment framework in the checklist helps organisations quantify switching cost before signing.

Category 6: Regulatory & Compliance Provisions

The EU AI Act, emerging US AI governance frameworks, and sector-specific regulations (financial services, healthcare, government) impose obligations on AI deployers — not just vendors. The checklist covers: who bears regulatory compliance responsibility for AI systems in production; whether vendors provide the technical documentation required by the EU AI Act for high-risk AI systems; audit rights for AI decision systems; algorithmic transparency commitments; and how the contract handles future regulatory changes that affect AI deployment. Organisations that have not addressed these provisions in their AI contracts face significant remediation costs when regulatory deadlines arrive.

Category 7: Exit & Transition Provisions

AI vendor exits are operationally complex in ways that traditional software transitions are not. Fine-tuned models, training data, prompt engineering work, and AI-integrated workflows represent significant investment that can be stranded on exit. The checklist covers: notice periods for price escalation events that trigger exit rights; data extraction timelines and format commitments; extended access provisions for transition periods; model export technical specifications; and whether transition assistance services are contractually obligated rather than commercially offered. The exit provision review items in the checklist have recovered an average of $2.3M per engagement in transition cost avoidance.

Category 8: Security & Incident Response

AI systems process sensitive data at scale and introduce new attack surfaces — prompt injection, model poisoning, and adversarial inputs are attack vectors that traditional security SLAs do not address. The checklist covers: AI-specific security certifications and audit rights; breach notification timelines that cover AI inference data (not just stored data); vendor obligations regarding model security testing; prompt injection protection commitments; and whether AI security incidents are covered under the same incident response SLA as general platform incidents. Six of the 47 checklist items address security provisions that are absent from the majority of current AI contracts.

AI Contract Risk Scorecard: Vendor Comparison

How the major AI platform vendors currently score across the eight contract risk categories. Based on our review of current standard contract terms as of Q1 2026 — terms we have successfully negotiated improvements to in enterprise deals.

| Risk Category | Microsoft Copilot / Azure OpenAI | Salesforce Einstein / Agentforce | ServiceNow Now AI | Google Vertex AI |
| --- | --- | --- | --- | --- |
| Pricing Transparency | Medium Risk | High Risk | High Risk | Medium Risk |
| Data Training Rights | Medium Risk | High Risk | Medium Risk | Medium Risk |
| AI Output Liability | High Risk | High Risk | High Risk | High Risk |
| AI Accuracy SLAs | No Commitment | No Commitment | No Commitment | No Commitment |
| Vendor Lock-In | High Risk | High Risk | Medium Risk | Medium Risk |
| Regulatory Compliance | Medium Risk | Medium Risk | Medium Risk | Lower Risk |
| Exit Provisions | Medium Risk | High Risk | High Risk | Medium Risk |
| Security & Incident Response | Lower Risk | Medium Risk | Medium Risk | Lower Risk |

Risk ratings based on review of standard vendor contract terms Q1 2026. Enterprise negotiated agreements can significantly change these ratings — our advisory engagements typically reduce high-risk ratings to medium or low across all categories.
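Qualitative ratings like those in the scorecard can be made comparable across vendors with a simple weighted aggregation. The score mapping and equal weights below are illustrative assumptions, not a published methodology — adjust the weights to your own risk priorities:

```python
# Illustrative risk scorecard aggregation (score mapping and weights are assumptions).
RATING_SCORE = {"Lower Risk": 1, "Medium Risk": 2, "High Risk": 3, "No Commitment": 3}

WEIGHTS = {  # equal weights here; raise the categories that matter most to you
    "Pricing Transparency": 1, "Data Training Rights": 1, "AI Output Liability": 1,
    "AI Accuracy SLAs": 1, "Vendor Lock-In": 1, "Regulatory Compliance": 1,
    "Exit Provisions": 1, "Security & Incident Response": 1,
}

def vendor_risk_score(ratings):
    """Weighted sum of category ratings; higher means riskier standard terms."""
    return sum(WEIGHTS[cat] * RATING_SCORE[rating] for cat, rating in ratings.items())

# Google Vertex AI row from the scorecard
vertex = {
    "Pricing Transparency": "Medium Risk", "Data Training Rights": "Medium Risk",
    "AI Output Liability": "High Risk", "AI Accuracy SLAs": "No Commitment",
    "Vendor Lock-In": "Medium Risk", "Regulatory Compliance": "Lower Risk",
    "Exit Provisions": "Medium Risk", "Security & Incident Response": "Lower Risk",
}
score = vendor_risk_score(vertex)  # ranges from 8 (all lower risk) to 24 (all high)
```

A numeric score is useful for ranking vendors and for tracking how much a negotiation improved the terms, but the per-category ratings remain the actionable detail.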

The Five AI Contract Traps We See Most Often

  • 1. Signing AI Add-Ons Without Understanding the Pricing Meter: AI capabilities are being bundled into SaaS renewals at seemingly modest per-user rates — but the actual spend is driven by consumption meters that are not visible in the headline pricing. Copilot interactions, Einstein AI queries, and Now Assist utilisation are all consumption-billed in ways that can produce spend 3–5x the contracted base fee at production scale. Always require a consumption projection model as part of AI contract negotiations, and cap maximum consumption overage contractually before signing.
  • 2. Accepting Standard Data Use Clauses Without Review: The default data use clauses in AI vendor agreements — often embedded in service terms or data processing addenda rather than the main contract — frequently permit broader data use than enterprise legal teams realise. Clauses permitting "service improvement" or "model training" using "aggregated and anonymised" customer data have been interpreted by vendors to cover fine-tuned model training. Explicitly prohibit all customer data use for model training in the master agreement, not just the DPA.
  • 3. Assuming Traditional Liability Caps Cover AI Outputs: Standard software liability caps (typically 12 months of fees) are being supplemented with AI-specific exclusions that eliminate vendor liability for AI output errors entirely. "AI outputs are not guaranteed to be accurate" language — which vendors present as an obvious technical caveat — functions contractually as a complete liability exclusion for AI-driven decisions. Require explicit AI output liability coverage, including coverage for third-party claims arising from decisions made using AI recommendations.
  • 4. No Model Stability Provisions: AI vendors retain the right to update, retrain, or replace underlying models during the contract term — which can fundamentally change the output characteristics of AI systems that have been integrated into business processes. A model update that changes AI decision patterns can require significant revalidation, retraining, and process adjustment. Require model version stability provisions — minimum 90-day advance notice of major model changes, with a parallel validation period and the right to continue on prior model versions for 6 months post-change.
  • 5. Bundling AI Into Platform Renewals Without Separate Pricing: Salesforce, ServiceNow, and Microsoft are embedding AI capabilities into standard platform tiers during renewal — presenting AI as an "included" enhancement rather than a separately priced add-on. This approach allows vendors to establish AI pricing precedent within existing contracts at rates that are 40–70% above what standalone AI negotiations achieve. Always negotiate AI capabilities as a separate contract line item, even when the vendor presents them as bundled, to establish clear pricing visibility and create negotiation leverage at the next renewal.
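The arithmetic behind trap 5 can be made explicit: derive the implicit per-user AI rate from the renewal delta, then compare it to a standalone benchmark. Every figure in this sketch is a hypothetical illustration, not a real benchmark:

```python
# Illustrative unbundling check (all figures are hypothetical, not benchmarks).
def implicit_ai_rate(renewal_with_ai, prior_renewal_baseline, users):
    """Per-user price actually being paid for the 'included' AI capability."""
    return (renewal_with_ai - prior_renewal_baseline) / users

def bundling_premium(implicit_rate, standalone_benchmark):
    """Fractional premium versus negotiating the AI component separately."""
    return implicit_rate / standalone_benchmark - 1

rate = implicit_ai_rate(renewal_with_ai=6_800_000,
                        prior_renewal_baseline=4_400_000, users=50_000)
premium = bundling_premium(rate, standalone_benchmark=30.0)
print(f"implicit AI rate ${rate:.0f}/user, {premium:.0%} above standalone")
```

With these assumed numbers the "included" AI capability is implicitly priced at $48/user, 60% above the assumed standalone benchmark — squarely in the 40–70% premium range the trap describes.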

What the Checklist Has Found

Financial Services Firm — Microsoft 365 Copilot

$4.2M Microsoft 365 Copilot agreement for 14,000 users. Checklist review identified: uncapped overage on Copilot interaction tokens; a data use clause permitting model training on communication data; no model stability provisions; and liability exclusion for Copilot-assisted decisions. Renegotiated terms: consumption cap with 110% overage limit at flat rate; explicit training data prohibition; 180-day model stability notice requirement; and coverage for regulatory challenges arising from Copilot-assisted compliance processes. Estimated risk exposure reduced: $18M over contract term.

Healthcare Provider — Salesforce Einstein AI

$6.8M Salesforce renewal with Einstein AI and Agentforce bundled at standard tier pricing. Checklist review identified that Einstein AI was being priced as a bundled inclusion at an implicit rate of $48/user — compared to a standalone Einstein benchmark of $28/user for comparable healthcare accounts. Unbundling and separately negotiating the AI component achieved $24/user for a 24-month pilot commitment — saving $1.4M annually and establishing clean pricing visibility for the subsequent renewal.

Manufacturing Group — Multi-Vendor AI Stack

AI deployment across AWS Bedrock, Azure OpenAI, and ServiceNow Now AI — total AI spend of $11M annually. Checklist applied to all three agreements revealed cross-vendor data rights conflicts: each vendor's agreement claimed rights over AI outputs generated using their infrastructure, creating an unresolved IP ownership issue for AI-generated operational data. Contract restructuring across all three vendors resolved the IP ownership chain and established clear data portability rights for cross-platform AI workflows.

Government Agency — Sovereign AI Platform

£28M AI platform procurement for a national government agency. EU AI Act compliance review using the checklist identified 11 items requiring vendor documentation that the initial contract did not provide — including technical documentation for high-risk AI system classification, conformity assessment obligations, and human oversight requirements. Contract renegotiation delivered full EU AI Act compliance documentation obligations, with contractual remedies if vendor fails to provide required regulatory documentation within 90 days of enacted regulatory deadlines.

Access the Full Checklist

The Enterprise AI Procurement Checklist includes all 47 review items across 8 risk categories, with guidance notes, sample contract language, and red flag indicators for each item. Also includes a vendor risk scorecard template and a pre-signature AI contract review workflow. Download free with registration.

What You Receive

  • ✓ 47-item AI contract checklist (PDF)
  • ✓ AI vendor risk scorecard template (Excel)
  • ✓ Sample AI contract counter-language (Word)
  • ✓ EU AI Act compliance checklist overlay
  • ✓ AI consumption modelling spreadsheet

Speak to an AI Contract Specialist

Download the Checklist — No Cost

Related Resources

AI Procurement Advisory Service

Our AI contract specialists review your AI agreements against the full 47-item checklist and deliver a prioritised risk remediation plan. For active procurement, we engage directly in negotiations to address the highest-risk items before signature. We have reviewed over $890M in AI contract value across Microsoft, Salesforce, ServiceNow, AWS, and Google Cloud.

Learn More →

AI Contract Red Flags

A more focused look at the 12 highest-risk clauses that appear in AI contracts from Microsoft, Salesforce, ServiceNow, and Google Cloud — the specific contract language that our advisors flag as requiring immediate renegotiation. Companion resource to this checklist for legal and procurement teams reviewing specific AI contract language.

Download →

Microsoft Copilot Negotiation Guide

Microsoft Copilot 365 is the most widely deployed enterprise AI product in 2026 — and the one where we see the most problematic contract terms. Our Copilot guide covers pricing benchmarks, consumption modelling, data rights provisions, and the specific negotiation positions that have achieved 30–45% reductions on Copilot proposals from Microsoft account teams.

Download →

Free AI Contract Review

Have an AI Contract You Need Reviewed?

In a 60-minute session, our AI contract specialists will review your current or pending AI agreement against the 47-item checklist — identifying the highest-risk clauses and recommending specific remediation language before you sign. No charge for the initial review.

Request an AI Contract Review