AI contracts are being signed at speed — often by procurement and IT teams applying traditional software contract review frameworks to fundamentally different commercial structures. AI pricing is consumption-based, opaque, and escalating. AI data rights clauses are contractually aggressive in ways most legal teams have not seen before. This 47-item checklist is the review framework that enterprise organisations need before signing any AI software agreement in 2026.
Every enterprise AI contract contains clauses across these eight risk categories. Most traditional software contract review processes address fewer than three of them. This checklist ensures all eight are reviewed before signature.
AI pricing structures — tokens, API calls, compute units, seats with AI add-ons — are fundamentally different from per-user or per-processor software licensing. The checklist covers: how vendor pricing meters are defined contractually; whether consumption caps are included; how overage is priced; whether price escalation applies to consumption rates as well as base fees; and whether consumption baseline estimates are contractually binding on the vendor. Gaps in these items alone have driven budget overruns of 40% or more in AI deployments we have reviewed.
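A minimal sketch of why uncapped or premium-priced overage matters. All rates, volumes, and the cap below are illustrative assumptions for this sketch, not any vendor's actual terms:

```python
def annual_spend(tokens_per_month_m, base_rate, cap_m=None, overage_rate=None):
    """Annual spend for a metered AI service (hypothetical model).

    tokens_per_month_m: consumption in millions of tokens per month
    base_rate: dollars per million tokens up to the cap
    cap_m: contractual consumption cap in millions of tokens (None = uncapped)
    overage_rate: dollars per million tokens above the cap
    """
    if cap_m is None or tokens_per_month_m <= cap_m:
        monthly = tokens_per_month_m * base_rate
    else:
        monthly = cap_m * base_rate + (tokens_per_month_m - cap_m) * overage_rate
    return 12 * monthly

# Budget assumes the baseline estimate holds; actual usage grows 60%,
# and overage is billed at double the base rate (both assumptions).
budget = annual_spend(100, 15, cap_m=100, overage_rate=30)
actual = annual_spend(160, 15, cap_m=100, overage_rate=30)
print(f"overrun: {actual / budget - 1:.0%}")  # → overrun: 120%
```

The same model shows why a negotiated flat-rate overage (overage_rate equal to base_rate) bounds the exposure to consumption growth rather than consumption growth times a premium.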
AI vendors require access to your data to provide the service. Many AI contracts contain clauses — sometimes buried in service terms rather than the master agreement — that permit the vendor to use customer data for model training, fine-tuning, or quality improvement. The checklist covers: explicit data use prohibition language; training consent opt-out provisions; data residency commitments for AI processing; data deletion on termination; and whether AI-generated outputs are subject to IP claims by the vendor. Three of the highest-risk items in the entire checklist fall in this category.
Standard software liability caps are typically set at 12 months of fees. AI contracts increasingly attempt to limit liability further — or exclude it entirely — for AI output errors, hallucinations, or decisions made using AI-generated recommendations. The checklist covers: AI output liability exclusion clauses; whether indemnification covers third-party claims arising from AI decisions; intellectual property infringement coverage for AI-generated content; and whether regulatory compliance liability for AI-assisted processes is addressed. Vendors including Microsoft, OpenAI, Salesforce, and ServiceNow have all introduced AI-specific liability exclusion language in 2024–2025.
Traditional software SLAs define availability and response time. AI SLAs need to address model accuracy, output quality, and the vendor's obligations when AI capability degrades — particularly as models are retrained or replaced. The checklist covers: whether availability SLAs cover AI model availability separately from platform availability; accuracy or quality commitments; model version stability provisions (preventing vendor unilateral model replacement); performance regression notification obligations; and credits for AI performance below committed thresholds. Most current AI contracts offer no SLA protection on model quality whatsoever.
AI platform dependencies are more severe than traditional software lock-in. Data, fine-tuned models, prompt libraries, and integration workflows built on one vendor's AI infrastructure are rarely portable. The checklist covers: data export rights and format specifications; fine-tuned model portability (whether models trained on your data can be extracted); API compatibility commitments; transition assistance obligations; and whether the vendor is permitted to sunset capabilities during the contract term without compensation. The lock-in assessment framework in the checklist helps organisations quantify switching cost before signing.
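Quantifying switching cost before signature can be as simple as itemising the stranded-investment components named above. The category names and figures below are illustrative assumptions, not the checklist's actual framework:

```python
# Hypothetical switching-cost components for a lock-in assessment.
# Every figure here is an illustrative assumption.
switching_costs = {
    "data export and migration": 250_000,
    "fine-tuned model retraining on a new platform": 400_000,
    "prompt library and workflow rebuild": 150_000,
    "integration and API rework": 300_000,
    "parallel-run and validation period": 200_000,
}

total = sum(switching_costs.values())
print(f"estimated switching cost: ${total:,}")  # → estimated switching cost: $1,300,000
```

A total of this size, set against the annual contract value, is what turns "lock-in risk" from an abstract concern into a concrete negotiating position on export rights and transition assistance.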
The EU AI Act, emerging US AI governance frameworks, and sector-specific regulations (financial services, healthcare, government) impose obligations on AI deployers — not just vendors. The checklist covers: who bears regulatory compliance responsibility for AI systems in production; whether vendors provide the technical documentation required by the EU AI Act for high-risk AI systems; audit rights for AI decision systems; algorithmic transparency commitments; and how the contract handles future regulatory changes that affect AI deployment. Organisations that have not addressed these provisions in their AI contracts face significant remediation costs when regulatory deadlines arrive.
AI vendor exits are operationally complex in ways that traditional software transitions are not. Fine-tuned models, training data, prompt engineering work, and AI-integrated workflows represent significant investment that can be stranded on exit. The checklist covers: notice periods for price escalation events that trigger exit rights; data extraction timelines and format commitments; extended access provisions for transition periods; model export technical specifications; and whether transition assistance services are contractually obligated rather than commercially offered. The exit provision review items in the checklist have recovered an average of $2.3M per engagement in transition cost avoidance.
AI systems process sensitive data at scale and introduce new attack surfaces — prompt injection, model poisoning, and adversarial inputs are attack vectors that traditional security SLAs do not address. The checklist covers: AI-specific security certifications and audit rights; breach notification timelines that cover AI inference data (not just stored data); vendor obligations regarding model security testing; prompt injection protection commitments; and whether AI security incidents are covered under the same incident response SLA as general platform incidents. Six of the 47 checklist items address security provisions that are absent from the majority of current AI contracts.
Here is how the major AI platform vendors currently score across the eight contract risk categories, based on our review of standard contract terms as of Q1 2026 — terms we have successfully negotiated improvements to in enterprise deals.
| Risk Category | Microsoft Copilot / Azure OpenAI | Salesforce Einstein / Agentforce | ServiceNow Now AI | Google Vertex AI |
|---|---|---|---|---|
| Pricing Transparency | Medium Risk | High Risk | High Risk | Medium Risk |
| Data Training Rights | Medium Risk | High Risk | Medium Risk | Medium Risk |
| AI Output Liability | High Risk | High Risk | High Risk | High Risk |
| AI Accuracy SLAs | No Commitment | No Commitment | No Commitment | No Commitment |
| Vendor Lock-In | High Risk | High Risk | Medium Risk | Medium Risk |
| Regulatory Compliance | Medium Risk | Medium Risk | Medium Risk | Lower Risk |
| Exit Provisions | Medium Risk | High Risk | High Risk | Medium Risk |
| Security & Incident Response | Lower Risk | Medium Risk | Medium Risk | Lower Risk |
Risk ratings are based on our review of standard vendor contract terms as of Q1 2026. Enterprise negotiated agreements can significantly change these ratings — our advisory engagements typically reduce high-risk ratings to medium or low across all categories.
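One way to make the qualitative ratings above comparable across vendors is to map them to numeric scores and total them per vendor — a sketch of the kind of scorecard the checklist's vendor risk scorecard template supports. The numeric mapping and equal weighting below are illustrative assumptions:

```python
# Hypothetical numeric mapping for the qualitative ratings in the table.
# "No Commitment" is scored as high risk, since the buyer has no protection.
RATING_SCORE = {"Lower Risk": 1, "Medium Risk": 2, "High Risk": 3, "No Commitment": 3}

# Ratings in table order: pricing, data training, output liability,
# accuracy SLAs, lock-in, regulatory, exit, security.
vendors = {
    "Microsoft Copilot / Azure OpenAI": [
        "Medium Risk", "Medium Risk", "High Risk", "No Commitment",
        "High Risk", "Medium Risk", "Medium Risk", "Lower Risk"],
    "Google Vertex AI": [
        "Medium Risk", "Medium Risk", "High Risk", "No Commitment",
        "Medium Risk", "Lower Risk", "Medium Risk", "Lower Risk"],
}

for name, ratings in vendors.items():
    total = sum(RATING_SCORE[r] for r in ratings)
    print(f"{name}: {total} / {3 * len(ratings)}")
```

A weighted variant — e.g. doubling the data training and liability categories for regulated industries — is a one-line change to the sum.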
$4.2M Copilot 365 agreement for 14,000 users. Checklist review identified: uncapped overage on Copilot interaction tokens; a data use clause permitting model training on communication data; no model stability provisions; and liability exclusion for Copilot-assisted decisions. Renegotiated terms: consumption cap with 110% overage limit at flat rate; explicit training data prohibition; 180-day model stability notice requirement; and coverage for regulatory challenges arising from Copilot-assisted compliance processes. Estimated risk exposure reduced: $18M over contract term.
$6.8M Salesforce renewal with Einstein AI and Agentforce bundled at standard tier pricing. Checklist review identified that Einstein AI was being priced as a bundled inclusion at an implicit rate of $48/user — compared to standalone Einstein benchmark of $28/user for comparable healthcare accounts. Unbundling and separate negotiation of the AI component achieved $24/user for a 24-month pilot commitment — saving $1.4M annually and establishing clean pricing visibility for the subsequent renewal.
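The unbundling arithmetic generalises to any bundled AI add-on. The user count in the example below is a hypothetical input, not the engagement's actual figure, and per-user-per-month pricing is assumed:

```python
def annual_unbundling_savings(users, implicit_rate, negotiated_rate):
    """Annual saving from unbundling an AI add-on and negotiating it
    separately, assuming per-user-per-month rates (an assumption of
    this sketch)."""
    return users * (implicit_rate - negotiated_rate) * 12

# Hypothetical: 5,000 users, $48/user implicit bundled rate,
# $24/user negotiated standalone rate.
print(annual_unbundling_savings(5000, 48, 24))  # → 1440000
```

The point of the exercise is less the saving itself than the pricing visibility: once the AI component has its own negotiated rate, the next renewal starts from a known benchmark rather than an opaque bundle.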
AI deployment across AWS Bedrock, Azure OpenAI, and ServiceNow Now AI — total AI spend of $11M annually. Checklist applied to all three agreements revealed cross-vendor data rights conflicts: each vendor's agreement claimed rights over AI outputs generated using their infrastructure, creating an unresolved IP ownership issue for AI-generated operational data. Contract restructuring across all three vendors resolved the IP ownership chain and established clear data portability rights for cross-platform AI workflows.
£28M AI platform procurement for a national government agency. EU AI Act compliance review using the checklist identified 11 items requiring vendor documentation that the initial contract did not provide — including technical documentation for high-risk AI system classification, conformity assessment obligations, and human oversight requirements. Contract renegotiation delivered full EU AI Act compliance documentation obligations, with contractual remedies if the vendor fails to provide the required regulatory documentation within 90 days of enacted regulatory deadlines.
The Enterprise AI Procurement Checklist includes all 47 review items across 8 risk categories, with guidance notes, sample contract language, and red flag indicators for each item. Also includes a vendor risk scorecard template and a pre-signature AI contract review workflow. Download free with registration.
Our AI contract specialists review your AI agreements against the full 47-item checklist and deliver a prioritised risk remediation plan. For active procurement, we engage directly in negotiations to address the highest-risk items before signature. We have reviewed over $890M in AI contract value across Microsoft, Salesforce, ServiceNow, AWS, and Google Cloud.
A more focused look at the 12 highest-risk clauses that appear in AI contracts from Microsoft, Salesforce, ServiceNow, and Google Cloud — the specific contract language that our advisors flag as requiring immediate renegotiation. Companion resource to this checklist for legal and procurement teams reviewing specific AI contract language.
Microsoft Copilot 365 is the most widely deployed enterprise AI product in 2026 — and the one where we see the most problematic contract terms. Our Copilot guide covers pricing benchmarks, consumption modelling, data rights provisions, and the specific negotiation positions that have achieved 30–45% reductions on Copilot proposals from Microsoft account teams.
In a 60-minute session, our AI contract specialists will review your current or pending AI agreement against the 47-item checklist — identifying the highest-risk clauses and recommending specific remediation language before you sign. No charge for the initial review.
Request an AI Contract Review