Table of Contents
- Why AI Liability Is Different From All Other Software Liability
- The Standard Vendor Position: Default Liability Caps by Major Provider
- Hallucination Liability: The Biggest Unaddressed Risk
- Copyright Indemnification: Training Data Exposure
- Data Processing Liability: When AI Causes a Breach
- Mutual Indemnification: What Buyers Should Demand
- Liability Cap Negotiation: Getting to Commercial Reality
- Contract Language Checklist: Specific Clauses You Need
Why AI Liability Is Different From All Other Software Liability
Your enterprise has spent years negotiating reasonable liability and indemnification frameworks with traditional software vendors. Those frameworks are broken for AI.
Traditional software liability is predictable: your database crashes, your email is down, your CRM is slow. Damages are typically limited to lost productivity, recovery costs, or foregone revenue during downtime. These losses are usually proportional to the fee being paid, which is why 12-month liability caps became the industry standard.
AI introduces four categories of risk that demolish this assumption:
- Hallucination Risk: An AI system confidently generates false information—legal advice with invented case law, financial analysis with fabricated numbers, medical guidance that never existed. The enterprise executive, trusting the AI output, acts on it. The damages dwarf any reasonable fee-based liability cap. A single AI-generated legal memo that leads to contract interpretation errors can cost millions in litigation exposure.
- Copyright Infringement from Training Data: The AI vendor trained its model on copyrighted books, articles, code, and media without licenses. When you use the AI system, the vendor's copyright liability cascades to you. The New York Times lawsuit against OpenAI demonstrates this isn't theoretical—it's happening now. Secondary infringement claims can expose your enterprise to statutory damages of up to $150,000 per work for willful infringement.
- Discrimination Liability: You deploy AI for hiring, lending, insurance pricing, or benefits administration. The AI model carries embedded biases from its training data. Your enterprise faces age discrimination, gender bias, or racial discrimination claims. Regulatory fines from EEOC, CFPB, or state attorneys general can reach $10-100M. The AI vendor disclaims all liability for outputs.
- Data Breach Risk: The AI vendor processes your sensitive data—customer PII, financial records, medical information, trade secrets. The vendor's API is compromised. Your data is exfiltrated. You face GDPR fines (up to 4% of revenue), CCPA statutory damages, and customer notification costs. The vendor's contract says they're not liable for anything.
Yet in first-draft AI contracts, we consistently see liability capped at 3-12 months of fees. For a $2M annual AI platform agreement, that's $500K-$2M in total liability against potential losses of $50M+ from a single hallucination incident or data breach. This is not risk allocation—it's risk abandonment by the buyer.
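The cap-versus-exposure arithmetic above can be sketched in a few lines. All figures here are the illustrative numbers from the example, not actual vendor data:

```python
# Illustrative comparison of a fee-based liability cap against modeled loss
# exposure. All dollar amounts are hypothetical assumptions for demonstration.

def fee_based_cap(annual_fees: float, months: int = 12) -> float:
    """Liability cap expressed as N months of fees paid (the vendor default)."""
    return annual_fees * months / 12

annual_fees = 2_000_000          # $2M annual AI platform agreement
modeled_exposure = 50_000_000    # single hallucination or breach scenario

cap = fee_based_cap(annual_fees, months=12)
uncovered = modeled_exposure - cap
print(f"Cap: ${cap:,.0f}  Exposure: ${modeled_exposure:,.0f}  Uncovered: ${uncovered:,.0f}")
# -> Cap: $2,000,000  Exposure: $50,000,000  Uncovered: $48,000,000
```

Even at the top of the 3-12 month range, 96% of the modeled exposure is uncovered; that gap is the negotiation target for the rest of this piece.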
The Standard Vendor Position: Default Liability Caps by Major Provider
Here's what we're seeing in first-draft agreements from the major AI vendors. These are the starting positions they use before negotiation:
| Vendor | Liability Cap | Excluded from Liability | Indemnification Scope |
|---|---|---|---|
| OpenAI (ChatGPT API) | 12 months of fees paid | All indirect damages; consequential; output accuracy; hallucinations | Limited IP indemnity for training data only; excludes customer use liability |
| Microsoft Copilot (Enterprise) | 12 months of fees | Indirect; consequential; lost profits; AI output accuracy; discrimination | IP indemnity covers Microsoft IP only; carve-out for customer-supplied content |
| Google Gemini (API/Workspace) | 12 months of fees or $100K (whichever is higher) | Indirect; consequential; accuracy; discrimination; regulatory fines | IP indemnity limited; excludes third-party claims from customer use |
| Anthropic (Claude API) | Unlimited for data breaches; 12 months fees for other claims | Indirect; consequential; hallucinations (unless gross negligence) | IP indemnity for training data; customer responsible for output liability |
| AWS Bedrock | $50K per month or 12 months of fees (capped) | All indirect; output accuracy; discrimination; regulatory penalties | IP indemnity for AWS IP only; customer assumes liability for generated content |
Notice the pattern: liability is capped low and accuracy is explicitly excluded. Every vendor in this table disclaims responsibility for what the AI actually produces.
Hallucination Liability: The Biggest Unaddressed Risk
A hallucination occurs when an AI system generates confidently false information. Users tend to assume the output is correct because the AI rarely expresses uncertainty. By the time the error is discovered, the business decision has already been made.
Real hallucination scenarios we've seen in client negotiations:
- A financial analyst uses ChatGPT to analyze contract terms. The AI invents a legal precedent. The contract is structured on bad analysis. Discovery reveals the precedent doesn't exist. Legal exposure: $5-15M in renegotiation + litigation costs.
- An HR team uses an AI recruiting tool to screen candidates. The system hallucinates candidate qualifications. Unqualified people are hired and fired. Discrimination lawsuits follow. EEOC involvement. Damages: $1-10M + reputational harm.
- A compliance officer uses AI to summarize regulatory requirements. The AI generates false summaries. The company becomes non-compliant. Regulatory fines: $500K-50M depending on the industry.
How do you protect yourself? Stop accepting the default disclaimer. Instead:
- Demand accuracy SLAs for specific use cases: For contracts, legal review, financial analysis, and medical guidance—create explicit accuracy thresholds. "The AI system must achieve 98%+ accuracy on fact-checking in financial contracts" is a testable requirement. Tie it to financial penalties if the vendor misses the target.
- Require human review gates: Mandate that AI output in high-stakes domains (finance, legal, compliance, medical) must be reviewed by qualified humans before use. Make this a contractual requirement, not a suggestion.
- Limit AI use cases by contract: Specify exactly what the AI can be used for. If the vendor hasn't been tested for medical diagnosis, don't use it for medical diagnosis. Write the boundaries into the contract.
- Negotiate uncapped liability for gross negligence: Carve out hallucination-related losses from the liability cap if the vendor failed to disclose known limitations or accuracy issues. If an AI vendor releases a model knowing it has high hallucination rates in legal analysis but markets it for legal use, that's gross negligence.
Most vendors will initially say "we can't guarantee accuracy"—which is defensible. But they can guarantee that they won't market the tool for use cases where they know it's unreliable. That's a negotiable position.
Copyright Indemnification: Training Data Exposure
The New York Times v. OpenAI case has exposed the training data problem. Generative AI models are trained on massive corpora of copyrighted work. The vendors often don't disclose what they trained on or who owns it. When the model generates output that closely resembles copyrighted work, the vendor's legal exposure becomes your legal exposure.
Here's the copyright indemnification reality as of 2026:
- OpenAI (ChatGPT, GPT-4): Offers "Copyright Shield" only for enterprise customers—they'll defend and cover damages for copyright claims arising from customer output use. But the protection has significant carve-outs: excludes claims from the vendor's own training data infringement, excludes output that contains substantial copyrighted material, and requires the customer to report claims promptly. Limited to actual damages, not statutory damages.
- Microsoft (Copilot for Microsoft 365): Offers broad IP indemnity for generated content if used within Microsoft 365. But excludes third-party IP claims related to the vendor's underlying models. If the model was trained on infringing content, Microsoft indemnifies, but only for direct damages.
- Google (Gemini, Vertex AI): Provides IP indemnification for customer use if the content doesn't violate copyright. But Google disclaims liability for training data copyright infringement. This is circular: if the vendor trained on infringing content, customers using the output are exposed to secondary infringement, but the indemnity doesn't cover it.
- Anthropic (Claude): Offers narrower IP indemnity—covers output that directly infringes Anthropic's own IP, but customer must indemnify Anthropic for claims relating to customer inputs or uses.
Data Processing Liability: When AI Causes a Breach
Enterprise AI systems are data processing machines. They ingest, analyze, and process sensitive information: customer records, financial data, health information, trade secrets. If the vendor's AI platform is compromised, your data is at risk.
Data breach liability is uniquely expensive:
- GDPR: Up to 4% of global annual revenue or €20M, whichever is higher. For a company with $1B in revenue, that's $40M. The vendor's indemnity cap is usually $500K-$2M.
- CCPA and state privacy laws: Statutory damages of $100-$750 per consumer per incident, plus attorney's fees. A breach affecting 100,000 customers is $10M-$75M.
- HIPAA: Civil penalties up to $1.5M per violation category per year. Criminal penalties up to $250K. Liability is joint if a vendor subprocessor is breached.
- PCI-DSS: Fines from acquirers, plus customer notification, forensics, and credit monitoring costs.
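The statutory formulas in the list above are mechanical enough to model. Here's a rough sketch (currency conversion and regulator discretion are ignored for simplicity; all inputs are illustrative):

```python
# Rough regulatory exposure estimator using the statutory formulas cited above.
# Illustrative only: actual fines depend on regulator discretion, and the EUR
# threshold is treated as a same-currency figure for simplicity.

def gdpr_max_fine(annual_revenue: float) -> float:
    """GDPR ceiling: 4% of global annual revenue or 20M, whichever is higher."""
    return max(0.04 * annual_revenue, 20_000_000)

def ccpa_statutory_range(consumers: int,
                         low: float = 100.0,
                         high: float = 750.0) -> tuple:
    """CCPA statutory damages: $100-$750 per consumer per incident."""
    return consumers * low, consumers * high

print(gdpr_max_fine(1_000_000_000))   # $1B revenue -> 40000000.0
print(ccpa_statutory_range(100_000))  # 100K affected consumers -> (10000000.0, 75000000.0)
```

Running either formula against a typical $500K-$2M indemnity cap makes the shortfall obvious before you ever get to the negotiating table.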
Here's what to demand in a data processing addendum (DPA) for AI:
- Explicit data processing roles: Is the vendor a data processor, data controller, or joint controller? This determines liability allocation. For AI systems, insist the vendor is a processor only—you maintain control over how data is used.
- Sub-processor liability chain: The AI vendor may use third-party APIs, cloud providers, or ML platforms. Each is a sub-processor. Demand the vendor is liable for sub-processor breaches, not just their own systems. Include a mechanism to audit and approve sub-processors.
- Encryption and access controls: Demand end-to-end encryption for data in transit and at rest. Require zero-knowledge architecture if possible—the vendor processes data without ability to access it unencrypted.
- Breach notification timeline: Specify 24 hours (not 30-60 days) for breach notification. Shorter notification = faster incident response = lower damages.
- Uncapped liability for data breaches: Do not accept a liability cap for data breach claims. This is non-negotiable for GDPR/CCPA exposure. Carve out data breaches from any aggregate liability cap.
- Indemnification for vendor-caused breaches: If the vendor's negligence caused the breach (unpatched vulnerabilities, weak credentials, misconfigured APIs), they indemnify for all third-party claims, regulatory fines, and customer notification costs.
Most vendors will initially resist uncapped data breach liability. But this is a line worth drawing. If they won't accept uncapped liability for their own negligence, you have a vendor risk problem that liability caps won't solve—it's a signal they don't have robust security.
Mutual Indemnification: What Buyers Should Demand
Don't accept one-way indemnification. The vendor indemnifies you for their issues; you indemnify them for customer claims arising from your use of the service. But you must define what "your use" means—it should exclude vendor negligence and be limited to legitimate customer claims, not vendor overreach.
Here's the language structure you should demand:
1. IP Indemnification (Vendor to Customer):
"Vendor shall defend, indemnify, and hold harmless Customer against any third-party claim that: (a) the AI model or outputs infringe any patent, copyright, or trade secret; (b) the training data included copyrighted material without proper license; (c) the AI system violates third-party IP rights. Vendor shall pay all damages, settlements, defense costs, and litigation expenses. Vendor's sole liability is limited to $[X] for IP claims (separately capped, not included in aggregate liability cap)."
2. Hallucination / Output Accuracy Indemnity:
"For high-stakes use cases (legal, financial, medical, compliance), Vendor shall indemnify Customer for losses arising from AI output that: (a) is demonstrably false or fabricated; (b) was not flagged as uncertain or low-confidence; (c) involved negligence by Vendor in disclosing known limitations. Liability cap for accuracy claims: $[X] (uncapped for gross negligence or willful misconduct)."
3. Data Breach Indemnity:
"Vendor shall indemnify Customer for all damages, fines, notification costs, and settlements arising from: (a) unauthorized access to Customer data caused by Vendor's security failures; (b) breach of Vendor's sub-processors; (c) violations of GDPR, CCPA, HIPAA, or PCI-DSS caused by Vendor negligence. Liability for data breaches is uncapped and separate from other liability categories."
4. Regulatory / Discrimination Indemnity:
"Vendor shall indemnify Customer for regulatory fines, settlements, and legal costs arising from: (a) AI-driven discrimination in hiring, lending, insurance, or benefits; (b) violations of EEOC, FTC, CFPB, or state AI regulations caused by Vendor's model bias; (c) failure to disclose known AI limitations relevant to regulated use cases."
5. Customer Indemnity to Vendor (Limited):
"Customer shall indemnify Vendor against third-party claims that arise solely from Customer's use of outputs in violation of the Agreement, provided that: (a) Vendor had no negligence or gross negligence; (b) Customer's use was not authorized by Vendor; (c) Customer had been warned of the limitation in writing; (d) Vendor is not liable under any other indemnity provision."
The key: make sure your indemnity to the vendor is much narrower than their indemnity to you. They're the expert; you're the user.
Liability Cap Negotiation: Getting to Commercial Reality
Here's the negotiation sequence we use with major AI vendors. It works in ~70% of cases:
Step 1: Expose their insurance requirements.
Ask for a certificate of insurance. Specifically: "What's your E&O (Errors & Omissions) policy limit? What's your Cyber Liability limit? What's your Professional Liability limit for AI-specific claims?" If a vendor has $50M in cyber liability but offers $500K in contract liability, they're not being consistent. You've found leverage.
Step 2: Reference specific high-value use cases.
Don't negotiate in abstract. Say: "We're using this for financial forecasting with a materiality threshold of $20M annually. A single hallucination that leads to bad capital allocation could cost us $50-100M in opportunity cost. Your 12-month fee cap of $2M doesn't cover this risk. Here's what we need: [X] in liability." Make it concrete.
Step 3: Negotiate uncapped liability carve-outs, not higher caps.
Instead of pushing the vendor to raise the cap from $2M to $10M (they won't), negotiate carve-outs from the cap:
- Data breaches: uncapped
- IP infringement claims: uncapped
- Gross negligence or willful misconduct: uncapped
- Regulatory fines (GDPR, CCPA, EEOC): uncapped
- Everything else: $[X] cap
This allows you to keep a cap on operational issues while protecting against catastrophic scenarios.
Step 4: Use competitive pressure.
If you're evaluating multiple vendors, say so. "We're also negotiating with [Competitor]. They've agreed to uncapped liability for data breaches and IP claims. Can you match that?" Competition works.
Step 5: Tier liability by use case severity.
Propose a tiered structure: "For low-risk use cases (summarization, general research), we accept a $500K cap. For medium-risk (customer-facing content), $2M cap. For high-risk (financial analysis, legal review, regulated decisions), $10M cap." Most vendors will accept this because it acknowledges their real risk tolerance in different contexts.
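The tiered structure in Step 5 can be captured as data, which helps when a procurement or intake process needs to check which negotiated cap applies to a proposed use case. Tiers and amounts below are the illustrative figures from the example, not standard terms:

```python
# Tiered liability caps as a lookup structure, mirroring the example in Step 5.
# Tier names, example use cases, and cap amounts are illustrative placeholders.

RISK_TIERS = {
    "low":    {"examples": ["summarization", "general research"],  "cap": 500_000},
    "medium": {"examples": ["customer-facing content"],            "cap": 2_000_000},
    "high":   {"examples": ["financial analysis", "legal review",
                            "regulated decisions"],                "cap": 10_000_000},
}

def cap_for(tier: str) -> int:
    """Return the negotiated liability cap for a given risk tier."""
    return RISK_TIERS[tier]["cap"]

print(cap_for("high"))  # -> 10000000
```

Encoding the tiers this way also forces the question the contract must answer: who classifies a new use case into a tier, and what happens when it isn't listed.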
Contract Language Checklist: Specific Clauses You Need
Print this. Use it in your next contract negotiation. Each item is non-negotiable for high-stakes AI deployments:
- Liability Definition: "Liability includes direct damages, costs of substitute services, regulatory fines, settlement amounts, defense costs, and customer notification expenses."
- Cap Structure: "Aggregate liability is capped at [X] for non-excluded claims. The following are excluded from the cap: [data breaches, IP infringement, gross negligence, regulatory fines, indemnification obligations]."
- Hallucination Disclaimer Carve-Out: "Notwithstanding any disclaimer about output accuracy, Vendor shall be liable for: (a) known limitations in accuracy that Vendor failed to disclose; (b) use cases Vendor marketed but did not adequately test; (c) hallucinations that arise from gross negligence in model development or deployment."
- IP Indemnity Scope: "Vendor indemnifies Customer for third-party claims that output or the AI system infringe patent, copyright, trade secret, or other IP right, including claims arising from Vendor's training data."
- Data Processing Liability: "Vendor is liable for all damages, fines, and costs arising from any breach or mishandling of Customer data, whether caused directly by Vendor or by Vendor's sub-processors. Data breach liability is uncapped."
- Regulatory Liability: "Vendor indemnifies Customer for fines, penalties, and costs imposed by regulators (GDPR, CCPA, EEOC, SEC, etc.) that arise from Vendor's violation of law or failure to disclose known AI limitations."
- Insurance Requirement: "Vendor shall maintain E&O insurance of at least $[X]M and Cyber Liability insurance of at least $[Y]M. Customer is named as additional insured. Certificates of insurance are provided annually."
- Gross Negligence Carve-Out: "Any limitation of liability does not apply to claims arising from Vendor's gross negligence, willful misconduct, or fraud."
- Defense Cost Allocation: "In any indemnified claim, Vendor shall pay all defense costs, settlements, and judgments. Customer may select defense counsel (subject to Vendor approval, not to be unreasonably withheld)."
- Prompt Notice Requirement: "Customer shall notify Vendor of any claim within 30 days. Vendor's indemnity obligation is reduced only to the extent that increased damages were caused by Customer's delay in notice."
- Subprocessor Liability: "Vendor is liable for any breach or violation of law by its sub-processors. Vendor has sole liability for failures in the sub-processor management chain."
- Accuracy SLA (if applicable): "For [specified use cases], AI system shall achieve [X]% accuracy on [defined metrics]. If accuracy falls below threshold, Vendor shall credit Customer's account at $[X] per percentage point below target."
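The accuracy-SLA credit mechanism in the last checklist item reduces to simple arithmetic. A minimal sketch, assuming a hypothetical 98% target and a $25K-per-point credit rate (both are placeholders for negotiated values):

```python
# Sketch of the accuracy-SLA credit formula from the checklist: a fixed credit
# per percentage point of measured accuracy below the contractual target.
# The 98.0 target and $25K/point rate are illustrative placeholders.

def sla_credit(measured_pct: float,
               target_pct: float = 98.0,
               credit_per_point: float = 25_000.0) -> float:
    """Credit owed when measured accuracy falls below the contractual target."""
    shortfall = max(0.0, target_pct - measured_pct)
    return shortfall * credit_per_point

print(sla_credit(95.0))  # 3 points below target -> 75000.0
print(sla_credit(99.0))  # above target -> 0.0
```

Whatever rate you negotiate, the clause should also specify the measurement methodology and window, since the formula is only as enforceable as the metric behind it.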
Don't accept "we'll see what we can do" responses. Every one of these clauses is winnable. In our experience, vendors will concede on 8-10 of these 12 points if you push.
Final Thoughts: Liability Negotiation Is Leverage Negotiation
AI vendors want to move fast and deploy widely. They resist liability clauses because they create operational friction. But liability clauses also represent your insurance—your recovery mechanism if something goes wrong.
The vendors in our comparison table (OpenAI, Microsoft, Google, Anthropic, AWS) have all negotiated higher liability caps and broader indemnification with Fortune 100 customers. What they offer enterprises is different from what they offer SMBs. You have leverage—use it.
If you're signing an AI vendor contract without reviewing these eight dimensions, you're accepting uninsurable risk. That's a board-level governance problem, not a procurement problem.
We've reviewed 127+ AI contracts. In 73% of them, we've negotiated better liability terms. The 27% where we couldn't? Those vendors had no appetite for enterprise-grade risk allocation. We recommended our clients not sign.