A recent tribunal ruling underscores the increasing legal liabilities companies face from AI-generated misinformation, prompting a call for more robust AI-specific contractual protections to manage evolving risks.

As generative AI tools like chatbots become increasingly integrated into business operations to enhance efficiency and customer service, companies face mounting legal risks stemming from these systems’ unpredictable outputs. A recent British Columbia Civil Resolution Tribunal decision in Moffatt v. Air Canada underlines the exposure companies can face. In this case, an AI chatbot provided inaccurate information about bereavement fare refunds, leading a customer to overpay for a ticket and subsequently be denied a refund. The tribunal held Air Canada liable for negligent misrepresentation, explicitly rejecting the notion that the chatbot could be treated as a separate legal entity from the company itself.

This ruling is a stark warning to corporate counsel: AI vendor contracts, often adapted from traditional software agreements, may not adequately address the unique and evolving risks posed by AI systems. Traditional contracts typically assume deterministic software behaviour, in which fixed, predictable processes follow their coded logic exactly. AI systems, particularly those powered by large language models or machine learning, instead function probabilistically and adaptively, generating outputs from complex statistical patterns derived from extensive and often opaque datasets. This can produce so-called “hallucinations” (inaccurate or fabricated information), bias, regulatory non-compliance, and outputs that evolve unpredictably over time, none of which is contemplated in standard contract language.
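To make the contrast concrete, the toy sketch below compares a deterministic, rule-based refund check with a probabilistic responder. It is purely illustrative: the function names and canned answers are hypothetical, no real model or vendor API is called, and the randomness simply stands in for the sampling behaviour of a generative system.

```python
# Illustrative only: a toy contrast between deterministic, rule-based logic
# and a probabilistic "generative" responder. No real model or vendor API
# is involved; the sampled answers merely show why the same question can
# produce different (and sometimes policy-contradicting) outputs.
import random


def rule_based_refund_eligible(days_since_travel: int) -> bool:
    """Deterministic logic: the same input always yields the same answer."""
    return days_since_travel <= 90


def generative_refund_answer(question: str, temperature: float = 1.0) -> str:
    """Toy probabilistic responder: the answer is sampled, so repeated calls
    can disagree with each other and with the written policy."""
    candidates = [
        "Refund requests must be submitted within 90 days.",                # consistent with policy
        "You may apply for the bereavement fare retroactively after travel.",  # plausible but wrong
        "Bereavement fares are never refundable.",                           # also wrong
    ]
    weights = [1.0, temperature, temperature]  # higher temperature, more variability
    return random.choices(candidates, weights=weights, k=1)[0]


if __name__ == "__main__":
    print(rule_based_refund_eligible(30))  # always True for the same input
    for _ in range(3):
        print(generative_refund_answer("Can I claim a bereavement refund?"))
```

Standard software contract language presumes the first kind of behaviour; the liability questions raised in Moffatt arise from the second.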

Most software-as-a-service (SaaS) agreements for AI tools include broad disclaimers that limit vendor liability, often stating outputs are “for informational purposes only” and excluding warranties on accuracy. Indemnity provisions frequently cover only intellectual property infringement rather than harms relating to regulatory fines, discriminatory results, or business disruptions generated by the AI. Service-level agreements (SLAs) might ensure uptime but rarely address critical issues like response times for mitigating harmful content. Vendors are increasingly expanding disclaimers to exclude all liability arising from reliance on AI outputs, shifting the full burden of risk to the deploying company—which usually lacks insight into the AI model’s training data, logic, or ongoing updates.

Industry experts stress that AI vendor contracts must be reimagined as dynamic risk-transfer instruments, not mere IT purchase documents. Counsel need to ensure AI-specific protections are embedded in agreements to avoid costly litigation and regulatory consequences. Key areas requiring attention include:

  1. Output Liability and Indemnification: Contracts should demand indemnity covering third-party claims arising from AI-generated outputs, beyond intellectual property issues. This is critical for companies in regulated sectors such as finance, healthcare, and employment. Legal counsel should negotiate representations confirming lawful sourcing and use of training data. Where vendors resist indemnity, companies should seek evidence of errors and omissions (E&O) insurance and consider capping vendor liability proportionally to contract value.

  2. Performance and Safety Warranties: AI systems should carry warranties against intentionally misleading or unlawful outputs under normal operation. Contracts should require vendors to monitor and mitigate risks like model drift, bias, and unsafe behaviour through periodic reviews and retraining. SLAs must go beyond uptime and stipulate timeframes for identifying and correcting harmful outputs.

  3. Audit and Transparency Rights: To meet compliance obligations under evolving regulatory frameworks such as the EU AI Act and GDPR, companies must secure rights to documentation concerning training data, update schedules, model changes, performance, and safety testing. Transparency clauses enable due diligence and supervisory oversight.

  4. Human-in-the-Loop and Fail-Safe Mechanisms: Agreements should guarantee that AI tools can operate under human supervision, especially in high-risk applications, allowing companies to intercept erroneous or harmful outputs before they reach end users (a simple illustration follows this list).

  5. Exit and Suspension Clauses: Contracts must explicitly provide for suspension or termination if AI outputs become harmful, discriminatory, or legally non-compliant. Remedies should extend beyond refunds to include vendor cooperation in mitigation efforts, legal defence, and user notification. Provisions for mandated retraining under specified conditions help maintain output quality.
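As referenced in item 4 above, the sketch below shows, in simplified form, what a contractual human-in-the-loop requirement might look like once implemented. Every name in it (generate_reply, CONFIDENCE_THRESHOLD, the escalation rule) is hypothetical and does not reflect any particular vendor's API; the point is only that low-confidence or high-risk drafts are routed to a human agent rather than sent straight to the end user.

```python
# Hypothetical human-in-the-loop gate for AI-generated customer replies.
# All names and thresholds are illustrative; generate_reply() is a stand-in
# for a vendor model call, not a real API.
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.85                         # below this, a human must approve
HIGH_RISK_TOPICS = {"refunds", "medical", "legal"}  # always escalated, regardless of confidence


@dataclass
class DraftReply:
    text: str
    confidence: float
    topic: str


def generate_reply(prompt: str) -> DraftReply:
    """Stand-in for the vendor model: returns a draft plus metadata."""
    return DraftReply(text=f"Draft answer to: {prompt}", confidence=0.60, topic="refunds")


def requires_human_review(draft: DraftReply) -> bool:
    """Fail-safe rule: low confidence or a high-risk topic goes to a person."""
    return draft.confidence < CONFIDENCE_THRESHOLD or draft.topic in HIGH_RISK_TOPICS


def handle_customer_query(prompt: str) -> Optional[str]:
    draft = generate_reply(prompt)
    if requires_human_review(draft):
        # The customer sees nothing until an agent approves or edits the draft.
        print(f"Escalated for human review: {draft.text!r}")
        return None
    return draft.text


if __name__ == "__main__":
    handle_customer_query("Can I get a bereavement fare refund after travel?")
```

Contractually, the key point is that the vendor must expose the hooks that make such a gate possible (confidence scores, topic labels, an escalation path); without them, a human-in-the-loop clause is difficult to operationalise.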

The necessity of these protections is echoed across industries and legal analyses. Legal professionals emphasize robust warranties and indemnities covering performance, intellectual property, and compliance in AI vendor agreements, along with regular audits and governance mechanisms to confirm AI systems operate as expected and comply with regulatory requirements. In healthcare, indemnification clauses have evolved to encompass regulatory risks like privacy breaches, data misuse, and bias-related claims, with shared-responsibility models and insurance requirements becoming standard.

Moreover, experts underscore the importance of clear contract language defining human oversight and involvement, maintaining comprehensive technical documentation, and preserving termination rights for non-compliance, thus enhancing control over AI deployments. Increased transparency and audit rights are deemed crucial for managing potential inaccuracies and biases in AI systems, thereby mitigating risks effectively.

In a fast-evolving AI landscape, corporate counsel are on the front lines of safeguarding their organisations. By moving beyond standard technology contracts and treating AI agreements as sophisticated risk-allocation tools tailored to generative and adaptive systems, companies can better harness AI’s transformative potential while avoiding costly legal pitfalls. As the legal environment and AI capabilities advance, ongoing diligence in contract negotiation, risk assessment, and regulatory compliance remains essential.

In an era where AI-driven risk is both a technical and legal challenge, the question every contracting party must confront is not simply what AI can do—but who will bear responsibility when it goes wrong. According to attorney Harshita K. Ganesh at CMBG3 Law in Boston, understanding and addressing this question through tailored contractual protections is vital for companies to navigate the promising yet perilous path of AI innovation.

Source: Noah Wire Services
