Artificial intelligence (AI), particularly in the form of large language models (LLMs), has become a go-to resource for online customer service, as evidenced by the recent proliferation of AI chatbots fielding consumers’ questions in real time. The technology offers dual benefits: instantaneous customer service and reduced personnel expenses for service providers. Despite that convenience, however, AI chatbots are prone to “hallucinations,” confidently generating incorrect answers. That risk is no longer hypothetical: misinformation provided by an AI chatbot can give rise to legally binding obligations for the business that deployed it.

A notable recent instance of these challenges emerged in New York City, where a chatbot deployed by the local government to help users navigate municipal services provided advice that was both incorrect and unlawful. The chatbot, tasked with offering guidance on issues ranging from food safety to sexual harassment and public health, disseminated misinformation that could lead individuals to unintentionally violate state and federal laws, exposing them to fines or other legal repercussions. Such errors not only jeopardize public trust in digital initiatives from businesses, organizations, and governments but also underscore the precarious balance between innovation and liability in AI implementations.

AI chatbots serve as authoritative representatives of the businesses that deploy them, much as Frequently Asked Questions (FAQ) pages and Terms of Service documents set out a company’s practices and policies. This authority was put to the test in a landmark case, Moffatt v. Air Canada, where the British Columbia Civil Resolution Tribunal found Air Canada liable for misinformation provided by its AI chatbot. The chatbot inaccurately described the airline’s bereavement fare policy, leading a consumer to believe he was entitled to a refund that Air Canada later refused to provide. The tribunal ruled in favor of the consumer, finding negligent misrepresentation in the chatbot’s responses.

Furthermore, the tribunal rejected Air Canada’s argument that the AI chatbot was a separate legal entity, echoing the stance taken in many intellectual property cases related to AI-created content. Historical precedent for this type of liability dates back to 1972 (albeit under U.S. law), in State Farm Mutual Automobile Insurance Company v. Bockhorst, where a company was held liable for its computer’s errors. These rulings demonstrate a longstanding legal principle that companies are accountable for the actions and errors of their automated systems.

This accountability extends beyond technological errors. Courts have emphasized that businesses, as the more knowledgeable party, bear responsibility for ensuring the accuracy of their own materials. A recent example is the U.S. Supreme Court’s consideration of Coinbase, Inc. v. Suski, which turned on which of two conflicting sets of terms governed a dispute between a company and its users, highlighting the complexities that arise when even human-drafted contracts and user agreements are inconsistent.

The challenges highlighted by the Air Canada case and similar cases in the U.S. underscore the ongoing legal risks associated with AI chatbots. The increasing complexity of website terms and policies, as seen in the Coinbase situation, suggests that these issues will persist and likely be amplified in this era of LLMs and generative AI. To mitigate legal risks, businesses should consider the following strategies:

  • Policy Links: Train AI chatbots to respond to policy questions with direct links to the actual policies rather than generated summaries (a minimal illustration follows this list).
  • Disclosure of Terms (Opt-In): Implement disclosures clarifying that chatbot statements are not enforceable without human representative consent, along with other material terms and conditions related to the use of the chatbot. These disclosures and terms should ideally be accepted via a consumer opt-in mechanism.
  • Data Integrity: Ensure the data used to train chatbots is accurate and free from corruption.
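To make the first recommendation concrete, below is a minimal sketch of how a chatbot layer could route policy-related questions to canonical policy links and attach a disclosure, rather than letting a language model paraphrase the policy itself. The topic keywords, URLs, messages, and function names are hypothetical and purely illustrative; they are not drawn from any particular vendor’s product or API.

```python
# Hypothetical sketch: answer policy questions with canonical links and a
# disclosure instead of model-generated summaries. All names and URLs below
# are illustrative placeholders, not a real implementation.

POLICY_LINKS = {
    "refund": "https://example.com/policies/refunds",
    "bereavement": "https://example.com/policies/bereavement-fares",
    "privacy": "https://example.com/policies/privacy",
}

DISCLOSURE = (
    "This automated assistant provides general information only. "
    "Its statements are not binding unless confirmed in writing by a human "
    "representative. Please refer to the linked policy for authoritative terms."
)

def answer_policy_question(user_message: str, opted_in: bool) -> str:
    """Return a policy link plus disclosure instead of a generated summary."""
    if not opted_in:
        # Opt-in gate: require acceptance of the chatbot terms before answering.
        return "Please accept the chatbot terms of use to continue."
    text = user_message.lower()
    for topic, url in POLICY_LINKS.items():
        if topic in text:
            return f"Our {topic} policy is available here: {url}\n\n{DISCLOSURE}"
    # No policy keyword matched; hand off rather than risk a hallucinated answer.
    return "I can't answer that directly; a human representative will follow up."

if __name__ == "__main__":
    print(answer_policy_question("How do I request a bereavement refund?", opted_in=True))
```

A keyword lookup is used here only to keep the sketch short; the same routing-and-disclosure pattern could sit in front of any chatbot, whatever mechanism identifies a policy question.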

As regulations evolve, it is vital for businesses to establish clear policies that communicate their intentions while leveraging the convenience of AI. However, complete reliance on automated systems without human oversight is ill-advised. Regular review by professionals is necessary to ensure accuracy and compliance.

In anticipation of regulatory developments, businesses should strive for consistency in all forms of communication, including AI interactions. Standardizing policies across various channels and ensuring transparency and fairness in public-facing documents are key steps in maintaining compliance and preventing legal disputes. This approach not only protects businesses legally but also fosters trust and clarity among consumers.

For more information about artificial intelligence and consumer compliance, please contact the authors or any attorney with the firm’s Data, Digital Assets, & Technology practice or the Artificial Intelligence team.