OpenAI, the pioneering research organisation behind the AI chatbot ChatGPT, is facing a defamation lawsuit brought by Mark Walters. This groundbreaking case carries significant implications for OpenAI and for the many tech companies harnessing AI technologies. As legal advisors working at the forefront of the evolving tech landscape, our team at Broderick Bozimo and Company has studied the lawsuit and its potential impact on the industry. To help tech companies navigate these potential pitfalls, we have also developed a comprehensive AI Legal Audit Checklist.

Background

The lawsuit, filed in the Superior Court for Gwinnett County, Georgia, USA, contends that ChatGPT generated false information about Mark Walters in response to a journalist’s inquiries about a real-life case, causing him reputational damage. The wider implications are considerable: this stands as one of the first significant legal actions taken against OpenAI and its AI chatbot.

In-depth Case Analysis

The complaint reveals that Walters was not a party to the litigation that journalist Fred Riehl was researching. Yet when asked to summarise that lawsuit, ChatGPT incorrectly stated that Walters was implicated in the case. Alarmingly, it also furnished an invented excerpt from the complaint that named him directly.

All of these assertions were erroneous. Riehl confirmed their inaccuracy when he contacted one of the actual parties to the lawsuit. ChatGPT’s accusations were severe, alleging that Walters had engaged in fraudulent and criminal conduct, and it is easy to see how such claims, even once disproved, can cause serious reputational damage.

OpenAI, though it admits that ChatGPT may occasionally make factual errors or “hallucinate”, now faces a lawsuit alleging negligence, reckless disregard for the falsity of the communication, and the publication of defamatory matter. The complaint also asserts that the AI’s communication to Riehl was not privileged, paving the way for the libel claim.

OpenAI could build its defence on provisions of its Terms of Use in force at the time of the interaction, 4th May 2023. Clause 3(d) directs users to verify the accuracy of any AI-generated output. The clause acknowledges the inherently probabilistic nature of machine learning models, which can sometimes generate erroneous outputs, and consequently assigns the responsibility for validating the AI’s outputs to users, who must ensure the results align with their intended use.

Given this stipulation, OpenAI could feasibly argue that journalist Fred Riehl should have taken additional steps to authenticate the information ChatGPT provided before using it in any form. Clause 3(a) also alerts users that ChatGPT’s responses are fundamentally determined by the input it receives, further underlining the user’s role in shaping the AI’s output.

These aspects of the Terms of Use could form the cornerstone of a user-error defence: OpenAI may assert that any defamatory output concerning Walters was the product of flawed input from Riehl. The argument underscores the limitations of AI technologies, the crucial role of human review, and the imprudence of relying solely on AI without adequate scrutiny.

Our Perspective

From our standpoint at Broderick Bozimo and Company, this case is fascinating in its potential to reshape the legal framework for AI technologies. By focusing on a specific incident involving ChatGPT, it shines a light on the broader issue of how AI systems generate and disseminate information. “Hallucination”, the known phenomenon in which AI systems fabricate information, has become a key component of the lawsuit, and it raises serious questions about how such occurrences can be mitigated and their consequences managed. That a hallucination gave rise to this lawsuit underscores the legal risks associated with such AI behaviour. What happens when an AI, built and trained by humans, disseminates damaging false information? Who will be held responsible: the AI, its creators, or its operators?

The case underscores the need for an informed legal strategy when deploying AI technologies.  At Broderick Bozimo and Company, we advocate for due diligence to identify and manage these risks.  Tech companies would do well to consider not just the efficiency and scalability AI offers but also the legal and ethical challenges that may arise.

The Broader Context

This lawsuit is a crucial milestone in the broader conversation about technology and law.  For tech companies and AI providers, the issue goes beyond a single case – it underscores the critical need for legal foresight in the rapidly evolving AI landscape.

As discussed below, the action could set a precedent defining AI providers’ liability for their systems’ outputs, so businesses must keep abreast of such developments to ensure they comply with all relevant laws and regulations. The case also puts the way AI systems handle data under the microscope: how an AI system processes and disseminates information, as the ChatGPT incident highlights, requires thorough scrutiny.

The impact of this case could be far-reaching, touching other use cases of AI, such as autonomous vehicles, predictive policing, or healthcare.  These systems, like ChatGPT, could potentially cause harm if they produce incorrect outputs, emphasising the importance of legal oversight and regulation in AI deployments.

AI Legal Audit Checklist

We have developed an AI Legal Audit Checklist to help navigate this complex landscape.  This comprehensive tool outlines the key considerations and best practices for deploying AI technologies in a legally compliant manner that manages potential risks proactively.

Below is a snapshot of what the checklist covers:

Understanding the AI System: Knowing how the system works, what data it requires, and the context in which it is used.

Ethical Guidelines and Research Investment: Establishing an ethics committee, developing a code of ethics, and investing in research to minimise AI biases.

Data Management and Transparency: Developing data governance policies, implementing data anonymisation techniques, and creating transparency reports.

Risk Management and Safeguards: Maintaining a risk registry for the AI system and establishing clear protocols for managing high-risk outputs.

Regulatory and Industry Standards: Engaging with industry groups and developing internal guidelines that align with best practices.

Intellectual Property: Conducting regular audits of the AI system’s code and data to prevent infringement of intellectual property rights.

Third-Party Relationships: Reviewing contracts with third-party vendors and ensuring they comply with the same legal and ethical standards.

Terms of Use and Liability Clauses: Ensuring Terms of Use clearly define the responsibilities and limitations of the AI provider.

User Education and Communication: Implementing training programs for users and creating a feedback mechanism for reporting issues.

Insurance and Legal Team: Conducting periodic reviews of insurance policies and developing a roster of external legal experts.

Legal Compliance and Advisory: Creating a legal compliance map outlining all jurisdictions in which the AI system operates.

Public Relations and Reputation Management: Developing a PR strategy focusing on transparency, accountability, and openness.

Accessibility and Inclusivity: Ensuring the AI system is accessible to users with disabilities and does not disproportionately disadvantage any demographic.

Feedback Loop and Continuous Improvement: Implementing a continuous improvement program for the AI system.

Record-Keeping and Documentation: Maintaining comprehensive records of the AI system’s development and updates.

This checklist concludes with a note on fostering a culture of openness, learning, and collaboration within the organisation. The complete AI Legal Audit Checklist can be downloaded here.

Recommendations for the Industry

Given the evolving landscape, we propose several considerations for AI and tech companies:

Risk Management: Businesses must understand the potential for AI to produce inaccurate or damaging information and implement safeguards to reduce these risks.  These safeguards could include rigorous testing and validation processes to catch and correct erroneous outputs, monitoring and reporting systems to flag anomalies, and crisis management plans to handle potentially damaging situations swiftly.
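To make this concrete, the following is a minimal, purely illustrative Python sketch of the kind of post-generation safeguard described above: it flags AI output containing claim-like language (allegations of fraud or criminal conduct, for example) so that a human can verify it before publication, and keeps a log for later audit. The pattern list, function names, and structure are our own assumptions for illustration only; they do not represent OpenAI’s tooling or any particular vendor’s API.

```python
# Illustrative sketch (not production code): a post-generation safeguard that
# flags AI output for human review before publication. All names, patterns,
# and thresholds here are hypothetical assumptions, not any vendor's API.

import logging
import re
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_output_safeguard")

# Hypothetical list of claim-like phrases that warrant verification before release.
RISKY_PATTERNS = [
    r"\bembezzl\w*", r"\bfraud\w*", r"\bcriminal\w*",
    r"\bconvicted\b", r"\bcharged with\b",
]

@dataclass
class ReviewRecord:
    text: str
    flagged: bool
    reasons: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def review_output(generated_text: str) -> ReviewRecord:
    """Flag output containing unverified, potentially defamatory claims."""
    reasons = [p for p in RISKY_PATTERNS
               if re.search(p, generated_text, flags=re.IGNORECASE)]
    record = ReviewRecord(generated_text, flagged=bool(reasons), reasons=reasons)
    if record.flagged:
        # Route to a human reviewer and keep an audit trail for the risk registry.
        log.warning("Output held for human review: %s", record.reasons)
    return record

# Example: this summary would be held for verification rather than published.
print(review_output("The complaint alleges the treasurer embezzled funds.").flagged)
```

A simple pattern check of this kind is, of course, only a starting point; the legal value lies in being able to show that flagged outputs were reviewed by a human before release and that the review was documented.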

Legal Compliance and Advisory: Companies should stay informed about legal developments that could affect their business operations and adjust their strategies accordingly.  This could involve employing legal counsel with expertise in AI and technology law, keeping up to date with regulatory changes, and considering legal ramifications when designing and deploying AI systems.

Data Management: Companies should prioritise how their AI systems process and distribute information, implementing measures to reduce the potential for disseminating misinformation.  This might involve establishing comprehensive data handling and processing protocols and maintaining systems that monitor and correct potentially harmful or erroneous outputs.  Ensuring transparency in the AI’s decision-making process is also crucial.  However, this does not mean businesses should manually oversee each piece of information before it’s released.  Instead, a robust and reactive system should be in place for swiftly identifying and correcting potentially harmful outputs, thereby mitigating the risk of legal liabilities.  Navigating this balance is a complex issue best undertaken with expert legal advice. 
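As a rough illustration of what such a reactive system might look like in practice, the Python sketch below models a simple incident log: reported outputs are recorded, escalated to a human reviewer, and marked corrected, with an auditable history throughout. All class and field names are hypothetical assumptions and describe no particular provider’s actual systems.

```python
# Illustrative sketch only: a minimal incident log for identifying and correcting
# potentially harmful AI outputs after the fact. Class and field names are our
# own assumptions; they do not describe any particular provider's systems.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Status(Enum):
    REPORTED = "reported"
    UNDER_REVIEW = "under_review"
    CORRECTED = "corrected"
    DISMISSED = "dismissed"

@dataclass
class OutputIncident:
    output_text: str                   # the AI output that was reported
    reporter: str                      # who raised the issue (user, staff, subject)
    status: Status = Status.REPORTED
    correction: Optional[str] = None   # corrected statement, if any
    history: list = field(default_factory=list)

    def _log(self, note: str) -> None:
        self.history.append((datetime.now(timezone.utc).isoformat(), note))

    def triage(self) -> None:
        self.status = Status.UNDER_REVIEW
        self._log("Escalated to human reviewer.")

    def correct(self, corrected_text: str) -> None:
        self.status = Status.CORRECTED
        self.correction = corrected_text
        self._log("Correction issued and affected parties notified.")

# Example: a reported false claim is triaged and corrected, with a full audit trail.
incident = OutputIncident("Summary wrongly names an individual.", reporter="journalist")
incident.triage()
incident.correct("Corrected summary omitting the unrelated individual.")
print(incident.status, len(incident.history))
```

The point of keeping such a record is less the code itself than the auditable trail it produces: evidence that harmful outputs were identified, escalated, and corrected promptly can materially affect a company’s exposure.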

Ethical Guidelines and Research: AI companies should incorporate strict ethical guidelines into their development processes, invest in research to reduce the occurrence of AI hallucinations and develop mechanisms to track and correct misinformation quickly and effectively.

Potential Outcomes and their Impact

We believe this lawsuit could have profound implications for AI providers and users. Should OpenAI be found liable, it would set a precedent that could ripple across the AI industry.

Liability: If the court rules in favour of Walters, it could establish that AI providers are responsible for their systems’ outputs. Exposure to that liability could increase legal and insurance costs for AI providers and would likely necessitate more robust internal legal teams to manage risk.

One aspect to consider is how Walters circumvented the mandatory arbitration clause in OpenAI’s Terms of Use.  Clause 8(a) of the Terms specifies that users must resolve any claims relating to the Terms or Services through binding arbitration.  However, this clause only applies to users of the service.  Since Walters was not using ChatGPT himself but was instead a subject of the information produced by the AI, he was not bound by the user agreements and could pursue legal action through the courts.  AI providers need to understand this potential gap in their legal protections.

Law and Policy Reforms: A decision in Walters’s favour could expedite legal and policy reforms surrounding AI and technology.  Regulators could use the verdict as a starting point for shaping new laws to regulate AI more effectively, creating more secure environments for their use and reducing potential harm.

Reputation: The case could also affect the reputations of AI providers.  As the case has garnered significant attention, public perceptions of AI and its safety will likely be affected.  AI providers could face increased scrutiny and public relations challenges as a result of any adverse findings.

Final Thoughts

As we watch this unprecedented legal battle unfold, we reflect on the rapidly changing technological landscape and the legal challenges it presents.  While we cannot definitively predict the outcome, it is clear that the repercussions will be far-reaching, affecting both the legal and tech industries.  Understanding and proactively managing potential risks are crucial for any tech company or AI provider.  At Broderick Bozimo and Company, we remain committed to providing expert legal counsel in this challenging and exciting intersection of technology and law.

Disclaimer: The content of this article is intended to provide a broad overview and general understanding of the evolving legal landscape in the AI sector. It does not purport to offer legal advice and should not be used as a substitute for consultation with professional legal advisors. The legal implications of AI technologies can vary significantly across jurisdictions; therefore, it is crucial to consult with legal professionals who are well-versed in the specific legal contexts applicable to your operations.

Should you find this article insightful and have further inquiries, or if you need assistance navigating the legal aspects of AI deployment, please do not hesitate to contact our specialised team via legalAI@broderickbozimo.com.  We would be delighted to guide you through these processes and address any areas of concern.

Isaiah Bozimo

Partner

Afolasade Banjo

Associate