Why can’t Leverets sue ChatGPT for defamation?

Nov 19, 2024

What happens if generative AIs, such as ChatGPT, publish false or defamatory information that could cause reputational damage?

An unexpected turn of events has given Leverets Group pause for thought on the use of generative AI: we recently found ourselves on the receiving end of a ChatGPT hallucination – and a defamatory one at that.

Leverets was recently defamed by a ChatGPT output.

Like many, we’ve been experimenting with the potential of AI. In one such experiment we wanted to see how ChatGPT reported on our biggest case of last year – the Mozambique “Tuna Bonds Scandal” – and, more specifically, our role in representing three defendants through the civil litigation process.

What came back was a false, and “hallucinatory”, story that suggested our boutique law firm was pivotal in the commission of the fraud itself. It stopped us in our tracks.

Here are some of the hair-raising outputs:

“Leveretts played a role in facilitating and advising on the loans that were at the center of the scandal.”

“Leveretts provided advisory services to the Mozambican government and the state-owned enterprises involved in the loans.”

“Leveretts helped structure the complex financial transactions.”

“Leveretts is implicated in the misrepresentation of the true purpose and risk of the loans.”

“Leverets played a key role as a middleman company through which illegal payments were funneled.”

“Leverets was one of the entities through which bribe money flowed to various government officials and individuals involved in the deal.”

Clearly absurd, and no-one (we trust) would believe it! Leverets secured settlements in London’s Commercial Court, extricating our clients from any suggestion of involvement in, or knowledge of, the corruption of Mozambican officials in deals struck by state-owned companies.

Whilst we can laugh at this, it got us thinking: what could we do if any of our clients had their integrity impugned by a machine?

What would happen if someone did attempt to sue ChatGPT for libel? And what are the potential legal issues for a ChatGPT user who unwittingly republishes this content?

The implications for ChatGPT and other third parties

An action in libel requires a defamatory statement which is published and which causes, or is likely to cause, serious harm to the reputation of the claimant. So, has ChatGPT committed libel in creating this false and defamatory statement about Leverets?

Four key things immediately stand out:

  1. Safeguards

ChatGPT clearly has insufficient safeguards in place to prevent it from hallucinating and generating defamatory statements. In our case, even when prompted for factual information, it delivered back a fabrication.

Ours is not an isolated incident. Earlier this year a Canadian lawyer came under fire after the AI chatbot she used for legal research created “fictitious” precedents, upon which she relied to develop legal submissions in a court case. Perhaps more shockingly, in 2023 a UCLA law professor ran some queries about several newsworthy individuals. ChatGPT generated answers that were both false and defamatory, accusing one of having pleaded guilty to wire fraud (an allegation it backed up with a fictional Reuters quote), and several others of sexual harassment.

  2. Libel Action

In an action for libel, the claimant must show that the statements were published to a third party, and that the defendant published or is responsible for the publication of the words.

The biggest complication here could be that AI-generated content isn’t consciously defamatory: the machine is assembling words, ideas, and concepts produced elsewhere. Is it, therefore, responsible?

Also, threads between a third-party user and ChatGPT are private – could that have an impact on a court case? How can we know exactly who, and how many, users have received this false information about our firm? We could perhaps argue that there is no requirement for a libellous statement to be published to more than one individual, but without proof of widespread republication that could seriously limit a claim for damages.

  3. Whom do we sue?

ChatGPT doesn’t have a legal personality, and therefore cannot be sued as an entity. Could we therefore sue OpenAI instead? Interestingly, OpenAI is currently the defendant in a claim filed in the US by a radio host after it provided a synopsis of a fictional legal claim in which he was falsely named as a defendant.

OpenAI could seek to rely on two commonly pleaded defences to defamation – honest opinion and public interest. Both are subjective, particularly public interest, which requires that the publisher believed the statement was in the public interest, and further that this belief was reasonable at the time of publication. Can a large language model hold such beliefs when it cannot reason?

  4. Third Party Publication

Perhaps the most likely risk is that a third party could republish the false information.

It would certainly be difficult for anyone in this situation to claim they’re not responsible for republication. Likewise, they would struggle to rely on common defences such as truth or honest opinion if they’ve reproduced content without fact-checking.

OpenAI essentially puts responsibility onto the users of its services through a series of caveats, as well as the warning that it can occasionally produce incorrect, harmful, or biased content. Its terms state that:

“…given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review…”

The future legal landscape for AI

In time we’ll see the creation of liability for AI, and protections against the reputational harms that come from AI-generated libel. It’s certainly going to be interesting to see how the kinds of issues described above are tested in court.

Until then, be careful what you take as truth in the digital domain, and always take proactive steps to minimise the potential impacts of false AI content on your business.

Author: Owen Vanstone-Hallam, Solicitor.

Owen joined Leverets as a solicitor in 2022. Primarily based in our Canterbury office, he acts on a range of disputes matters, with a particular interest in matters with a restructuring and insolvency angle. He has advised major companies, directors, insolvency practitioners and pension schemes.

Owen was part of the Leverets team which successfully defended two individuals in a Commercial Court claim brought by the Republic of Mozambique against Credit Suisse and others to set aside sovereign guarantees worth up to $2bn and claim damages in fraud and deceit.
