
Commentary: When generative AI creates false information, who is liable?

Generative AI is the latest innovation disrupting our information society.

While ChatGPT and other AI tools show great promise, they can still generate inaccurate information. When this happens, who is liable for false information created by generative AI?

This question has made generative AI liability a hot topic of debate in legal circles.

Even if there is no malice or intention to cause harm by generating false information, AI can create thorny legal issues. Here, we explore several facets of this ongoing debate.

Legal professionals using generative AI

Unfortunately, legal professionals must remain on guard against the dangers of generative AI more than most.

This is partly because, from an ethical perspective, the legal profession depends on accurate information and coherent arguments.

Perhaps more importantly, one of the highest-profile instances of generative AI misuse involved an attorney in a legal proceeding.

Here’s the short version of that story:

In March 2023, attorney Steven A. Schwartz filed a legal brief in New York federal court that cited cases opposing counsel could not locate. When the court ordered Schwartz to provide copies of the cited cases, he produced decisions that were later discovered to be fabricated.

Schwartz ultimately admitted to using ChatGPT to generate both the brief and the fabricated cases.

In response to this unfortunate episode, some federal judges have instituted requirements for generative AI. One judge issued a standing order requiring attorneys to certify that either (1) they did not use generative AI in preparing their court filings, or (2) if they did use generative AI, its output was double-checked by a human being. Another judge required that any party using AI for its legal work disclose this usage to the court.

Some argue that this strict approach to generative AI is unwarranted, since attorneys already have professional obligations to check their work.

Others claim this level of oversight is reasonable and balanced.

There are also those who argue that AI has no place in the courtroom at all, advocating for a total ban.

Regardless, these developments show that AI can still be used for drafting legal briefs — but attorneys may face consequences for misusing AI tools. Will those attorneys, in turn, seek damages from the developers who built these AI systems when they produce false information?

Liability for generative AI companies?

There could be some legal peril for the companies behind generative AI tools.

This possibility is being tested in a defamation lawsuit against OpenAI, the company behind ChatGPT.

According to the complaint filed by nationally syndicated radio host Mark Walters, the editor of a gun publication asked ChatGPT about Walters’ role in a Washington lawsuit. In response, ChatGPT fabricated an allegation that Walters had embezzled funds from a special-interest group while serving as its financial officer. In reality, the lawsuit contained no such allegation, and Walters was not even a party to it.

Walters now claims that ChatGPT published libelous material and that OpenAI should be held liable.

The defamation suit faces several legal hurdles, since ChatGPT is not an individual making defamatory statements. Walters did not ask OpenAI to retract the statement, which may bar a defamation suit — and there is, in fact, no practical way OpenAI could do so.

In addition, there is no allegation OpenAI was put on notice of ChatGPT’s false allegations.

Nevertheless, this lawsuit is unlikely to be the final word on whether tech companies can be liable for false information created by generative AI tools. It has certainly shined a spotlight on some of the potential consequences of this technology.

Does Section 230 apply to generative AI?

Section 230 of the Communications Decency Act shields social media companies from liability for third-party content posted on their platforms.

This liability shield is considered justifiable because the platforms are not the publisher or creator of those materials. However, it is likely generative AI platforms would be treated differently under the law than most social media platforms.

Generative AI tools such as ChatGPT operate based on prompts from users, which the tools then use to create new content. The plain language of Section 230 appears not to apply the liability shield here — it carves out an exception for online content the platform itself creates or helps create.

Some lawmakers want to go even further, with bipartisan legislation being introduced in Congress to exclude generative AI from Section 230’s legal protections.

Many legal analysts believe that if the issue is challenged in court, Section 230 will be found inapplicable to generative AI.

Nonetheless, there are likely to be some limits to the liability exposure for generative AI tools. If users deliberately prompt an AI tool to generate false content, for example, the platform may not be accountable.

These legal gray areas will likely be clarified over the coming years.

The future of generative AI in the legal industry

It appears that generative AI will continue to play a large role in the legal industry.

Despite the difficulties described here, there are simply too many potential use cases for generative AI, ranging from contract drafting to eDiscovery and more. This means legal professionals must learn to use generative AI responsibly and always do their due diligence when using any tools to help with research or drafting.

The role of generative AI in our society is still evolving, even as the underlying technology itself evolves. The issue of liability for false information created by generative AI tools will also be in flux. Future court decisions and legislation will need to guide the way.
