
ChatGPT: Artificial Intelligence Tool Generates “Fake Opinions”

Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 BL 213626 (S.D.N.Y. June 22, 2023), made national headlines when the artificial intelligence tool ChatGPT generated fake opinions cited in a court brief filed by Plaintiff’s attorney. The case originated as a personal injury claim in state court, but Defendant’s attorney removed the case to federal court because the injury occurred during an international flight. Because Plaintiff’s state court attorney was not admitted to practice before the federal court, the notice of appearance was filed by another attorney at the firm. While Plaintiff’s new attorney of record verified the federal court filings, Plaintiff’s original attorney continued to draft the briefs and perform all substantive work.

When Defendant’s attorney questioned “the existence” of cases cited in Plaintiff’s brief, the Court ordered production of the cases, which was impossible because they did not exist. Only when faced with a motion for sanctions did Plaintiff’s attorney come clean about his actions. The record of this case “would look quite different,” the Court noted, if Plaintiff’s attorney had come clean “shortly after” being questioned about the citations. Instead, he “did not begin to dribble out the truth” until faced with sanctions. Accordingly, the Court found “bad faith” on the part of Plaintiff’s counsel based on “acts of conscious avoidance and false and misleading statements to the Court.”

The Court acknowledged that there is “nothing inherently improper about using a reliable artificial intelligence tool for assistance” and that “[t]echnological advances are commonplace.” However, the Court also cautioned that “existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” See Fed. R. Civ. P. 11. The Court imposed a $5,000 penalty jointly and severally on the law firm and the individual attorneys, who were also ordered to notify “each judge falsely identified as the author” of the fake opinions.

There are a couple of useful takeaways from this case. First, attorneys have a responsibility to verify all court submissions. Second, if an error is discovered, be honest and work immediately to correct it, to avoid more severe consequences for all concerned.

[Counsel] abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.

Tags

e-discovery, rule 11, sanctions, chatgpt, ai, artificial intelligence, verification, citation, case law