
“Hallucinate” is Dictionary.com’s Word of the Year — and No, You’re Not Imagining Things!

Dictionary.com announced last week that it selected “hallucinate” as its 2023 word of the year, referring to a new definition of the term pertaining to artificial intelligence: “to produce false information contrary to the intent of the user and present it as if true and factual.” In other words, it’s when chatbots and other AI tools make stuff up.

Grant Barrett, head of lexicography at Dictionary.com, told CNN this particular definition of “hallucinate” was added to the site earlier this year, though its use in computer science dates at least as far back as 1971. As staff at the online dictionary considered contenders for the defining words of 2023, Barrett said it became clear that AI was increasingly changing our lives and working its way into our language as well.

“When we looked at the different words associated with artificial intelligence, we saw that ‘hallucinate’ really encapsulated this notion that AI wasn’t exactly what we as a culture wanted it to be,” Barrett said.

The introduction of ChatGPT and similar AI-powered tools gives lawyers a new way to conduct legal research, draft memos and legal briefs, and even prepare motions. The adoption of Generative AI tools offers unique advantages, but it also presents novel ethical and accountability challenges. AI tools frequently hallucinate, and that can be a serious problem for the legal profession. The consequences of failing to identify and correct Generative AI hallucinations became apparent in two notable cases in 2023, one in New York and one in Colorado, where lawyers relied on case citations generated by ChatGPT without first checking to ensure that they were real cases. In both situations, the lawyers were hit with monetary and other sanctions, including a temporary suspension in the Colorado case. These incidents emphasize the need for legal professionals to tread carefully when leveraging artificial intelligence in their practices.

In response to many of the concerns surrounding the adoption of Generative AI, some U.S. courts in 2023 started issuing orders that regulate or require special notifications regarding the use of these tools. Judge Donald W. Molloy of the District of Montana ordered, “Pro hac counsel must do his or her own writing; sign his or her own pleadings, motions, and briefs; and appear and participate personally. Use of artificial intelligence automated drafting programs, such as ChatGPT, is prohibited.” Judge Brantley Starr of the Northern District of Texas authored an order requiring attorneys to certify either that they did not use any Generative AI tools in their filings or that any use of Generative AI has been checked for accuracy by a human being. Judge Stephen A. Vaden of the U.S. Court of International Trade and U.S. District Judge Michael Baylson both have standing orders mandating the disclosure of the use of AI tools in the preparation of filings in cases before them.

As we look to the future, there is no question that AI offers significant advantages in terms of productivity and efficiency, along with the challenges described above. Legal professionals must adapt to this new technology and learn how to apply it appropriately to their practice. The year ahead will undoubtedly bring new innovations—and new challenges—as AI continues to evolve, shaping the future of legal practice and many other aspects of our lives.


ediscovery, generative ai, ai, artificial intelligence, legal technology, hallucinate, hallucination, chatgpt, e-discovery