As with so many other facets of our everyday lives – from our purchasing habits to our healthcare to the entertainment we consume – generative artificial intelligence (GenAI) is being integrated into the practice of law at an increasingly rapid rate. One area where GenAI is already making a significant impact in the legal industry is discovery, where new tools are being implemented to help attorneys quickly review, assess, and summarize large quantities of electronic data belonging to clients and opposing parties.
These tools can certainly be helpful, but the use of GenAI programs in e-discovery is not without risk. Hallucinations – instances in which GenAI provides incorrect answers to prompts or fabricates citations to authority outright – have already led to several infamous incidents in which lawyers failed to validate a program’s output before relying on it. Beyond concerns over the accuracy of GenAI output, lawyers must also be mindful of their obligations to maintain client confidentiality, including how an opposing party’s use of GenAI tools might put a lawyer’s own ethical obligations at risk. Lawyers need to start considering affirmative steps to protect client data, including new provisions in confidentiality agreements and protective orders.
Consider a situation where opposing counsel receives a large production of your client’s documents, including confidential information, and then loads them into ChatGPT or another public GenAI platform to obtain a summary of the produced data. Using such platforms carries the risk that confidential information uploaded by one user may be incorporated into responses to prompts from future, unrelated users of the platform. This raises the question of who bears the primary duty to protect your client’s information from such inadvertent disclosure. Is opposing counsel violating any ethical duty by using a GenAI tool to summarize your client’s data? Or does the primary duty to protect YOUR client’s data fall on YOU?
None of the Model Rules of Professional Conduct explicitly requires opposing counsel to protect client data absent provisions in a confidentiality agreement, protective order, or other applicable order. Model Rule 1.6(c) does, however, explicitly require lawyers to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Accordingly, your failure to negotiate protections for your client’s produced data could arguably violate your duty to take reasonable steps to protect that data, even if it was opposing counsel who uploaded it. Even private, in-house GenAI tools used by law firms could run afoul of a protective order if a client’s confidential information is accessible to other law firm users who are not involved in the litigation.
To mitigate these risks, lawyers should consider including language in protective orders that specifically addresses the use of GenAI tools. For example, a protective order might prohibit uploading any produced materials designated as confidential into public-facing GenAI programs (or preclude the use of such programs altogether), or the parties might agree to use only a limited set of GenAI software that meets certain criteria or employs certain confidentiality settings.
Lawyers are also bound by Model Rule 1.1 to provide competent representation, which encompasses the need to maintain a reasonable level of knowledge about the use of technology, including new GenAI tools. By understanding the benefits and pitfalls associated with these evolving tools, and taking additional steps to protect client information, lawyers can use GenAI to provide value to clients without violating their obligation to protect their clients’ confidences.