
AI in health care is the next state AG enforcement frontier

The most recent data shows that spending on health care in the United States in 2018 grew to $3.6 trillion, or $11,172 per person. This was an increase of 4.6% from 2017 and accounted for 17.7% of the country's entire Gross Domestic Product.

As states try to contain these ever-increasing costs, especially for Medicaid-eligible patients, more state health systems are turning to AI as a key cost-containment tool. However, as the article linked below points out, few patients are told how AI affects their care or shapes the recommendations that doctors and nurses make about it.

As the primary source of consumer protection in every state, state attorneys general (AGs) will inevitably begin to investigate these systems, which are fraught with potential bias based on age, race, and gender. In fact, last December, AGs began asking these very questions at a national AG meeting, with one panel dedicated to AI and privacy.

There is no question that as the country emerges from the current pandemic, AGs will be on a collision course with health care providers that utilize AI and with the companies that develop and market these diagnostic programs. With more than 138,000 lives lost to the virus as of this month, we should expect a thorough investigation into the use of AI and any role it played in patient care decisions. Indeed, one should expect not only AGs but also other agencies with relevant jurisdiction to act: the HHS Office for Civil Rights may examine any HIPAA violations, and state health agencies may open inquiries under their own authority.

"AI models are fraught with bias, and even those that have been demonstrated to be accurate largely haven’t yet been shown to improve patient outcomes...It’s disturbing that they’re deploying these tools without having the kind of information that they should have."


state attorneys general, health care, artificial intelligence