
Potential cyber threats against AI models in radiology highlight the need to prepare against adversarial attacks

A study published on December 14, 2021, in the journal Nature Communications by researchers at the University of Pittsburgh Department of Radiology highlights a potential cyber safety issue for artificial intelligence (AI) models used to evaluate diagnostic radiology images. The authors describe the potential for "adversarial attacks," which could seek to alter images or other inputs to make the AI models and radiologists draw erroneous conclusions about those images.

The researchers note potential motivations for such adversarial attacks, including insurance fraud by health care providers seeking to boost revenue and companies trying to tilt clinical trial outcomes in their favor. Adversarial manipulations of medical images range from tiny perturbations that change the AI's decision but are imperceptible to the human eye, to more sophisticated alterations that target sensitive contents of the image, such as cancerous regions on a diagnostic scan, making them more likely to fool the interpreting physician as well. In the study, five radiologists were asked to distinguish real mammogram images from manipulated ones. Worryingly, they identified the images' authenticity with an accuracy of only 29% to 71%, depending on the individual.
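For illustration only, the "imperceptible perturbation" idea can be sketched with a toy linear classifier. Everything below — the model, weights, image, and the 0.2 perturbation bound — is an assumption for the sketch, not the study's deep-learning setup. A fast-gradient-sign style step of size ε flips the model's output even though no pixel changes by more than ε:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(image, weights, bias=0.0):
    """Probability the toy model assigns to a 'suspicious finding'."""
    return sigmoid(image @ weights + bias)

def fgsm_perturb(image, weights, epsilon):
    """Shift every pixel by epsilon against the gradient sign to lower the score.

    For a linear-logistic model the input gradient is proportional to the
    weight vector, so sign(gradient) == sign(weights).
    """
    adv = image - epsilon * np.sign(weights)
    return np.clip(adv, 0.0, 1.0)  # keep pixel intensities in a valid range

# Illustrative 64-pixel "image" whose bright pixels line up with the positive
# weights, so the toy model is confident the finding is present.
weights = np.linspace(-1.0, 1.0, 64)
image = 0.5 + 0.1 * np.sign(weights)

adv = fgsm_perturb(image, weights, epsilon=0.2)
# predict(image, weights) is well above 0.5 and predict(adv, weights) well
# below it, yet no pixel intensity moved by more than 0.2.
```

The point of the sketch is the asymmetry the study exploits: a bounded, visually tiny change to every pixel can swing the model's score decisively, which is why such attacks can be invisible to a human reader.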

Because of these vulnerabilities, the study's authors recommend "adversarial training" for institutions using AI models to review diagnostic images. According to Shandong Wu, Ph.D., associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, "This involves pre-generating adversarial images and teaching the model that these images are manipulated."
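Wu's description can be illustrated with a minimal sketch: pre-generate perturbed copies of the training images against a trained model, then retrain on the clean and perturbed images together with their true labels. The tiny logistic-regression learner and synthetic "scans" below are illustrative assumptions, not the study's model or data:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """Perturb each image by eps in the gradient-sign direction that raises its loss."""
    p = sigmoid(x @ w + b)
    grad = (p - y)[:, None] * w[None, :]  # d(log-loss)/d(pixels)
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def train(x, y, steps=500, lr=0.5):
    """Plain logistic-regression training by gradient descent."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(steps):
        err = sigmoid(x @ w + b) - y
        w -= lr * x.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

def accuracy(w, b, x, y):
    return float(np.mean((sigmoid(x @ w + b) > 0.5) == (y == 1)))

# Synthetic "scans": class 1 is brighter in the first half of the pixels.
n, d = 200, 32
y = rng.integers(0, 2, n).astype(float)
x = rng.random((n, d)) * 0.3
x[:, : d // 2] += 0.4 * y[:, None]

# Step 1: train a plain model, then pre-generate adversarial copies of the
# training images against it, keeping the true labels.
w_plain, b_plain = train(x, y)
x_adv = fgsm(x, w_plain, b_plain, y, eps=0.1)

# Step 2: retrain on clean and adversarial images together, so the model
# learns that manipulated images still belong to their true classes.
w_robust, b_robust = train(np.vstack([x, x_adv]), np.concatenate([y, y]))
```

The design choice mirrors the quote: the adversarial images are generated ahead of time and folded into the training set, rather than crafted on the fly, so the retrained model has seen manipulated inputs and keeps classifying both clean and perturbed images correctly.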

The authors hope the study will prompt radiologists to consider medical AI model safety and what can be done to defend against potential attacks, so that AI systems function as intended.

As AI is introduced into medical infrastructure, Wu said, cybersecurity education is also important to ensure that hospital technology systems and personnel are aware of potential threats and have technical solutions in place to protect patient data and block malware.


health care & life sciences, artificial intelligence, ai, radiology, cyber threats, adversarial attacks