A study published December 14, 2021, in the journal Nature Communications by researchers in the University of Pittsburgh Department of Radiology highlights a potential cyber safety issue for artificial intelligence (AI) models used to evaluate diagnostic radiology images. The authors describe "adversarial attacks," in which images or other inputs are deliberately altered to make AI models, and the radiologists who rely on them, draw erroneous conclusions about those images.
The researchers note that motivations for such attacks could include insurance fraud by health care providers looking to boost revenue, or companies trying to tilt clinical trial outcomes in their favor. Adversarial attacks on medical images range from tiny manipulations that flip the AI's decision while remaining imperceptible to the human eye, to more sophisticated manipulations that target sensitive contents of the image, such as cancerous regions on a diagnostic scan, making them more likely to fool the interpreting physician as well. In the study, five radiologists were asked to distinguish real mammogram images from manipulated ones; they judged the images' authenticity correctly only 29% to 71% of the time, depending on the individual.
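The "tiny manipulations" the study describes are typically gradient-based perturbations, of which the fast gradient sign method (FGSM) is the textbook example: nudge each pixel a small, fixed amount in whichever direction most increases the model's loss. A minimal sketch of the idea on a toy logistic-regression "classifier" (the data, model, and epsilon here are illustrative assumptions, not the study's actual models or images):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-ins: a flattened 64-pixel "scan" and a fixed linear model.
x = rng.normal(size=64)      # hypothetical input image (flattened)
w = rng.normal(size=64)      # hypothetical model weights
b = 0.0

def predict(x):
    # Probability the model assigns to the positive class.
    return sigmoid(w @ x + b)

# For logistic loss with true label y, the gradient of the loss with
# respect to the input is (p - y) * w.
y = 1.0
p = predict(x)
grad_x = (p - y) * w

# FGSM: step each pixel by at most eps in the direction that raises the
# loss. eps is small, so the change is visually negligible.
eps = 0.05
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))  # adversarial copy scores lower
```

Because every pixel moves by at most eps, the perturbed image is nearly indistinguishable from the original even though the model's confidence drops; that gap between human and model perception is what makes such attacks hard to spot.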
Because of these vulnerabilities, the study's authors recommend "adversarial training" at institutions using AI models to review diagnostic images. According to Shandong Wu, Ph.D., associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, "This involves pre-generating adversarial images and teaching the model that these images are manipulated."
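Adversarial training, in the sense Wu describes, means augmenting the training data with pre-generated adversarial examples so the model learns to classify them correctly too. A hedged sketch on a toy logistic model (the dataset, perturbation budget, and training loop are all illustrative assumptions, not the study's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: two Gaussian blobs standing in for "benign"/"malignant".
n, d = 200, 16
X = np.vstack([rng.normal(-1, 1, (n, d)), rng.normal(1, 1, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def fgsm(X, y, w, eps):
    # Per-example FGSM perturbation for logistic loss: dL/dx = (p - y) * w.
    p = sigmoid(X @ w)
    grad = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

def train(X, y, adversarial=False, eps=0.3, lr=0.1, steps=200):
    w = np.zeros(d)
    for _ in range(steps):
        if adversarial:
            # Mix adversarially perturbed copies into each batch, labeled
            # with their true class, so the model learns to resist them.
            Xt = np.vstack([X, fgsm(X, y, w, eps)])
            yt = np.concatenate([y, y])
        else:
            Xt, yt = X, y
        p = sigmoid(Xt @ w)
        w -= lr * Xt.T @ (p - yt) / len(yt)
    return w

def acc(w, X, y):
    return np.mean((sigmoid(X @ w) > 0.5) == y)

w_plain = train(X, y)
w_robust = train(X, y, adversarial=True)

# Compare each model against attacks crafted for that model.
X_adv_plain = fgsm(X, y, w_plain, 0.3)
X_adv_robust = fgsm(X, y, w_robust, 0.3)
print(acc(w_plain, X_adv_plain, y), acc(w_robust, X_adv_robust, y))
```

The design choice worth noting is that the adversarial examples are regenerated against the current weights at every step, so the model is always trained against the attack it is currently most vulnerable to.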
The authors hope radiologists will think about medical AI model safety and what can be done to defend against potential attacks to ensure that AI systems function safely.