Robustness Against Adversarial Attacks Through a Hybrid Defense Approach in Medical Imaging
Abstract
Medical imaging plays a vital role in clinical diagnosis, yet the machine learning models used in this domain are
highly vulnerable to adversarial attacks that risk misdiagnosis. Using
two benchmark image datasets, Pneumonia and BreakHis, this study assesses the resilience of well-known
deep learning architectures (VGG16, ResNet50, InceptionV3, and VGG19) against adversarial attacks.
To improve model robustness, a hybrid defense strategy is proposed that combines adversarial training with
autoencoder-based preprocessing. Results indicate that adversarial attacks degrade base model performance,
but the hybrid approach enhances accuracy, precision, recall, F1 score, and area under the curve (AUC) score.
Autoencoder-based preprocessing is particularly effective on the BreakHis data, while adversarial training yields stronger robustness on the Pneumonia dataset.
Statistical analysis, confusion-matrix-derived evaluation metrics (accuracy, precision, recall, F1 score, and
AUC), and comparative visualizations confirm the superiority of the hybrid strategy in improving classification
reliability across varying attack types. In resource-constrained settings, autoencoders offer a lightweight
additional defense, while adversarial training is effective across all architectures.
The results demonstrate the critical need for integrated defenses in ensuring trustworthy artificial intelligence–
driven medical diagnosis.
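
To make the pipeline concrete, the following is a minimal PyTorch sketch of the hybrid defense the abstract describes: a convolutional autoencoder purifies inputs before they reach a classifier hardened with adversarial training. The abstract does not name the attack used, so FGSM, the epsilon value, the tiny classifier, and the toy tensors below are illustrative assumptions, not the paper's exact configuration; the study's actual models are VGG16, ResNet50, InceptionV3, and VGG19 on the Pneumonia and BreakHis datasets, and the autoencoder would first be trained to reconstruct clean images so that reconstruction strips adversarial perturbations.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    """Small convolutional autoencoder used to 'purify' inputs before classification.
    In practice it is trained to reconstruct clean images; it is left untrained here."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: perturb the input in the sign direction of its loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, opt, x, y, eps=0.03):
    """Standard adversarial training: mix clean and FGSM examples in each update."""
    model.train()
    x_adv = fgsm_attack(model, x, y, eps)
    opt.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    opt.step()
    return loss.item()

class HybridDefense(nn.Module):
    """Inference-time pipeline: autoencoder purification, then the hardened classifier."""
    def __init__(self, autoencoder, classifier):
        super().__init__()
        self.autoencoder, self.classifier = autoencoder, classifier
    def forward(self, x):
        return self.classifier(self.autoencoder(x))

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-ins for the medical images and the deep architectures.
    classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 2, (8,))
    print("adversarial-training loss:", adversarial_training_step(classifier, opt, x, y))
    defense = HybridDefense(Autoencoder(), classifier).eval()
    print("logits shape:", defense(x).shape)  # (8, 2)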
Cite this article as: M. S., A. Kumar Sagar, and V. Khemchandani, “Robustness against adversarial attacks through a hybrid defense approach in medical imaging,”
Electrica, 25, 0127, 2025. doi: 10.5152/electrica.2025.25127.
