KEYWORDS: Data modeling, Image segmentation, Medical imaging, Statistical modeling, Education and training, Process modeling, Visualization, Surgery, Solid modeling, Polyps
The next generation of artificial intelligence technology has contributed significantly to the development of medical intelligence. However, the widespread use of deep neural networks (DNNs) has also introduced serious security threats. In this paper, we present an adversarial attack approach for deep learning-based image segmentation models in the field of medical image analysis. Specifically, we propose a novel adversarial attack method that exploits the generic down-sampling operation in DNNs to ensure the effectiveness, stealthiness, and transferability of the attack. We perform the attack on two state-of-the-art (SOTA) models, DDANet and CaraNet, on the general medical image dataset Kvasir-SEG, and a comprehensive evaluation shows that our attack is effective, stealthy, and transferable.
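The abstract does not spell out the attack construction, but attacks that exploit a model's down-sampling (pre-processing) stage are commonly illustrated as image-scaling attacks: only the pixels the sampler actually reads are perturbed, so the full-resolution image looks unchanged while the down-sampled input the network sees is controlled by the attacker. The sketch below is a generic, minimal illustration under the assumption of nearest-neighbour down-sampling; the function names and the toy sizes are illustrative, not taken from the paper.

```python
import numpy as np

def nearest_downsample(img, factor):
    # Nearest-neighbour down-sampling: keep every `factor`-th pixel.
    return img[::factor, ::factor]

def scaling_attack(source, target, factor):
    """Craft an image that looks like `source` at full resolution but
    becomes `target` after nearest-neighbour down-sampling.

    Only the pixels the sampler reads are overwritten, so the
    perturbation is sparse and visually inconspicuous."""
    adv = source.copy()
    adv[::factor, ::factor] = target  # overwrite sampled pixels only
    return adv

# Toy example: an 8x8 source image, a 4x4 attacker-chosen target,
# and a down-sampling factor of 2.
rng = np.random.default_rng(0)
source = rng.random((8, 8))
target = np.ones((4, 4))
adv = scaling_attack(source, target, 2)

# The down-sampled adversarial image equals the target exactly,
# while only 16 of 64 pixels (25%) of the source were touched.
assert np.allclose(nearest_downsample(adv, 2), target)
```

With smoother interpolation kernels (bilinear, bicubic) the same idea becomes a constrained optimization over the kernel's sampling weights rather than a direct pixel overwrite; the stealthiness of the attack comes from concentrating the perturbation in exactly the positions the down-sampler weights most heavily.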