Paper
Adversarial attacks for machine learning denoisers and how to resist them
3 October 2022
Abstract
Adversarial attacks rely on the instability phenomenon that appears, in general, for all inverse problems, e.g., image classification and reconstruction, independently of the computational scheme or method used to solve the problem. We mathematically prove and empirically show that machine learning denoisers (MLD) are not excluded: adversarial attacks exist in the form of noise patterns that drive the MLD into instability, i.e., the MLD increases the noise instead of decreasing it. We further demonstrate that neither adversarial retraining nor classic filtering provides an exit strategy from this dilemma. Instead, we show that the adversarial noise pattern can be inferred by polynomial regression. Removing the inferred polynomial component from the total noise distribution yields an efficient technique that delivers robust MLDs and makes subsequent computer vision tasks, such as image segmentation or classification, more reliable.
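To illustrate the removal step sketched in the abstract, the following minimal Python sketch (assuming NumPy and a hypothetical denoiser callable, neither specified in the paper) estimates the noise residual with one denoiser pass, fits a low-degree two-dimensional polynomial to that residual by least squares, and strips the fitted component from the input before denoising again. The degree and the way the residual is estimated are illustrative placeholders; the paper's actual regression setup may differ.

import numpy as np

def strip_polynomial_component(noisy, denoiser, degree=3):
    """Hedged sketch: estimate the noise residual, fit a low-degree
    2-D polynomial surface to it, and remove the fitted (smooth)
    component before a second denoising pass.
    `denoiser` is a hypothetical callable: image -> image."""
    h, w = noisy.shape
    residual = noisy - denoiser(noisy)          # rough noise estimate
    ys, xs = np.mgrid[0:h, 0:w]
    # Design matrix with monomials x^i * y^j for i + j <= degree.
    cols = [(xs.ravel() ** i) * (ys.ravel() ** j)
            for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, residual.ravel(), rcond=None)
    smooth_noise = (A @ coeffs).reshape(h, w)   # inferred polynomial part
    return denoiser(noisy - smooth_noise)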
© 2022 Society of Photo-Optical Instrumentation Engineers (SPIE).
Saiyam B. Jain, Shao Zongru, Sachin K. T. Veettil, and Michael Hecht "Adversarial attacks for machine learning denoisers and how to resist them", Proc. SPIE 12204, Emerging Topics in Artificial Intelligence (ETAI) 2022, 1220402 (3 October 2022); https://doi.org/10.1117/12.2632954
KEYWORDS
Denoising
Interference (communication)
Machine learning
Image classification
Image quality
Inverse problems
Image resolution