Paper
Representational learning for sonar ATR
9 June 2014
Jason C. Isaacs
Abstract
Learned representations have been shown to give encouraging results for solving a multitude of novel learning tasks, even though these tasks may be unknown when the model is being trained. A few notable examples include topic models, deep belief networks, deep Boltzmann machines, and local discriminative Gaussians, all inspired by human learning. This self-learning of new concepts via rich generative models has emerged as a promising area of research in machine learning. Although there has been recent progress, existing computational models are still far from being able to represent, identify, and learn the wide variety of possible patterns and structure in real-world data. An important issue for further consideration is the use of unsupervised representations for novel underwater target recognition applications. This work discusses and demonstrates the use of latent Dirichlet allocation and autoencoders for learning unsupervised representations of objects in sonar imagery. The objective is to make these representations more abstract and invariant to noise in the training distribution, and thereby improve recognition performance.
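As a rough illustration of the autoencoder half of this approach, the sketch below trains a small dense autoencoder on flattened image patches and then uses the encoder output as an unsupervised feature vector for downstream target recognition. The patch size, layer widths, and random placeholder data are assumptions for illustration only, not details taken from the paper.

# Minimal sketch (not the paper's implementation): a dense autoencoder that
# learns an unsupervised representation of flattened sonar image patches.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, patch_dim=32 * 32, code_dim=64):
        super().__init__()
        # Encoder compresses a flattened patch to a low-dimensional code.
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim),
        )
        # Decoder reconstructs the patch from the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, patch_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = PatchAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder standing in for real sonar patches scaled to [0, 1].
patches = torch.rand(512, 32 * 32)

for epoch in range(10):
    recon, _ = model(patches)
    loss = nn.functional.mse_loss(recon, patches)  # reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The learned codes (encoder outputs) serve as unsupervised features for ATR.
features = model.encoder(patches).detach()

The latent Dirichlet allocation component would be applied analogously, with each sonar image represented as a histogram of quantized local descriptors (a bag of visual words) and the inferred topic proportions used as the feature vector.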
© 2014 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jason C. Isaacs "Representational learning for sonar ATR", Proc. SPIE 9072, Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XIX, 907203 (9 June 2014); https://doi.org/10.1117/12.2053057
CITATIONS
Cited by 8 scholarly publications.
KEYWORDS
Machine learning
Visualization
Neural networks
Automatic target recognition
Computer programming
Data modeling
Sensors