Kyle J. Myers, Ph.D., received bachelor’s degrees in Mathematics and Physics from Occidental College in 1980 and a Ph.D. in Optical Sciences from the University of Arizona in 1985. Since 1987 she has worked at the FDA’s Center for Devices and Radiological Health, where she is the Director of the Division of Imaging, Diagnostics, and Software Reliability in the Office of Science and Engineering Laboratories. In this role she leads research programs in medical imaging systems and software tools, including 3D breast imaging systems and CT devices, digital pathology systems, medical display devices, computer-aided diagnostics, biomarkers (measures of disease state, risk, prognosis, etc., derived from images as well as other assays and array technologies), and assessment strategies for imaging and other high-dimensional data sets from medical devices. She is the FDA Principal Investigator for the Computational Modeling and Simulation Project of the Medical Device Innovation Consortium. With Harrison H. Barrett, she is the coauthor of Foundations of Image Science, published by John Wiley and Sons in 2004 and winner of the First Biennial J.W. Goodman Book Writing Award from OSA and SPIE. She is an associate editor for the Journal of Medical Imaging as well as Medical Physics. Dr. Myers is a Fellow of AIMBE, OSA, and SPIE, and a member of the National Academy of Engineering. She serves on SPIE’s Board of Directors (2018-2020).
We evaluate the Pre-Whitening Matched Filter (PWMF), “Eye-Filtered” Non-Pre-Whitening (NPWE) and Sparse-Channelized Difference-of-Gaussian (SDOG) models for predictive performance, and we compare various training and testing regimens. These include “training” by using reported values from the literature, training and testing on the same set of experimental conditions, and training and testing on different sets of experimental conditions. Of this latter category, we use both leave-one-condition-out for training and testing as well as a leave-one-factor-out strategy, where all conditions with a given factor level are withheld for testing. Our approach may be considered a fixed-reader approach, since we use all available readers for both training and testing.
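The leave-one-factor-out strategy described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the condition labels (apodization, dose) and the splitting helper are hypothetical, chosen only to show how all conditions sharing a level of the held-out factor are withheld for testing.

```python
def leave_one_factor_out(conditions, factor):
    """For each level of `factor`, withhold every condition with that
    level as the test set and train on all remaining conditions."""
    levels = sorted({c[factor] for c in conditions})
    for level in levels:
        test = [c for c in conditions if c[factor] == level]
        train = [c for c in conditions if c[factor] != level]
        yield level, train, test

# Illustrative experimental conditions: a 2x2 grid of factor levels
# (factor names and levels are assumptions, not taken from the study).
conditions = [
    {"apodization": a, "dose": d}
    for a in ("none", "hanning")
    for d in ("low", "high")
]

for level, train, test in leave_one_factor_out(conditions, "apodization"):
    # Each split trains on conditions at the other apodization level,
    # so the model is tested on a factor level it never saw in training.
    print(level, len(train), len(test))
```

Leave-one-condition-out is the special case in which each "level" identifies a single condition, so exactly one condition is withheld per split.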
Our results show that training the models improves predictive accuracy in these tasks, with predictive errors dropping by a factor of two or more in absolute deviation. However, the fitted models do not fully capture the effects of apodization and other factors in these tasks.