Deep learning can be used to classify images to verify or correct DICOM header information. One situation where this is useful is the classification of thoracic radiographs acquired anteroposteriorly (AP) or posteroanteriorly (PA). A convolutional neural network (CNN) was previously trained and achieved strong performance in classifying AP versus PA radiographs, with an AUC of 0.97 ± 0.005 on an independent test set. However, 81% of the AP training set and 24% of the AP independent test set consisted of images with imprinted labels. To evaluate the effect of these labels on CNN training and testing, the labels were removed from the training images by cropping, and the CNN was retrained on the cropped images with the same training parameters as before. When tested on the same independent test set, the retrained CNN achieved an AUC of 0.95 ± 0.007 in classifying AP versus PA radiographs. The difference between the two networks' AUCs was statistically significant (p = 0.002), indicating a decrease in performance for the network trained on cropped images. The decrease may be due to the original network having learned to recognize imprinted labels, or to relevant anatomy being cropped along with the labels; however, performance remains high, and the network could still be incorporated into the clinical workflow.
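Below is a minimal sketch, not the authors' implementation, of the two preprocessing and evaluation steps described above: removing an imprinted label by cropping a border region and comparing classifier AUCs on the same test set. The crop margin, function names, and score values are illustrative assumptions; the study's actual cropping procedure and significance test (p = 0.002) are not reproduced here.

```python
# Hypothetical sketch of label removal by cropping and AUC evaluation.
import numpy as np
from sklearn.metrics import roc_auc_score


def crop_label_region(image: np.ndarray, margin: float = 0.1) -> np.ndarray:
    """Remove a fixed border fraction where imprinted AP/PA labels often appear.
    The 10% margin is an assumption, not the value used in the study."""
    h, w = image.shape[:2]
    dh, dw = int(h * margin), int(w * margin)
    return image[dh:h - dh, dw:w - dw]


# Toy example: y_true is 1 for AP, 0 for PA; scores are placeholder outputs
# from the original and retrained CNNs on the same independent test set.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores_original = np.array([0.92, 0.10, 0.85, 0.77, 0.20, 0.05, 0.66, 0.30])
scores_retrained = np.array([0.88, 0.15, 0.80, 0.70, 0.25, 0.12, 0.60, 0.35])

print("AUC (original): ", roc_auc_score(y_true, scores_original))
print("AUC (retrained):", roc_auc_score(y_true, scores_retrained))
```

A statistical comparison of the two correlated AUCs (as reported in the abstract) would require a method such as the DeLong test, which is not shown in this sketch.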