Purpose: The limited volume of medical training data remains one of the leading challenges in machine learning for diagnostic applications. Object detectors that identify and localize pathologies require training on a large volume of labeled images, which are often expensive and time-consuming to curate. To address this challenge, we present a method to support distant supervision of object detectors through the generation of synthetic pathology-present labeled images.
Approach: Our method employs the previously proposed cyclic generative adversarial network (cycleGAN) with two key innovations: (1) the use of "near-pair" pathology-present and pathology-absent regions drawn from similar locations in the same subject for training and (2) the addition of a realism metric (Fréchet inception distance, FID) to the generator loss term. We trained and tested this method with 2800 fracture-present and 2800 fracture-absent image patches from 704 unique pediatric chest radiographs. The trained model was then used to generate synthetic pathology-present images with exact knowledge of the location (labels) of the pathology. These synthetic images provided an augmented training set for an object detector.
Results: In an observer study, four pediatric radiologists graded a set of real fracture-absent, real fracture-present, and synthetic fracture-present images on a five-point Likert scale indicating the likelihood of a real fracture (1 = definitely not a fracture, 5 = definitely a fracture). The real fracture-absent images scored 1.7±1.0, real fracture-present images 4.1±1.2, and synthetic fracture-present images 2.5±1.2. An object detector (YOLOv5) trained on a mix of 500 real and 500 synthetic radiographs achieved a recall of 0.57±0.05 and an F2 score of 0.59±0.05. In comparison, when trained on only 500 real radiographs, the recall and F2 score were 0.49±0.06 and 0.53±0.06, respectively.
Conclusions: Our proposed method generates visually realistic pathology and improves object detector performance for the task of rib fracture detection.
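The realism term described above can be folded into the standard cycleGAN objective as an additional weighted loss. The sketch below illustrates one way to do this in PyTorch, under two explicit assumptions that are not from the paper: the covariance of the feature embeddings is treated as diagonal to keep the Fréchet distance cheap and differentiable, and the weights lambda_cyc and lambda_fid are illustrative placeholders rather than the authors' values.

```python
import torch

def diagonal_fid(feat_real: torch.Tensor, feat_fake: torch.Tensor) -> torch.Tensor:
    """Fréchet distance between Gaussians fitted to two (N, D) feature sets.
    Simplification (assumption): covariances are treated as diagonal, so the
    matrix-square-root cross term reduces to an elementwise square root."""
    mu_r, mu_f = feat_real.mean(dim=0), feat_fake.mean(dim=0)
    var_r, var_f = feat_real.var(dim=0), feat_fake.var(dim=0)
    mean_term = (mu_r - mu_f).pow(2).sum()
    cov_term = (var_r + var_f - 2 * (var_r * var_f).clamp(min=0).sqrt()).sum()
    return mean_term + cov_term

def generator_loss(adv_loss: torch.Tensor,
                   cycle_loss: torch.Tensor,
                   feat_real: torch.Tensor,
                   feat_fake: torch.Tensor,
                   lambda_cyc: float = 10.0,
                   lambda_fid: float = 1.0) -> torch.Tensor:
    """Standard cycleGAN generator objective (adversarial + cycle-consistency)
    plus the realism term described in the abstract. The feature tensors are
    assumed to come from a fixed, pretrained embedding network (e.g., Inception)."""
    return adv_loss + lambda_cyc * cycle_loss + lambda_fid * diagonal_fid(feat_real, feat_fake)
```

The diagonal assumption trades fidelity to the full FID definition for a simple, differentiable term; how the authors compute and weight the metric in practice is specified in the paper itself.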
Rib fractures are a sentinel injury for physical abuse in young children; when rib fractures are detected in young children, they are the result of child abuse in 80% to 100% of cases. Rib fractures can be challenging to detect on pediatric radiographs because they can be non-displaced, incomplete, superimposed over other structures, or oriented obliquely with respect to the detector. This work presents our efforts to develop an object detection method for rib fractures on pediatric chest radiographs. We propose a method, entitled "avalanche decision," motivated by the observation that pediatric patients with rib fractures commonly present with multiple fractures; in our dataset, 76% of patients with fractures had more than one fracture. The approach is applied at inference and uses a decision threshold that decreases as a function of the number of proposals that clear the current threshold, so that each accepted detection lowers the bar for subsequent detections. It was added to two leading single-stage detectors: RetinaNet and YOLOv5. These methods were trained and tested with our curated dataset of 704 pediatric chest radiographs, for which pediatric radiologists labeled fracture locations and achieved an expert reader-to-reader F2 score of 0.76. Comparing base RetinaNet to RetinaNet+Avalanche yielded F2 scores of 0.55 and 0.65, respectively; F2 scores of base YOLOv5 and YOLOv5+Avalanche were 0.58 and 0.65, respectively. The proposed avalanche inference approach provides increased recall and F2 scores over the standalone models.
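The avalanche decision is straightforward to express in code. The following is a minimal sketch, not the paper's exact rule: the geometric decay schedule (decay, min_thresh) and the base threshold are assumed values, standing in for whatever threshold function the authors define.

```python
import numpy as np

def avalanche_decision(scores, base_thresh=0.5, decay=0.85, min_thresh=0.2):
    """Keep detector proposals using a threshold that falls as proposals
    are accepted, reflecting that rib fractures tend to co-occur.

    scores: confidence scores for all proposals on one radiograph.
    Returns the accepted scores, highest first. The decay schedule is an
    illustrative assumption; the paper defines its own threshold function."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]  # descending
    kept, thresh = [], base_thresh
    for s in scores:
        if s < thresh:
            break  # scores are sorted, so nothing later can clear the bar
        kept.append(s)
        # Each accepted proposal lowers the threshold for the next one.
        thresh = max(min_thresh, thresh * decay)
    return kept

# Example: the third proposal (0.45) fails the base threshold of 0.5 but is
# accepted because two earlier detections have already lowered the bar.
print(avalanche_decision([0.9, 0.6, 0.45, 0.1]))  # [0.9, 0.6, 0.45]
```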
Surgical procedures often require the use of catheters, tubes, and lines, collectively called lines. Misplaced lines can cause serious complications, such as pneumothorax, cardiac perforation, or thrombosis. To prevent these problems, radiologists examine chest radiographs after insertion and throughout intensive care to evaluate line placement. This process is time consuming, and incorrect interpretations occur with notable frequency. Fast and reliable automatic interpretation could reduce the cost of these procedures, decrease the workload of radiologists, and improve the quality of care for patients. We developed a deep learning segmentation model that highlights the medically relevant lines in pediatric chest radiographs. We propose a two-stage segmentation network that first classifies whether an image contains medically relevant lines and then segments the images that do. For the segmentation stage, we use the popular U-Net architecture, substituting the encoder path with multiple state-of-the-art CNN encoders. Our study compares the performance of different permutations of model architectures for the task of highlighting lines in pediatric chest radiographs and demonstrates the effectiveness of the two-stage architecture.
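The two-stage design can be summarized as a classifier gating a segmenter. Below is a minimal PyTorch sketch; TwoStageLineSegmenter, the gate_thresh value, and the placeholder classifier and segmenter modules are all hypothetical names for illustration, with the encoders left abstract since the paper swaps several state-of-the-art CNN backbones into the U-Net encoder path.

```python
import torch
import torch.nn as nn

class TwoStageLineSegmenter(nn.Module):
    """Sketch of the two-stage pipeline: a binary classifier decides whether
    an image contains medically relevant lines, and a U-Net style segmenter
    produces the line mask only for images that pass the gate."""

    def __init__(self, classifier: nn.Module, segmenter: nn.Module,
                 gate_thresh: float = 0.5):
        super().__init__()
        self.classifier = classifier   # stage 1: returns (B, 1) logits
        self.segmenter = segmenter     # stage 2: returns (B, 1, H, W) logits
        self.gate_thresh = gate_thresh

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p_lines = torch.sigmoid(self.classifier(x))          # (B, 1)
        mask = torch.sigmoid(self.segmenter(x))              # (B, 1, H, W)
        # Inference-time gating: images judged line-free get an empty mask.
        gate = (p_lines >= self.gate_thresh).float().view(-1, 1, 1, 1)
        return mask * gate
```

In this sketch the two stages would be trained separately (the hard gate is not differentiable), which matches the classify-then-segment framing of the abstract; the specific encoders and training details are those compared in the study.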