Intraoperative optical coherence tomography (iOCT) enables volumetric imaging of surgical maneuvers. While previous studies have demonstrated the utility of iOCT for verifying completion of surgical goals, images were acquired over static fields-of-view (FOVs) or required manual tracking of regions-of-interest (ROIs). The lack of automated instrument tracking remains a critical barrier to real-time surgical feedback and iOCT-guided surgery. Previously presented approaches include active stereo-vision-based instrument tracking, which was limited to imaging in the anterior segment, and instrument tracking using volumetric OCT data, which was limited by OCT acquisition speeds and the fundamental trade-off between sampling density and FOV. We previously presented spectrally encoded coherence tomography and reflectometry (SECTR), which provides simultaneous imaging of spatiotemporally coregistered orthogonal planes (en face spectrally encoded reflectometry (SER) and cross-sectional OCT) at several gigapixels per second. Here, we demonstrate automated surgical instrument tracking and adaptive sampling of OCT using a combination of deep learning and SECTR. A GPU-accelerated deep neural network was trained on SER images for detection and localization of 25G internal limiting membrane (ILM) forceps at up to 50 Hz. Positional information was used to acquire adaptively sampled SER frames and OCT volumes, which were densely sampled at the instrument tip and sparsely sampled elsewhere to retain tracking features over a large FOV. We believe this method overcomes critical barriers to clinical translation of iOCT and offers advantages over previous approaches by 1) reducing the instrument-tracking problem to 2D space, which is more efficient than 3D tracking or pose estimation and allows direct leveraging of recent advances in computer-vision software and hardware, and 2) decoupling tracking speed and performance from OCT system and acquisition parameters.
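To illustrate the 2D tracking stage, the following is a minimal sketch of running a GPU-accelerated detection network on individual en face SER frames to localize the forceps tip. The model architecture, weights file, output format, and confidence threshold are illustrative assumptions, not the trained network described above.

```python
# Hypothetical sketch: per-frame instrument-tip localization on 2D SER images.
# The weights path, output convention [x, y, confidence], and threshold are
# assumptions for illustration, not the authors' exact pipeline.
import numpy as np
import torch

def load_detector(weights_path: str, device: str = "cuda") -> torch.nn.Module:
    """Load a TorchScript detection model onto the GPU (placeholder path)."""
    model = torch.jit.load(weights_path, map_location=device)
    model.eval()
    return model

@torch.no_grad()
def localize_tip(model: torch.nn.Module, frame: np.ndarray, device: str = "cuda"):
    """Return (x, y) pixel coordinates of the forceps tip in one SER frame,
    or None if no confident detection is found."""
    # Normalize an 8-bit grayscale frame and add batch/channel dimensions.
    t = torch.from_numpy(frame).float().div_(255.0)[None, None].to(device)
    x_norm, y_norm, conf = model(t).squeeze().tolist()
    if conf < 0.5:  # reject low-confidence detections (threshold is assumed)
        return None
    h, w = frame.shape
    return x_norm * w, y_norm * h
```

Because detection operates on 2D en face frames rather than OCT volumes, the per-frame cost is independent of OCT volume size, which is what decouples tracking speed from OCT acquisition parameters.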
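The adaptive-sampling step can likewise be sketched as allocating a fixed A-scan budget between a dense window centered on the tracked tip and sparse coverage of the remaining FOV. The window size, dense/sparse split, and scan dimensions below are illustrative assumptions, not the system's actual scan-table generation.

```python
# Hedged sketch: adaptive fast-axis sampling for one B-scan. A fixed A-scan
# budget is split so the region around the tracked tip is densely sampled
# while the rest of the FOV is sparsely sampled to retain tracking features.
import numpy as np

def adaptive_scan_positions(tip_x: float, fov: float = 10.0,
                            n_scans: int = 1000, window: float = 2.0,
                            dense_frac: float = 0.7) -> np.ndarray:
    """Return sorted fast-axis positions (mm) for one adaptively sampled line."""
    half = window / 2.0
    lo = float(np.clip(tip_x - half, 0.0, fov - window))  # keep window in FOV
    hi = lo + window
    n_dense = int(n_scans * dense_frac)
    n_sparse = n_scans - n_dense
    dense = np.linspace(lo, hi, n_dense)                   # dense at the tip
    left = np.linspace(0.0, lo, n_sparse // 2, endpoint=False)
    right = np.linspace(hi, fov, n_sparse - n_sparse // 2)
    return np.sort(np.concatenate([left, dense, right]))

# Example: tip tracked at 6.3 mm across a 10 mm FOV.
positions = adaptive_scan_positions(tip_x=6.3)
```

In this sketch the total number of A-scans per frame stays constant, so acquisition time is unchanged while sampling density at the instrument tip increases at the expense of the periphery.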