Phantom evaluation of novel image-guided surgery techniques enables low-cost, rapid design iteration prior to (or in place of) studies requiring biological specimens. This work introduces a geometrically accurate anatomical phantom for simulating surgical exposure of structures in the upper urinary tract. After segmenting the CT scan of a representative study subject, the kidneys, ureters, and associated vasculature were cast in silicone and loaded with barium to facilitate segmentation of phantom tomography. This deformable silicone model was secured to a rigid replication of the spine and retroperitoneal musculature. An acrylic housing was designed to mimic the abdominal cavity and was filled with wool batting to simulate occluding adipose tissue. Initial evaluation on CT showed good subjective correspondence with clinical tomography, as well as physiologically relevant kidney displacement of 25–30 mm between orientations, with the ability to produce larger displacements as would be expected from intraoperative manipulation. Stereoendoscopic views of partially occluded structures during simulated dissection with the da Vinci S and Xi Surgical Systems show promise for use in developing and validating image guidance tools for surgical exposure in the abdomen.
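The reported 25–30 mm inter-orientation displacement can be quantified by comparing kidney centroids extracted from the barium-enhanced segmentations of each phantom scan. The sketch below is a minimal illustration of that measurement, assuming the segmentations are available as binary voxel masks in a shared physical frame (e.g., after aligning the scans on the rigid spine); the function names and interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kidney_centroid_mm(mask, spacing_mm, origin_mm=(0.0, 0.0, 0.0)):
    """Centroid of a binary kidney segmentation in physical (mm) coordinates.

    mask       : 3D boolean/integer array from a segmented phantom CT.
    spacing_mm : per-axis voxel spacing, ordered to match the array axes.
    origin_mm  : physical position of voxel (0, 0, 0).
    """
    idx = np.argwhere(mask > 0)                # voxel indices inside the kidney mask
    centroid_vox = idx.mean(axis=0)            # mean index along each array axis
    return np.asarray(origin_mm) + centroid_vox * np.asarray(spacing_mm)

def displacement_between_orientations(mask_a, mask_b, spacing_mm):
    """Euclidean distance (mm) between kidney centroids in two phantom orientations."""
    return np.linalg.norm(kidney_centroid_mm(mask_a, spacing_mm)
                          - kidney_centroid_mm(mask_b, spacing_mm))
```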
Image guidance for abdominal procedures requires an anatomical model capable of representing significant displacement and deformation of relevant tissues in a computationally efficient manner. This work evaluates the suitability of statistical shape modeling for representing key structures in robot-assisted laparoscopic partial nephrectomy (RALPN), both individually and as a multi-body composite model. Tomography obtained from subjects in an ongoing RALPN study was used to produce surface model representations of the kidneys, abdominal aorta, and inferior vena cava. Each structure was resliced and remeshed in a standardized fashion to allow extraction of the principal modes of variation. Reduced-parameter representations of the example structures based on the strongest eigenmodes indicate that an average RMSE below 5 mm can be achieved with four parameters for the individual models and eight parameters for the four-body composite model. The magnitude of centroid displacements observed under the principal modes of variation is consistent with literature-reported values, suggesting that this approach may be suitable for image guidance in RALPN.
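A point distribution model of this kind reduces, at its core, to principal component analysis over corresponded surface points. The sketch below illustrates how the reduced-parameter representation and the reported average RMSE could be computed, assuming the standardized reslicing and remeshing has already produced point-to-point correspondence across subjects; the array shapes and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def build_ssm(shapes):
    """Build a point distribution model from corresponded surface meshes.

    shapes : array of shape (n_subjects, n_points, 3); point i corresponds
             across subjects after the standardized remeshing step.
    Returns the mean shape vector and the eigenmodes ordered by explained variance.
    """
    X = shapes.reshape(len(shapes), -1)        # flatten each mesh to one row vector
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt                            # rows of Vt are the principal modes

def mean_reconstruction_rmse(shapes, mean, modes, n_modes):
    """Average per-subject RMSE (in mesh units, e.g. mm) when each input shape
    is represented by only its first n_modes mode weights."""
    X = shapes.reshape(len(shapes), -1)
    P = modes[:n_modes]                        # (n_modes, 3 * n_points)
    W = (X - mean) @ P.T                       # per-subject reduced parameters
    recon = mean + W @ P                       # back-project to full vertex space
    err = (X - recon).reshape(len(shapes), -1, 3)
    per_subject_rmse = np.sqrt((np.linalg.norm(err, axis=2) ** 2).mean(axis=1))
    return per_subject_rmse.mean()
```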
Although robotic instrumentation has revolutionized manipulation in oncologic laparoscopy, there remains a significant need for image guidance during the exposure portion of certain abdominal procedures. The high degree of mobility and potential for deformation associated with abdominal organs and related structures poses a significant challenge to implementing image-based navigation for the initial phase of robot-assisted laparoscopic partial nephrectomy (RALPN). This work introduces two key elements of a RALPN exposure simulation framework: a model for laparoscopic exposure and a compact representation of anatomical geometry suitable for integration into a statistical estimation framework. Data to drive the exposure simulation were collected during a clinical RALPN case in which the robotic endoscope was tracked in six degrees of freedom. An initial rigid registration was performed between a preoperative CT scan and the frame of the optical tracker, allowing the endoscope trajectory to be replayed over tomography to simulate anatomical observations with realistic kinematics. CT data from five study subjects were combined with four publicly available datasets to produce a mean kidney shape. This template kidney was fit back to each of the input models by optimally tuning a set of eight parameters, achieving an average RMSE of 2.18 mm. These developments represent important steps toward a full, clinically relevant framework for simulating organ exposure and testing navigation algorithms. In future work, a particle filter estimation scheme will be integrated into the simulation to incrementally optimize correspondences between parametric anatomical models and simulated or reconstructed endoscopic observations.
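The trajectory replay described above amounts to composing each tracked endoscope pose with the inverse of the CT-to-tracker registration, so that the recorded kinematics can be rendered against the tomography. The following is a minimal sketch under that assumption; the transform naming convention and helper functions are illustrative, and a real system would also need to account for the calibration between the tracked frame and the camera's optical frame.

```python
import numpy as np

def replay_endoscope_in_ct(T_tracker_from_ct, tracked_poses):
    """Express a recorded endoscope trajectory in the preoperative CT frame.

    T_tracker_from_ct : 4x4 rigid transform from the initial registration of the
                        CT scan to the optical tracker's coordinate frame.
    tracked_poses     : iterable of 4x4 endoscope poses T_tracker_from_scope,
                        one per tracked sample during the clinical case.
    Returns a list of 4x4 poses T_ct_from_scope that can be replayed over the
    tomography to simulate observations with realistic kinematics.
    """
    T_ct_from_tracker = np.linalg.inv(T_tracker_from_ct)
    return [T_ct_from_tracker @ T for T in tracked_poses]

def camera_center_and_axis(T_ct_from_scope):
    """Optical center (translation) and viewing axis (third rotation column)
    of the virtual camera at one replayed pose, in CT coordinates."""
    return T_ct_from_scope[:3, 3], T_ct_from_scope[:3, 2]
```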
Despite a number of recent advances in robot-assisted surgery, achieving minimal access still requires that surgeons operate with reduced faculties for perception and manipulation as compared to open surgery. Image guidance shows promise for enhancing perception during local navigation (e.g. near occluded endophytic tumors), and we hypothesize that these methods can be extended to address the global navigation problem of efficiently locating and exposing a target organ and its associated anatomical structures. In this work we describe the high-level architecture of an augmented reality system for guiding access to abdominal organs in laparoscopic and robot-assisted procedures, and demonstrate the applicability of an array of assimilation algorithms through proof-of-concept simulation. Under the proposed framework, a coarse model of procedure-specific internal anatomy is initialized based on segmented pre-operative imaging. The model is rigidly registered to the patient at the time of trocar placement, then non-rigidly updated in an incremental manner during the access phase of surgery based on limited views of relevant anatomical structures as they are exposed. Observations are assumed to derive primarily from reconstruction of stereoscopic imaging; however, the assimilation framework provides a means of incorporating measurements made with other sensing modalities. Simulation results show that standard state estimation algorithms are suitable for accommodating large-scale displacement and deformation of the observed feature configuration relative to the initial model. Future work will include development of a suitable 3D model of anatomical structures involved in partial nephrectomy as well as provision for leveraging intraoperative dynamics in the assimilation framework.
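To make the assimilation step concrete, the sketch below shows the simplest member of the family of state estimators the abstract alludes to: a linear Kalman-style update that incrementally refines the anatomy model parameters from a partial view of the structures exposed so far. This is an illustrative assumption about the framework's form rather than the authors' implementation; nonlinear or particle-based variants would likely be needed for large non-rigid deformations.

```python
import numpy as np

def incremental_update(x, P, z, H, R):
    """One linear assimilation step for the coarse anatomy model.

    x : current estimate of the model parameters (e.g. pose and deformation modes).
    P : covariance of that estimate.
    z : new measurement, e.g. 3D positions of the few structures currently
        exposed and reconstructed from stereoscopic imaging.
    H : observation matrix mapping model parameters to the measured quantities.
    R : measurement noise covariance of the reconstruction.
    """
    y = z - H @ x                              # innovation from the partial view
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y                          # updated parameter estimate
    P_new = (np.eye(len(x)) - K @ H) @ P       # updated uncertainty
    return x_new, P_new
```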