Open Access
Applications of mixed reality with medical imaging for training and clinical practice
26 December 2024
Alexa R. Lauinger, Meagan McNicholas, Matthew Bramlet, Maria Bederson, Bradley P. Sutton, Caroline G. L. Cao, Irfan S. Ahmad, Carlos Brown, Shandra Jamison, Sarita Adve, John Vozenilek, Jim Rehg, Mark S. Cohen
Abstract

Purpose

This review summarizes the current use of extended reality (XR), including virtual reality (VR), mixed reality, and augmented reality (AR), in the medical field, ranging from medical imaging to training to preoperative planning. It covers the integration of these technologies into clinical practice and medical training while discussing the challenges and future opportunities in this sphere. We hope this will encourage more physicians to collaborate on integrating medicine and technology.

Approach

The review was written by experts in the field based on their knowledge and on recent publications exploring the topic of extended reality in medicine.

Results

Based on our findings, XR, including VR, mixed reality, and AR, is increasingly utilized within surgery, both for preoperative planning and intraoperative procedures. These technologies are also promising means of improving education at every level of physician training. However, there are still barriers to the widespread adoption of VR, mixed reality, and AR, including human factors, technological challenges, and regulatory issues.

Conclusions

Based on the current use of VR, mixed reality, and AR, it is likely that the use of these technologies will continue to grow over the next decade. To support the development and integration of XR into medicine, it is important for academic groups to collaborate with industrial groups and regulatory agencies in these endeavors. These joint projects will help address the current limitations and mutually benefit both fields.

1.

Introduction

Extended reality (XR) is an umbrella term that encompasses augmented reality (AR)—where digital images are projected onto the real world, mixed reality—where virtual objects are interacted with in the real world, and virtual reality (VR)—where the entire visual space is a digital environment. XR technologies have surged into multiple healthcare applications in recent years, ranging from mental health to surgery.1,2 This has been accelerated in many aspects through healthcare’s virtual transformation during the global coronavirus disease 2019 (COVID-19) pandemic.3–5

AR and VR have some similarities in that they both give users the chance to enhance their understanding of complex situations; however, they differ fundamentally in how people interact with them. VR allows for the creation of whole new worlds and provides a fully immersive experience that can be manipulated to the user’s needs. This lends itself well to medical simulation, taking users through procedural steps or planning complex procedures more effectively while also eliminating external distractions. AR, by contrast, allows physical and virtual entities to interact in real time by overlaying digital information onto the real-world environment. AR may be preferable to certain healthcare workers desiring visualization or added data content during procedural training on physical simulators (which add real-world haptic feedback) as well as during procedures or clinical exams and consultations on patients in the clinical environment. Both technologies require specialized hardware, such as headsets and handheld devices equipped with the necessary software and technical capabilities to create such environments or overlay digital images onto a real field.

In recent years, AR and VR technologies have unlocked innovative solutions to address variable patient anatomy, complex surgical procedures, and rare pathologies in many medical subspecialty fields that benefit from enhanced imaging for procedures and procedural planning. For example, vascular surgeons and neurosurgeons can use AR to assist in surgical planning and patient-specific practice prior to a patient’s surgery, which surgeons perceived to streamline workflow and improve surgical visualization with reduced radiation and contrast exposure.6 VR has rapidly emerged as a training tool for students and physicians at all stages of practice, although research appears to focus on applications in undergraduate and graduate medical training.7–9 VR offers a scaled opportunity for students and trainees to gain hands-on experience without any risk of patient harm. In addition, seasoned physicians can learn new surgical techniques and advance their mastery using this technology.10

AR applications through an overlay of imaging in the clinical and training environments are especially useful for procedural planning as well as during medical and surgical procedures. This has recently been shown to improve the efficacy and accuracy of surgical procedures overall, with the potential to improve patient safety via increased precision and reduced reoperation rates.11 Surgeons have reported improved cognitive and motor task performance along with reduced radiation exposure for patients while using AR; however, no differences in complication rates or need for revision surgery have been reported when using XR in the operating room.12,13 Additional applications have been in virtual consultations in the clinic, emergency room, and intensive care unit (ER/ICU). AR systems allow virtual collaboration between specialists and physicians in remote areas, where timely management of problems can impact outcomes when the right expertise can be brought to the bedside in real time. This was expanded to international tumor boards and grand rounds during the pandemic, where imaging and expertise were virtually displayed at the bedside for collaborative patient management and bedside teaching to remote students to reduce infection risk.14–19 This use of mixed reality with medical imaging also allows trainees to learn from physicians around the world and interact directly with world experts they may not otherwise have access to, increasing their medical knowledge and competency.15,16

Despite the many applications of medical imaging with AR and VR in the clinical and training environment, these technologies have not achieved the broad adoption in healthcare that was initially anticipated.17 A large part of this is due to challenges with image quality, large data transfers and latency, processing power, bulky headset designs that are difficult to use for extended periods, cybersickness, and challenges with patient motion and image overlay registration.20 In addition, the cost of these technologies; health data management, protection, and storage; and user interface/user experience learning curves have been barriers affecting adoption, scalability, and validation by physicians and healthcare systems.19 There are also regulatory limitations to integrating these technologies into medicine. In this article, we discuss several applications and areas of ongoing research, both at our institution and through broader academic-corporate-government partnerships, to overcome these use and adoption barriers and provide a broader future for the application of medical imaging through mixed reality in healthcare. We also acknowledge the importance of academic and industry collaboration to support this growing field.

In light of the expanding role AR and VR play in healthcare, this manuscript aims to contribute to the understanding of these technologies’ applications in medicine, particularly in medical imaging and devices, to discuss their advantages and limitations, and to offer insights into the potential future of AR and VR in healthcare.

2.

Applications of Preoperative Use for Extended Reality

Instead of a surgeon simply viewing contrast variations in several 2D views and inferring structures and their 3D relationships, XR applications require 3D models to be formed or segmented from the imaging data for the presentation of the structural anatomy of the patient. This requires either manual segmentation of the images or automated processing to extract the 3D model of structures and their positions relative to each other and to physical markers in AR applications. The main benefit of 3D modeling in pre-surgical planning is improved representations of complex pathology to the operator.21 This benefit can be attributed to two factors: patient selection and correct imaging protocols.
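
To make this extraction step concrete, the following is a minimal sketch (assuming a binary segmentation mask stored as NIfTI and the nibabel and scikit-image packages; the file name is hypothetical) of converting a segmented structure into the triangulated surface mesh an XR viewer would display:

```python
import numpy as np
import nibabel as nib
from skimage import measure

# Load a binary segmentation mask (1 = structure of interest); file name is hypothetical.
seg = nib.load("tumor_mask.nii.gz")
mask = seg.get_fdata() > 0.5
spacing = seg.header.get_zooms()[:3]  # voxel size in mm

# Marching cubes turns the voxel mask into a triangulated surface;
# passing the voxel spacing scales the vertices to millimeters.
verts, faces, _, _ = measure.marching_cubes(
    mask.astype(np.uint8), level=0.5, spacing=spacing
)
print(f"{verts.shape[0]} vertices, {faces.shape[0]} triangles")
```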

The first step is selecting a patient whose pathology is related to complex 3D geometries. Congenital heart disease, with its inherently 3D foundational pathology, emerged as the initial diagnosis to benefit from 3D modeling as viewed in VR.22 Surgical oncology is emerging as another area of focus due to the same 3D factors. As a large tumor displaces normal anatomy, pre-surgical analysis of the patient-specific 3D VR model achieves the same benefit seen in congenital heart disease: an improved 3D visualization of complex anatomic relationships.23 Other use cases remain to be investigated.

The second logistical component of pre-surgical 3D modeling for XR is idealized 3D imaging. Once a case is selected for 3D modeling, it is important to work with radiology to select the correct imaging modality and/or sequence parameters to generate an isotropic 3D imaging dataset. The process of creating a 3D model from a medical imaging dataset is dependent upon a clinical imaging diagnostician or algorithm being able to identify and segment out the anatomic segments slice by slice to generate a 3D model. If the imaging modality cannot differentiate between the clinically relevant tissues, it is the wrong modality. In addition, if the imaging modality can differentiate the relevant tissues, but the slices are not isotropic and are too thick, then the resulting segmentation will be insufficient in the z-dimension and inadequate for 3D model generation.24 Working with radiology from the moment of order can ensure both the correct imaging modality and resolution in three dimensions.
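
A minimal sketch of how such a dataset might be checked before segmentation is attempted, assuming SimpleITK and a hypothetical exported volume; resampling can regularize voxel spacing but cannot recover detail already lost to thick slices:

```python
import SimpleITK as sitk

img = sitk.ReadImage("series.nii.gz")  # hypothetical exported study
spacing = img.GetSpacing()             # (x, y, z) voxel size in mm
print("voxel spacing:", spacing)

# Thick-slice acquisitions (e.g., 0.7 x 0.7 x 5.0 mm) yield stair-stepped 3D models
# even when in-plane resolution is excellent.
if max(spacing) / min(spacing) > 1.5:
    print("warning: strongly anisotropic dataset; a 3D isotropic protocol is preferable")

# Resample to isotropic voxels as a fallback (interpolation only; no new information).
iso = min(spacing)
new_size = [int(round(sz * sp / iso)) for sz, sp in zip(img.GetSize(), spacing)]
resampled = sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkBSpline,
                          img.GetOrigin(), (iso, iso, iso), img.GetDirection(),
                          0.0, img.GetPixelID())
```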

2.1.

Case Identification

When selecting a patient with the potential for success in VR-based pre-surgical planning, one must consider the complexity of the 3D anatomic/pathologic representation. In our experience, complex congenital heart disease and surgical solid tumor cases represent a straightforward initial focus for this technology. Converting these anatomies into 3D models achieves two goals. First, 3D modeling improves the mental representation of the surgical anatomy as it will be encountered in the surgical field.25 Surgeons utilizing VR for surgical planning have independently stated: “I felt like I had been there before.” Long gone are the days of “I’ll figure it out when I get in there.”26 This improved mental representation leads to the next benefit: expertise-dependent situational awareness. Rather than utilizing working memory to marry the surgical field to the surgery conference presentation of 2D multi-modal imaging, the operator more efficiently orients the surgical field to the memorable landmarks of the 3D mental representation provided by the VR pre-surgical interaction.27 This long-term memory of the 3D anatomy effectively replaces what historically required years of operator experience to develop. Practically speaking, in hypertrophic obstructive cardiomyopathy, successful myectomy (surgically removing “just the right amount” of left ventricular outflow tract myocardial tissue) has been linked to years of expertise.28 An expert studies medical imaging much as a novice does, but when the expert opens the surgical field and encounters the anatomy in a real-life 3D view, they can draw on years of long-term memories of similar cases to execute the procedure optimally for the specific patient. The extraction and visualization of virtual 3D models can confer some of this expertise, which traditionally developed only through years of exposure to various cases, by making the operator more of an “expert” on the specific patient in front of them.29 In addition to improving the 3D mental representation, VR affords the opportunity to selectively visualize merged 3D imaging data, toggling layers such as brain tractography or functional data on and off. It also enables the presentation of non-traditional anatomic information in new visual formats. For example, converting stereo-electroencephalographic data into 4D digital animated models allows the operator to “see” a seizure propagate through the brain.30,31 Time-sequential 3D representations of anatomy and pathology are not achievable with traditional approaches such as 3D printing; in our opinion, this enables extended reality applications to unlock dynamic physiological visualizations.

3.

Logistics and Technical Aspects of Incorporating 3D Modeling into Preoperative Analysis

3.1.

3D Imaging Acquisition and Segmentation

With XR applications driving the need for extracting high-quality 3D virtual models of anatomy from imaging data, traditional radiology imaging protocols, which are focused on 2D, high-resolution in-plane views but thick slices, are challenged to adapt. The creation of patient-specific 3D virtual anatomy models involves segmenting 3D blocks of grayscale digital imaging and communications in medicine (DICOM) data into clinically relevant, tissue-specific, 3D digital segments of anatomy. In addition, once the focus of diagnostic imaging transitions from 2D representations to the extraction of 3D models, the extracted models themselves provide new opportunities in diagnostics and analytics.32 When one considers the diagnostic reasoning of radiologists, they utilize their understanding of human anatomy as it is represented (in multi-modal formats) on a 2D screen and look for deviations from the norm. This can only occur when there is a variation in signal intensity between two tissues. This contrast allows for the segmentation and mental representation of anatomy/pathology in the mind of the reading radiologist. When imaging fails to represent the variation between tissues in a visual format, it is rendered unsuitable. This limitation of medical imaging can be understood in the analysis of seizure foci or cardiac arrhythmia foci: in both instances, cellular dysfunction is the source of pathology and is not captured by current imaging techniques. However, when a trained diagnostic imaging expert can detect subtle differences in signal intensity on a medical image, they are able to “see” and interpret pathology that is not perceived by the general practitioner.33 This characteristic of expertise in medical imaging forms the foundation of one of the benefits of 3D modeling segmentation. As the radiology expert identifies segments of “like” tissue, slice by slice, and builds 3D models of the anatomy and pathology of like tissues, they are converting their subjective knowledge into objective 3D models. This commitment to an objectively defined 3D model represents a new ground truth by which all members of the treating team can evaluate imaging pathology. This objective rendering of anatomy and pathology from 3D DICOM grayscale blocks of data into 3D segmental puzzle pieces of patient-specific anatomy represents a significant benefit of 3D; however, extracting accurate 3D structures from the data also represents the main barrier to scaled deployment due to the need for isotropic imaging protocols and trained raters or algorithms to perform accurate segmentations. If the clinically relevant components can be seen on medical images, then they can be segmented into 3D models, provided the source data were acquired in a 3D near-isotropic format.

This tissue-specific segmentation is critical for 3D modeling in medicine. As radiologists translate 2D images into 3D mental anatomical models, they do so in a manner that reproduces their analysis of each 2D image in their mind’s eye. As the radiologist reviews an axial computed tomography (CT) slice through the chest, the radiologist can segment (in their mind’s eye) the various tissues of the lung parenchyma, airways, bone, myocardial tissue, contrast-filled vessels, muscle, fat, and skin. The radiologist can utilize the contrast and brightness of the image to help distinguish between various tissues. However, in VR, we are not aiming for new methods of visualizing data in a 2D cut-plane; rather, we seek superman-like “X-ray vision” of these 3D segments in a meaningful, clinically relevant viewing experience. Now, imagine this same 3D grayscale DICOM dataset in VR. If the contrast and brightness controls of a 2D image were exchanged for opacity and brightness, any global effort to transform the 3D DICOM dataset would fail to achieve the superman X-ray effect: as the skin, fat, and muscle signal intensity layers are washed away to reveal the internal anatomy, critical components of the internal anatomy are also rendered transparent because they share similar signal intensity characteristics with the external anatomy. Therefore, tissue-specific segmentation of the source images is required to link the expert categorization of the signal intensity of like tissues into separate 3D segmental models. This results in 3D segmented and labeled datasets that allow transparency control over segments that align with clinical decision making. As a result, in VR, the clinician can choose to vary the opacity of the skin, fat, muscle, and bone to reveal the internal anatomy in a clinically relevant manner, achieving a true 3D understanding of the anatomy. This objective segmentation by the expert grows in impact as the 2D tissue-differentiating qualities diminish.
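
To make the notion of tissue-specific segments with independently controlled opacity concrete, here is a minimal sketch using rough Hounsfield-unit thresholds on a hypothetical chest CT. The thresholds and opacity values are illustrative assumptions; in practice, expert or learned segmentation replaces simple thresholding:

```python
import nibabel as nib

ct = nib.load("chest_ct.nii.gz").get_fdata()  # hypothetical CT volume in Hounsfield units

# Approximate HU ranges; contrast phase and protocol shift these values,
# which is exactly why expert (or learned) segmentation outperforms raw thresholds.
tissue_masks = {
    "lung":   (ct > -950) & (ct < -300),
    "fat":    (ct > -120) & (ct < -60),
    "muscle": (ct > 10) & (ct < 80),
    "bone":   ct > 250,
}

# Per-segment opacity: the VR analog of 2D window/level, letting the clinician wash out
# skin, fat, and muscle while keeping bone and contrast-filled vessels visible.
opacity = {"lung": 0.15, "fat": 0.05, "muscle": 0.10, "bone": 0.90}
```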

Automated segmentation using machine learning/deep learning (DL) approaches has grown tremendously over the past 5 years, driven by open science data competitions to develop the best glioma segmentation algorithm, for example, see Refs. 20 and 34 and Fig. 1. These competitions have brought together large datasets for training segmentation algorithms, including heterogeneous imaging data from a variety of clinics and accompanying ground truth segmentations. A potential disadvantage of the readily available training data is that many algorithms do not explore the use of common preprocessing pipelines for imaging data, such as bias field correction, which can have a significant impact on segmentation accuracy.35
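
As an example of such preprocessing, here is a hedged sketch of N4 bias field correction with SimpleITK, following the library’s standard usage; file names are hypothetical and default filter parameters are used:

```python
import SimpleITK as sitk

img = sitk.ReadImage("t1w.nii.gz", sitk.sitkFloat32)  # hypothetical T1-weighted MR volume

# A rough foreground mask; Otsu thresholding is a common default for N4.
mask = sitk.OtsuThreshold(img, 0, 1, 200)

# Estimate and remove the smoothly varying intensity bias field before segmentation.
corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrected = corrector.Execute(img, mask)
sitk.WriteImage(corrected, "t1w_n4.nii.gz")
```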

Fig. 1

Glioma patient segmentation examples and VR visualization. (a) Segmentation of the tumor into edematous, enhancing, and necrotic regions. (b) Visualization of ML-algorithm auto-segmentation of the tumor core. (c) Gold-standard manual segmentation of the tumor core of the same patient for comparison. (d) VR visualization of a patient with a large glioma, showing gray matter (in pink), ventricles (in blue), and tumor (in yellow).


Segmentation needs to include both the pathology of interest and the underlying anatomy. All relevant features of the patient anatomy must be extracted from the images as objects, with separate segmentations required for tissue types, pathological regions, bones, blood vessels, and other separate anatomical structures, for example, see the 3D models in Figs. 1 and 2.

Fig. 2

VR experience of a congenital heart surgeon using a Vive VR headset to review a complex heart prior to surgery.


3.2.

Medical Imaging Optimized

First, the imaging datasets need to be acquired in a suitable near-isotropic resolution 3D format, and second, the imaging should be selected for clinical delineation of anatomic and pathologic features. In our experience, CT imaging is frequently adequate, but due to historical constraints of digital storage, many CT datasets were resampled and compressed, and critical information was not saved. With the transition of the imaging team’s mindset to enabling 3D modeling, these cases are now stored at high isotropic resolution. Magnetic resonance imaging (MRI) presents a different dynamic and frequently excels at delineating variance in soft tissue characteristics. However, again, due to historical practice patterns, thick 2D imaging stacks in three planes (axial, sagittal, and coronal) are typically acquired rather than an equivalent near-isotropic 3D dataset. As the data come to be used for VR and for automated segmentation and diagnostics, the imaging protocols must adapt. Fortunately, with recent advances in imaging speed enabled by parallel imaging and compressed sensing, transitioning to 3D imaging can occur without sacrificing existing diagnostic imaging efforts or extending the imaging session.36

3.3.

Registration for Augmented Reality

For merging information into 3D models from multiple medical imaging modalities, such as CT and MRI, there are many image registration methods using both standard alignment algorithms, with optimization functions such as mutual information37 for merging grayscale information from disparate modalities, and machine learning/DL approaches.38 However, a more challenging registration is required for using the 3D models interactively in AR in the surgical suite. Image registration algorithms are needed to align the 3D volumetric data with the surface of the 3D object and the view angle of the operator or a depth indicator from the surgical instrument or other navigation device. This process requires knowing the position of the patient’s relevant anatomy in the camera’s field of view in the operating room and merging in the patient’s grayscale imaging data or 3D volumetric models in that same coordinate system, all while providing the observer the correct projection of this complex and layered information relative to their viewing position. This can be done with fiducial markers (such as a small object affixed to the patient that is imaged across multiple modalities) carefully placed and localized to establish correspondence between the 3D model from imaging and the visual space of the operator. More recently, markerless registration uses features of the patient’s anatomy as the basis for registration. We also note that the locations of surgical instruments or other intervention devices may also need to be tracked and that this location information must be merged with the 3D model anatomical information to provide real-time surgical navigation or incorporation of optical probe information. These approaches seek to enable sensitive surgical procedures to be performed through a laparoscopic intervention but require algorithms that can handle registrations with limited information and take into account the deformable structure of the tissues.39 Although automated registration and deformation tracking algorithms are drastically improving in accuracy, manual adjustment of registration that takes into account local landmarks must often be done to ensure alignment.40 Other technologies, including ultrasound and electromagnetic probes, are also being developed and improved upon that may aid in AR registration prior to surgery.41,42 Although not previously used with AR, placement of the SaviScout radar system prior to lumpectomy has shown the ability to provide wireless localization of breast tumors.43
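
As a minimal sketch of the fiducial-based step described above, the least-squares rigid transform (the Kabsch/Procrustes solution) that maps fiducials localized in the imaging frame onto the same fiducials localized by an operating-room tracker can be computed as follows; the coordinates are invented for illustration:

```python
import numpy as np

def rigid_fiducial_registration(model_pts, world_pts):
    """Least-squares rotation R and translation t mapping model-frame fiducials
    (from imaging) onto the same fiducials measured in the tracker/camera frame."""
    model_c = model_pts - model_pts.mean(axis=0)
    world_c = world_pts - world_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(model_c.T @ world_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = world_pts.mean(axis=0) - R @ model_pts.mean(axis=0)
    return R, t

# Hypothetical fiducials (mm): at least three non-collinear points in each frame.
model = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30]], dtype=float)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)  # 90 deg about z
world = model @ true_R.T + np.array([100.0, 20.0, -5.0])

R, t = rigid_fiducial_registration(model, world)
fre = np.linalg.norm((model @ R.T + t) - world, axis=1).mean()  # fiducial registration error
print(f"mean FRE: {fre:.6f} mm")
```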

AR-based image overlays during surgery merge high-resolution pre-surgical anatomical information with the interactive procedure. Surgical interventions on soft tissues result in deformations of those tissues, and this displacement must be accounted for during the procedure to maintain the accuracy of the AR overlay. In addition to the intervention itself, differences in patient orientation relative to gravity between the presurgical scan and the surgical positioning may also need to be addressed.44 This can be done by performing rigid registrations followed by finite element method (FEM)-based deformation estimations.45,46 Often, fiducial points on the surface of the skin are used to provide an initial correspondence with pre-surgery imaging data and to drive updates of soft tissue deformation on the surface to guide FEM-based deformation estimates.47 These systems have become very accurate, with current systems showing positioning errors of less than a few millimeters, enabling more reliance on the registered imaging data during the intervention.48 Surgical AR provides unprecedented views into the patient with a minimal footprint, often being incorporated into a holographic headset. This technology is set to change surgical practice as several companies, such as Medivis and Mediview, have recently achieved Food and Drug Administration (FDA) approval.
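
The FEM pipelines cited above are beyond a short example, but the sketch below illustrates the general idea of driving a deformation estimate from sparse surface correspondences, using SciPy’s thin-plate-spline interpolator as a simplified stand-in for FEM; all coordinates are invented for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical skin fiducials (mm): positions in the preoperative scan versus their
# intraoperative positions after patient positioning and exposure of the field.
pre_fids = np.array([[0, 0, 0], [60, 0, 0], [0, 60, 0], [60, 60, 0], [30, 30, 5]], float)
intra_fids = pre_fids + np.array(
    [[0, 0, 0], [1, 0, -2], [0, 2, -1], [1, 1, -3], [2, 1, -4]], float)

# Fit a smooth displacement field from the sparse surface correspondences.
displacement = RBFInterpolator(pre_fids, intra_fids - pre_fids,
                               kernel="thin_plate_spline")

# Warp preoperative model vertices (e.g., a tumor surface) toward the estimated
# intraoperative configuration for the AR overlay.
tumor_verts = np.array([[30.0, 30.0, -20.0], [35.0, 28.0, -25.0]])
warped = tumor_verts + displacement(tumor_verts)
print(warped)
```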

3.4.

Considerations for 3D Visualization

3.4.1.

3D model optimization

When 3D modeling for medical decision making first emerged, it was in physical 3D-printed formats, which required engineering expertise to take a segmented model and optimize it for 3D printing output. Now, with VR, the optimization of the 3D digital segmented models is different. In VR, optimization of the 3D model involves applying visual skins that more closely approximate the anatomy than the grayscale-derived data represent and decimating the model to remove unnecessary faces/vertices, reducing the data file to a manageable size that most VR solutions can render.
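
A minimal sketch of the decimation step, assuming the Open3D package and a hypothetical mesh exported from the segmentation pipeline; the triangle budget is an illustrative choice rather than a recommendation:

```python
import open3d as o3d

# Hypothetical segmented-anatomy mesh exported as PLY (STL/OBJ work equally well).
mesh = o3d.io.read_triangle_mesh("heart_segment.ply")
mesh.compute_vertex_normals()
print(f"original: {len(mesh.triangles)} triangles")

# Quadric decimation: shrink the triangle count so the model renders comfortably
# on a standalone headset while preserving overall shape.
decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
decimated.compute_vertex_normals()
o3d.io.write_triangle_mesh("heart_segment_decimated.ply", decimated)
```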

3.4.2.

VR formatting

Ultimately, all the above efforts must result in the practitioner interacting with the 3D model in a digital 3D-stereoscopic format that enables the superman “X-ray” view experience. In our experience, these are the key features needed to achieve the above impact.

  • 1. 3D model permanence: the best mental representation may occur when the practitioner can interact with the model by standing over the digital representation and walking around the model rather than sitting down and moving the model relative to the user.49

  • 2. In AR, lighting and visualization of 3D objects are different from VR. AR utilizes a pure transparency model in which brightness is favored, but as an object darkens, it moves toward transparency. Compared with VR, the visual representations of an object are therefore more limited.

  • 3. Ability for the clinical practitioner to vary the opacity of the separate 3D segments of anatomy. This capability is analogous to the 2D contrast/brightness control of DICOM imaging. The practitioner, particularly the surgeon, needs to pre-create the surgical field in 3D prior to the actual surgery.50 In instances of solid organs, such as the brain, the opacity of the various structures needs to be controlled by the practitioner for the individual to relate to the 3D anatomy prior to surgery. Lighting, color, object size, and opacity are all critical interacting factors for developing and improving users’ depth perception in mixed reality.51

  • 4. Time-sequential animation: whether evaluating a 4D beating heart or the propagation of a seizure through the brain, the ability to translate time-sequential 3D models into VR remains challenging, requiring new standards and creation tools (a minimal sketch of per-phase surface extraction follows this list).52
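
As referenced in item 4, the following is a minimal sketch of per-phase surface extraction from a hypothetical 4D (x, y, z, time) segmentation, reusing the marching-cubes approach sketched earlier; packaging the resulting frame sequence for a particular VR runtime remains tool-specific:

```python
import nibabel as nib
from skimage import measure

# Hypothetical 4D cardiac dataset: a blood-pool segmentation mask per cardiac phase.
seq = nib.load("blood_pool_4d.nii.gz")
data = seq.get_fdata()
spacing = seq.header.get_zooms()[:3]

# One surface mesh per phase; a VR runtime can then play the frames back
# to animate the beating heart.
frames = []
for t in range(data.shape[3]):
    verts, faces, _, _ = measure.marching_cubes(
        (data[..., t] > 0.5).astype(float), level=0.5, spacing=spacing
    )
    frames.append((verts, faces))
print(f"extracted {len(frames)} animation frames")
```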

3.5.

Use of Artificial Intelligence and Deep Learning with Mixed Reality Imaging

Artificial intelligence (AI) and DL have a significant role to play in enabling the high-dimensionality information management that has to happen for a VR/XR system to integrate with patient medical imaging data. First, turning the patient-specific anatomy into actionable objects requires precise segmentation of medical imaging data into 3D structures, capturing both healthy tissue and lesions. Specific segmentation algorithms have been trained using 2D and 3D convolutional neural networks (CNNs) within the framework of a U-Net53 to identify specific anatomy and pathology in the bones of the hand,54 parcellations of cortical and subcortical structures of the brain, and the myocardium and blood pool of the heart,55,56 among many other specific algorithms in a rapidly growing research area. The need for specific segmentation tools for each task, trained with a particular task-specific set of training data such as cardiac magnetic resonance (MR) with a particular pulse sequence, is a significant limitation, as the right tool must be chosen for application to a particular imaging task. To counter this and enable a broader use of automated segmentation algorithms, multimodal and multiorgan segmentation tools have been constructed from CNNs to handle a variety of tissue types57 and pathology,58–60 with some algorithms merging diverse contrasts from multiple imaging modalities.61 In contrast to task-specific segmentation, there are other general image segmentation algorithms that have been applied to medical images, such as the Segment Anything Model,62 which will segment structures in generic images and generate labels of different structures without prior training data specific to that data type. These models have shown some promise in medical imaging data,62 with a significant advantage in not requiring large amounts of manually labeled training data. However, improvements are needed to surpass the performance of specially designed segmentations for a particular task. Recent algorithms, such as nnU-Net and TotalSegmentator,63,64 represent an excellent tradeoff between robustness and serving multiple tasks with a high degree of accuracy by bundling adaptive algorithms with smart preprocessing workflows. With TotalSegmentator, CT data can be segmented into 104 anatomical structures automatically. An important aspect of this tool is that extensive training data are made available to help users test their own algorithms in comparison and to provide a full description of the diverse training data to enable users to understand the range of data that the algorithm was trained on.
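
For illustration, here is a hedged sketch of invoking TotalSegmentator through its Python API (the totalsegmentator pip package); the exact function signature and options may differ between releases, and the file paths are hypothetical:

```python
# Assumes: pip install totalsegmentator (which pulls in its own pretrained weights).
from totalsegmentator.python_api import totalsegmentator

# Segment a CT volume into the tool's anatomical structure set; each labeled output
# can then be meshed (e.g., via marching cubes) for VR/AR review.
totalsegmentator("chest_ct.nii.gz", "segmentations/")
```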

Further roles for AI in VR/XR will be developed to enable the clinician to interact with a broad set of data, including normative models, predictive progression models, and data from recent literature, all while viewing the high-dimensional data of a specific patient and discussing the case with an AI assistant. Similar to segmentation tasks, the integration of AI into these tasks may be driven by the rapidly developing medically focused large language models.65–67

4.

Intraoperative Application for Extended Reality

Apart from utilizing XR for preoperative preparation, these technologies have also begun to be integrated into intraoperative work for overlaying patient-specific images and identifying key structures. Intraoperative AR techniques utilize preoperative MRI or CT scans and an external reference point on the patient to overlay images and indicate different structures during the procedure. Neurosurgery and orthopedics are two of the surgical subspecialties that have most commonly started integrating these technologies, and studies have demonstrated improved depth perception, better placement of devices, and uninterrupted line of sight during surgery.68 Physicians using these technologies have reported easy usability and improved motor task performance with minimal disruptions during surgery.13 However, these benefits have not translated to objective improvements in patient outcomes in these studies: there were no differences in complication rates or need for revision surgery between surgeries utilizing XR and those that did not.12,13 This technology has also not yet been widely applied across case types. AR technologies have mostly been applied to open surgery as opposed to endoscopic procedures, and due to the limited use in endoscopic cases, further studies are required to understand the impact of AR integration in endoscopic work.69 There are also limitations to the use of AR due to the high costs of implementing it in operating rooms and the additional training necessary to integrate these technologies into practice successfully. A potential step toward improving the integration of technology in medicine is to introduce these platforms early in a medical professional’s career, alongside learning traditional surgical approaches and techniques. There has been a recent push for medical schools to combine the forefront of innovation with traditional medical skills.70 The benefit is to supplement learning while still allowing trainees to gain the skills necessary to complete tasks without these technologies, especially in cases of device failure or lack of access to them.

5.

Use of Extended Reality for Medical Education

Medical education is an ever-changing field with the ultimate goal of training physicians using techniques that are engaging, safe, and effective. To incorporate a more engaging and safe learning environment, medical programs have begun integrating VR and AR teaching techniques into their curricula.71 Utilization of XR allows for repeated practice without adverse effects and can be applied to a range of medical disciplines.

VR is mainly used in medical education to create a 3D computer-generated environment with which the student can interact. It allows the user to practice medical skills, view 3D anatomy, and simulate surgery and surgical planning, among other medical therapies and interventions.18 Mixed reality simulation has been shown to have a positive impact on medical education.61,62,65 Computer-based clinical simulations can expand healthcare students’ clinical experience by providing practice-based learning.10 This type of learning can be repeated as many times as the user requires and can serve as an immersive collaborative platform that can sustain multiple learners if needed. A 2011 article by L. Rogers showed that learners using a multi-user, VR-generated clinical simulation were able to engage with the VR world and develop problem-solving skills in a collaborative environment.72 The social and interpersonal connection demonstrated in that study suggests that such connections can be sustained within a computer-generated environment. In a more recent publication, VR was used in medical education to teach empathy.73 Dyer et al.73 used VR in a scenario where the healthcare learner interacted with patients with age-related diseases. They found that students had an improved understanding of age-related disease and increased empathy toward patients based on pre- and post-assessments.

The use of AR and VR in undergraduate medical education has ranged from anatomic learning sessions to simulated patient encounters to assessment of clinical skills. The use of these technologies also increases access to these learning sessions, such as by replacing cadavers for anatomy labs and allowing for a diverse standardized patient population.74,75 When compared with traditional education resources, VR-based medical training in undergraduate medical education settings was reported to be more engaging and enjoyable for participants.76 This may demonstrate a desire by students to integrate these new and exciting technologies into their education. Students at Carle Illinois College of Medicine reported a preference for a VR anatomy session over other learning resources.77 In addition, the use of VR in these educational sessions resulted in similar or improved outcomes on knowledge-based exams.78,79

AR, VR, and mixed reality technologies are increasingly being integrated into surgical residency training to enhance education, skill development, and patient outcomes. These technologies offer innovative ways to simulate surgical scenarios, provide immersive learning experiences, and improve surgical techniques, collectively contributing to a more comprehensive surgical residency training experience.80 This supports the integration of VR and AR technologies alongside traditional training programs. These technologies have also been applied to introduce trainees to rare scenarios that they may not otherwise encounter.81,82 Finally, as residency programs transition to competency-based curricula, more standardized forms of assessment are needed. XR-based simulations can be incorporated into core surgery procedures in conjunction with validated assessment tools that would allow residency programs to track trainee competence more objectively on predetermined tasks.83

In addition to incorporating AR and VR in core training years, these technologies have the potential to be utilized for continuing medical education for physicians at all levels. They have most commonly been recommended to aid with simulation sessions for hospitalists.84,85 Several programs have introduced VR and AR to enhance medical education, and research into how best to implement these sessions is still growing. Similar to in-person teaching simulations, studies have recommended utilizing introduction and debrief sections to orient the participants.84 Overall, XR can be applied across different scenarios and training levels, and future research will reveal the best way to incorporate it into medical education.

The barriers to including XR in medical training are the costs of implementing the systems, limited access, and physical effects, such as nausea and vertigo, that can result from prolonged use of these systems. In graduate medical education, there has been limited evidence supporting improved outcomes when mixed reality is utilized in the medical setting, which may be a consequence of the infancy of this field. These challenges are discussed in further detail below, but overall, additional research is required to demonstrate the utility of these technologies.

6.

Barriers and Opportunities of Extended Reality in Medicine

6.1.

Human-Centered System Design

As is true for most technology design, and especially so for medical technology, usability and safety issues are critical for eventual clearance or approval by the FDA. Adoption of the technology by end-users also depends on whether the new technology disrupts workflow or creates additional cognitive and/or physical workload. To increase the likelihood of success, medical innovation must satisfy the "product-market fit" criterion: the technology addresses a real need or pain point, and customers are willing to pay for it. Human-centered approaches to design and development ensure that system constraints and complex relationships in the work environment are made explicit and that metrics for assessment and validation can be defined. Until valid assessment tools are created and applied to demonstrate the value of XR, there is a barrier to the wide adoption of this technology in the medical space.

6.2.

Technological Challenges

This paper describes the ongoing evolution of XR in medical imaging and related applications but also lays out a large untapped potential limited by current technologies. A challenge with XR is that there is no one silver bullet that will address the technological barriers today. The accuracy and realism needed for real-time AR registration and 3D reconstruction in the described scenarios require significant algorithmic advances in computer vision and graphics, driven by both classical and recent, more transformative, AI techniques. Due to the accuracy required in medical procedures, these systems must be accurate within millimeters and provide realistic representations of the procedure that are not achieved by today’s vision transformers. For user comfort, these compute-intensive algorithms must run on much smaller form-factor headsets and wearables than we have today, without the need for tethering to large PCs. This requires either highly energy-efficient sensing and computing on the headset with long battery life or efficient offloading of the computation to larger servers through wireless networking. The former is made challenging by barriers to further reducing the atomic-scale dimensions of current semiconductor devices (commonly referred to as the end of the era of Moore’s law and Dennard scaling). The latter is challenging due to the stringent latency requirements (especially for AR, e.g., <5 ms motion-to-photon latency) and the high bandwidth needed for transmitting rich reconstructions in real time. Current network technologies are a key limitation specifically for telesurgery and remote medicine due to their limited reliability and high latency. Commercial systems have barely scratched the surface of new modalities, such as haptics, that would provide a major leap in the realism of the user experience for medical application scenarios. Accompanying all of the above is the lack of consideration for security and privacy in XR systems today.86 Finally, there is little work on the assessment and benchmarking of end-to-end XR systems in general and for medical applications in particular. A large missing component is the science of the design and evaluation of XR systems. Although some of these challenges are common to all XR systems today, some are particularly acute in the medical domain (e.g., the need for provably high registration accuracy, reconstruction fidelity, and privacy). Recent projects such as ILLIXR (Illinois Extended Reality testbed) and ARENA (Augmented Reality Edge Networking Architecture) have begun to provide open-source community infrastructures to address some of the above challenges by enabling rapid prototyping of research technologies and end-to-end system benchmarking and assessment.87–89
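
A back-of-envelope sketch of why these bandwidth and latency constraints are so demanding; all numbers are illustrative assumptions rather than measured system requirements:

```python
# Streaming a dense intraoperative surface reconstruction without compression.
vertices = 500_000             # assumed reconstruction density
bytes_per_vertex = 12 + 12     # float32 position + float32 normal
fps = 60
bandwidth_gbps = vertices * bytes_per_vertex * fps * 8 / 1e9
print(f"~{bandwidth_gbps:.1f} Gbit/s uncompressed")  # about 5.8 Gbit/s

# A 5 ms motion-to-photon budget leaves little slack once network and rendering are paid for.
budget_ms, network_rtt_ms, render_ms = 5.0, 2.0, 2.0
print(f"remaining for tracking and display scan-out: {budget_ms - network_rtt_ms - render_ms:.1f} ms")
```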

6.3.

Regulatory Issues

The Medical Extended Reality (MXR) domain faces major regulatory science and technology gaps and challenges, which have also been identified by the Medical XR Program at the FDA’s Center for Devices and Radiological Health. This program conducts research to ensure the safety and effectiveness of innovative XR-based medical devices, tackling various issues, among them: (i) MXR platforms lack characterization and evaluation methods, mainly for critical medical applications such as interventional procedures, surgery, and rehabilitation; (ii) consumer-grade sensors employed in MXR platforms, including accelerometers, inertial measurement units, gyroscopes, magnetometers, and cameras, lack validation for use in clinical contexts; and (iii) there is a dearth of usability assessment tools, such as cognitive load measurement, for these devices. These tools are critical for enhancing device safety and efficacy for surgical and diagnostic applications.90

The Medical XR Program bridges the knowledge gaps by undertaking research inquiries related to the design, development, and assessment of novel MXR devices and their applications,88 thus forming a scientific foundation for developing a regulatory framework pertaining to innovative MXR devices.91

6.4.

Industry-University Collaboration

Engagement of academia with industry, national labs, and governmental agencies is critical to the successful translation of XR applications in the biomedical domain. Examples of evolving innovative partnerships include the Center for Medical Innovations in Extended Reality (MIXR) and IMMERSE-Center for Immersive Computing at Illinois. Addressing these challenges will require interdisciplinary collaborations between academic and industry researchers in the various underlying technologies, medical domain experts, and human factors researchers to assess these systems from a human-centric viewpoint. Although such efforts are challenging in themselves, in Illinois, we have recently engaged in the MIXR Center, a National Science Foundation Industry-University Cooperative Research Center initially developed in 2022 between the University of Michigan (Dr. Mark Cohen), the University of Maryland (Dr. Amitabh Varshney), and Maryland Shock Trauma Center (Dr. Sarah Murthi), and have launched the campus-wide Center for Immersive Computing (IMMERSE).

MIXR’s mission is to advance the global democratization of XR for improving health and training, to recruit the next generation of a more diverse XR workforce, and to bring together computer scientists, engineers, and healthcare providers with access to diverse patient populations to advance medical XR.92 As an industry-academic partnership, it has brought together companies such as Microsoft, Sony, and Magic Leap, along with the FDA, to discuss the future opportunities of XR in healthcare as well as how industry-academic partnerships can solve barriers to the adoption and scalability of these technologies in healthcare. Illinois’ expertise in this domain has led to ongoing collaborations with the MIXR Center with a goal of expanding its capabilities and impact.

IMMERSE brings together campus strengths in XR technologies, the College of Medicine, and human factors.93 We are combining research in sensing, hardware acceleration, network protocols, computer vision, graphics, generative AI, and frameworks for human-centric assessment. We are building end-to-end testbeds to prototype and evaluate these new techniques and to build new benchmarking methodologies, with the goal of accelerating and democratizing the development and adoption of XR in the medical domain.

7.

Conclusions

Mixed reality, including VR and AR, is a growing field across multiple career spaces and applications. In medicine, these technologies are being used in clinical care for planning and performing procedures and in medical education for training at every level of a physician’s career. We expect the use of XR within the medical field to grow exponentially in the next decade with improvements in the capacity of the technology and understanding of its benefits; however, there are still barriers to the use of these technologies that must be addressed. Most notably, human factors, technological challenges, and regulatory issues need to be further studied in future work. In addition, further research must demonstrate a clear improvement in patient outcomes associated with mixed reality use. Collaboration between academic centers and industry groups will be a key aspect of developing the use of AR and mixed reality in the medical field. With this in mind, programs such as IMMERSE and MIXR will help foster a new training and practicing environment that integrates these advanced technologies into medicine.

Disclosures

Matthew Bramlet is a co-founder of Enduvo, Inc.

Mark Cohen is an advisor to GigXR, Inc., and WideawakeVR, Inc.

No other authors have any disclosures.

Code and Data Availability

All data in support of the findings of this paper are available within the article.

References

1. 

A. Wiebe et al., “Virtual reality in the diagnostic and therapy for mental disorders: a systematic review,” Clin. Psychol. Rev., 98 102213 https://doi.org/10.1016/j.cpr.2022.102213 CPSRDZ (2022). Google Scholar

2. 

J. T. Verhey et al., “Virtual, augmented, and mixed reality applications in orthopedic surgery,” Int. J. Med. Robot., 16 (2), e2067 https://doi.org/10.1002/rcs.2067 (2020). Google Scholar

3. 

P. R. Swiatek et al., “COVID-19 and the rise of virtual medicine in spine surgery: a worldwide study,” Eur. Spine J., 30 (8), 2133 –2142 https://doi.org/10.1007/s00586-020-06714-y (2021). Google Scholar

4. 

S. Chandra et al., “Zooming‐out COVID‐19: virtual clinical experiences in an emergency medicine clerkship,” Med. Educ., 54 (12), 1182 –1183 https://doi.org/10.1111/medu.14266 0308-0110 (2020). Google Scholar

5. 

J. K. Dhaliwal et al., “Expansion of telehealth in primary care during the COVID-19 pandemic: benefits and barriers,” J. Amer. Assoc. Nurse Pract., 34 (2), 224 https://doi.org/10.1097/JXX.0000000000000626 (2022). Google Scholar

6. 

S. Condino et al., “Bioengineering, augmented reality, and robotic surgery in vascular surgery: a literature review,” Front. Surg., 9 966118 https://doi.org/10.3389/fsurg.2022.966118 (2022). Google Scholar

7. 

S. Chawla et al., “Evaluation of simulation models in neurosurgical training according to face, content, and construct validity: a systematic review,” Acta Neurochir., 164 (4), 947 –966 https://doi.org/10.1007/s00701-021-05003-x (2022). Google Scholar

8. 

Q. Wu et al., “Virtual simulation in undergraduate medical education: a scoping review of recent practice,” Front. Med., 9 855403 https://doi.org/10.3389/fmed.2022.855403 FMBEEQ (2022). Google Scholar

9. 

D. Kanschik et al., “Virtual and augmented reality in intensive care medicine: a systematic review,” Ann. Intensive Care, 13 81 https://doi.org/10.1186/s13613-023-01176-z (2023). Google Scholar

10. 

J. Pottle, “Virtual reality and the transformation of medical education,” Future Healthc. J., 6 (3), 181 –185 https://doi.org/10.7861/fhj.2019-0036 (2019). Google Scholar

11. 

C. Dennler et al., “Augmented reality in the operating room: a clinical feasibility study,” BMC Musculoskelet. Disord., 22 (1), 451 https://doi.org/10.1186/s12891-021-04339-w (2021). Google Scholar

12. 

A. Arjomandi Rad et al., “Extended, virtual and augmented reality in thoracic surgery: a systematic review,” Interact. Cardiovasc. Thorac. Surg., 34 (2), 201 –211 https://doi.org/10.1093/icvts/ivab241 (2021). Google Scholar

13. 

K. Móga et al., “Augmented or mixed reality enhanced head-mounted display navigation for in vivo spine surgery: a systematic review of clinical outcomes,” J. Clin. Med., 12 (11), 3788 https://doi.org/10.3390/jcm12113788 (2023). Google Scholar

14. 

W. S. Khor et al., “Augmented and virtual reality in surgery—the digital surgical environment: applications, limitations and legal pitfalls,” Ann. Transl. Med., 4 (23), 454 https://doi.org/10.21037/atm.2016.12.23 (2016). Google Scholar

15. 

A. P. Mahajan et al., “International mixed reality immersive experience: approach via surgical grand rounds,” J. Amer. Coll. Surg., 234 (1), 25 https://doi.org/10.1016/j.jamcollsurg.2021.09.011 JACSEX 1072-7515 (2022). Google Scholar

16. 

A. Salavitabar et al., “Extended reality international grand rounds: an innovative approach to medical education in the pandemic era,” Acad. Med., 97 (7), 1017 https://doi.org/10.1097/ACM.0000000000004636 ACMEEO 1040-2446 (2022). Google Scholar

17. 

P. Parekh et al., “Systematic review and meta-analysis of augmented reality in medicine, retail, and games,” Vis. Comput. Ind. Biomed. Art, 3 (1), 21 https://doi.org/10.1186/s42492-020-00057-7 (2020). Google Scholar

18. 

T. Baniasadi, S. M. Ayyoubzadeh and N. Mohammadzadeh, “Challenges and practical considerations in applying virtual reality in medical education and treatment,” Oman Med. J., 35 (3), e125 https://doi.org/10.5001/omj.2020.43 (2020). Google Scholar

19. 

M. Venkatesan et al., “Virtual and augmented reality for biomedical applications,” Cell Rep. Med., 2 (7), 100348 https://doi.org/10.1016/j.xcrm.2021.100348 (2021). Google Scholar

20. 

B. H. Menze et al., “The multimodal brain tumor image segmentation benchmark (BRATS),” IEEE Trans. Med. Imaging, 34 (10), 1993 –2024 https://doi.org/10.1109/TMI.2014.2377694 ITMID4 0278-0062 (2015). Google Scholar

21. 

M. C. Betancourt et al., “The quantitative impact of using 3D printed anatomical models for surgical planning optimization: literature review,” 3D Print. Addit. Manuf., 10 (5), 1130 –1139 https://doi.org/10.1089/3dp.2021.0188 (2023). Google Scholar

22. 

J. Awori et al., “3D models improve understanding of congenital heart disease,” 3D Print. Med., 7 26 https://doi.org/10.1186/s41205-021-00115-7 (2021). Google Scholar

23. 

V. Lyuksemburg et al., “Virtual reality for preoperative planning in complex surgical oncology: a single-center experience,” J. Surg. Res., 291 546 –556 https://doi.org/10.1016/j.jss.2023.07.001 JSGRA2 0022-4804 (2023). Google Scholar

24. 

M. Fogarasi, J. C. Coburn and B. Ripley, “Algorithms used in medical image segmentation for 3D printing and how to understand and quantify their performance,” 3D Print. Med., 8 (1), 18 https://doi.org/10.1186/s41205-022-00145-9 (2022). Google Scholar

25. 

J. Cornejo et al., “Anatomical engineering and 3D printing for surgery and medical devices: international review and future exponential innovations,” BioMed. Res. Int., 2022 6797745 https://doi.org/10.1155/2022/6797745 (2022). Google Scholar

26. 

M. Bramlet et al., “Virtual reality for preoperative surgical planning in complex pediatric oncology,” J. Laparoendosc. Adv. Surg. Tech. A, 34 (9), 861 –865 https://doi.org/10.1089/lap.2023.0039 (2024). Google Scholar

27. 

C. Craig et al., “Using cognitive task analysis to identify critical decisions in the laparoscopic environment,” Hum. Factors, 54 (6), 1025 –1039 https://doi.org/10.1177/0018720812448393 HUFAA6 0018-7208 (2012). Google Scholar

28. 

S. N. Yu et al., “Importance of surgical expertise in septal myectomy for obstructive hypertrophic cardiomyopathy,” Gen. Thorac. Cardiovasc. Surg., 68 (10), 1094 –1100 https://doi.org/10.1007/s11748-020-01320-7 (2020). Google Scholar

29. 

J. L. Hermsen et al., “Scan, plan, print, practice, perform: development and use of a patient-specific 3-dimensional printed model in adult cardiac surgery,” J. Thorac. Cardiovasc. Surg., 153 (1), 132 –140 https://doi.org/10.1016/j.jtcvs.2016.08.007 JTCSAQ 0022-5223 (2017). Google Scholar

30. 

J. L. Evans et al., “SEEG4D: a tool for 4D visualization of stereoelectroencephalography data,” Front. Neuroinform., 18, 1465231 (2024). https://doi.org/10.3389/fninf.2024.1465231

31. C. J. Paschall et al., “An immersive virtual reality platform integrating human ECOG & sEEG: implementation & noise analysis,” in 44th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), 3105–3110 (2022). https://doi.org/10.1109/EMBC48229.2022.9871754

33. R. N. Bryan et al., “Medical image analysis: human and machine,” Acad. Radiol., 27(1), 76–81 (2020). https://doi.org/10.1016/j.acra.2019.09.011

34. F. Kofler et al., “BraTS toolkit: translating BraTS brain tumor segmentation algorithms into clinical and scientific practice,” Front. Neurosci., 14, 125 (2020). https://doi.org/10.3389/fnins.2020.00125

35. Y. Wang et al., “Fully automatic segmentation of 4D MRI for cardiac functional measurements,” Med. Phys., 46(1), 180–189 (2019). https://doi.org/10.1002/mp.13245

36. S. Mönch et al., “Magnetic resonance imaging of the brain using compressed sensing – quality assessment in daily clinical routine,” Clin. Neuroradiol., 30(2), 279–286 (2020). https://doi.org/10.1007/s00062-019-00789-x

37. F. Maes et al., “Multimodality image registration by maximization of mutual information,” IEEE Trans. Med. Imaging, 16(2), 187–198 (1997). https://doi.org/10.1109/42.563664

38. Y. Fu et al., “Deep learning in medical image registration: a review,” Phys. Med. Biol., 65(20), 20TR01 (2020). https://doi.org/10.1088/1361-6560/ab843e

39. X. Zhang et al., “A markerless automatic deformable registration framework for augmented reality navigation of laparoscopy partial nephrectomy,” Int. J. Comput. Assist. Radiol. Surg., 14(8), 1285–1294 (2019). https://doi.org/10.1007/s11548-019-01974-6

40. R. G. Louis et al., “Early experience with virtual and synchronized augmented reality platform for preoperative planning and intraoperative navigation: a case series,” Oper. Neurosurg., 21(4), 189–196 (2021). https://doi.org/10.1093/ons/opab188

41. X. Liu et al., “Laparoscopic stereoscopic augmented reality: toward a clinically viable electromagnetic tracking solution,” J. Med. Imaging, 3(4), 045001 (2016). https://doi.org/10.1117/1.JMI.3.4.045001

42. L. Ma et al., “Augmented reality navigation with ultrasound-assisted point cloud registration for percutaneous ablation of liver tumors,” Int. J. Comput. Assist. Radiol. Surg., 17(9), 1543–1552 (2022). https://doi.org/10.1007/s11548-022-02671-7

43. G. R. Vijayaraghavan et al., “Savi-Scout radar localization: transitioning from the traditional wire localization to wireless technology for surgical guidance at lumpectomies,” Semin. Ultrasound CT MR, 44(1), 12–17 (2023). https://doi.org/10.1053/j.sult.2022.10.004

44. M. I. Miga et al., “Model-updated image guidance: initial clinical experiences with gravity-induced brain deformation,” IEEE Trans. Med. Imaging, 18(10), 866–874 (1999). https://doi.org/10.1109/42.811265

45. D. M. Cash et al., “Compensating for intraoperative soft-tissue deformations using incomplete surface data and finite elements,” IEEE Trans. Med. Imaging, 24(11), 1479–1491 (2005). https://doi.org/10.1109/TMI.2005.855434

46. M. I. Miga et al., “Modeling of retraction and resection for intraoperative updating of images,” Neurosurgery, 49(1), 75 (2001). https://doi.org/10.1097/00006123-200107000-00012

47. W. L. Richey et al., “Soft tissue monitoring of the surgical field: detection and tracking of breast surface deformations,” IEEE Trans. Biomed. Eng., 70(7), 2002–2012 (2023). https://doi.org/10.1109/TBME.2022.3233909

48. L. Ma et al., “Visualization, registration and tracking techniques for augmented reality guided surgery: a review,” Phys. Med. Biol., 68(4), 1–40 (2023). https://doi.org/10.1088/1361-6560/acaf23

49. C. Roy et al., “Did it move? Humans use spatio-temporal landmark permanency efficiently for navigation,” J. Exp. Psychol. Gen., 152(2), 448–463 (2023). https://doi.org/10.1037/xge0001279

50. S. A. Sabri and P. J. York, “Preoperative planning for intraoperative navigation guidance,” Ann. Transl. Med., 9(1), 87 (2021). https://doi.org/10.21037/atm-20-1369

51. J. Ping et al., “Effects of shading model and opacity on depth perception in optical see-through augmented reality,” J. Soc. Inf. Disp., 28(11), 892–904 (2020). https://doi.org/10.1002/jsid.947

52. M. Bindschadler et al., “HEARTBEAT4D: an open-source toolbox for turning 4D cardiac CT into VR/AR,” J. Digit. Imaging, 35(6), 1759–1767 (2022). https://doi.org/10.1007/s10278-022-00659-y

53. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation” (2015).

54. B. Kayalibay, G. Jensen, and P. van der Smagt, “CNN-based segmentation of medical imaging data” (2017).

55. C. Chen et al., “Deep learning for cardiac image segmentation: a review,” Front. Cardiovasc. Med., 7, 25 (2020). https://doi.org/10.3389/fcvm.2020.00025

56. L. Henschel et al., “FastSurfer – a fast and accurate deep learning based neuroimaging pipeline,” NeuroImage, 219, 117012 (2020). https://doi.org/10.1016/j.neuroimage.2020.117012

57. P. Moeskops et al., “Deep learning for multi-task medical image segmentation in multiple modalities,” Lect. Notes Comput. Sci., 9901, 478–486 (2016). https://doi.org/10.1007/978-3-319-46723-8_55

58. J. Ma and X. Yang, “Automatic brain tumor segmentation by exploring the multi-modality complementary information and cascaded 3D lightweight CNNs,” Lect. Notes Comput. Sci., 11384, 25–36 (2019). https://doi.org/10.1007/978-3-030-11726-9_3

59. Y.-X. Zhao, Y.-M. Zhang, and C.-L. Liu, “Bag of tricks for 3D MRI brain tumor segmentation,” Lect. Notes Comput. Sci., 11992, 210–220 (2020). https://doi.org/10.1007/978-3-030-46640-4_20

60. C. Duncan et al., “Some new tricks for deep glioma segmentation,” Lect. Notes Comput. Sci., 12659, 320–330 (2021). https://doi.org/10.1007/978-3-030-72087-2_28

61. Z. Guo et al., “Deep learning-based image segmentation on multimodal medical imaging,” IEEE Trans. Radiat. Plasma Med. Sci., 3(2), 162–169 (2019). https://doi.org/10.1109/TRPMS.2018.2890359

62. S. He et al., “Computer-vision benchmark segment-anything model (SAM) in medical images: accuracy in 12 datasets” (2023).

63. J. Wasserthal et al., “TotalSegmentator: robust segmentation of 104 anatomical structures in CT images,” Radiol. Artif. Intell., 5(5), e230024 (2023). https://doi.org/10.1148/ryai.230024

64. F. Isensee et al., “nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation,” Nat. Methods, 18(2), 203–211 (2021). https://doi.org/10.1038/s41592-020-01008-z

65. R. Yang et al., “Large language models in health care: development, applications, and challenges,” Health Care Sci., 2(4), 255–263 (2023). https://doi.org/10.1002/hcs2.61

66. E. Loh, “ChatGPT and generative AI chatbots: challenges and opportunities for science, medicine and medical leaders,” BMJ Lead., 8(1), 1–10 (2024). https://doi.org/10.1136/leader-2023-000797

67. K. Singhal et al., “Med-PaLM: a medical large language model – Google Research,” https://sites.research.google/med-palm/

68. J. Zhang, V. Lu, and V. Khanduja, “The impact of extended reality on surgery: a scoping review,” Int. Orthop., 47(3), 611–621 (2023). https://doi.org/10.1007/s00264-022-05663-z

69. T. C. Steineke and D. Barbery, “Extended reality platform for minimally invasive endoscopic evacuation of deep-seated intracerebral hemorrhage: illustrative case,” J. Neurosurg. Case Lessons, 4(12), CASE21390 (2022). https://doi.org/10.3171/CASE21390

70. K. D. Overton, O. Coiado, and E. T. Hsiao-Wecksler, “Exploring the intersection of engineering and medicine through a neuroscience challenge laboratory,” Med. Sci. Educ., 32(6), 1481–1486 (2022). https://doi.org/10.1007/s40670-022-01676-w

71. V. R. Curran et al., “Use of extended reality in medical education: an integrative review,” Med. Sci. Educ., 33(1), 275–286 (2023). https://doi.org/10.1007/s40670-022-01698-4

72. L. Rogers, “Developing simulations in multi-user virtual environments to enhance healthcare education,” Br. J. Educ. Technol., 42(4), 608–615 (2011). https://doi.org/10.1111/j.1467-8535.2010.01057.x

73. E. Dyer, B. J. Swartzlander, and M. R. Gugliucci, “Using virtual reality in medical education to teach empathy,” J. Med. Libr. Assoc., 106(4) (2018). https://doi.org/10.5195/jmla.2018.518

74. I. Miltykh et al., “A new dimension in medical education: virtual reality in anatomy during COVID-19 pandemic,” Clin. Anat., 36(7), 1007–1015 (2023). https://doi.org/10.1002/ca.24098

75. C. A. Elzie and J. Shaia, “A pilot study of the impact of virtually embodying a patient with a terminal illness,” Med. Sci. Educ., 31(2), 665–675 (2021). https://doi.org/10.1007/s40670-021-01243-9

76. S. Barteit et al., “Augmented, mixed, and virtual reality-based head-mounted devices for medical education: systematic review,” JMIR Serious Games, 9(3), e29080 (2021). https://doi.org/10.2196/29080

77. R. Galvez et al., “Use of virtual reality to educate undergraduate medical students on cardiac peripheral and collateral circulation,” Med. Sci. Educ., 31(1), 19–22 (2020). https://doi.org/10.1007/s40670-020-01104-x

78. H. S. Maresky et al., “Virtual reality and cardiac anatomy: exploring immersive three-dimensional cardiac imaging, a pilot study in undergraduate medical anatomy education,” Clin. Anat., 32(2), 238–243 (2019). https://doi.org/10.1002/ca.23292

79. D. T. Nicholson et al., “Can virtual reality improve anatomy education? A randomised controlled study of a computer-generated three-dimensional anatomical ear model,” Med. Educ., 40(11), 1081–1087 (2006). https://doi.org/10.1111/j.1365-2929.2006.02611.x

80. M. S. Shafarenko et al., “The role of augmented reality in the next phase of surgical education,” Plast. Reconstr. Surg. Glob. Open, 10(11), e4656 (2022). https://doi.org/10.1097/GOX.0000000000004656

81. P. M. Singh, M. Kaur, and A. Trikha, “Virtual reality in anesthesia ‘simulation’,” Anesth. Essays Res., 6(2), 134–139 (2012). https://doi.org/10.4103/0259-1162.108289

82. C. G. Corrêa et al., “Virtual reality simulator for dental anesthesia training in the inferior alveolar nerve block,” J. Appl. Oral Sci., 25(4), 357–366 (2017). https://doi.org/10.1590/1678-7757-2016-0386

83. G. W. Burnett et al., “Survey of regional anesthesiology fellowship directors in the USA on the use of simulation in regional anesthesiology training,” Reg. Anesth. Pain Med., 44(11), 986–989 (2019). https://doi.org/10.1136/rapm-2019-100719

84. J. H. Hepps, C. E. Yu, and S. Calaman, “Simulation in medical education for the hospitalist,” Pediatr. Clin. North Am., 66(4), 855–866 (2019). https://doi.org/10.1016/j.pcl.2019.03.014

85. J. O. Lopreiato and T. Sawyer, “Simulation-based medical education in pediatrics,” Acad. Pediatr., 15(2), 134–142 (2015). https://doi.org/10.1016/j.acap.2014.10.010

86. D. Cayir et al., “Augmenting security and privacy in the virtual realm: an analysis of extended reality devices,” https://www.computer.org/csdl/magazine/sp/2024/01/10339392/1SBLbYamDfi

87. N. Pereira et al., “ARENA: the augmented reality edge networking architecture,” in IEEE Int. Symp. Mixed and Augmented Reality (ISMAR), 479–488 (2021). https://doi.org/10.1109/ISMAR52148.2021.00065

88. M. Huzaifa et al., “ILLIXR: an open testbed to enable extended reality systems research,” IEEE Micro, 42(4), 97–106 (2022). https://doi.org/10.1109/MM.2022.3161018

89. “ILLIXR consortium,” https://illixr.org (2024).

91. B. Sutton et al., “Extended reality (XR) and the erosion of anonymity and privacy,” https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9619999

92. A. Varshney et al., “Center for medical innovations in extended reality,” https://www.mixrcenter.org (2024).

93. S. Adve et al., “IMMERSE: center for immersive computing,” https://ws.engr.illinois.edu/sitemanager/getfile.asp?id=6261 (2024).

Biography

Alexa R. Lauinger is a second-year medical student at Carle Illinois College of Medicine. She received her undergraduate degree in biology from the California Institute of Technology in 2020. She has previous experience with research in neuroscience, neurosurgery, and medical education.

Meagan McNicholas is a third-year medical student at Carle Illinois College of Medicine. She received her undergraduate degree in nuclear, plasma, and radiological engineering from the University of Illinois Urbana-Champaign in 2020. She has previous experience with research in cardiothoracic surgery, aortic disease, general surgery, and medical training and education.

Matthew Bramlet is an associate professor of clinical pediatrics at the University of Illinois College of Medicine. He is the director of the congenital cardiac MRI program at the Children’s Hospital of Illinois and the lead investigator of the Advanced Imaging and Modeling (AIM) Lab at Jump Simulation in Peoria, Illinois, which sponsors the NIH 3D Heart Library. He co-developed VR authoring software through his lab at the University of Illinois and co-founded Enduvo, Inc. His research interests are in scaled VR technologies and automated segmentation solutions.

Maria Bederson is a third-year medical student in the Carle Illinois College of Medicine at the University of Illinois Urbana-Champaign. She is also a recipient of an Illinois Space Grant Consortium (ISGC) fellowship, a NASA-funded award.

Bradley P. Sutton is a professor of bioengineering at the University of Illinois Urbana-Champaign. He received his BS degree from the University of Illinois Urbana-Champaign in 1998 and his MS and PhD degrees in biomedical engineering from the University of Michigan in 2001 and 2003. He is the author of more than 150 journal papers and nine patents associated with magnetic resonance imaging technologies, with his research focused on new acquisition, image reconstruction, and analysis approaches.

Caroline G. L. Cao is a professor of industrial and systems engineering at the Grainger College of Engineering, a professor of biomedical and translational sciences, and the director of engineering innovation and medical simulation at the Carle Illinois College of Medicine at the University of Illinois Urbana-Champaign. She is an expert in human factors engineering, specializing in skills acquisition and performance evaluation. She is a recipient of the National Science Foundation CAREER Award and a US Fulbright Scholar award.

Irfan S. Ahmad is an assistant dean for research and the executive director of the Health Maker Lab at the Carle Illinois College of Medicine. He is also a research faculty member in the Department of Biomedical and Translational Sciences and the Holonyak Micro and Nanotechnology Laboratory at the Grainger College of Engineering at the University of Illinois Urbana-Champaign. His research interests are in bionanotechnology, nanomedicine, and sensors.

Carlos Brown is a board-certified emergency medicine physician at Carle Health and a clinical associate professor at the Carle Illinois College of Medicine at the University of Illinois Urbana-Champaign. He is an expert on point-of-care ultrasound technology.

Shandra Jamison is the operations manager in the JUMP simulation center at the Carle Illinois College of Medicine at the University of Illinois Urbana-Champaign.

Sarita Adve is the Richard T. Cheng Professor of Computer Science at the University of Illinois Urbana-Champaign. Her research interests are in computer architecture and systems, parallel computing, and power- and reliability-aware systems.

John Vozenilek is the chief medical officer of the Jump Trading Simulation and Education Center and the Duane and Mary Cullinan Endowed Professor for Simulation Outcomes at the University of Illinois College of Medicine.

Jim Rehg is a Founder Professor of computer science and industrial and enterprise systems engineering at the University of Illinois Urbana-Champaign.

Mark S. Cohen is the dean of the Carle Illinois College of Medicine and senior vice president and chief academic officer for Carle Health. He is a practicing surgical oncologist and endocrine surgeon and a tenured professor of surgery and biomedical and translational sciences at the Carle Illinois College of Medicine. For his work in mixed reality applications in medical education, he was awarded the 2019 Distinguished Faculty Award for Innovation and the 2021 Provost Award for Innovation in Education at the University of Michigan. He also founded five companies in the digital health, medical device, and medical therapeutics sectors and has mentored over 350 students and faculty on innovation projects and startups.

© 2024 Society of Photo-Optical Instrumentation Engineers (SPIE)
Alexa R. Lauinger, Meagan McNicholas, Matthew Bramlet, Maria Bederson, Bradley P. Sutton, Caroline G. L. Cao, Irfan S. Ahmad, Carlos Brown, Shandra Jamison, Sarita Adve, John Vozenilek, Jim Rehg, and Mark S. Cohen "Applications of mixed reality with medical imaging for training and clinical practice," Journal of Medical Imaging 11(6), 062608 (26 December 2024). https://doi.org/10.1117/1.JMI.11.6.062608
Received: 12 September 2023; Accepted: 1 December 2024; Published: 26 December 2024
KEYWORDS: 3D modeling, virtual reality, education and training, augmented reality, image segmentation, anatomy, 3D image processing
