Robotic and endoscopic surgery is increasingly used in clinical practice and typically relies on stereoscopic vision to enable 3D visualization of the surgical field. We combined this capability with a FLIm acquisition system suitable for the identification of tumor tissue to generate a 3D map of the surgical field that comprises both FLIm and white-light image information. This result is achieved using semi-global matching and a deep stereo matching neural network. In addition to generating a 3D model of the surgical cavity, this approach leads to a more realistic rendering of FLIm maps by including tissue shading.
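For orientation, the depth reconstruction that both semi-global matching and a learned matcher feed is the textbook triangulation relation for a rectified stereo pair; this is the standard formula, not a detail taken from the abstract. With focal length $f$, stereo baseline $B$, and per-pixel disparity $d$,

$$ Z = \frac{f\,B}{d}, $$

so larger disparities correspond to tissue closer to the stereo endoscope.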
KEYWORDS: Visualization, Augmented reality, Tumors, Luminescence, Tissues, Surgery, Information visualization, Data acquisition, Real time imaging, Navigation systems
Real-time visualization of imaging data constitutes a critical part of surgical workflow. Augmented reality (AR) is a promising tool to assist conventional surgical navigation systems. We have been developing an AR framework for clinical imaging and guidance using an optical see-through head-mounted display (OST-HMD) and fluorescence lifetime imaging (FLIm) instrumentation. This framework supports in vivo scanning of FLIm data and the real-time visualization of diagnostic information overlaid on the interrogated tissue area. With the high discriminative power of FLIm, our FLIm-AR concept has the potential for indicating tumor margins and assisting with tumor excision surgery.
An important step in establishing the diagnostic potential of emerging optical imaging techniques is accurate registration between imaging data and the corresponding tissue histopathology typically used as the gold standard in clinical diagnostics. We present a method to precisely register data acquired with a point-scanning spectroscopic imaging technique from fresh surgical tissue specimen blocks with corresponding histological sections. Using a visible aiming beam to localize point-scanning multispectral time-resolved fluorescence spectroscopy measurements on video images, we evaluate two different feature types for the registration with histology: fiducial markers created with a 405-nm CW laser and the tissue block's outer shape characteristics. We compare a hybrid method using both feature types against benchmark methods that use either the fiducial markers or the outer shape characteristics alone. The hybrid method performed best, reaching an average error of 0.78±0.67 mm. This method provides a solid framework to validate the diagnostic abilities of optical fiber-based techniques and furthermore enables the application of supervised machine learning techniques to automate tissue characterization.
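The abstract does not detail the transform model behind these registrations. As a minimal sketch, assuming a 2D rigid (rotation plus translation) least-squares fit of the laser fiducials to their histology counterparts, the closed-form solution looks like this; all names are illustrative, not the authors' code:

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Closed-form 2D least-squares rigid alignment (2D Kabsch): rotate and
// translate moving fiducials P onto fixed fiducials Q.
void alignRigid2D(const std::vector<Pt>& P, const std::vector<Pt>& Q,
                  double& theta, Pt& t) {
    const std::size_t n = P.size();
    Pt cp{0, 0}, cq{0, 0};                       // centroids
    for (std::size_t i = 0; i < n; ++i) {
        cp.x += P[i].x / n; cp.y += P[i].y / n;
        cq.x += Q[i].x / n; cq.y += Q[i].y / n;
    }
    double sxx = 0, sxy = 0;                     // dot and cross sums
    for (std::size_t i = 0; i < n; ++i) {
        const double px = P[i].x - cp.x, py = P[i].y - cp.y;
        const double qx = Q[i].x - cq.x, qy = Q[i].y - cq.y;
        sxx += px * qx + py * qy;
        sxy += px * qy - py * qx;
    }
    theta = std::atan2(sxy, sxx);                // optimal rotation angle
    // Translation maps the rotated moving centroid onto the fixed one.
    t.x = cq.x - (std::cos(theta) * cp.x - std::sin(theta) * cp.y);
    t.y = cq.y - (std::sin(theta) * cp.x + std::cos(theta) * cp.y);
}
```

The mean residual distance of the fiducials after applying (theta, t) yields an error figure comparable in kind to the 0.78±0.67 mm reported above.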
As the HPC community starts focusing its efforts toward exascale, it becomes clear that we are looking at machines with billion-way concurrency. Although parallel computing has been at the core of the performance gains achieved until now, scaling to over 1,000 times the current concurrency can be challenging. As discussed in this paper, even the smallest memory-access and synchronization overheads can cause major bottlenecks at this scale. As we develop new software and adapt existing algorithms for exascale, we need to be cognizant of such pitfalls. In this paper, we document our experience optimizing a fairly common and parallelizable visualization algorithm, thresholding of cells based on scalar values, for such highly concurrent architectures. Our experiments help us identify design patterns that can be generalized to other visualization algorithms as well. We discuss our implementation within the Dax toolkit, a framework for data analysis and visualization at extreme scale. The Dax toolkit employs the patterns discussed here within the framework's scaffolding to make it easier for algorithm developers to write algorithms without having to worry about such scaling issues.
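The Dax internals are not spelled out in the abstract, but the data-parallel pattern such a threshold algorithm typically builds on is stream compaction: a map pass flags passing cells, an exclusive scan computes output offsets, and a scatter writes survivors with no synchronization. A serial illustration of those three phases follows; each loop is embarrassingly parallel within itself, which is what makes the pattern suit massively concurrent hardware:

```cpp
#include <cstddef>
#include <vector>

// Threshold cells by scalar value using the mark / scan / scatter pattern.
std::vector<std::size_t> thresholdCells(const std::vector<float>& scalars,
                                        float lo, float hi) {
    const std::size_t n = scalars.size();
    std::vector<int> flag(n);
    std::vector<std::size_t> offset(n);

    // 1. Map: flag cells whose scalar falls inside [lo, hi].
    for (std::size_t i = 0; i < n; ++i)
        flag[i] = (scalars[i] >= lo && scalars[i] <= hi) ? 1 : 0;

    // 2. Exclusive scan: prefix sums of flags give output positions.
    std::size_t running = 0;
    for (std::size_t i = 0; i < n; ++i) {
        offset[i] = running;
        running += flag[i];
    }

    // 3. Scatter: write surviving cell indices to their final slots.
    std::vector<std::size_t> out(running);
    for (std::size_t i = 0; i < n; ++i)
        if (flag[i]) out[offset[i]] = i;
    return out;
}
```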
KEYWORDS: Earthquakes, Visualization, Computer simulations, Volume rendering, Wave propagation, 3D modeling, Sensor networks, Data modeling, Linear filtering, Video
Comparing numerical simulation results with accelerograph readings is essential to earthquake investigation and discovery. We provide a case study of the magnitude 7.6 Chi-Chi earthquake that struck Taiwan in 1999. More than 400 seismic sensor stations recorded this event, and its readings increased the global set of strong-motion records fivefold, significantly enhancing the accuracy of earthquake simulation. Direct volume rendering is used to depict the space-time relationships of numerical results and seismic readings. Volume rendering earthquake simulation data reveals the sequence of seismic-wave initiation, propagation, and attenuation and the energy-releasing events of fault ruptures, so the direction of seismic-wave propagation can be observed. Both accelerograph readings and earthquake simulation data are used to generate a sequence of ground-motion maps; stacking these maps in sequence forms a volume dataset. Visual analysis of the time-varying component reveals hidden features for better comparison and evaluation, allowing earthquake scientists to obtain insights and evaluate their simulation criteria from volume rendering.
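As a concrete picture of the stacking step, a time sequence of W×H ground-motion maps becomes a single flat volume whose third axis is time; a minimal sketch, with illustrative names and layout:

```cpp
#include <cstddef>
#include <vector>

// Stack T ground-motion maps (each W x H, stored row-major) into one
// flat W x H x T volume, so time becomes the third rendering axis.
std::vector<float> stackMaps(const std::vector<std::vector<float>>& maps,
                             int W, int H) {
    const int T = static_cast<int>(maps.size());
    std::vector<float> volume(static_cast<std::size_t>(W) * H * T);
    for (int t = 0; t < T; ++t)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                volume[(static_cast<std::size_t>(t) * H + y) * W + x] =
                    maps[t][static_cast<std::size_t>(y) * W + x];
    return volume;
}
```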
Modern computational science poses two challenges for scientific visualization: managing the size of resulting datasets and extracting maximum knowledge from them. While our team attacks the first problem by implementing parallel visualization algorithms on supercomputing architectures at vast scale, we are experimenting with autostereoscopic display technology to aid scientists in the second challenge. We are building a visualization framework connecting parallel visualization algorithms running on one of the world's most powerful supercomputers with high-quality autostereo display systems. This paper is a case study of the development of an end-to-end solution that couples scalable volume rendering on thousands of supercomputer cores to the scientists' interaction with autostereo volume rendering at their desktops and larger display spaces. We discuss modifications to our volume rendering algorithm to produce perspective stereo images, their transport from supercomputer to display system, and the scientists' 3D interactions. A lightweight display client software architecture supports a variety of monoscopic and autostereoscopic display technologies through a flexible configuration framework. This case study provides a foundation that future research can build upon in order to examine how autostereo immersion in scientific data can improve understanding and perhaps enable new discoveries.
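The paper's stereo modifications are not reproduced here, but the standard off-axis ("parallel") stereo setup they imply can be sketched as follows: each eye is displaced half the interocular distance along the camera's right vector, and the frustum is sheared so both eyes share one zero-parallax plane. Names and the shift formula follow the common textbook formulation, not the paper's code:

```cpp
struct Vec3 { double x, y, z; };

// Offset a point along a unit direction by a signed distance.
static Vec3 offset(Vec3 p, Vec3 dir, double s) {
    return {p.x + s * dir.x, p.y + s * dir.y, p.z + s * dir.z};
}

// Off-axis stereo: eyeSep is the interocular distance, focal the
// distance to the zero-parallax plane. frustumShift is the horizontal
// shift of the near-plane window, applied with opposite signs for the
// two eyes (left frustum shifts +, right shifts -).
void stereoEyes(Vec3 center, Vec3 right, double eyeSep,
                double nearPlane, double focal,
                Vec3& leftEye, Vec3& rightEye, double& frustumShift) {
    leftEye  = offset(center, right, -eyeSep / 2.0);
    rightEye = offset(center, right, +eyeSep / 2.0);
    frustumShift = (eyeSep / 2.0) * nearPlane / focal;
}
```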
The ability to extract meaning from the huge amounts of data obtained from simulations, experiments, sensors, or the World Wide Web gives one a tremendous advantage over others in the respective area of business or study. Visualization has become a hot topic because it enables that ability. As data sizes grow from terascale to petascale and exascale, new visualization techniques must be developed and integrated into data analysis tools and problem-solving environments so the collected data can be fully exploited. In this talk, I will point out a few important directions for advancing visualization technology: parallel visualization, knowledge-assisted visualization, intelligent visualization, and in situ visualization. I will use some of the projects we have done at UC Davis in my discussion.
KEYWORDS: Brain, Image segmentation, Neuroimaging, RGB color model, Digital filtering, Visualization, Magnetic resonance imaging, 3D visualizations, Image processing, Data modeling
We present a semi-automatic technique for segmenting a large cryo-sliced human brain data set that contains 753 high-resolution RGB color images. This data set presents a number of unique challenges to segmentation and visualization due to its size (over 7 GB) and the fact that each image shows not only the current slice of the brain but also unsliced deeper layers. These challenges are not present in traditional MRI and CT data sets. We have found that segmenting this data set can be made easier by using the YIQ color model and morphological operations. We used a hardware-assisted interactive volume renderer to evaluate our segmentation results.
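The RGB-to-YIQ transform underlying such a segmentation is the standard NTSC one (coefficients rounded as commonly published); the cutoff in the mask test below is a hypothetical placeholder, since the abstract does not publish the thresholds used:

```cpp
// Convert an RGB pixel (components in [0,1]) to NTSC YIQ. Y carries
// luminance; I and Q carry chrominance, decoupling brightness from hue.
struct YIQ { double y, i, q; };

YIQ rgbToYiq(double r, double g, double b) {
    return {
        0.299 * r + 0.587 * g + 0.114 * b,   // Y (luminance)
        0.596 * r - 0.275 * g - 0.321 * b,   // I (orange-blue axis)
        0.212 * r - 0.523 * g + 0.311 * b    // Q (purple-green axis)
    };
}

// Hypothetical mask test: keep pixels whose I chrominance exceeds a
// cutoff, before cleaning the mask with morphological operations.
bool isTissue(const YIQ& p, double iCut) { return p.i > iCut; }
```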
The complexity of physical phenomena often varies substantially over space and time. There can be regions where a physical phenomenon or quantity varies very little over a large extent, while in other, small regions the same quantity exhibits highly complex variations. Adaptive mesh refinement (AMR) is a technique used in computational fluid dynamics to simulate phenomena whose variables vary drastically in scale and complexity. Using multiple nested grids of different resolutions, AMR combines the topological simplicity of structured-rectilinear grids, permitting efficient computation and storage, with the ability to adapt grid resolution in regions of complex behavior. We present methods for direct volume rendering of AMR data. Our methods operate directly on AMR grids to keep the visualization process efficient. We apply a hardware-accelerated rendering method to AMR data that supports interactive manipulation of color transfer functions and viewing parameters, and we also present a cell-projection-based rendering technique for AMR data.
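As a data-structure intuition, an AMR hierarchy is a list of levels, each holding axis-aligned rectilinear patches at that level's resolution; a minimal sketch with illustrative field names, not the paper's representation:

```cpp
#include <vector>

// One rectilinear patch: an axis-aligned box of cells at a fixed
// resolution, stored as a flat scalar array.
struct AmrPatch {
    double origin[3];            // world-space corner of the patch
    double spacing[3];           // cell size at this refinement level
    int    dims[3];              // cells per axis
    std::vector<float> scalars;  // dims[0]*dims[1]*dims[2] values
};

// A level groups all patches sharing one resolution. Finer levels nest
// inside coarser ones, so a renderer draws coarse cells only where no
// finer patch covers them.
struct AmrLevel {
    int refinementRatio;         // e.g. 2: cells half the parent's size
    std::vector<AmrPatch> patches;
};

using AmrHierarchy = std::vector<AmrLevel>;  // index 0 = coarsest level
```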
Unstructured grid discretizations have become increasingly popular for computational modeling of engineering problems involving complex geometries. However, the use of 3D unstructured grids complicates the visualization task, since the resulting data sets are irregular both geometrically and topologically. The need to store and access additional information about the structure of the grid can lead to visualization algorithms that incur considerable memory and computational overhead. These issues become critical with large data sets. In this paper, we present a layer data organization technique for data from 3D aerodynamics simulations using unstructured grids. Simulations of this type typically model the air flow surrounding an aircraft body, and the grid resolution is very fine near the body. Scientists are usually interested in visualizing the flow pattern near the wing, sometimes very close to the wing surface. We have designed an efficient way to generate the layer representation and have experimented with it using different visualization methods, from isosurface rendering and volume rendering to texture-based vector-field visualization. We found that the layer representation facilitates interactive exploration and helps scientists quickly find regions of interest.
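The paper's layer construction is described only at a high level; one plausible reading is a breadth-first sweep outward from the body surface, assigning each cell the hop count from the nearest surface cell. The adjacency structure and seed set below are assumptions for illustration:

```cpp
#include <queue>
#include <vector>

// Assign each cell of an unstructured grid a "layer" index: 0 for cells
// touching the aircraft surface, 1 for their face neighbors, and so on.
// adjacency[c] lists the cells sharing a face with cell c.
std::vector<int> buildLayers(const std::vector<std::vector<int>>& adjacency,
                             const std::vector<int>& surfaceCells) {
    std::vector<int> layer(adjacency.size(), -1);  // -1 = unvisited
    std::queue<int> frontier;
    for (int c : surfaceCells) { layer[c] = 0; frontier.push(c); }
    while (!frontier.empty()) {
        const int c = frontier.front(); frontier.pop();
        for (int nb : adjacency[c])
            if (layer[nb] < 0) { layer[nb] = layer[c] + 1; frontier.push(nb); }
    }
    return layer;  // query or render only cells with small layer indices
}
```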
KEYWORDS: Computer aided design, Solid modeling, Visualization, 3D modeling, Systems modeling, Mathematical modeling, Control systems, Virtual reality, Finite element methods, Computing systems
This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting, and analysis tasks. The paper outlines the fundamental tools, design metaphors, and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. Traditionally, however, a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technological, or 'digital,' gap experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric, or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.