Nowhere is the need to understand large heterogeneous datasets more important than in disaster monitoring
and emergency response, where critical decisions have to be made in a timely fashion and the discovery of
important events requires an understanding of a collection of complex simulations. To gain enough insights
for actionable knowledge, the development of models and analysis of modeling results usually requires that
models be run many times so that all possibilities can be covered. Central to the goal of our research is,
therefore, the use of ensemble visualization of a large-scale simulation space to aid decision makers
in reasoning about infrastructure behaviors and vulnerabilities in support of critical infrastructure analysis. This
requires bringing computing-driven simulation results together with the human decision-making process
via interactive visual analysis. We have developed a general critical infrastructure simulation and analysis
system for situationally aware emergency response during natural disasters. Our system demonstrates a scalable
visual analytics infrastructure with a mobile interface for analysis, visualization, and interaction with large-scale
simulation results in order to better understand their inherent structure and predictive capabilities. To generalize
the mobile aspect, we introduce mobility as a design consideration for the system. The utility and efficacy of
this research have been evaluated by domain practitioners and disaster response managers.
Displays supporting stereoscopic and head-coupled motion parallax can enhance human perception of 3D surfaces
and 3D networks, but less so for volumetric data. Volumetric data are characterized by heavy transparency,
occlusion, and highly ambiguous spatial structure. Many rendering and visualization algorithms and interaction
techniques that enhance perception of volume data have been proposed, and their effectiveness has been evaluated.
However, how VR display technologies affect the perception of volume data is less well studied. Therefore,
we conduct two formal experiments on how various display conditions affect a participant's depth perception accuracy
for a volumetric dataset. Our results show that VR displays affect human depth perception accuracy for volumetric
data. We discuss the implications of these findings for designing volumetric data visualization tools that use VR displays.
In addition, we compare our results to previous work on 3D networks and discuss possible reasons for and implications
of the different results.
Rapid evacuation of large urban structures (campus buildings, arenas, stadiums, etc.) is a complex operation and of prime interest to emergency responders and planners. Although there is a considerable body of work in evacuation algorithms and methods, most of these are impractical to use in real-world scenarios (non real-time, for instance) or have difficulty handling scenarios with dynamically changing conditions. Our goal in this work is to develop computer visualizations and real-time visual analytic tools for building evacuations, in order to provide situational awareness and decision support to first responders and emergency planners. We have augmented traditional evacuation algorithms in the following important ways: (1) facilitating real-time, complex user interaction with first responder teams as information is received during an emergency; (2) providing visual reporting tools for spatial occupancy, temporal cues, and procedural recommendations automatically and at adjustable levels; and (3) combining multi-scale building models, heuristic evacuation models, and unique graph manipulation techniques to produce near real-time situational awareness. We describe our system, our methods, and their application using campus buildings as an example. We also report the results of evaluating our system in collaboration with our campus police and safety personnel, via a table-top exercise consisting of three different scenarios, and their resulting assessment of the system.
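To make the graph-based routing idea above concrete, the following is a minimal sketch of dynamic evacuation routing, assuming a building modeled as a weighted graph of rooms and corridors; the names (BuildingGraph, block, routes_to_exits) are hypothetical and this is not the authors' implementation. Routes toward the nearest exit are recomputed with a multi-source Dijkstra pass whenever responders report an edge as blocked.

```python
# Minimal sketch: dynamic evacuation routing on a building graph.
# Hypothetical structures; not the system described in the abstract.
import heapq
from collections import defaultdict

class BuildingGraph:
    def __init__(self):
        self.adj = defaultdict(dict)   # node -> {neighbor: traversal_cost}
        self.blocked = set()           # edges reported blocked by responders

    def add_edge(self, a, b, cost):
        self.adj[a][b] = cost
        self.adj[b][a] = cost

    def block(self, a, b):
        self.blocked.update({(a, b), (b, a)})

    def routes_to_exits(self, exits):
        """Multi-source Dijkstra from all exits; returns distance and next-hop maps."""
        dist = {e: 0.0 for e in exits}
        next_hop = {e: None for e in exits}
        pq = [(0.0, e) for e in exits]
        heapq.heapify(pq)
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, cost in self.adj[u].items():
                if (u, v) in self.blocked:
                    continue
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    next_hop[v] = u   # from v, move toward u to get closer to an exit
                    heapq.heappush(pq, (nd, v))
        return dist, next_hop

# Usage: re-run after each report, e.g.
# g.block("hall_2", "stair_B"); dist, hop = g.routes_to_exits(["exit_N", "exit_S"])
```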
In this work, we propose new automation tools to process 2D building geometry data for effective communication
and timely response to critical events in commercial buildings. Given the scale and complexity of commercial
buildings, robust and visually rich tools are needed during an emergency. Our data processing pipeline consists of
three major components: (1) adjacency graph construction, representing spatial relationships within a building
(between hallways, offices, stairways, elevators), (2) identification of elements involved in evacuation routes
(hallways, stairways), (3) 3D building network construction, by connecting the floor elements via stairways and
elevators. We have used these tools to process a cluster of five academic buildings. Our automation tools (despite
some needed manual processing) show a significant advantage over manual processing (a few minutes vs. 2-4
hours). Designed as a client-server model, our system supports analytical capabilities to determine dynamic
routing within a building under constraints (parts of the building blocked during emergencies, for instance).
Visualization capabilities are provided for easy interaction with the system, on both desktop (command post)
stations as well as mobile hand-held devices, simulating a command post-responder scenario.
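As an illustration of the first pipeline component, the sketch below builds a room-adjacency graph from simplified 2D footprints; it assumes rooms are axis-aligned rectangles and uses hypothetical names, whereas the actual pipeline operates on full 2D building geometry.

```python
# Minimal sketch: build a room-adjacency graph from 2D footprints.
# Assumes rooms are axis-aligned rectangles; real building geometry is far richer.

def shared_wall(a, b, tol=0.05, min_overlap=0.9):
    """Return True if rectangles a and b share a wall segment long enough for a doorway."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    # Shared vertical wall: x-extents touch, y-extents overlap enough.
    if abs(ax1 - bx0) < tol or abs(bx1 - ax0) < tol:
        if min(ay1, by1) - max(ay0, by0) >= min_overlap:
            return True
    # Shared horizontal wall: y-extents touch, x-extents overlap enough.
    if abs(ay1 - by0) < tol or abs(by1 - ay0) < tol:
        if min(ax1, bx1) - max(ax0, bx0) >= min_overlap:
            return True
    return False

def adjacency_graph(rooms):
    """rooms: dict name -> (xmin, ymin, xmax, ymax). Returns dict name -> set of neighbors."""
    graph = {name: set() for name in rooms}
    names = list(rooms)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if shared_wall(rooms[u], rooms[v]):
                graph[u].add(v)
                graph[v].add(u)
    return graph

# Floor graphs would then be linked into a 3D building network by adding edges between
# stairway/elevator nodes that share the same shaft identifier across floors.
```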
A typical approach to exploring Light Detection and Ranging (LIDAR) datasets is to extract features using pre-defined
segmentation algorithms. However, this approach only provides a limited set of features that users can investigate. To
expand and represent the rich information inside the LIDAR data, we introduce a linked feature space concept that
allows users to make regular, conjunctive, and disjunctive discoveries in non-uniform LIDAR data by interacting with
multidimensional transfer functions. We achieve this by providing interactions for creating multiple scatter-plots of
varying axes, establishing chains of plots based on selection domains, linking plots using logical operators, and viewing
selected brushing results in both a 3D view and selected scatter-plots. Our highly interactive approach to visualizing
LIDAR feature spaces facilitates the users' ability to explore, identify, and understand data features in a novel way. Our
approach for exploring LIDAR data can directly lead to a better understanding of historical LIDAR datasets, and improve
the turnaround time and quality of results from time-critical LIDAR collections after urban disasters or on the battlefield.
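The linked, logical combination of selections can be sketched as follows; the Brush/combine API is hypothetical and assumes per-point LIDAR features stored as NumPy columns, not the paper's actual implementation.

```python
# Minimal sketch: combine brushed selections from multiple feature scatter-plots.
# Hypothetical API; assumes LIDAR points carry per-point feature columns as NumPy arrays.
import numpy as np

class Brush:
    """Rectangular selection in a 2D scatter-plot of two feature columns."""
    def __init__(self, x_feat, y_feat, x_range, y_range):
        self.x_feat, self.y_feat = x_feat, y_feat
        self.x_range, self.y_range = x_range, y_range

    def mask(self, features):
        x, y = features[self.x_feat], features[self.y_feat]
        return ((x >= self.x_range[0]) & (x <= self.x_range[1]) &
                (y >= self.y_range[0]) & (y <= self.y_range[1]))

def combine(features, brushes, ops):
    """Fold brush masks left-to-right with 'and'/'or' operators (len(ops) == len(brushes) - 1)."""
    result = brushes[0].mask(features)
    for brush, op in zip(brushes[1:], ops):
        m = brush.mask(features)
        result = (result & m) if op == "and" else (result | m)
    return result

# Example: select points that are both high-intensity AND tall.
features = {
    "intensity": np.random.rand(1000),
    "height": np.random.rand(1000) * 30.0,
    "normal_z": np.random.rand(1000),
}
b1 = Brush("intensity", "height", (0.7, 1.0), (0.0, 30.0))
b2 = Brush("height", "normal_z", (10.0, 30.0), (0.0, 1.0))
selected = combine(features, [b1, b2], ["and"])  # boolean mask pushed to the 3D view
```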
Infrastructure management (and its associated processes) is complex to understand and perform, which makes
efficient, effective, and informed decision making difficult. Such management is a multi-faceted operation that
requires robust data fusion, visualization, and decision making. In order to protect and build sustainable critical
assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing
and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management.
This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world
practitioners from industry, local and federal government agencies.
IRSV is being designed to accommodate essential needs in the following aspects: 1) Better understanding
and enforcement of the complex inspection process, bridging the gap between evidence gathering
and decision making through the implementation of an ontological knowledge engineering system; 2) Aggregation,
representation, and fusion of complex multi-layered heterogeneous data (e.g., infrared imaging, aerial photos, and
ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation
system; 3) Robust visualization techniques with large-scale analytical and interactive visualizations
that support users' decision making; and 4) Integration of these needs through a flexible Service-oriented
Architecture (SOA) framework to compose and provide services on-demand.
IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance
and infrastructure monitoring, both periodically (annually, monthly, or even daily if needed) and after extreme
events.
Aviation disaster prevention has always been of interest to homeland security, especially after the recent use of aircraft
as weapons by terrorists. With a better understanding of the deficiencies of different types of aircraft and their
corresponding effects on the craft's safety, better maintenance and response plans can be devised to prevent disasters
from occurring. In this paper, we present a visual analytical technique to examine the Federal Aviation Administration's
Accident/Incident Database, which contains more than 90,000 incidents across 53 dimensions over the last 30 years, for
identifying trends of relationships between dimensions over time. Our technique is based on the integration of the
ThemeRiver technique directly within a parallel coordinates framework, and simultaneously presents both a "forward
flow view" and a "backward flow view" between each dimension. The forward flow view shows the trends over time of
each of the elements-of-interest in the first dimension, while the backward flow view illustrates how the elements in the
second dimension contribute to the overall trends seen in the first dimension. Through the use of our technique, we were
able to identify characteristics of aircraft and suggest plausible explanations for their common failures.
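The kind of aggregation that feeds such forward and backward flows can be sketched as follows; the field names (year, aircraft_make, phase_of_flight) are hypothetical placeholders, not the actual database schema.

```python
# Minimal sketch: aggregate counts that could feed a ThemeRiver-style flow
# between two parallel-coordinate dimensions. Field names are hypothetical.
from collections import Counter, defaultdict

def flow_counts(records, dim_a, dim_b, time_field="year"):
    """forward[year][a]      = incidents with value a on dim_a in that year
       backward[year][(a,b)] = how much category b on dim_b contributes to a's flow"""
    forward = defaultdict(Counter)
    backward = defaultdict(Counter)
    for r in records:
        t, a, b = r[time_field], r[dim_a], r[dim_b]
        forward[t][a] += 1
        backward[t][(a, b)] += 1
    return forward, backward

records = [
    {"year": 1995, "aircraft_make": "A", "phase_of_flight": "landing"},
    {"year": 1995, "aircraft_make": "A", "phase_of_flight": "takeoff"},
    {"year": 1996, "aircraft_make": "B", "phase_of_flight": "landing"},
]
fwd, bwd = flow_counts(records, "aircraft_make", "phase_of_flight")
# fwd[1995]["A"] == 2; bwd[1995][("A", "landing")] == 1
```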
KEYWORDS: Interfaces, Human-machine interfaces, Visual analytics, Visualization, Cameras, Control systems, Chemical elements, Display technology, Geographic information systems, Computing systems
In time-critical visual analytic environments, collaboration between multiple expert users allows for rapid knowledge discovery and facilitates the sharing of insight. New collaborative display technologies, such as multi-touch tables, have shown great promise as the medium for such collaborations to take place. However, under such new technologies, traditional selection techniques, having been developed for mouse and keyboard interfaces, become inconvenient, inefficient, and in some cases, obsolete. We present selection techniques for multi-touch environments that allow for the natural and efficient selection of complex regions-of-interest within a hierarchical geospatial environment, as well as methods for refining and organizing these selections. The intuitive nature of the touch-based interaction permits new users to quickly grasp complex controls, while the consideration for collaboration coordinates the actions of multiple users simultaneously within the same environment. As an example, we apply our simple gestures and actions mimicking real-world tactile behaviors to increase the usefulness and efficacy of an existing urban growth simulation in a traditional GIS-like environment. However, our techniques are general enough to be applied across a wide range of geospatial analytical applications for both domestic security and military use.
Infrastructure safety affects millions of U.S. citizens in many ways. Among all infrastructure types, bridges
play a significant role in supporting the economy and public safety. Nearly 600,000 bridges across the
U.S. are mandated to be inspected every twenty-four months. Although these inspections can generate a great
amount of rich data for bridge engineers to make critical maintenance decisions, processing these data has become
challenging due to the low efficiency of traditional bridge management systems. In collaboration with the
North Carolina Department of Transportation (NCDOT) and other regional DOT collaborators, we present our
knowledge-integrated visual analytics bridge management system. Our system aims to provide bridge engineers with a
highly interactive data exploration environment as well as knowledge pools for corresponding bridge information.
By integrating the knowledge structure with the visualization system, our system provides a comprehensive
understanding of bridge assets and enables bridge engineers to investigate potential bridge safety issues and
make maintenance decisions.
KEYWORDS: Visualization, Visual analytics, Bridges, Data integration, Data processing, Human-machine interfaces, Data storage, Data mining, Inspection, LIDAR
In the information age today, we are experiencing an explosion of data and information from a variety of sources
unlike anything that the world has seen before. While technology has advanced to keep up with the collection
and storage of data, what we lack now is the ability to analyze and understand the meaning behind the data.
Traditionally, data mining and data management techniques require the data to be uniform such that a single
process can search for knowledge within the data. However, in analysis of complex tasks where knowledge and
information need to be pieced together from different sources of data, a new paradigm is required. In this paper,
we present a framework of using visual analytical approaches to integrate multiple heterogeneous processes that
can each analyze a specific type of data. Under this framework, stand-alone software solutions can focus on
specific aspects of the problem based on domain-specific techniques. The framework serves as a visual repository
for all the information and knowledge discovered by each individual process, and allows the user to interactively
perform sense-making analysis to form a cohesive and comprehensive understanding of the problem at hand. We
demonstrate the effectiveness of this framework by applying it to the inspection of bridge conditions, utilizing data
sources from 2D imagery, 3D LiDAR, and multi-dimensional data based on bridge reports.
We present the framework for a battlefield change detection system that allows military analysts to coordinate and utilize
live collection of airborne LIDAR range data in a highly interactive visual interface. The system consists of three major
components: The adaptive and self-maintaining model of the battlefield selectively incorporates the minority of new
data it deems significant, while discarding the redundant majority. The interactive interface presents the analyst with
only the minute portion of the data the system deems relevant, provides tools to facilitate the decision making process,
and adjusts its behavior to reflect the analyst's objectives. Finally, the cycle is completed by the generation of a goal
map for the LIDAR collection hardware that indicates which areas should be sampled next in order to best advance
the change detection task. Altogether, the system empowers analysts with the ability to make sense of a deluge of
measurements by extracting the salient features and continually refining its definitions of relevancy.
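A minimal sketch of the significance-gated update and goal-map generation is given below, assuming a gridded height model; the thresholds, staleness weighting, and names are hypothetical assumptions, not the system's actual model.

```python
# Minimal sketch: significance-gated update of a gridded battlefield model,
# plus a simple goal map for the next LIDAR pass. Names and thresholds are hypothetical.
import numpy as np

def update_model(model_z, model_age, new_z, new_mask, sig_threshold=0.5):
    """Keep only new samples that differ significantly from the stored model."""
    deviation = np.abs(new_z - model_z)
    significant = new_mask & (deviation > sig_threshold)
    model_z[significant] = new_z[significant]   # incorporate the significant minority
    model_age[new_mask] = 0                     # any observed cell is now fresh
    model_age[~new_mask] += 1                   # unobserved cells grow stale
    return significant                          # cells flagged for the analyst

def goal_map(model_age, analyst_priority):
    """Score cells for the next collection: staleness weighted by analyst interest."""
    score = model_age.astype(float) * analyst_priority
    return score / (score.max() + 1e-9)

# Usage with a 512x512 grid; the analyst could paint higher priority over areas of interest.
model_z = np.zeros((512, 512)); model_age = np.zeros((512, 512), dtype=int)
priority = np.ones((512, 512))
new_z = np.random.rand(512, 512); observed = np.random.rand(512, 512) > 0.7
flags = update_model(model_z, model_age, new_z, observed)
goals = goal_map(model_age, priority)
```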
With the increase of terrorist activity around the world, it has become more important than ever to analyze and
understand these activities over time. Although the data on terrorist activities are detailed and relevant, the complexity of
the data has rendered the understanding and analysis difficult. We present a visual analytical approach to effectively
identify related entities such as terrorist groups, events, locations, etc. based on a 2D layout. Our methods are based on
sequence comparison from bioinformatics, modified to incorporate the element of time. By allowing the user the
freedom to link entities by their activities over time, we provide a new framework for comparison of event sequences.
Our scoring mechanism is robust and flexible, allowing the user to define the extent to which time is
considered in aligning entities. With high interactivity, the user can efficiently navigate through tens of
thousands of records recorded in over a hundred dimensions of data by choosing combinations of categories to examine.
Exploration of the terrorist activities in our system reveals relationships between entities that are not easily detectable
using traditional methods.
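The time-weighted alignment idea can be sketched with a Needleman-Wunsch-style recurrence in which a user-controlled weight sets the extent to which time is considered; the scoring below is illustrative, not the paper's exact mechanism.

```python
# Minimal sketch: global alignment of two event sequences with a user-controlled
# temporal weight. Scoring choices here are illustrative, not the paper's exact scheme.

def event_score(e1, e2, time_weight):
    """e = (category, timestamp). Reward matching categories, penalize time gaps."""
    match = 1.0 if e1[0] == e2[0] else -1.0
    return match - time_weight * abs(e1[1] - e2[1])

def align(seq_a, seq_b, time_weight=0.1, gap=-0.5):
    """Needleman-Wunsch over event sequences; returns the alignment score."""
    n, m = len(seq_a), len(seq_b)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + event_score(seq_a[i - 1], seq_b[j - 1], time_weight),
                dp[i - 1][j] + gap,
                dp[i][j - 1] + gap,
            )
    return dp[n][m]

# time_weight = 0 ignores time entirely; larger values demand temporal proximity.
group_a = [("bombing", 1995.2), ("kidnapping", 1996.0)]
group_b = [("bombing", 1995.4), ("kidnapping", 1997.5)]
print(align(group_a, group_b, time_weight=0.5))
```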
KEYWORDS: Visualization, Databases, Inspection, Data visualization, Data centers, Data communications, Visual analytics, Firearms, Explosives, Algorithm development
Presenting information on a geopolitical map can offer powerful insight into a problem by leveraging an individual's
innate capacity to discover patterns and to use map-related cues to incorporate pre-existing knowledge. This mode of
presentation is not without its flaws, however, as the act of placing information at specific coordinates can imply a false
sense of the data's geo-spatial certainty. Traditional uncertainty visualization techniques, such as those that change
primitive attributes or employ animation, can create large amounts of clutter or actively distract when visualizing geospatially
uncertain events within large datasets. To effectively identify geo-spatial trends within the Global Terrorism
Database of the START Center, we have developed a novel usage of squarified treemaps that maintains the strengths of
traditional map-viewing but incorporates some measure of data verity.
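For reference, a minimal sketch of the standard squarified treemap layout (Bruls et al.) that such a map-embedded treemap could build on is shown below; the geopolitical placement and uncertainty encoding described above are not reproduced, and the example data are hypothetical.

```python
# Minimal sketch of the standard squarified treemap layout (Bruls et al. 2000),
# which a map-embedded treemap could build on. Weights and region are hypothetical.

def worst_ratio(row, side):
    """Worst rectangle aspect ratio if `row` is laid out along a side of length `side`."""
    s = sum(row)
    return max(max(side * side * r / (s * s), s * s / (side * side * r)) for r in row)

def layout_row(row, x, y, w, h, out):
    """Place one row along the shorter side; return the remaining rectangle."""
    s = sum(row)
    if w >= h:                       # row becomes a vertical strip on the left
        strip = s / h
        yy = y
        for r in row:
            out.append((x, yy, strip, r / strip))
            yy += r / strip
        return x + strip, y, w - strip, h
    else:                            # row becomes a horizontal strip on top
        strip = s / w
        xx = x
        for r in row:
            out.append((xx, y, r / strip, strip))
            xx += r / strip
        return x, y + strip, w, h - strip

def squarify(values, x, y, w, h):
    """values: positive weights. Returns rectangles (x, y, w, h) tiling the input rectangle."""
    total = float(sum(values))
    areas = sorted((v / total * w * h for v in values), reverse=True)
    out, row = [], []
    for a in areas:
        side = min(w, h)
        if not row or worst_ratio(row + [a], side) <= worst_ratio(row, side):
            row.append(a)            # adding `a` does not worsen the aspect ratios
        else:
            x, y, w, h = layout_row(row, x, y, w, h, out)
            row = [a]
    if row:
        layout_row(row, x, y, w, h, out)
    return out

# e.g. incident counts per province, drawn inside that country's bounding box on the map
print(squarify([6, 6, 4, 3, 2, 2, 1], 0, 0, 6, 4))
```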
This paper explores the basis and usefulness of a predictive model for the architecture of data and knowledge
visualizations based on human higher-cognition, including human tendencies in reasoning heuristics and
cognitive biases. The strengths and weaknesses of would-be human and computer collaborators are explored,
and a model framework is outlined and discussed.
Most terrain models are created based on a sampling of real-world terrain, and are represented using linearly-interpolated
surfaces such as triangulated irregular networks or digital elevation models. The existing methods for the creation of
such models and representations of real-world terrain lack a crucial analytical consideration of factors such as the errors
introduced during sampling and geological variations between sample points. We present a volumetric representation of
real-world terrain in which the volume encapsulates both sampling errors and geological variations and dynamically
changes size based on such errors and variations. We define this volume using an octree, and demonstrate that when
used within applications such as line-of-sight, the calculations are guaranteed to be within a user-defined confidence
level of the real-world terrain.
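The confidence-aware line-of-sight test implied above can be sketched as follows, assuming each terrain cell stores an elevation interval capturing sampling error and geological variation; a uniform grid stands in for the paper's octree, and all names are hypothetical.

```python
# Minimal sketch: confidence-aware line-of-sight over terrain stored as elevation
# intervals (z_min, z_max) per cell. A uniform grid stands in for the paper's octree.

def cells_on_segment(x0, y0, x1, y1, n_samples=200):
    """Crude traversal: sample the 2D segment and yield the integer cell under each sample."""
    for i in range(n_samples + 1):
        t = i / n_samples
        yield int(x0 + t * (x1 - x0)), int(y0 + t * (y1 - y0)), t

def line_of_sight(grid, p_from, p_to):
    """grid[(i, j)] = (z_min, z_max). Returns 'visible', 'blocked', or 'uncertain'."""
    (x0, y0, z0), (x1, y1, z1) = p_from, p_to
    uncertain = False
    for i, j, t in cells_on_segment(x0, y0, x1, y1):
        if (i, j) not in grid:
            continue
        z_min, z_max = grid[(i, j)]
        z_ray = z0 + t * (z1 - z0)       # height of the sight line over this cell
        if z_ray < z_min:
            return "blocked"             # below even the lowest possible terrain
        if z_ray < z_max:
            uncertain = True             # inside the terrain's uncertainty band
    return "uncertain" if uncertain else "visible"

# A wider (z_min, z_max) band, e.g. from sparse sampling, yields more 'uncertain' answers,
# which an application can then weigh against its required confidence level.
```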
This paper explores the development of an idea conceived a number of years ago for the integration of three diverse technologies into a system capable of supporting a wide variety of applications. The concept of Virtual Geographic Information System (VGIS) was developed in the early 1990s, even though no technology base existed that would support real-time implementation of the idea. The VGIS concept grew out of the integration of Geographic Information Systems (GIS), Remote Sensing (RS), and Visualization (Viz) into a comprehensive tool that is now used at Georgia Tech in terrain analysis, environmental analysis, weather radar visualization, weather model understanding, situational visualization for emergency response, etc. The implementation of the concept culminated in the development of a system called the Georgia Tech Virtual GIS System (GTVGIS). This paper will discuss the evolution of the system, its applications at Georgia Tech, and the new directions and commercial utilization of similar concepts.
KEYWORDS: Data modeling, Visualization, Current controlled current source, 3D modeling, OpenGL, Transform theory, Virtual reality, Buildings, Geographic information systems, Systems modeling
A data organization, scalable structure, and multiresolution visualization approach is described for precision markup modeling in a global geospatial environment. The global environment supports interactive visual navigation from global overviews to details on the ground at the resolution of inches or less. This is a difference in scale of 10 orders of magnitude or more. To efficiently handle details over this range of scales while providing accurate placement of objects, a set of nested coordinate systems is used, which always refers, through a series of transformations, to the fundamental world coordinate system (with its origin at the center of the earth). This coordinate structure supports multi-resolution models of imagery, terrain, vector data, buildings, moving objects, and other geospatial data. Thus objects that are static or moving on the terrain can be displayed without inaccurate positioning or jumping due to coordinate round-off. Examples of high resolution images, 3D objects, and terrain-following annotations are shown.
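The nested-coordinate idea can be sketched as a chain of double-precision frame origins with small local offsets, so that only small numbers reach single-precision rendering; the class names below are hypothetical, and the real hierarchy chains several such frames down from the earth-centered origin.

```python
# Minimal sketch of nested coordinates: a double-precision frame origin plus a small
# local offset, so values handed to single-precision rendering stay small and jitter-free.
# Class and function names are hypothetical.
import numpy as np

class Frame:
    """A local coordinate frame defined by a double-precision offset from its parent."""
    def __init__(self, origin, parent=None):
        self.origin = np.asarray(origin, dtype=np.float64)
        self.parent = parent

    def to_world(self, local):
        """Accumulate offsets up the chain to the fundamental (earth-centered) frame."""
        p = np.asarray(local, dtype=np.float64) + self.origin
        return self.parent.to_world(p) if self.parent else p

def render_coords(point_local, point_frame, eye_frame):
    """Express a point relative to the camera's frame, then truncate to float32 for the GPU.
    The large, nearly equal world magnitudes cancel in double precision first."""
    delta = point_frame.to_world(point_local) - eye_frame.to_world([0.0, 0.0, 0.0])
    return delta.astype(np.float32)

earth = Frame([0.0, 0.0, 0.0])
tile = Frame([6_378_137.0, 12_345.678, 42.0], parent=earth)   # a tile on the earth's surface
eye = Frame([6_378_137.0, 12_345.0, 44.0], parent=earth)
print(render_coords([0.25, 0.50, 0.0], tile, eye))            # small, precise offsets
```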
KEYWORDS: Buildings, Visualization, Optical spheres, Solid modeling, Data modeling, 3D modeling, Visual process modeling, 3D image processing, Computer aided design, Data storage
This paper describes an approach for the organization and simplification of high-resolution geometry and imagery data for 3D buildings for interactive city navigation. At the highest level of organization, building data are inserted into a global hierarchy that supports the large-scale storage of cities around the world. This structure also provides fast access to the data suitable for interactive visualization. At this level the structure and simplification algorithms deal with city blocks. An associated latitude and longitude coordinate for each block is used to place it in the hierarchy. Each block is decomposed into building facades. A facade is a texture-mapped polygonal mesh representing one side of a city block. Therefore, a block typically contains four facades, but it may contain more. The facades are partitioned into relatively flat surfaces called faces. A texture-mapped polygonal mesh represents the building facades. By simplifying the faces first instead of the facades, the dominant characteristics of the building geometry are maintained. At the lowest level of detail, each face is simplified into a single texture-mapped polygon. An algorithm is presented for the simplification transition between the high- and low-detail representations of the faces. Other techniques for the simplification of entire blocks and even cities are discussed.
KEYWORDS: Visualization, Data modeling, Radar, Data acquisition, Databases, 3D modeling, Sensors, Doppler effect, Visual process modeling, Human-machine interfaces
Over the past several years there has been a broad effort towards realizing the Digital Earth, which involves the digitization of all earth-related data and the organization of these data into common repositories for wide access. Recently the idea has been proposed to go beyond these first steps and produce a Visual Earth, where a main goal is a comprehensive visual query and data exploration system. Such a system could significantly widen access to Digital Earth data and improve its use. It could provide a common framework and a common picture for the disparate types of data available now and contemplated in the future. In particular, much future data will stream in continuously from a variety of ubiquitous, online sensors, such as weather sensors, traffic sensors, pollution gauges, and many others. The Visual Earth will be especially suited to the organization and display of these dynamic data. This paper lays the foundation and discusses first efforts towards building the Visual Earth. It shows that the goal of interactive visualization requires consideration of the whole process including data organization, query, preparation for rendering, and display. Indeed, visual query offers a set of guiding principles for the integrated organization, retrieval, and presentation of all types of geospatial data. These include terrain elevation and imagery data, buildings and urban models, maps and geographic information, geologic features, land cover and vegetation, dynamic atmospheric phenomena, and other types of data.
This paper describes the visualization of 3D Doppler radar with global, high-resolution terrain. This is the first time such data have been displayed together in a real-time environment. Associated data such as buildings and maps are displayed along with the weather data and the terrain. Requirements for effective 3D visualization for weather forecasting are identified. The application presented in this paper meets most of these requirements. In particular the application provides end-to-end real-time capability, integrated browsing and analysis, and integration of relevant data in a combined visualization. The last capability will grow in importance as researchers develop sophisticated models of storm development that yield rules for how storms behave in the presence of hills or mountains and other features.
KEYWORDS: Data acquisition, Visualization, Radar, Doppler effect, Volume rendering, Data storage, 3D optical data storage, 3D displays, Data archive systems, Data centers
In this paper 'real-time 3D data' refers to volumetric data that are acquired and used as they are produced. Large scale, real-time data are difficult to store and analyze, either visually or by some other means, within the time frames required. Yet this is often quite important to do when decision-makers must receive and quickly act on new information. An example is weather forecasting, where forecasters must act on information received on severe storm development and movement. To meet the real-time requirements, crude heuristics are often used to gather information from the original data. This is in spite of the fact that better and better real-time data are becoming available, the full use of which could significantly improve decisions. The work reported here addresses these issues by providing comprehensive data acquisition, analysis, and storage components with time budgets for the data management of each component. These components are put into a global geospatial hierarchical structure. The volumetric data are placed into this global structure, and it is shown how levels of detail can be derived and used within this structure. A volumetric visualization procedure is developed that conforms to the hierarchical structure and uses the levels of detail. These general methods are focused on the specific case of the VGIS global hierarchical structure and rendering system. The real-time data considered are from collections of time-dependent 3D Doppler radars, although the methods described here apply more generally to time-dependent volumetric data. This paper reports on the design and construction of the above hierarchical structures and volumetric visualizations. It also reports results for the specific application of 3D Doppler radar displayed over photo-textured terrain height fields, and presents results for the display of time-dependent fields as the user visually navigates and explores the geospatial database.
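One way the per-component time budgets and levels of detail could interact is sketched below: choose the finest level whose estimated cost fits the remaining budget. The cost model and numbers are purely illustrative assumptions, not the system's actual budget mechanism.

```python
# Minimal sketch: choose the finest level of detail whose estimated cost fits the
# remaining time budget for this frame. The cost model and numbers are hypothetical.

def estimated_cost_ms(n_voxels, cost_per_voxel_us=0.02):
    """Rough linear cost model: microseconds per voxel, returned in milliseconds."""
    return n_voxels * cost_per_voxel_us / 1000.0

def pick_lod(base_resolution, budget_ms):
    """Each LOD halves resolution in every dimension (1/8 the voxels per level)."""
    level = 0
    res = base_resolution
    while estimated_cost_ms(res ** 3) > budget_ms and res > 1:
        level += 1
        res //= 2
    return level, res

# e.g. a 512^3 Doppler radar volume with 12 ms left in the frame budget
print(pick_lod(512, budget_ms=12.0))
```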
We have developed a semi-automated procedure for generating correctly located 3D tree objects from overhead imagery. Cross-platform software partitions arbitrarily large, geocorrected and geolocated imagery into manageable sub-images. The user manually selects tree areas from one or more of these sub-images. Tree group blobs are then narrowed to lines using a special thinning algorithm which retains the topology of the blobs, and also stores the thickness of the parent blob. Maxima along these thinned tree groups are found, and used as individual tree locations within the tree group. Magnitudes of the local maxima are used to scale the radii of the tree objects. Grossly overlapping trees are culled based on a comparison of tree-tree distance to combined radii. Tree color is randomly selected based on the distribution of sample tree pixels, and height is estimated from tree radius. The final tree objects are then inserted into a terrain database which can be navigated by VGIS, a high-resolution global terrain visualization system developed at Georgia Tech.
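The later placement stages can be sketched as follows, assuming the thinning step has already produced (x, y, thickness) samples along each tree-group centerline; the function names and scale factors are hypothetical, not the procedure's actual parameters.

```python
# Minimal sketch of the tree-placement stages: local maxima along thinned centerlines
# become tree positions, thickness scales radius, and grossly overlapping trees are culled.
import math

def local_maxima(centerline):
    """centerline: ordered (x, y, thickness) samples along one thinned tree group."""
    peaks = []
    for prev, cur, nxt in zip(centerline, centerline[1:], centerline[2:]):
        if cur[2] >= prev[2] and cur[2] >= nxt[2]:
            peaks.append(cur)
    return peaks

def place_trees(centerlines, radius_scale=0.5, overlap_factor=0.8):
    trees = []  # (x, y, radius)
    for line in centerlines:
        for x, y, thickness in local_maxima(line):
            r = radius_scale * thickness
            # cull if grossly overlapping an already-placed tree
            too_close = any(math.hypot(x - tx, y - ty) < overlap_factor * (r + tr)
                            for tx, ty, tr in trees)
            if not too_close:
                trees.append((x, y, r))
    return trees

# Height could then be estimated from radius (e.g. height ~ k * radius) before the
# tree objects are inserted into the terrain database.
print(place_trees([[(0, 0, 2), (1, 0, 4), (2, 0, 3), (3, 0, 5), (4, 0, 2)]]))
```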
The capabilities of the Georgia Tech Virtual Geographic Information System (GTVGIS) have been extended recently to take full advantage of the internal client-server structure that we have used in our stand-alone visualization capabilities. Research is underway in the creation of two additional client-server modes that will allow GTVGIS capabilities to be accessed by laptop client systems with high quality rendering capability and by very inexpensive, lightweight client laptops and possibly hand-held computing platforms that need only support standard web browsers. Interfacing to large, remote databases over IP protocols is necessary for these new GTVGIS modes. This paper describes each mode of GTVGIS and the capabilities and requirements for hardware and software for each of the modes.
KEYWORDS: Buildings, 3D modeling, Visualization, Databases, Systems modeling, Cameras, Visual process modeling, Software development, 3D image processing, Data modeling
We have developed a set of tools that attack the problem of rapid construction of 3D urban terrains containing buildings, roads, trees, and other features. Heretofore, the process of creating such databases has been painstaking, with no integrated set of tools to model individual buildings, apply textures, place objects accurately with respect to other objects, and insert them into a database structure appropriate for real-time display. Since fully automated techniques for routinely building 3D urban environments using machine vision have not yet been entirely successful, our approach has been to build a set of semiautomated tools that support and make efficient a human interpreter, running a PC under Windows NT.
KEYWORDS: Visualization, Human-machine interfaces, Computer simulations, Data modeling, Visual analytics, Binary data, Device simulation, 3D modeling, Data communications, Data visualization
This paper presents a structure and set of tools to address the needs of groups of scientists working on large, time-dependent simulations. It describes a direct manipulation, 3D steering environment that is integrated with a controller for instrumenting parallel computations, collecting output, and passing it in binary mode between heterogeneous machines. The instrumentation allows collection of data at chosen points in the model and control of the computation through changes of parameters or insertion of alternate data. The steering interface and controller are joined with a library of collaborative communication tools. With these tools a user may steer a simulation and share visualization tools within the group. In addition, a time-dependent steering interface has been introduced. Here time is treated on exactly the same basis as the spatial dimensions, so there is a 4D environment: three dimensions shown spatially and one through animation. The steering interface is built upon a flexible visualization/analysis system. This permits the immediate display of time-dependent results from the dynamic simulations and refined interaction with the results to bring out the character and correlations of multivariate data. The user can then launch new simulations at any stage in this exploration using the visualization to define and focus the simulation parameters, region of interest, and time frame.
Georgia Tech has developed the Virtual GIS (VGIS) system, a real-time visualization system for terrain, image, and geographic information systems (GIS) data sets. The initial systems developed at Georgia Tech were non-realtime, but had fast generation of perspective scenes from multisource data sets and the ability to query for GIS attributes associated with terrain or 3D structures inserted within the terrain. The basic concept of a virtual GIS was implemented in realtime using the Silicon Graphics graphics language. This system has been extended in capability to allow realtime traversal within a very large geographic database and to show the finest detail information available when it is near to the view point. Extensive work has been done in the management of large arrays of information and the efficient paging of that information into the rendering system. An effective level of detail management system is implemented to dynamically allocate the appropriate amount of detail relative to the viewer location. A major use of this system has been in the area of battlefield visualization. The advent of OpenGL as a de facto standard has now made it possible to provide the VGIS capability on a number of other platforms, thereby extending its usefulness to other applications and users. OpenGL has been developed as a general-purpose graphics rendering toolkit that will be supported on various computers and special purpose rendering systems. There are hardware and software implementations of OpenGL. This should allow VGIS to operate on many systems, taking advantage of specialized graphics hardware when it is present. This paper addresses the implementation of the VGIS system in OpenGL and the use of the system in driving the Evans and Sutherland Freedom series graphics rendering hardware.
The Army's Common Picture of the Battlefield will produce immense amounts of data associated with tactical goals and options, dynamic operations, unit and troop movement, and general battlefield information. These data will come from sensors (in real-time) and from simulations and must be positioned accurately on high-fidelity 3-D terrain. This paper is concerned with the display of the Army's 2-D symbols for operations and tactics on such terrain so that the information content of this symbolic structure is retained. A hierarchy is developed based on military organization to display this symbology. Using this hierarchy, even complex battlefield scenarios can be displayed and explored in real-time with minimal clutter. The user may also move units around by direct manipulation, define paths, create or delete hierarchical elements, and make other interactions. To strengthen the capacity for distributed simulations and for using sensor information from multiple sources, DIS capability has been integrated with the symbology for dynamic updates of position, direction and speed, and hierarchical structure. This paper will also discuss how the techniques used here can be applied to general (non-military) organizational structures.