The most common type of electronic stereoscopic viewing device available is LC (liquid crystal) shutter glasses, such as CrystalEyes made by StereoGraphics Corp. These glasses work by alternating each eye's shutter in sync with the left or right display field. To support this technology on PCs, StereoGraphics has been actively working with hardware display vendors, software developers, and VESA (Video Electronics Standards Association) to establish standard stereoscopic display interfaces. With Microsoft licensing OpenGL for Windows NT systems and developing its own DirectX software architecture for Windows 9x, a variety of 3D accelerator boards are now available with 3D rendering capabilities that were previously only available on proprietary graphics workstations. Some of these graphics controllers contain stereoscopic display support for automatic page-flipping of left/right images. The paper describes the low-level stereoscopic display support included in VESA BIOS Extension Version 3 (VBE 3.0), the VESA standard stereoscopic interface connector, the GL_STEREO quad-buffer model specified in OpenGL v1.1, and a proposal for a FlipStereo() API extension to the Microsoft DirectX specification.
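For readers unfamiliar with the quad-buffer model, the following minimal sketch shows the essential render loop: each eye is drawn into its own back buffer, and the driver page-flips the pair in sync with the shutter glasses. It uses Python with PyOpenGL/GLUT purely for brevity (the paper itself concerns C-level Windows interfaces); the per-eye drawing routine is a placeholder, and a stereo-capable (GL_STEREO) visual is assumed to be available from the driver.

# Minimal quad-buffered stereo loop (sketch); assumes the driver exposes a
# GL_STEREO-capable visual, as described for stereoscopic-aware boards.
from OpenGL.GL import (glDrawBuffer, glClear, GL_BACK_LEFT, GL_BACK_RIGHT,
                       GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT)
from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutCreateWindow,
                         glutDisplayFunc, glutMainLoop, glutSwapBuffers,
                         GLUT_DOUBLE, GLUT_RGB, GLUT_DEPTH, GLUT_STEREO)

def draw_scene(eye):
    """Placeholder: set the per-eye projection/modelview and issue draw calls."""
    pass

def display():
    glDrawBuffer(GL_BACK_LEFT)                      # left-eye back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    draw_scene('left')
    glDrawBuffer(GL_BACK_RIGHT)                     # right-eye back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    draw_scene('right')
    glutSwapBuffers()                               # both back buffers flip together

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO)
glutCreateWindow(b'quad-buffered stereo')
glutDisplayFunc(display)
glutMainLoop()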
The PC, as the dominant computer platform, is the most exciting market for stereoscopic displays and applications. Several low-cost stereoscopic display systems have been introduced for PCs, including liquid-crystal shutter (LCS) glasses, low-resolution head-mounted displays, and polarized displays with passive polarized glasses. However, each stereoscopic system has its own proprietary driver, and few drivers support Windows. LCDBios, a DOS driver developed by Donald Sawdai, solved the difficult timing problem of accurately synchronizing LCS glasses to the monitor's refresh without degrading computer system performance. More importantly, LCDBios also provided the stereoscopic industry with a de facto standard API for displaying stereoscopic images with any LCS glasses. However, the LCDBios API only supported LCS glasses for DOS applications without hardware graphics acceleration. The Stereoscopic Device Interface (SSDI), developed by the authors, now provides a standard architecture and API for driving any stereoscopic display system under DOS and Windows while taking advantage of hardware graphics acceleration. The SSDI architecture consists of the SSDI core, SSDI rendering platform drivers, and SSDI device drivers specific to the stereoscopic hardware. The SSDI architecture is broad enough to support device driver modules for all current stereoscopic hardware, including extensions for head-tracking. SSDI currently runs under Windows 95/98 and DOS, while Windows NT support is under development.
Although there is considerable interest in implementing stereoscopy on computer systems, there are few broad-based tools for developing these applications. As a result, the application developer is often forced to develop his own low-level approach, or to leave out stereoscopy altogether. The majority of computer-based stereoscopic solutions are very hardware dependent. Specialty solutions, such as Redline from Rendition, Glide (eventually) from 3Dfx, or even interlacing, provide stereoscopy to just a fraction of the potential users. Microsoft Windows, including Windows 95 and Windows NT, has a very large customer base, running on a wide variety of hardware. It is generally understood that Windows provides a `hardware independent' solution to an application developer, in that the developer need not be concerned with the hardware on the system. Instead, the operating system and low-level drivers manage the hardware interface. A stereoscopic toolkit for Windows should accomplish the same goal and not require that a specific set of hardware be used. The following is a description of techniques to provide a hardware-independent stereoscopic application interface.
Several authors have recently investigated the ability to compute intermediate views of a scene using given 2D images from arbitrary camera positions. The methods fall under the topic of image based rendering. In the case we give here, linear morphing between two parallel views of a scene produces intermediate views that would have been produced by parallel movement of a camera. Hence, the technique produces images computed in a way that is consistent with the standard off-axis perspective projection method for computing stereo pairs. Using available commercial 2D morphing software, linear morphing can be used to produce stereo pairs from a single image with bilateral symmetry such as a human face. In our case, the second image is produced by horizontal reflection. We describe morphing and show how it can be used to provide stereo pairs from single images.
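As an illustrative sketch only (not the commercial morphing package the authors used), the scanline warp-and-blend below shows what `linear morphing' means operationally: given a dense horizontal correspondence field between the source image and its horizontal reflection, each intermediate parameter t yields one view of the synthetic parallel camera move. The correspondence field corr is assumed to be supplied (in practice it is derived from the feature pairs specified in the morphing tool), and grayscale images are used for brevity.

import numpy as np

def mirror(image):
    # Second morph endpoint for a bilaterally symmetric subject: the
    # horizontal reflection of the single input image.
    return image[:, ::-1]

def linear_morph(img_a, img_b, corr, t):
    # Intermediate view at parameter t in [0, 1].  corr[y, x] is the assumed
    # dense horizontal offset mapping a pixel of img_a to its match in img_b.
    h, w = img_a.shape
    xs = np.arange(w, dtype=np.float64)
    out = np.empty_like(img_a, dtype=np.float64)
    for y in range(h):
        x_mid = xs + t * corr[y]                                  # warped position at time t
        x_b = np.clip(np.round(xs + corr[y]).astype(int), 0, w - 1)
        blended = (1.0 - t) * img_a[y] + t * img_b[y, x_b]        # cross-dissolve
        order = np.argsort(x_mid)                                 # resample onto the pixel grid
        out[y] = np.interp(xs, x_mid[order], blended[order])
    return out

# A stereo pair from one symmetric image: the original serves as one view and
# a morph part-way toward its mirror serves as the other, e.g.
# left, right = image, linear_morph(image, mirror(image), corr, t=0.1)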
In stereoscopic/multiview video, the reconstruction of intermediate images is needed to assure continuous motion parallax and/or comfortable 3D perception. In this context, we propose a block-based disparity estimation followed by disparity-compensated linear interpolation. We progressively deal with deficiencies of the traditional block matching algorithms. First, we employ a spatial smoothness constraint for disparity to overcome the inherent matching ambiguity in low-texture areas. Secondly, as a measure of matching error we use a robust function instead of the quadratic, which is sensitive to outliers. We also extend the formulation to include color. Finally, we relax the rigidity of the block support for disparities by employing a quadtree block structure (blocks are allowed to split). The proposed algorithm is implemented in a hierarchical coarse-to-fine fashion with a Gaussian pyramid to reduce the computational burden. To correct luminance and color mismatches between images, a 3-component balancing similar to that proposed by MPEG-2's `Multiview Profile' Ad Hoc Group is used. We tested the proposed algorithm on stereoscopic video sequences acquired in natural surroundings by almost parallel cameras. In informal viewing, every feature of the algorithm listed above resulted in clear improvements of the reconstruction quality. Overall, the reconstructed image quality was very good to excellent, depending on the image used.
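A compact sketch of the first two refinements named above (a robust matching error and a spatial smoothness constraint on block matching) is given below; the colour extension, quadtree splitting, and Gaussian-pyramid hierarchy are omitted, and the truncated-quadratic penalty and smoothness weight are illustrative choices rather than the paper's.

import numpy as np

def robust_cost(diff, sigma=10.0):
    # Truncated quadratic: behaves like squared error for small differences
    # but caps the influence of outliers.
    return np.minimum(diff ** 2, sigma ** 2)

def block_disparities(left, right, block=8, max_disp=32, lam=0.1):
    # One horizontal disparity per block of `left`, matched against `right`.
    # The smoothness term pulls each block toward its left neighbour's
    # disparity, which stabilises low-texture areas.
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = left[y0:y0 + block, x0:x0 + block]
            prev = disp[by, bx - 1] if bx > 0 else 0.0
            best_d, best_cost = 0.0, np.inf
            for d in range(0, max_disp + 1):
                if x0 - d < 0:
                    break
                cand = right[y0:y0 + block, x0 - d:x0 - d + block]
                cost = robust_cost(ref - cand).mean() + lam * (d - prev) ** 2
                if cost < best_cost:
                    best_d, best_cost = d, cost
            disp[by, bx] = best_d
    return disp

Disparity-compensated linear interpolation of an intermediate view then shifts pixels by a fraction of these disparities and blends the two predictions.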
We present a new method for converting monoscopic video to stereoscopic video. The key characteristic of the proposed method is that it can handle the non-horizontal camera/object motion present in most image scenes. It is well known that non-horizontal motion produces vertical parallax between the two eyes and, accordingly, visual discomfort. The proposed method is composed of four major steps. First, given a current video frame, we estimate a motion vector for each block by a conventional block matching motion estimation algorithm. The motion vector is composed of horizontal and vertical disparities. Second, the norm of the motion vector is computed for each block; because the norm is used, the vertical disparity is eliminated. Since estimated motion vectors can be unreliable, a low-pass filter is applied to the norms to enhance reliability. Third, each block is shifted in the horizontal direction by the norm of its motion vector, which is thus transformed into binocular parallax. Shifting blocks only horizontally eliminates the effects of the vertical disparity. Finally, all the shifted blocks are synthesized into a new image. A stereoscopic image pair composed of the original image and its associated synthesized image is thereby produced. With proper 3D viewing devices, viewers can perceive 3D depth from the stereoscopic image. Preliminary experiments have demonstrated that stable stereoscopic image pairs can be produced by applying the proposed method to a variety of monoscopic video with non-horizontal camera panning and/or object motion.
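The following sketch condenses steps two through four (the block matching of step one is assumed to have produced one motion vector per block); the scaling gain and smoothing window size are illustrative parameters, not values from the paper, and frame dimensions are assumed to be multiples of the block size.

import numpy as np
from scipy.ndimage import uniform_filter

def synthesize_second_view(frame, mv, block=16, gain=0.5, smooth=3):
    # frame: grayscale current frame; mv[by, bx] = (dy, dx) per block.
    # Step 2: norm of each motion vector (the vertical component loses its
    # direction here), low-pass filtered to suppress unreliable estimates.
    norms = uniform_filter(np.hypot(mv[..., 0], mv[..., 1]), size=smooth)
    h, w = frame.shape
    out = np.zeros_like(frame)
    # Steps 3-4: shift each block purely horizontally by its (scaled) norm,
    # then assemble the shifted blocks into the synthesized image.
    for by in range(norms.shape[0]):
        for bx in range(norms.shape[1]):
            shift = int(round(gain * norms[by, bx]))
            y0, x0 = by * block, bx * block
            x1 = min(x0 + shift, w)
            width = min(block, w - x1)
            if width > 0:
                out[y0:y0 + block, x1:x1 + width] = frame[y0:y0 + block, x0:x0 + width]
    return out

# The stereoscopic pair is then (frame, synthesize_second_view(frame, mv)).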
We describe a new low-level scheme to achieve high definition 3D-stereoscopy within the bandwidth of the monoscopic HDTV infrastructure. Our method uses a studio quality monoscopic high resolution color camera to generate a transmitted `main stream' view, and a flanking 3D-stereoscopic pair of low cost, low resolution monochrome camera `outriggers' to generate a depth map of the scene. The depth map is deeply compressed and transmitted as a low bandwidth `auxiliary stream'. The two streams are recombined at the receiver to generate a 3D-stereoscopic pair of high resolution color views from the perspectives of the original outriggers. Alternately, views from two arbitrary perspectives between (and, to a limited extent, beyond) the low resolution monoscopic camera positions can be synthesized to accommodate individual viewer preferences. We describe our algorithms, and the design and outcome of initial experiments. The experiments begin with three NTSC color images, degrade the outer pair to low resolution monochrome, and compare the results of coding and reconstruction to the originals.
Our goal in this study is to construct a 3D model from a pair of (stereo) images and then project this 3D model to image planes at new locations and orientations. We first compute the disparity map from a pair of stereo images. Although the disparity map may contain defects, we can still calculate the depth map of the entire scene by filling in the missing (occluded) pixels using bilinear interpolation. Going one step further, we synthesize new stereo images at different camera locations using the 3D information obtained from the given stereo pair. The disparity map used to generate depth information is a key step in constructing 3D scenes. Therefore, in this paper we investigate various types of occlusion to help analyze disparity map errors and methods that can provide consistent disparity estimates. An edge-directed Modified Dynamic Programming scheme with Adaptive Window, which significantly improves the disparity map estimates, is thus proposed. Our preliminary simulations show quite promising results.
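A minimal sketch of the hole-filling and depth-recovery steps follows; the 1-D scanline interpolation here is a simplified stand-in for the bilinear filling described above, and the pinhole relation Z = fB/d (focal length f in pixels, baseline B, disparity d) is the standard one for a rectified pair.

import numpy as np

def fill_disparity_holes(disp, valid):
    # Fill occluded/undefined disparities by interpolating along each
    # scanline from the nearest valid estimates; `valid` is a boolean mask
    # of trusted disparity values.
    filled = disp.astype(np.float64).copy()
    cols = np.arange(disp.shape[1])
    for y in range(disp.shape[0]):
        good = valid[y]
        if good.any():
            filled[y] = np.interp(cols, cols[good], disp[y, good])
    return filled

def depth_from_disparity(disp, focal_px, baseline):
    # Rectified pinhole geometry: Z = f * B / d, with d in pixels.
    return focal_px * baseline / np.maximum(disp, 1e-6)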
When rendering or capturing stereoscopic images, two arrangements of the cameras are possible: radial (`toed-in') and parallel. In the radial case all of the cameras' axes pass through a common point; in the parallel case these axes are parallel to one another. The radial configuration causes distortions in the viewed stereoscopic image, manifest as vertical misalignments between parts of the images seen by the viewer's two eyes. The parallel case does not suffer from this distortion, and is thus considered to be the more correct method of capturing stereoscopic imagery. The radial case is, however, simpler to implement than the parallel: standard cameras or renderers can be used with no modification. In the parallel case, special lens arrangements or modified rendering software are required. If a pinhole camera is assumed, it should be readily apparent that the same light rays pass through the pinhole in the same directions whether the camera is aligned radially to or parallel to the other cameras. The difference lies in how these light rays are sampled to produce an image. In the case of a non-pinhole (real) camera, objects in focus should behave as for the pinhole case, while objects out of focus may behave slightly differently. The geometry of both radial and parallel cases is described, and it is shown how a geometrical transform of an image produced in one case can be used to generate the image which would have been produced in the other case. This geometric transform is achieved by a resampling operation, and various resampling algorithms are discussed. The resampling process can result in a degradation in the quality of the image. An indication of the type and severity of this degradation is given.
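Under the pinhole assumption made above, converting between the radial and parallel cases is a pure reprojection: the camera sees the same bundle of rays, only sampled onto a rotated image plane, so the mapping is the homography H = K R K^-1 for intrinsics K and toe-in rotation R. The sketch below applies this with the simplest (nearest-neighbour) resampling kernel, which is also the most degrading of the options discussed; the sign convention for the toe-in angle is illustrative.

import numpy as np

def toein_to_parallel_homography(K, theta):
    # Reprojection for a rotation of `theta` about the vertical axis:
    # H = K R K^-1 maps pixels of the toed-in image onto the image plane
    # the same camera would have had in the parallel configuration.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return K @ R @ np.linalg.inv(K)

def resample(image, H):
    # Backward mapping with nearest-neighbour lookup; bilinear or
    # higher-order kernels reduce the quality loss discussed in the text.
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(image)
    out[ys.ravel()[ok], xs.ravel()[ok]] = image[sy[ok], sx[ok]]
    return out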
Correct perspective is crucial to orthostereoscopy. That is to say, the observer must view from the same points in space, relative to the image, that the stereo camera's lenses had relative to the scene. Errors in placement of the observation points result in distortion of the reconstructed stereo image. Although people adapt easily to visual distortions, they may not do it well enough or quickly enough for critical telepresence and telerobotic applications. Further, it is difficult for humans to reliably determine by sight the correct observation point relative to an image. A mathematical guide to correct perspective is therefore useful. The mathematical key to perspective is that all images must subtend at the eye the same angles which the objects that generated them subtended at the camera. The center of perspective on the object side of a camera is the entrance pupil, but where is the center of perspective on the image side of an asymmetrical lens? A simple formula simply derived answers that question. By way of background, pertinent optics and stereoscopic reconstruction errors, including perspective error, are reviewed in this paper. New work begins in the fourth section.
Research on shooting methods for producing natural 3D images in 3D programs has continued. Toed-in and parallel camera configurations are available for shooting 3D images. The former is usually used because shooting and viewing conditions are simply set to get a desired 3D design. It has been shown, however, that the toed-in camera arrangement brings about an inconsistency between depth information from the perspective of the lenses and that from binocular parallax, which leads to the size distortion known as the Puppet Theater Effect. In contrast, geometrical calculations show that the parallel camera arrangement does not cause such an inconsistency under the specific shooting and viewing conditions called `orthostereoscopic conditions', and that 3D images shot under the orthostereoscopic conditions copy the real space into the reproduced 3D space correctly. The possibilities of representation in 3D images shot with toed-in and parallel camera configurations are studied through 3D program production. Subjective evaluation tests show that 3D images shot under orthostereoscopic conditions duplicate the real space at a certain display size, and that 3D programs shot under orthostereoscopic conditions look more natural than those shot using the toed-in camera configuration at any display size.
Byatt describes the use of a multiple electrode or segmented liquid crystal switchable polarizer in combination with a monitor for a field sequential stereoscopic display. The benefit of this approach is to suppress crosstalk caused by the phosphor afterglow. The multiple-segment Byatt modulator has a noticeable drawback: the segments are visible as individual units. In our paper we describe how to make these segments invisible.
The geometric relationship between the horizontal and vertical shifts of the lens relative to the CCD plane is introduced for the automatic vergence control of a parallel stereo camera. Under the condition that the disparity of the stereo image remains constant, the horizontal shift of the camera lens, which produces stereo disparity, and the vertical shift, which controls focus, are linearly related. With this geometry, a simple auto-focusing algorithm is applied to the stereo camera lens for the vergence control of the parallel stereo camera.
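The abstract does not state the relation explicitly, but under a thin-lens model, and reading the `vertical' shift as the focusing motion of the lens along the optical axis (both assumptions on our part), the claimed linearity can be seen as follows. To keep the converged object at distance $Z$ at constant (zero) disparity, each lens of a rig with baseline $b$ must be shifted laterally by $h = \frac{b}{2}\cdot\frac{v}{Z}$, where $v$ is the lens-to-CCD distance. Combining this with the thin-lens equation,

\[ \frac{1}{v}=\frac{1}{f}-\frac{1}{Z} \;\Rightarrow\; v=f+\Delta,\quad \Delta=\frac{f^{2}}{Z-f}, \qquad h=\frac{b}{2}\cdot\frac{v}{Z}=\frac{b}{2}\cdot\frac{f}{Z-f}=\frac{b}{2f}\,\Delta, \]

so the lateral (vergence) shift $h$ is proportional to the focusing extension $\Delta$ with constant slope $b/2f$, which is consistent with driving vergence directly from a simple auto-focusing algorithm.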
A magnified real image of a 3D object provided by conventional optical means is stretched in depth by a further factor of the lateral magnification. We found that our recently reported sliding-aperture multistereoscopic technique, which projects sequential views onto a scanning mirror, is capable of producing equally magnified 3D images. This capability is based on image depth control that can be used to compensate the perspective distortion caused by the optical magnification. Image formation analysis shows that the relative visual position in depth of a given image point can be exactly corrected in order to get the same scaling in both the depth and lateral directions. The longitudinal positions of other image points will also be corrected, but with some error, depending on their lateral and longitudinal positions. This error results from the difference between the nonlinear components of the perspective distortion and the compensation mechanism. The error magnitude is usually small enough for the image to look equally magnified.
Stereoscopic Display Applications and New Developments
This paper describes the production and presentation of an experimental virtual museum of Japanese Buddhist art. This medium can provide an easy way to introduce a cultural heritage to people of different cultures. The virtual museum consisted of a multimedia program that included stereoscopic 3D movies of Buddhist statues, binaural 3D sounds of Buddhist ceremonies, and the fragrance of incense from the Buddhist temple. The aim was to reproduce both the Buddhist artifacts and atmosphere as realistically as possible.
Scientific visualization brings human perceptual processes to bear in organizing and understanding data about physical phenomena. Information visualization has a similar goal for elements in often semantic domains. Document Explorer is an information visualization and retrieval system that displays 3D associative network representations of document and term relations. The system's networks of documents can be constructed from existing measures of association, e.g., link structure in hypertexts, or derived by the system using content similarity among documents. The system also maintains a network of terms which is available to the user for query formulation. Recently, we have added stereoscopic display of the system's several networks to enhance users' perception of structure. Additionally, users' head movements are tracked and used to change point and angle of view to further enhance structure perception and allow additional user interface mechanisms for navigation in three dimensions. By viewing and interacting with these networks using head-tracked stereoscopic display the user is better able to perceive relationships in the networks and, thus, better able to distinguish clusters of documents and categories of terms during the information retrieval process.
Recently, research using VR (virtual reality) technology has been ongoing in developing remote control systems for working in dangerous or severe environments. However, some problems concerning recognition, operation, and so on remain. In this study, the authors propose a practical remote control method which solves some of these problems. First, the system has more than two stereoscopic cameras, which are used to see the robot's hand, the objects, and so on. The reason is that, with only one field of vision, shadows behind the objects or occlusion by other objects reduce the information available. These views are shown to the operator by switching among them on one display. Secondly, operation using the view coordinate system (the view-oriented operation method) is adopted to solve the problem of operation confusion. With this method, the device coordinate system is made to coincide with the view coordinate system, which helps the operator to recognize the operation direction according to the operator's own will. Thirdly, a work point tracking method is adopted so that the object is not lost from sight when switching views. In experiments, several kinds of tasks were performed to confirm the effectiveness of these methods. Using the above methods, these tasks could be performed faster than before, and moreover, the operator could operate the robot more dexterously. In conclusion, these methods can create a more efficient remote control system.
A new stereoscopic video system (the Q stereoscopic video system), which has high resolution in the central area, has been developed using four video cameras and four video displays. The Q stereoscopic camera system is constructed using two cameras with wide-angle lenses, which are combined as the stereoscopic camera system, and two cameras with narrow-angle lenses, which are combined (using half mirrors) with each of the wide-angle cameras to have the same optical center axis. The Q stereoscopic display system is composed of two large video displays that receive images from the wide-angle stereoscopic cameras, and two smaller displays projecting images from the narrow-angle cameras. With this system, human operators are able to see the stereoscopic images of the smaller displays inserted in the images of the larger displays. Completion times for the pick-up task of a remote controlled robot were shorter when using the Q stereoscopic video system rather than a conventional stereoscopic video system.
The Stereoscopic Haptic Acoustic Real-Time Computer (SHARC) is a multi-sensory computer system which integrates technologies for autostereoscopic display, acoustic sensing and rendering, and haptic interfaces into the same computing environment. This paper describes the system organization and the interface between different sensory components. This paper also discusses our findings from developing and using the SHARC system in application to a virtual environment in terms of interface, speed, and bandwidth issues, together with recommendations for future work in this area.
At the 1997 conference DTI first reported on a low cost, thin, lightweight backlight for LCDs that generates a special illumination pattern to create autostereoscopic 3D images and can switch to conventional diffuse illumination for 2D images. The backlight is thin and efficient enough for use in portable computers and hand-held games, as well as thin desktop displays. The system has been embodied in 5" (13 cm) diagonal backlights for gambling machines, and in the 12.1" (31 cm) diagonal DTI Virtual Window(TM) desktop product. During the past year, DTI has improved the technology considerably, reducing crosstalk, increasing efficiency, improving components for mass production, and developing prototypes that move the 3D viewing zones in response to the observer's head position. The paper will describe the 2D/3D backlights, improvements that have been made to their function, and their embodiments within the latest display products and prototypes.
The stereoscopic display using a curved-directional-reflection screen is a promising approach towards realizing a 3D imaging system that does not require use of special glasses. However, the limited viewing area has been a drawback. In this paper, we describe how the viewing area can be made wider. Our system uses four sets of liquid-crystal projectors that provide four images corresponding to each viewing area. The viewing area is expanded to about three times the width of that of a conventional system using two projectors. Especially if the viewer is sitting in a fixed chair, precise control of the viewer's position is not needed due to this wide viewing area.
We have developed a prototype 3D display system that requires no eyeglasses, which we call the `Rear Cross Lenticular 3D Display' (RCL3D); it is very compact and produces high quality 3D images. The RCL3D consists of an LCD panel, two lenticular lens sheets which run perpendicular to each other, a Checkered Pattern Mask, and a backlight panel. On the LCD panel, a composite image, consisting of alternately arranged horizontal stripes of the right-eye and left-eye images, is displayed. This composite image format is compatible with field-sequential stereoscopic image data. The light from the backlight panel goes through the apertures of the Checkered Pattern Mask, illuminates the horizontal lines of the right-eye and left-eye images on the LCD, and is directed separately to the right-eye and left-eye positions by the two lenticular lens sheets. With this principle, the RCL3D shows a 3D image to an observer without any eyeglasses. We simulated the viewing zone of the RCL3D using random ray tracing and found that the illuminated areas for the right eye and left eye are clearly separated as a series of alternating vertical stripes. We will present the prototype of the RCL3D (14.5", XGA) and the simulation results.
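For clarity, the composite image referred to above is a simple row interleave of the two eye images, which is why field-sequential source material maps onto it directly. A minimal sketch follows; the assignment of eyes to even/odd rows is an assumption here.

import numpy as np

def row_interleaved_composite(left, right, right_on_even_rows=True):
    # Alternate horizontal lines from the right-eye and left-eye images,
    # matching the striped composite displayed on the RCL3D's LCD panel.
    assert left.shape == right.shape
    comp = np.empty_like(left)
    first, second = (right, left) if right_on_even_rows else (left, right)
    comp[0::2] = first[0::2]
    comp[1::2] = second[1::2]
    return comp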
A tracking based autostereoscopic 3D Display (D4D) has been developed at Dresden University of Technology. It employs a Flat Panel Display (FPD) and is designed for single users. Both stereoscopic half images are displayed simultaneously in alternating FPD columns. The separation of both half images to the viewer's eyes is accomplished using a prism mask in front of the FPD. The tracking is performed in two ways. Firstly, the separation mask is shifted mechanically against the stereoscopic image according to the viewer's position. Secondly, the stereoscopic image is shifted electronically. The D4D is extremely flat. Regular flat panel displays or even notebooks can be made autostereoscopic using the D4D technology Add-Ons. The development of the D4D has been funded by the Saxonian State Ministry of Economy and Labor.
Parallax-barrier panoramagrams (PPs) can present high-quality autostereoscopic images viewable from different perspectives. The limiting factor in constructing PP computer displays is the display resolution. First, we suggest a new PP display based on time multiplexing in addition to the usual space multiplexing; the barriers move horizontally in front of the display plane. The time multiplexing increases the horizontal resolution. In addition, it permits us to use wider barriers than are acceptable for static displays. We then analyze these displays, showing that wide-barrier PPs have advantages relating to depth resolution and smoothness, and we present a novel algorithm for rendering the images on a computer.
A multiview 3D imaging system utilizing a holographic screen is described. The system consists of a horizontal array of 8 cameras, a signal converter, a full color beam projector, and a full color holographic screen. The camera array is composed of 8 CCD cameras of VGA image quality. The cameras are aligned in an arc with a radius of curvature of 3 m. The separation between cameras is 3.5 cm. The signal converter converts the 8 parallel image data streams from the camera array into a time-multiplexed image signal train for each of the 3 TV primary colors, and controls the system. Image sampling from each camera of the array is done at the field rate, as in conventional TV. The total is 480 image fields per second. The beam projector consists of 3 CRTs for the 3 primary colors, a large aperture objective, and an 8-strip LCD shutter for each CRT. The projector extracts the 8 view images from the time-multiplexed image signal train and projects them onto the holographic screen through the 8-strip LCD shutter located in front of each projector objective lens. The switching of each strip in the shutter is synchronized with the camera sampling sequence by the signal converter. The holographic screen has the property of a spherical mirror and can create 6 viewing zones. This system can also display PC-animated images. The images displayed on the screen are impressive.
In comparison to conventional displays, 3D stereoscopic displays convey additional information about the 3D structure of a scene by providing information that can be used to extract depth. In the present study we evaluated the psychovisual impact of stereoscopic images on viewers. Thirty-three non-expert viewers rated sensation of depth, perceived sharpness, subjective image quality, and relative preference for stereoscopic over non-stereoscopic images. Rating methods were based on procedures described in ITU-R Rec. 500. Viewers also rated sequences in which the left- and right-eye images were processed independently, using a generic MPEG-2 codec, at bit-rates of 6, 3, and 1 Mbits/s. The main finding was that viewers preferred the stereoscopic version over the non-stereoscopic version of the sequences, provided that the sequence did not contain noticeable stereo artifacts, such as exaggerated disparity. Perceived depth was rated greater for stereoscopic than for non-stereoscopic sequences, and perceived sharpness of stereoscopic sequences was rated the same or lower compared to non-stereoscopic sequences. Subjective image quality was influenced primarily by apparent sharpness of the video sequences, and less so by perceived depth.
In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.
Does the addition of stereoscopic depth aid steering--the perceptual control of locomotor heading--around an environment? This is a critical question when designing a tele-operation or Virtual Environment system, with implications for computational resources and visual comfort. We examined the role of stereoscopic depth in the perceptual control of heading by employing an active steering task. Three conditions were tested: stereoscopic depth, incorrect stereoscopic depth, and no stereoscopic depth. Results suggest that stereoscopic depth does not improve performance in a visual control task. A further set of experiments examined the importance of a ground plane. As a ground plane is a common feature of all natural environments and provides a pictorial depth cue, it has been suggested that the visual system may be especially attuned to exploit its presence. Thus it would be predicted that a ground plane would aid judgments of locomotor heading. Results suggest that the presence of rich motion information in the lower visual field produces significant performance advantages and that provision of such information may prove a better target for system resources than stereoscopic depth. These findings have practical consequences for a system designer and also challenge previous theoretical and psychophysical perceptual research.
The Ray-Space method proposed by us is one of the key tools which can be applied to wide areas of 3D image processing and handling. It will contribute to the creation of 3D spatial communication and virtual societies. The Ray-Space is defined as the intensity function F(P), where P represents 4D ray parameters. In this paper, we present a novel Ray-Space coding scheme using an arbitrary-shaped DCT. This scheme consists of the following two steps. The Ray-Space data is first segmented into quadrilateral-shaped primitives. Then, the texture data in each primitive is coded by the arbitrary-shaped DCT. In the experiment, the coding performance of the proposed scheme was examined. The results show that the proposed scheme achieves better coding performance than the block-based Ray-Space coder.
Conventional 3D-TV codecs processing one down-compatible (either left or right) channel may optionally include the extraction of the disparity field associated with the stereo pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo pairs to reduce the redundancies in both channels, according to their visual sensitivity. Through an a priori disparity field analysis, our coding scheme separates a region of interest from the foreground/background in the reproduced volume space in order to code them selectively based on their visual relevance. The region of interest is here identified as the region on which the shooting device is focused. By suitably scaling the DCT coefficients in such a way that precision is reduced for image blocks lying on less relevant areas, our approach aims at reducing the signal energy in the background/foreground patterns, while retaining finer details in the more relevant image portions. From an implementation point of view, it is worth noticing that the proposed system keeps its surplus processing power on the encoder side only. Simulation results show improvements such as better image quality for a given transmission bit rate, or a graceful quality degradation of the reconstructed images with decreasing data rates.
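The coefficient-scaling idea can be illustrated with the small sketch below, in which blocks flagged as lying outside the focused region of interest are quantized with a coarser step; the two step sizes and the 8x8 block size are illustrative, not the values used in the paper.

import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, in_roi, q_fine=4.0, q_coarse=16.0):
    # Quantize the block's DCT coefficients with a step that depends on
    # whether the block belongs to the focused region of interest; coarser
    # steps lower the signal energy spent on foreground/background blocks.
    q = q_fine if in_roi else q_coarse
    coeffs = dctn(block.astype(np.float64), norm='ortho')
    quantized = np.round(coeffs / q)
    return idctn(quantized * q, norm='ortho')   # reconstructed block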
A digital 3D broadcasting system which uses a 525-line progressively scanned digital broadcasting system was developed. Up to now, extensive research has been conducted on 3D systems themselves, but little research has been carried out on the actual broadcasting of 3D signals. One reason has been the inherent difficulty of sending multiple channels using conventional terrestrial analog broadcasting technology. We have recently developed a vertical multiplex method which converts a stereoscopic image consisting of two 525-line interlaced scan video signals (525i) into one 525-line progressive scan video signal (525p). This converted signal is then transmitted using a 525p digital broadcasting system. This 3D system has several merits: (1) it is possible to broadcast 3D video signals using simple equipment attached to a 525p broadcasting system; (2) left- and right-eye information is received at the same time, and no synchronization process is necessary. In order to test this 3D system, we carried out experiments using a 525p broadcasting system which was constructed for satellite transmission experimentation the year before last. Through these experiments, operating at a bit rate of 9 Mbps, we have confirmed that the 3D system we have constructed works without any loss of quality of the 3D effect. We expect this 3D broadcasting system to be one of the applications of our 525p experimental broadcasting.
On Independence Day 1997 Mars Pathfinder bounced to a stop on the Ares-Tiu Vallis floodplain and began returning the first pictures from the surface of Mars in 20 years. The IMP camera took panoramas first in a stowed position (about seated height) and, after day 2, from a deployed height of 1.85 m (about standing height). The eye separation of the stereoscopic camera (15 cm) allowed a humanistic view of the surrounding terrain. Months of calibration paid off in producing color images with the 5 visible filters; color was extended into the near IR with 10 additional filters, three of which were doubled on each eye for stereo views. Because of low data rates from the direct transmission to Earth, the resolution was limited to 1 mrad/pixel and the FOV was fixed at 14 degrees square. The pointing motors allowed the camera to point in any direction, and a complete panorama required 120 images per color. The mission lasted 83 sols (martian days of 24 hours and 39 minutes) and returned over 16,000 image frames. The science goals included contour mapping the site to study the geomorphology and multispectral imaging to sort out the mineralogy of the rocks and soils. In addition, the camera was used to help guide the Sojourner rover using virtual reality visualization techniques.
This paper presents the development of a Chaotic Environment Engine for the multiplexed management of Dynamic Virtual World Heritage Environments. The Chaotic Environment Engine enables immersive, networked environments to be complex, allowing real-time processing of relationships between (1) many-to-many users, (2) users, artificial life and data storage, (3) virtual environmental hydrology, (4) many-to-many environments, and (5) the aggregate relationship between users, artificial life and environments. The Chaotic Environment Engine is similar to a multiplexing system, yet extended to process communication protocols between humans, artificial life, machines and environments. This research is currently a work in progress and is the first stage of a larger, international project to develop an interactive, Virtual World Heritage Network using advanced virtual reality and multimedia technologies.
This paper describes the LEGOWORLD project, which is a multi-sensory virtual prototyping environment. The environment provides a multi-sensory dynamic rendering of a set of virtual LEGO(TM) blocks. The environment is rendered in real time on a multi-sensory computing platform, with integrated 3D-visual, audio, and haptic components. The environment is fully real-time interactive, so that the blocks can be manipulated to assemble virtual LEGO(TM) models.
Using Java as the implementation language and the Netscape Communicator package, a client/server environment is established to allow requests from client stations to download the selected virtual environment to be run on the client. Various security measures, such as certification, are included in the environment to ensure proper transfer of files and data packets. Once the operations in the downloaded virtual environment are completed, the environment automatically cleans up the experiment site of the client. This paper discusses the results on our experiments using this client/server environment and the experiences we had in implementing this environment.
Awareness of the viewer's gaze position in a virtual environment can lead to significant savings in scene processing if fine detail information is presented `just in time' only at locations corresponding to the participant's gaze, i.e., in a gaze-contingent manner. This paper describes the evolution of a gaze-contingent video display system, `gcv'. Gcv is a multithreaded, real-time program which displays digital video and simultaneously tracks a subject's eye movements. Treating the eye tracker as an ordinary positional sensor, gcv's architecture shares many similarities with contemporary virtual environment system designs. Performance of the present system is evaluated in terms of (1) eye tracker sampling latency and video transfer rates, and (2) measured eye tracker accuracy and slippage. The programming strategies developed for incorporating the viewer's point-of-regard are independent of proprietary eye tracking equipment and are applicable to general gaze-contingent virtual environment designs.
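As a toy illustration of the `just in time' detail idea (this is not gcv's actual pipeline, and the window radius and sub-sampling factor are arbitrary), a gaze-contingent frame can be composed by pasting a full-resolution window around the tracked point of regard onto a coarse rendition of the rest of the frame:

import numpy as np

def gaze_contingent_frame(frame, gaze_xy, radius=64, factor=4):
    # Coarse background: sub-sample the frame and replicate pixels back up.
    h, w = frame.shape[:2]
    coarse = frame[::factor, ::factor].repeat(factor, axis=0).repeat(factor, axis=1)[:h, :w]
    out = coarse.copy()
    gx, gy = gaze_xy
    y0, y1 = max(0, gy - radius), min(h, gy + radius)
    x0, x1 = max(0, gx - radius), min(w, gx + radius)
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]     # full detail only around the gaze
    return out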
Identifying appropriate roles for the components of advanced interfaces is a significant research challenge. We expect head movements to assume their natural role in controlling viewpoint, and we are investigating the use of head tracking to provide perspective control. We apply this to a task of adjusting the viewpoint to detect a target at the bottom of a virtual cylinder. The cylinder varies in diameter, height and orientation. We record viewpoint trajectories and elapsed times. Observers participate in one of two conditions: in the Head-as-Head condition, viewpoint changes correspond to observing a real scene; in the Head-as-Hand condition, rotational directions are reversed, simulating manipulation of an object. To evaluate initial learning and consolidation effects there are two sessions of massed trials, two days apart. The results show a rapid learning effect, and solid retention over the two-day interval. Performance levels are similar for the two opposite mappings, indicating flexibility in the use of head-controlled viewpoint. We continue to apply the paradigm to other questions about head-controlled viewpoint manipulation, such as establishing a boundary between small movements producing natural parallax changes, versus extended movements involving large scale viewpoint changes.
We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch-sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and the interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.
To realize and integrate various kinds of media information with the least data, a new hierarchical software architecture has been developed. Aiming at easier manipulation, this system is based on a model-driven method. Four kinds of generic models (data, object, role, and process models) are employed in this system. These models have hierarchical interfaces from the data layer to the process layer. In the case of the data model, attribute values of data are defined in template forms; if necessary, several constraints are attached to them. In the case of the object model, every object is defined by `formal' and `feature' structures. Formal structures are defined by our object network, which is composed of noun and verb objects. Feature structures are mainly composed of a set of properties, which are described by constraints. For the role model, schemes of various levels of coordination relating multiple roles are represented to satisfy their intentions. These structures are defined by generic goals and constraints. The process model is designed so that all roles are executed concurrently in order to satisfy their interactive intentions under cooperative or competitive conditions. Integrated results of various media can be provided by using our Extensible WELL (Window-based Elaboration Language) system.
Shared virtual worlds are one of today's major research topics. While limited to particular application areas and high-speed networks in the past, they are becoming available to an ever larger number of users. One reason for this development was the introduction of VRML (the Virtual Reality Modeling Language), which has been established as a standard for the exchange of 3D worlds on the Internet. Although a number of prototype systems have been developed to realize shared multi-user worlds based on VRML, no suitable network protocol to support the demands of such environments has yet been established. In this paper we introduce our approach to a network protocol for shared virtual environments: DWTP, the Distributed Worlds Transfer and communication Protocol. We show how DWTP meets the demands of shared virtual environments on the Internet. We further present SmallView, our prototype of a distributed multi-user VR system, to show how DWTP can be used to realize shared worlds.
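The abstract does not give DWTP's message format, but the kind of state a shared-world protocol must transport can be illustrated with a hypothetical update record such as the one below; every field is an assumption for illustration only, not part of the DWTP specification.

```cpp
// Hypothetical sketch of the kind of shared-world update a protocol like
// DWTP must carry. The fields are assumptions for illustration; the actual
// DWTP message layout is not described in the abstract.
#include <cstdint>
#include <cstdio>
#include <string>

struct WorldUpdate {
    uint32_t    worldId;       // which shared VRML world the update belongs to
    uint32_t    nodeId;        // scene-graph node being changed
    std::string fieldName;     // VRML field, e.g. "translation"
    float       value[3];      // new field value
    bool        reliable;      // whether delivery must be guaranteed
};

int main() {
    WorldUpdate u{ 1, 42, "translation", { 0.0f, 1.5f, -2.0f }, false };
    std::printf("world %u node %u %s -> (%.1f, %.1f, %.1f)\n",
                u.worldId, u.nodeId, u.fieldName.c_str(),
                u.value[0], u.value[1], u.value[2]);
}
```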
A panorama video server system has been developed. This system produces a continuous panoramic view of the entire surrounding area in real time and allows multiple users to select and view visual fields independently. A significant feature of the system is that each user can select the visual field he or she wants to see, all at the same time. This new system is composed of video cameras, video signal conversion units, video buses, and visual field selection units. It can be equipped with up to 24 video cameras. The most appropriate camera arrangement can be decided by considering both the objects to be captured and the viewing angle of the cameras. The visual field selection unit picks up the required image data from the video buses, on which all of the video data is provided. The number of users who can access the system simultaneously depends only on the number of visual field selection units. To smoothly connect two images captured by different cameras, a luminance-compensating function and a geometry-compensating function are included. This system has many interesting applications, such as the distribution of scenery and sports video and remote monitoring.
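One way to picture the visual field selection step is as a mapping from a requested viewing direction to a camera in the ring and an offset within that camera's image. The sketch below assumes a 24-camera ring with equal angular coverage and a fixed image width; the actual hardware mapping is not described in the abstract.

```cpp
// Minimal sketch of how a visual-field selection unit might map a requested
// viewing direction onto one of the ring of cameras and a pixel offset in
// that camera's image. The camera count, field of view, and image width are
// assumptions for illustration.
#include <cmath>
#include <cstdio>

constexpr int    kCameras      = 24;     // maximum supported by the system
constexpr double kCameraFovDeg = 360.0 / kCameras;
constexpr int    kImageWidth   = 640;    // pixels per camera image (assumed)

struct FieldSelection { int camera; int pixelOffset; };

FieldSelection selectField(double viewAngleDeg) {
    double a = std::fmod(std::fmod(viewAngleDeg, 360.0) + 360.0, 360.0);
    int camera = static_cast<int>(a / kCameraFovDeg);
    double withinCamera = a - camera * kCameraFovDeg;       // degrees into that camera
    int offset = static_cast<int>(withinCamera / kCameraFovDeg * kImageWidth);
    return { camera, offset };
}

int main() {
    FieldSelection f = selectField(95.0);
    std::printf("camera %d, pixel offset %d\n", f.camera, f.pixelOffset);
}
```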
This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in fields such as advanced animation, virtual reality, and games. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes to augment the realism that computer graphics alone lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than computer graphics alone. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of depth followed by the construction of a height map; graphic objects are then combined with the height map. Our approach is realized in the following steps: (1) we derive 3D structure from test image sequences by estimating depth and constructing a height map; due to the contents of the test sequences, the height map represents the 3D structure; (2) the height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped; (3) finally, graphic objects are combined with the height map; because the 3D structure of the height map is already known, this last step is straightforward. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world, whose image is rendered on the display monitor.
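The height-map step can be sketched as follows: an estimated depth image is converted into a height grid, and a synthetic object is then placed at the height sampled from a known cell. The grid size and the depth-to-height conversion below are assumptions for illustration only, not the paper's actual formulation.

```cpp
// Minimal sketch of the height-map step: a per-pixel depth estimate is
// converted into a height grid, and a graphic object is placed on the grid
// at a known (x, z) cell. Grid size and the depth-to-height conversion are
// assumptions for illustration.
#include <cstdio>
#include <vector>

struct HeightMap {
    int width, depth;
    std::vector<float> h;                              // height per grid cell
    float at(int x, int z) const { return h[z * width + x]; }
};

// Build a height map from an estimated depth image (one float per pixel).
HeightMap buildHeightMap(const std::vector<float>& depthImage, int w, int d) {
    HeightMap map{ w, d, std::vector<float>(w * d) };
    for (int i = 0; i < w * d; ++i)
        map.h[i] = 1.0f / (depthImage[i] + 1e-3f);     // assumed: nearer means higher
    return map;
}

int main() {
    std::vector<float> depth(16 * 16, 2.0f);           // flat synthetic depth image
    HeightMap map = buildHeightMap(depth, 16, 16);
    // Place a synthetic object so it rests on the terrain at cell (8, 8).
    float objectY = map.at(8, 8);
    std::printf("object placed at height %.3f\n", objectY);
}
```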
A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application can run in stealth mode or as a player in exercises that include battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings, and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, providing fast, high-quality graphics. A complete PC system can be configured for under $2,500. The software runs under Windows 95 and Windows NT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare is paired with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. The result is a low-cost PC DIS simulator that can take part in real-time collaborative simulation with other platforms.
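A per-frame update of remote entities might look like the sketch below, which linearly extrapolates each entity's last received position and velocity between network updates so the 15 to 20 frame-per-second render loop stays smooth; DIS PDU parsing and RenderWare calls are omitted, and the structures and function names are hypothetical rather than the application's actual code.

```cpp
// Minimal sketch of a per-frame entity update under assumptions: remote
// entities arrive as position/velocity states and are extrapolated between
// network updates. Network parsing and rendering are omitted.
#include <cstdio>
#include <vector>

struct Entity {
    float pos[3];
    float vel[3];      // metres per second from the last received update
};

// Advance every remote entity by simple linear extrapolation.
void extrapolate(std::vector<Entity>& entities, float dtSeconds) {
    for (Entity& e : entities)
        for (int i = 0; i < 3; ++i)
            e.pos[i] += e.vel[i] * dtSeconds;
}

int main() {
    std::vector<Entity> entities{ { {0, 0, 0}, {10, 0, 0} } };
    const float frameDt = 1.0f / 20.0f;               // target 20 frames per second
    for (int frame = 0; frame < 5; ++frame) {
        extrapolate(entities, frameDt);               // then hand positions to the renderer
    }
    std::printf("entity x after 5 frames: %.2f m\n", entities[0].pos[0]);
}
```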
While the potential of Virtual Environments (VEs) for training simulators has been recognized since the emergence of the technology, to date most VE systems that claim to be training simulators have been developed in an ad hoc fashion. Based on requirements of the Royal Netherlands Army and Air Force, we have recently developed VE-based training simulators following basic systems engineering practice. This paper reports on our approach in general, and specifically focuses on two examples. The first is a distributed VE system for training Forward Air Controllers (FACs). This system comprises an immersive VE for the FAC trainee, as well as a number of other components, all interconnected in a network infrastructure utilizing the DIS/HLA standard protocols for distributed simulation. The prototype VE FAC simulator is currently being used in the training program of the Netherlands Integrated Air/Ground Operations School, and feedback from the users is being collected as input for a follow-on development activity. A second development is aimed at the evaluation of VE technology for training gunnery procedures with the Stinger man-portable air-defense system. In this project, a system is being developed that enables us to evaluate a number of different configurations with respect to both human and system performance characteristics.
This paper describes a prototype system being developed that uses GPS (Global Positioning System) as a tracker to combine real images with virtual geographical images in real time. To cover long distances, the system is built in a monitor-based configuration and is divided into two parts. One is the real-scene acquisition system, which includes a vehicle, a wireless CCD camera, a GPS attitude determination device, and a wireless data communication device. The other is the processing and visualization system, which includes a wireless data communication device and a PC with a video overlay card and a 3D graphics accelerator. The pilot area of the current system is part of SERI (Systems Engineering Research Institute), the institute where we work. Virtual objects are generated from 3D modeling data of SERI's main building, a planned new building, and other structures. The wireless CCD camera attached to the vehicle acquires the real scenes, and the GPS attitude determination device produces the camera's position and orientation data. This information is then transmitted over the air to the processing and visualization system, where virtual images are rendered using the received information and combined with the real scenes. Applications include an enhanced bird's-eye view and disaster rescue work, such as earthquake response.
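The overlay step amounts to turning the received GPS position and heading into a camera transform for rendering the virtual buildings over the live video. The sketch below assumes a flat local coordinate frame and a yaw-only orientation; the prototype's actual computation is not given in the abstract.

```cpp
// Minimal sketch of deriving a virtual-camera view from a received GPS
// position and heading. The local frame, axis convention, and yaw-only
// orientation are assumptions for illustration.
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

struct CameraState {
    double east, north, up;     // position in a local frame (metres)
    double headingDeg;          // yaw from GPS attitude determination
};

struct ViewParams { double eyeX, eyeY, eyeZ, lookDirX, lookDirZ; };

ViewParams viewFromGps(const CameraState& c) {
    double h = c.headingDeg * kPi / 180.0;
    // Eye position and look direction in an assumed east/up/-north frame.
    return { c.east, c.up, -c.north, std::sin(h), -std::cos(h) };
}

int main() {
    CameraState cam{ 120.0, 45.0, 1.8, 90.0 };
    ViewParams v = viewFromGps(cam);
    std::printf("eye=(%.1f, %.1f, %.1f) lookDir=(%.2f, %.2f)\n",
                v.eyeX, v.eyeY, v.eyeZ, v.lookDirX, v.lookDirZ);
}
```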
The Virtual Explorer project at the University of California, San Diego, is creating immersive, highly interactive virtual environments for scientific visualization and education. We are creating an integrated model system to demonstrate the potential applications of VR in the educational arena, and are also developing a modular software framework for the further development of the Virtual Explorer model in other fields.
This individual combatant simulator (ICS) provides ground force leaders opportunities to practice tactical skills on the simulated battlefield by directing dismounted computer-generated forces in combatant and non-combatant exercises. Integrated hardware and software systems allow leaders to operate on the simulated battlefield as they would on the physical battlefield, using combinations of voice commands, arm signals, virtual tools, and virtual weapons. Hardware components include an image generator, a head-mounted display, 3D sound, spatial tracking, an instrumented glove, synthesized speech, and voice recognition systems. The training simulator can be operated on a network. Four types of evaluations, including performance of authentic tasks and subjective evaluations, were conducted with dismounted infantry soldiers and university students as participants. The results indicated that the ICS was easy to learn and use, could be used to conduct training exercises, supported skillful performance in those exercises, and was engaging and compelling for the users. These initial evaluations thus point to ease of learning and use of the simulator, as well as its potential for training effectiveness.