1. Introduction

Head-mounted displays (HMDs) with see-through vision, which superimpose a variety of information and images on the real-world view, are being developed with the advent of augmented reality (AR). Some HMDs show only two-dimensional (2-D) images, but some can also display three-dimensional (3-D) visualizations, which create a scene with depth. A conventional 3-D HMD is based on a stereoscopic imaging technique that shows slightly different images to each eye. However, the 3-D image is optically represented at the depth of the screen, so the focus cue (accommodation stimulus) is not equivalent to the perceived depth. The difference in depths causes asynchronous stimuli between the movement of the eyes (vergence) and accommodation, resulting in the so-called vergence-accommodation conflict. This conflict causes eyestrain and fatigue for viewers of such 3-D images.1 Holography is known as an ideal 3-D display technology that satisfies all human physiological requirements, enabling recognition of objects in 3-D without this conflict. Holography is a technology for recording and reconstructing 3-D images using the diffraction and interference of two light waves: the object light propagated from the object and the reference light propagated from a high-coherence light source. The interference makes a pattern on a hologram that we call a "fringe pattern." In electroholography, the fringe pattern is displayed on an electronic device such as a spatial light modulator (SLM); the SLM reconstructs the light wave of 3-D objects when it is illuminated by reconstruction light whose wavelength and position are the same as those of the reference light. Fringe patterns that represent virtual objects can be generated by computer simulation and are called computer-generated holograms (CGHs).
Many reports have studied desktop-type displays that enable several observers to view 3-D images simultaneously.2,3 However, these displays are disadvantageous due to the large size of the optical system and the narrow visual fields; a trade-off relationship exists between the visual field angle and the viewing zone angle. Holographic HMDs for personal use have also been studied.4–8 Because HMDs require only a narrow viewing zone, they can have larger visual fields than desktop-type displays. The first holographic HMD was proposed by Takemori,5 and it had a practically small size and a wide visual field. An HMD for both eyes requires adjustments that generate vergence stimuli matched to individual interpupillary distances (PDs). However, the Takemori HMD system did not consider the synchronicity of accommodation and vergence stimuli because the individual PD was not taken into account, so the reconstructed images could not be matched to human vision correctly. Development of holographic HMDs is in progress, but at present, there is no HMD arrangement that generates accurate vergence and accommodation stimuli for specific observers. This paper proposes a holographic HMD with accurate vergence and accommodation stimuli that provides full-color images and a see-through optical system for AR representation. Moreover, we have clarified that the HMD has correct characteristics of accommodation and vergence without conflict. We discuss the compact and lightweight optical system for the HMD in Sec. 2 and the calculation method for the fringe patterns of the optical system in Sec. 3, including the generation methods of accommodation and vergence stimuli. Then, in Sec. 4, we describe the structures and devices that make up the HMD system. Finally, the experimental results with our HMD are presented in Sec. 5.
2. Optical System

The optical system for an HMD needs to be compact and lightweight. To satisfy these requirements, we adopted Fourier transform optical systems (FTOS) and a field sequential color method. Although the HMD system is binocular and uses two optical systems of the same structure, we explain a single optical system in this section.

2.1. Fourier Transform Optical Systems

In electroholography, the pixel density of the display device for a hologram determines the visual field and the viewing zone, which indicate the maximum displayable size and the maximum area from which viewers can see, respectively. Figure 1(a) outlines an ordinary arrangement of electroholography, where a hologram is illuminated by parallel light. The light is diffracted by the hologram, and the visual field is displayed across the hologram plane. In Fig. 1(a), the visual field angle $\theta$ is given by

$$\theta = 2\sin^{-1}\left(\frac{\lambda}{2p}\right), \tag{1}$$

where $\lambda$ is the wavelength of the light and $p$ is the pixel pitch of the display device. Equation (1) indicates that $\theta$ depends on the pixel pitch $p$, and this factor makes it difficult to expand the viewing angle of usual holographic displays unless SLMs with higher resolution are used. To improve this, we adopted a reconstruction method based on an FTOS. The FTOS consists of an SLM, a lens, and a point light source. This simple structure has a great advantage for developing small holographic displays. When the SLM is reflective, the structure is represented as in Fig. 1(b). The point light source is arranged at the focal point of the lens, so the emitted light from the point light source is reflected by the SLM and converges at the focal point of the lens. The fringe pattern on the hologram used in the FTOS is different from that on an ordinary hologram, as described in Sec. 3. The light diffracted by the hologram reconstructs images around the focal point of the lens. All the light reconstructing images passes through a specified area in front of the SLM. This area is called a viewing window.
The width $w$ of the viewing window is given as

$$w = \frac{\lambda f}{p}, \tag{2}$$

where $f$ is the focal length of the lens. When the viewpoint is inside the viewing window, an observer can see the entire image. This window represents the viewing zone of the FTOS, which is narrower than that of the optical system shown in Fig. 1(a). The visual field is enlarged as follows:

$$\theta = 2\tan^{-1}\left(\frac{D}{2f}\right), \tag{3}$$

where $D$ is the hologram size. Equation (3) suggests that the visual field of the FTOS is larger than that of the arrangement in Fig. 1(a). So, when a large display and a lens with a short focal length are used, the visual field is expanded. Unexpected light such as zeroth-order light and ghost images converges at the focal point of the lens and is easily removed by arranging a barrier in front of the viewpoint. Therefore, the FTOS can effectively enlarge the visual field in a small device.

2.2. Colorization Method

There are two colorization methods in electroholography. One is a method in which images of the primary colors are spatially overlapped to reconstruct full-color images.3,9,10 This method has the advantage of being able to use SLMs with a low refresh rate. However, the size of the optical system tends to be large because optical components such as lenses, SLMs, and point light sources are necessary for each of the primary colors. The other colorization method is the field sequential color method,11,12 in which images of the primary colors are successively reconstructed in the same position. Although an SLM with a high refresh rate is needed to reconstruct full-color images without flickering, this method has the advantage of a small reconstruction unit because it uses only one SLM to display the holograms for all the primary colors. For the HMD we developed, the field sequential color method was implemented for colorizing the reconstructed images.
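The geometric relations above can be checked numerically. The following sketch uses illustrative parameters that we assume for the example (pixel pitch, hologram size, and focal length are not the values from Table 1):

```python
import math

# Illustrative parameters (assumed, not the paper's): an SLM with
# 3.74-um pixel pitch and 24-mm active width, a 100-mm lens, and
# green light at 525 nm.
wavelength = 525e-9    # m
pitch = 3.74e-6        # m
hologram_size = 24e-3  # m
f = 100e-3             # m

# Visual-field angle of the ordinary arrangement, limited by the
# pixel pitch: theta = 2*asin(lambda / 2p).
theta_ordinary = 2 * math.asin(wavelength / (2 * pitch))

# Width of the viewing window of the FTOS: w = lambda*f / p.
w = wavelength * f / pitch

# Visual-field angle of the FTOS, set by the hologram size and the
# focal length instead of the pixel pitch: theta = 2*atan(D / 2f).
theta_ftos = 2 * math.atan(hologram_size / (2 * f))

print(f"ordinary visual field: {math.degrees(theta_ordinary):.2f} deg")
print(f"viewing window width : {w * 1e3:.2f} mm")
print(f"FTOS visual field    : {math.degrees(theta_ftos):.2f} deg")
```

With these assumed numbers, the FTOS roughly doubles the visual field of the pitch-limited arrangement while confining the viewing zone to a window of about 14 mm, which illustrates the trade-off the FTOS exploits.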
With the field sequential color method, the calculated hologram of each primary color needs to be displayed only while the reconstruction light of that color is on. The successively reconstructed images of the primary colors are integrated into a full-color image by human vision. Sequential images with a frequency of more than 60 Hz allow observation of full-color images without flickering. A 180-Hz SLM was used in our experiments to display the holograms, keeping the frequency for displaying the sets of the three colors at 60 Hz.

2.3. Resolution of Reconstructed Image

The resolution of the images reconstructed by a hologram needs to be higher than that of the human visual system because insufficient resolution creates blur in the reconstructed images. In the FTOS, the size of the viewing window and the aperture size of the light source for the reconstruction light determine the resolution. When the size of the viewing window determined by Eq. (2) is less than that of the pupil, the hologram cannot provide light to the whole aperture area of the pupil, so the resolution is less than that of human vision. Under this condition, the resolution of the reconstructed images is

$$\delta_w = \frac{\lambda L}{w} = \frac{pL}{f}, \tag{4}$$

where $L$ is the distance between the viewpoint and an object. To obtain a higher resolution, a high-resolution SLM is necessary. In the theory above, the reconstruction light is assumed to be an ideal point light, but an actual reconstruction light has a finite aperture size $a$. The aperture size of the reconstruction light limits the resolution as follows:

$$\delta_a = \frac{aL}{f}. \tag{5}$$

The lower of the two resolutions determines the resolution of the system. Our system uses RGB LEDs as reconstruction lights, whose spectral bandwidths are broad, so the resolution of a reconstructed image is also affected by the bandwidth.
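The two resolution limits can be compared with a short calculation. The parameters below are assumptions for illustration (the 0.2-mm source aperture matches the fiber aperture described in Sec. 4.2; the other values are not from Table 1):

```python
import math

# Assumed illustrative values for the two resolution limits.
wavelength = 525e-9  # m, center wavelength
pitch = 3.74e-6      # m, SLM pixel pitch
f = 100e-3           # m, focal length of the lens
a = 0.2e-3           # m, aperture size of the reconstruction light
L = 0.5              # m, distance between viewpoint and object

w = wavelength * f / pitch         # viewing-window width

# Diffraction-limited resolution when w is smaller than the pupil:
# delta_w = lambda*L / w, equivalently pitch*L / f.
delta_window = wavelength * L / w

# Blur caused by the finite aperture of the light source:
# delta_a = a*L / f.
delta_source = a * L / f

# The lower (worse) of the two limits the system.
delta = max(delta_window, delta_source)
print(f"window-limited   : {delta_window * 1e3:.3f} mm")
print(f"source-limited   : {delta_source * 1e3:.3f} mm")
print(f"system resolution: {delta * 1e3:.3f} mm")
```

Under these assumptions, the source aperture dominates: a 0.2-mm aperture at f = 100 mm gives about 1 mm of blur at a 500-mm depth, which is the same order as the measured resolution reported in Sec. 5.1.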
As the diffraction angle at the hologram plane is approximately proportional to the wavelength, the depth at which the image is reconstructed is approximately inversely proportional to the wavelength, and the depth of the reconstructed image changes as follows:

$$\Delta z = \frac{\Delta\lambda}{\lambda}\, z, \tag{6}$$

where $\Delta z$ is the change in the depth, $\Delta\lambda$ is the bandwidth of the LED, and $\lambda$ is the center wavelength of the LED. The change of the image depth makes the image blur, and the resulting resolution is

$$\delta_\lambda = \frac{w}{L}\,\Delta z. \tag{7}$$

3. Calculation Method

3.1. Point Source Method

There are a number of methods to calculate CGHs, and the point source method is used most often.13 This method considers objects as clouds of independent point light sources, and it allows the expression of arbitrarily shaped objects. As suggested in Fig. 2, with the coordinates of the $j$'th point source of an object data set defined as $(x_j, y_j, z_j)$ and the coordinates on the hologram plane defined as $(x_h, y_h)$, the complex amplitude distribution on the hologram is expressed as

$$u_j(x_h, y_h) = a_j \exp\left[i\left(\frac{2\pi}{\lambda} r_j + \phi_j\right)\right], \tag{8}$$

where $a_j$ is the amplitude of the point source, $r_j$ is the propagation distance from $(x_j, y_j, z_j)$ to $(x_h, y_h)$, $i$ is the imaginary unit, and $\phi_j$ is the initial phase of the point source. Therefore, the total complex amplitude distribution from all $N$ point light sources is expressed as

$$u(x_h, y_h) = \sum_{j=1}^{N} a_j \exp\left[i\left(\frac{2\pi}{\lambda} r_j + \phi_j\right)\right]. \tag{9}$$

3.2. Calculation of the Propagation Distance

In the FTOS, the propagation distance from the point source to the hologram plane is not the same as in usual CGH calculations. A depth-free calculation method14 has been introduced to reconstruct images at arbitrary depths. Through the FTOS, a hologram is slightly enlarged by the lens, and the reconstructed images would be expanded and deformed; thus, the CGH calculation method for the FTOS is different from that for an ordinary hologram, and it is necessary to compensate for the change in coordinates.
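The point-source summation above can be sketched in a few lines. This is a minimal illustration, assuming a tiny 256 × 256 hologram and two invented object points; a practical CGH uses far more pixels and points:

```python
import numpy as np

# Minimal sketch of the point source method with assumed parameters.
wavelength = 525e-9
pitch = 3.74e-6
k = 2 * np.pi / wavelength

N = 256
coords = (np.arange(N) - N / 2) * pitch
xh, yh = np.meshgrid(coords, coords)      # hologram-plane coordinates

# Object points: (x, y, z, amplitude, initial phase) -- invented values.
points = [(0.0, 0.0, 0.05, 1.0, 0.0),
          (1e-4, 5e-5, 0.06, 0.8, np.pi / 4)]

u = np.zeros((N, N), dtype=complex)
for xj, yj, zj, aj, phij in points:
    # Propagation distance from the point source to each hologram pixel.
    r = np.sqrt((xh - xj) ** 2 + (yh - yj) ** 2 + zj ** 2)
    # Contribution of one point source to the complex amplitude.
    u += aj * np.exp(1j * (k * r + phij))

# An amplitude fringe pattern for display (interference with an on-axis
# plane reference wave), normalized to [0, 1].
fringe = np.real(u) - np.real(u).min()
fringe /= fringe.max()
print(fringe.shape)
```

Note that this computes the ordinary free-space distance; the FTOS requires the compensated coordinates of the depth-free method, which are described next.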
When the propagation distance from $(x_j, y_j, z_j)$ to $(x_h, y_h)$ is $r_j$ and the coordinates of the object are $(x_j, y_j, z_j)$, the coordinates $(x'_j, y'_j, z'_j)$ of the optimized location of the object are obtained with

$$z'_j = \frac{f z_j}{f + z_j}. \tag{10}$$

Here, the distance $r_j$ is calculated as

$$r_j = \sqrt{(x_h - x'_j)^2 + (y_h - y'_j)^2 + z'^2_j}, \tag{11}$$

and $x'_j$, $y'_j$ are expressed as

$$x'_j = \frac{z'_j}{z_j}\, x_j, \qquad y'_j = \frac{z'_j}{z_j}\, y_j. \tag{12}$$

3.3. Calculation for a Binocular System

The points to be reconstructed in a binocular view must have parallax information, and different interference patterns must be displayed on each of the two SLMs in the respective display units. To ensure the consistency of the two images reconstructed by the display units, the two coordinate systems for the eyes and the world coordinate system that indicates the position of the virtual point to be reconstructed are shown in Fig. 3. The origin of the world coordinate system is located at the center of the viewpoints of the left and right eyes, the $z$-axis is set inverse to the viewing direction, and the $x$-axis orthogonally crosses the viewing direction and passes through the left and right viewpoints. The coordinates of the virtual objects represented in the world coordinate system need to be transformed into the two eye coordinate systems to calculate consistent CGHs for the left and right display units. When the origin of the coordinate system of the left eye in the world coordinate system is defined as $(x_{oL}, y_{oL}, z_{oL})$, the coordinates $(x_w, y_w, z_w)$ of the point to be reconstructed in the world coordinate system are transformed into the left coordinate system $(x_L, y_L, z_L)$ using the translation and rotation transformation, expressed as

$$\begin{pmatrix} x_L \\ y_L \\ z_L \end{pmatrix} = \begin{pmatrix} \cos\theta_L & 0 & -\sin\theta_L \\ 0 & 1 & 0 \\ \sin\theta_L & 0 & \cos\theta_L \end{pmatrix} \begin{pmatrix} x_w - x_{oL} \\ y_w - y_{oL} \\ z_w - z_{oL} \end{pmatrix}, \tag{13}$$

where

$$(x_{oL}, y_{oL}, z_{oL}) = \left(\frac{\mathrm{PD}}{2},\, 0,\, 0\right), \tag{14}$$

and $\theta_L$ is the angle between the viewing direction of the left system and the $z$-axis of the world coordinate system. The right coordinate system is similarly calculated using the following equation:

$$\begin{pmatrix} x_R \\ y_R \\ z_R \end{pmatrix} = \begin{pmatrix} \cos\theta_R & 0 & \sin\theta_R \\ 0 & 1 & 0 \\ -\sin\theta_R & 0 & \cos\theta_R \end{pmatrix} \begin{pmatrix} x_w - x_{oR} \\ y_w - y_{oR} \\ z_w - z_{oR} \end{pmatrix}, \tag{15}$$

where $\theta_R$ is the angle between the viewing direction of the right system and the $z$-axis of the world coordinate system. Equations (13) and (15) are related to the base PD.
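The world-to-eye transformation can be sketched as a translation to the eye origin followed by a rotation about the $y$-axis. The sign conventions and the 800-mm convergence distance used below are assumptions for illustration:

```python
import math

def world_to_eye(p, origin, theta):
    """Translate a world point to the eye origin, then rotate about
    the y-axis by theta (the angle between the unit's viewing
    direction and the z-axis). Sign conventions are assumed."""
    x, y, z = (p[0] - origin[0], p[1] - origin[1], p[2] - origin[2])
    xr = math.cos(theta) * x - math.sin(theta) * z
    zr = math.sin(theta) * x + math.cos(theta) * z
    return (xr, y, zr)

pd = 0.065                        # interpupillary distance [m]
gaze = 0.8                        # convergence distance of the units [m]
theta = math.atan2(pd / 2, gaze)  # inward tilt of each display unit

# The convergence point lands on the optical axis of both eye systems.
center = world_to_eye((0.0, 0.0, gaze), (-pd / 2, 0.0, 0.0), theta)

# A nearer point acquires symmetric (mirrored) parallax in the two systems.
near_l = world_to_eye((0.0, 0.0, 0.5), (-pd / 2, 0.0, 0.0), theta)
near_r = world_to_eye((0.0, 0.0, 0.5), (pd / 2, 0.0, 0.0), -theta)
print(center[0], near_l[0], near_r[0])
```

The mirrored $x$-coordinates of the near point are the parallax that the two display units must encode in their respective fringe patterns.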
When the PD is changed by adding $\Delta\mathrm{PD}$ [mm], the origin of the coordinate system for the right eye in the world coordinate system moves to become $(x_{oR} + \Delta\mathrm{PD},\, y_{oR},\, z_{oR})$ because the right optical unit slides to adjust the PD in our binocular display system. Owing to this movement in the calculation, objects with the correct parallax can be reconstructed for each of the various PDs. In a binocular system, the binocular visual field angle is expressed as

$$\theta_b = 2\tan^{-1}\left(\frac{z_c}{z}\tan\frac{\theta_m}{2}\right), \tag{16}$$

where $\theta_m$ is the monocular visual field angle of the left and right display units, $z$ is the depth of the observed objects, $z_{\mathrm{near}}$ is the nearest depth from the viewpoint at which the reconstructed images are observed binocularly, and $z_c$ is the depth at which the outside limits of the fields of view of the monocular angles cross, as shown in Fig. 3. The value of $z_{\mathrm{near}}$ is calculated as

$$z_{\mathrm{near}} = \frac{\mathrm{PD}}{2\tan(\theta_m/2)}, \tag{17}$$

where PD is the interpupillary distance of the observer. Equation (17) indicates that when the angle $\theta_m$ becomes larger, the reconstructed images can be observed at a closer point. However, the binocular visual field decreases when objects are located farther away, according to Eq. (16). Thus, the parameters $\theta_m$ and $z_c$ need to be optimized to match the purpose of use of the display system.

3.4. Calibration for Installation Errors

Optical units commonly have some installation errors, and these errors often cause considerable errors in the reconstructed images. However, these errors are difficult to remove manually. Here, we propose a calibration method to correct installation errors. The calibration method is divided into three steps. The first step corrects the depth of the reconstructed images. In this step, depth-direction errors are corrected using linear least squares. First, measurement depths are defined as $z_n$, and the measured depth $z_{m,n}$, obtained by measuring the object located at the $n$'th depth, is determined. We assume that the relation between $z_n$ and $z_{m,n}$ is represented by a linear model, and the fitting line is defined as

$$f(z) = a z + b. \tag{18}$$

The constants $a$ and $b$ are given by minimizing the following sum:

$$S = \sum_{n} \left\{ z_{m,n} - (a z_n + b) \right\}^2, \tag{19}$$

using the linear least squares method.
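The first calibration step is an ordinary degree-1 least-squares fit. A minimal sketch, using invented measurement values for illustration:

```python
import numpy as np

# Fit the measured depths against the defined depths with a linear
# model f(z) = a*z + b. The measured values below are invented.
z_defined = np.array([400, 500, 600, 700, 800, 900, 1000], dtype=float)   # mm
z_measured = np.array([430, 545, 660, 770, 885, 995, 1110], dtype=float)  # mm

a, b = np.polyfit(z_defined, z_measured, 1)  # linear least squares

def f(z):
    """Trend of the installation error."""
    return a * z + b

def f_inv(z):
    """Depth at which to place the object so that it is
    reconstructed at the desired depth z."""
    return (z - b) / a

target = 700.0
print(f"f(z) = {a:.4f} z + {b:.1f}; place the object at "
      f"{f_inv(target):.1f} mm to reconstruct it at {target:.0f} mm")
```

Because the fit is computed once at manufacturing time and the correction is only an evaluation of `f_inv`, applying it per object point adds essentially no cost, as noted below.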
$f(z)$ expresses the trend of the installation errors. Next, to remove the installation errors, the depth $z$ of a virtual object is shifted to the depth $f^{-1}(z)$ obtained by applying the inverse function $f^{-1}$. The correction of the depth is independent for each eye regardless of PD. For this reason, if the measurement is made when the system is manufactured, hologram data for various PDs can be created using the measured values. This correction calculation takes almost no time. In the second step, the image size is corrected. The size of the image is changed by the first step, so it has to be restored to the original object size. The size $s_c$ of the depth-corrected object is represented by the original object size $s$ as follows:

$$s_c = \frac{f^{-1}(z)}{z}\, s. \tag{20}$$

By multiplying by the inverse of the scale factor in Eq. (20), the size of the original virtual object is restored. In the third step, the vergence value is also corrected using linear least squares. The vergence value is determined by the $x$-coordinates of the images reconstructed by the left and right display units. As shown in Fig. 4, when the $x$-axis-direction error in the world coordinate system between the objects reconstructed by the left and right display units is measured to be $\Delta x$ at the depth $z$, the linear function followed by $\Delta x$ is expressed with the constants $c$ and $d$ as

$$g(z) = c z + d, \tag{21}$$

and $c$ and $d$ are determined by making the following sum minimum:

$$S' = \sum_{n} \left\{ \Delta x_n - (c z_n + d) \right\}^2. \tag{22}$$

The error $\Delta x$ is calculated as

$$\Delta x = x_R - x_L, \tag{23}$$

where $x_R$ and $x_L$ are the $x$-coordinates of the object reconstructed by the right and left display units, respectively.
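The vergence step mirrors the depth step: fit the measured left-right offsets with a line, then split the predicted offset between the two eyes. The measurement values and the half-and-half split below are assumptions for illustration:

```python
import numpy as np

# Invented measurements of the x-direction error between the images
# reconstructed by the left and right units, at several depths.
z = np.array([400, 600, 800, 1000], dtype=float)  # depths [mm]
dx = np.array([2.5, 5.1, 7.4, 10.0])              # measured x_R - x_L [mm]

c, d = np.polyfit(z, dx, 1)  # linear least squares fit g(z) = c*z + d

def g(depth):
    return c * depth + d

# Apply the correction to one object point for each eye: half the
# predicted error is subtracted for the right unit and added for the
# left unit so that the residual vergence error becomes zero.
x, depth = 10.0, 700.0
x_right = x - g(depth) / 2
x_left = x + g(depth) / 2
print(f"g({depth:.0f}) = {g(depth):.2f} mm -> "
      f"x_R' = {x_right:.2f}, x_L' = {x_left:.2f}")
```

After this shift, the difference between the corrected left and right $x$-coordinates equals the fitted error, so the reconstructed gaze lines cross at the intended depth.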
Here, $z_g$, the depth gazed at by both eyes through the two systems, is calculated with $\Delta x$ by the following equation:

$$z_g = \frac{\mathrm{PD}}{\mathrm{PD} - \Delta x}\, z. \tag{24}$$

This equation indicates that when there are no errors ($\Delta x = 0$), the gazed depth becomes equal to the ideal depth $z$, so the final step is completed by subtracting $g(z)/2$ from the $x$-coordinates of the object to make this error zero. In summary, the object data are corrected to $(x_c, y_c, z_c)$, which are represented as

$$x_c = \frac{f^{-1}(z)}{z}\, x - \frac{g(z)}{2}, \qquad y_c = \frac{f^{-1}(z)}{z}\, y, \qquad z_c = f^{-1}(z). \tag{25}$$

Note that the value of the function $g(z)$ is taken as positive when calculating CGHs for the right display unit and negative for the left one. This is a practical calibration method because it adjusts for the installation errors of the lenses, reconstruction lights, and other optical elements, and the depth-direction distortions caused by the lenses are also corrected at once.

4. Fabrication of Head-Mounted Displays

We fabricated the holographic HMD system with the optical parameters detailed in Table 1. The range of adjustable PDs was set to 50 to 70 mm because the average PD of adult males is around 65 mm. The viewing directions of the display units for the left and right eyes were defined to cross at 800 mm from the center of the viewpoints. The minimum depth is about 300 mm because this is the nearest depth human beings can focus on without effort.

Table 1. Optical parameters.
The HMD was 350-mm high × 200-mm wide × 200-mm deep, and its weight was 1480 g.

4.1. Optical Structure

We fabricated a holographic HMD based on the aforementioned details with a sufficiently solid structure to facilitate adjustment of the lens, light source unit, and SLM, as shown in Fig. 5. The optical parts are integrated into one component with a surrounding frame. The reconstruction light should be located exactly at the focal point of the lens to reduce installation errors. To create binocular vision, our proposed HMD has two display units, one for the left eye and one for the right, as shown in Fig. 5. These two units are independent, are combined symmetrically for the left and right eyes, and are attached to a helmet. Because each display unit has a narrow viewing zone, the binocular display system is equipped with a sliding structure under the right display unit so that it can be adjusted to the individual PDs of observers. In each display unit, the FTOS comprises an SLM, a lens, and a light source unit, all arranged on one axis. To superpose the real object and the virtual image in the observer's vision, a half mirror is located between the lens and the light source unit. See-through displays have to make the reconstructed images bright enough to be observed under ordinary room light. To avoid attenuation of the reconstruction light, the propagation distance is shortened by using a lens with a short focal length, and only one half mirror is used to make see-through vision possible. To cut out unexpected light, barriers are located in front of the eyes; the unexpected light is blocked by the barriers, and only the reconstruction light propagates through them.
4.2. Full-Color Point Light Source Unit

To obtain sharp and clear full-color images, very small and high-power point light sources are necessary because the resolution of the reconstructed images depends on the quality of the reconstruction light. Lasers for each of the primary colors, combined by half mirrors, have usually been adopted as a high-power, full-color point light source.3,9,10,15 However, multiple lasers result in a bulky apparatus and cause speckle noise in the reconstructed images due to the high coherence of laser light. For these reasons, we used small, high-power light-emitting diodes (LEDs) for the reconstruction light. The full-color LED used has independent LED chips for the primary colors arranged at slightly different positions. Because of these positional offsets, the reconstructed images have slight errors that would normally cause some color blurring. To avoid this, a transparent acrylic fiber is placed on the LEDs as shown in Fig. 6. This custom-made light source was designed and developed by us. The fiber is about 18-mm long and has a diameter of 5 mm. Except for the top and bottom, the surface of the fiber is mirror-coated. Light emitted from the LEDs is combined inside the fiber, and the light of all colors is emitted only at the top of the mirror-coated fiber. Figure 7 shows the top of the fiber. The aperture of the fiber, the light spot area shown in the figure, is about 0.2 mm. This size is determined by the balance of brightness, resolution, and directivity. The point light source unit has some directivity, with a half-power angle of 10.8 deg, which is still sufficiently wide to cover the entire active area of the SLM. To synchronize the lighting of each primary-color LED with the display of the hologram for that color, a synchronizing circuit connects the SLM and the light source unit.
This circuit transmits the color information signal from the SLM driver to the light source unit, and each color of the LED is lit corresponding to the received color signal.

5. Experiments and Results

To test the effectiveness of the proposed holographic HMD, we measured its optical characteristics and conducted objective and subjective evaluations of the reconstructed images.

5.1. Optical Characteristics

Figure 8 shows the left and right images of this holographic HMD. The reconstructed image is generated from the virtual objects shown in Fig. 8(a), with a metal ball at the center, a transparent object on the right, and a diffuse-reflection sphere on the left. It can be seen that the checkerboard below is reflected on the surface of the reflective sphere. Figures 8(b) and 8(c) have parallax, and they can be viewed stereoscopically by the parallel method. In this manner, the texture of the object's surface can be displayed. Also, when the observer focuses on the sphere in the front, the rear sphere becomes blurred, and a natural expression regarding the focal length is observed. Figure 9 shows the reconstructed image of a visual field chart with a dot interval of 1.0 deg, except for 0.5 deg at both ends of the horizontal row. This result indicates that the visual fields of the holographic HMD are 9.4 deg horizontally and 5.6 deg vertically. As a lens with a small diameter and a short focal length is used, chromatic aberration can be seen. To correct this, it is necessary to consider a calculation algorithm for chromatic aberration correction. Figure 10 shows the reconstructed images of test charts that reveal the resolutions of the proposed holographic HMD. The charts are located at a depth of 500 mm, and the numbers on them indicate the line spacing (mm). The figures indicate that both the horizontal and vertical resolutions are slightly larger than 1.0 mm. The measured resolutions agree with those predicted by Eq.
(5) and the aperture size of the reconstruction light described in Sec. 2.3. Figure 11 shows reconstructed images of two Maltese-cross targets arranged at depths of 300 and 1000 mm, respectively. In Fig. 11(a), in which the focus is on the left target, the left target image is sharp whereas the right one is defocused. Focusing on the opposite target produces the opposite effect, as shown in Fig. 11(b). These results indicate that the reconstructed images provide accommodation stimuli.

5.2. Calibration of the Depth of the Reconstructed Images

In the first experiment, the depths of the noncorrected reconstructed images were measured and corrected using the calibration method described in Sec. 3.4, and we tested the accuracy of the depths and vergence values of the images. The measured depths were defined from 400 to 1000 mm at 100-mm intervals, and the depths of the targets were measured using the focus feature of a camera. The focus of the camera was aligned with the reconstructed image, and in this state, a real index was placed at the focus of the camera; the depth of this real index was measured. The camera was a Nikon D5100, the lens an , and the depth of field was at 1 m. The use of the RGB LEDs affects the image depths, as described in Sec. 2.3. The center wavelengths (spectral bandwidths) of the RGB LEDs are 625 nm (17 nm), 525 nm (34 nm), and 465 nm (23 nm), respectively. The largest depth change (green) is 16 mm at a depth of 1000 mm, which is smaller than the depth of field of the camera. The resolution caused by the bandwidth of the wavelength is 0.24 mm (green) at a depth of 1000 mm, which is smaller than the resolution caused by the aperture size. Figure 12 shows the results of this experiment: Fig. 12(a) shows the results of the left optical unit, and Fig. 12(b) shows those of the right one. In this figure, the horizontal and vertical axes correspond to the target depth and the measured depth, respectively.
The measured depths of the noncorrected targets are plotted as "+" marks, and the approximated curve of these values is expressed as the chained line. These data were corrected using Eqs. (20) and (25), and the corrected values are plotted as separate marks. The measured depths of the corrected images are in agreement with the theoretically ideal line, on which the measured depth equals the target depth, represented by the dotted line. These results after correction show that both the left and right units reconstruct the images at the correct depths. Next, the errors of the vergence values of the depth-corrected targets located at the theoretically ideal depths were measured. These errors were also measured using a camera with the following procedure: a scale was arranged at the depth displaying the reconstructed images, and we captured the reconstructed images from the left and right viewpoints using the camera and measured their positions. If the parallax is correct, the reconstructed image should be located at the same position on the scale from both viewpoints. Figure 13 shows the difference in position between the left and right images. In Fig. 13(a), the measured errors were sufficiently small and needed no correction; the maximum value was a 10-mm error at the 1000-mm-deep target. Figure 13(b) shows these experimental results with the measured errors related to the PDs. No calibration of the vergence value was conducted for these experiments. The error was almost negligible.

5.3. Subjective Evaluation

The second experiment consisted of subjective evaluations of depth perception. In this experiment, the depths of the depth-corrected images were evaluated binocularly by five subjects in their 20s, all of whom had 20/20 vision. The targets were located at depths from 400 to 1000 mm from the observers at 100-mm intervals. The observed depth was measured by moving a real index to the position that the subject recognized as being at the same depth as the image. The movement was controlled by the subject using an electric laser. There was one trial for each subject.
The results of the evaluation are shown in Fig. 14, where D (diopter) is a metric expression of $1/z$ [m$^{-1}$], a unit expressing the refractive power of a lens. Figure 14(a) shows the relationship between the stimulus depths [D] of the displayed targets and the observed depths [D] after the depth correction. In the figure, the dotted line is the ideal line on which the observed depth equals the stimulus depth, and the plotted marks are the observed depths. These results indicate that the proposed calibration is able to correct the depths of reconstructed images. Figure 14(b) shows the individual PDs on the horizontal axis and the depth stimuli of the observed targets on the vertical axis. These results indicate that the holographic HMD regenerates images of the objects at correct depths for the various individual PDs.

6. Conclusion

We developed a holographic HMD that shows full-color 3-D images with a visual field of 9.4 deg. To generate accurate accommodation and vergence stimuli, we proposed correction methods for the focusing depths and the vergence angle. The CGH calculations for these corrections make it possible to use a low-accuracy assembly and a freely adjustable optical system. The results of the objective and subjective evaluations indicated that the display represents 3-D images at correct depths. The apparatus also generated adjusted stimuli that accommodate individual interpupillary distances. Although the whole system is currently large, heavy, and impractical, the optical system itself is not heavy; the weight is primarily that of the frame. Consequently, the overall optical arrangement of this system can be considered small and light.

References

1. D. M. Hoffman et al., "Vergence-accommodation conflicts hinder visual performance and cause visual fatigue," J. Vision 8(3), 33 (2008). https://doi.org/10.1167/8.3.33
2. J. Barabas et al., "Depth perception and user interface in digital holographic television," Proc. SPIE 8281, 828109 (2012). https://doi.org/10.1117/12.908538
3. T. Senoh et al., "Full-color wide viewing-zone-angle electronic holography system," in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (2011).
4. E. Moon et al., "Holographic head-mounted display with RGB light emitting diode light source," Opt. Express 22(6), 6526–6534 (2014). https://doi.org/10.1364/OE.22.006526
5. T. Takemori, "3-dimensional display using liquid crystal devices–fast computation of hologram–," 13–19, Tokyo, Japan (1997).
6. M. Kitamura et al., "Depth perception with see-through holographic display," in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (2011).
7. W. Su, L. Chen, and H. Lin, "Full color image in a holographic head-mounted display," in 20th Int. Display Workshops (IDW '13), 1280–1283 (2013).
8. H. E. Kim et al., "Three-dimensional holographic display using active shutter for head mounted display application," Proc. SPIE 7863, 78631Y (2011). https://doi.org/10.1117/12.872680
9. H. Nakayama et al., "Real-time color electroholography using multiple graphics processing units and multiple high-definition liquid-crystal display panels," Appl. Opt. 49, 5993–5996 (2010). https://doi.org/10.1364/AO.49.005993
10. F. Yaras, H. Kang, and L. Onural, "Real-time phase-only color holographic video display system using LED illumination," Appl. Opt. 48, H48–H53 (2009). https://doi.org/10.1364/AO.48.000H48
11. H. Nakayama et al., "An electro-holographic colour reconstruction by time division switching of reference lights," Appl. Opt. 49, 5993–5996 (2010). https://doi.org/10.1364/AO.49.005993
12. T. Shimobaba and T. Ito, "A color holographic reconstruction system by time division multiplexing with reference lights of laser," Opt. Rev. 10, 339–341 (2003). https://doi.org/10.1007/s10043-003-0339-6
13. J. P. Waters, "Holographic image synthesis utilizing theoretical methods," Appl. Phys. Lett. 9, 405–407 (1966). https://doi.org/10.1063/1.1754630
14. Y. Sato and Y. Sakamoto, "Calculation method for reconstruction at arbitrary depth in CGH with Fourier transform optical system," Proc. SPIE 8281, 82810W (2012). https://doi.org/10.1117/12.907615
15. Y. Shimozato et al., "Four-primary-color digital holography," in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (2011).