Open Access
14 March 2018

Continuous combination of viewing zones in integral three-dimensional display using multiple projectors
Naoto Okaichi, Masato Miura, Hisayuki Sasaki, Hayato Watanabe, Jun Arai, Masahiro Kawakita, Tomoyuki Mishina
Abstract
We propose a method for arranging multiple projectors in parallel using an image-processing technique and for enlarging the viewing zone of an integral three-dimensional image display. We have developed a method to correct the projection distortion precisely using an image-processing technique combining projective and affine transformations. To combine the multiple viewing zones formed by each projector continuously and smoothly, we also devised a technique that provides accurate adjustment by generating the elemental images of a computer graphics model at high speed. We constructed a prototype device using four projectors equivalent to 4K resolution and realized a viewing zone with measured viewing angles of 49.2 deg horizontally and 45.4 deg vertically. Compared with the use of only one projector, the prototype device expanded the viewing angles by approximately two times in both the horizontal and vertical directions.

1.

Introduction

Many studies on integral three-dimensional (3-D) imaging systems based on the principle of integral photography proposed by Lippmann1 in 1908 have been carried out.2–8 An integral 3-D image is characterized by presenting a full-parallax image to the viewer: the 3-D image can be viewed without wearing special glasses, and the image changes according to the viewing position. In optical systems that display integral 3-D images, it is common to display elemental images, in which light-ray information in various directions is recorded, on a display device and to view them through a lens array composed of a large number of minute lenses on the front face. With the lens array, the same information as the ray information emitted by the actual object is optically reconstructed in space, and it can be viewed as an integral 3-D image.

Since integral imaging is a method of reconstructing many viewpoint images horizontally and vertically, it is necessary to display a large number of pixels on the display device to display a high-performance integral 3-D image.9 Research on displaying integral 3-D images using a single high-resolution display device has advanced. Yamashita et al.10 constructed a projector with a 16K equivalent resolution using the liquid crystal on silicon display elements of 8K Super Hi-Vision assigned to the red, green, and blue (RGB) signals. In the projector, a wobbling element was arranged to shift the positions at which the G1 and G2 signals are displayed by half of a pixel diagonally. They realized an integral 3-D image consisting of 100,000 pixels using the projector.11 However, it is currently difficult to fabricate a single display element that significantly exceeds 8K, so further improving the performance of an integral 3-D image with a single display device is difficult.

Therefore, various methods have been proposed to improve the performance of 3-D images by combining multiple display devices or multiple lens arrays.12–20 Martínez-Cuenca et al.12 proposed a method that enhances the viewing angle using a multiaxis telecentric relay system that prevents the overlapping between elemental images in the pickup and the flipping in the display. Takaki and Nago13 and Kawakita et al.14 proposed methods that increase the number of viewpoints at which 3-D images can be seen using multiple display devices, but these provide motion parallax only in the horizontal direction. Liao et al.15 and Jang et al.16 increased the number of pixels in integral 3-D images using multiple projectors, but the viewing zone is not enlarged and remains narrow. Tolosa et al.17 presented a technique to improve the field of view by eliminating the flipping effect of conventional integral displays using a system based on Köhler illumination, which is composed of a collimating lens and two lens arrays. Alam et al.18 proposed a technique to combine the viewing zones of two projectors by time division, but it suffers from a reduction in the frame rate and a limitation on the number of devices. We have realized an increase in the number of pixels in integral 3-D images using multiple direct-view displays and a multi-image combining optical system.19,20 The direct-view display has advantages such as thinning of the entire device, but it is difficult to change the screen size. In an integral 3-D image display using a direct-view display, color moiré caused by the RGB pixel arrangement also appears.

In this paper, we propose a method that combines the viewing zones formed by each projector and enlarges the viewing zone of an integral 3-D image using multiple projectors. In this combination, the image distortion caused by the oblique projection is precisely corrected using image processing, and an integral 3-D image without projection distortion is reconstructed. Furthermore, to combine the viewing zones formed by each projector continuously and smoothly, we propose a method that facilitates precise adjustment by generating the elemental images of a computer graphics (CG) model at a high speed. Using the proposed method, the viewing zone can be enlarged according to the number of projectors, and it becomes possible to see integral 3-D images in a wide viewing angle.

2.

Combination of an Integral Three-Dimensional Display with a Wide Viewing Zone by Multiple Projectors

In this section, the combination of an integral 3-D display with a wide viewing zone using multiple projectors is explained.

2.1.

Integral Three-Dimensional Display Using a Single Display Device

First, as a premise, a method for reconstructing an integral 3-D image using a single display device will be described. The differences in how the viewing zone of the integral 3-D image is formed for the case where a direct-view display is used and the case where a projector is used for the display device are explained in Fig. 1. As shown in Fig. 1(a), for the method using a direct-view display, elemental images are displayed on a liquid-crystal display or organic light-emitting diode panel, a lens array is placed in front of the direct-view display, and an integral 3-D image is reconstructed. The light-emitting part of the pixels of the direct-view display and the lens array are arranged such that the distance between them is equal to the focal length of the elemental lens fl. Moreover, a viewing zone is formed by the elemental image and the corresponding elemental lens. However, since the light of the pixels of the display is diffused, the next elemental image will be seen through the elemental lens at a viewing position outside the central viewing zone. The viewing zone at the center is called the primary viewing zone, and the viewing zones at other locations are called the secondary viewing zones. On the other hand, as shown in Fig. 1(b), when reconstructing an integral 3-D image using a projector, the light rays passing through a display element of the projector are emitted from the projection lens of the projector and spread in proportion to the projection distance. By arranging the collimating lens in front of the projector so that the distance between the projection lens of the projector and the collimating lens is the same as the focal length of the collimating lens fp, the projection light is collimated after expansion to the desired projection size. Therefore, the integral 3-D image is reconstructed by passing it through the lens array so that the elemental image and the elemental lens correspond to each other. 
The focal plane of the projected image should be positioned away from the lens array by a distance equal to fl. Since the projection size is somewhat large, a Fresnel lens, which can be fabricated with a large diameter, is used as the collimating lens.
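For a rough sense of scale, the angular width of one viewing zone is set by the elemental-lens pitch and focal length; under the usual thin-lens approximation it is 2 arctan(p / 2 f_l). The following sketch uses the prototype values listed later in Table 1; the formula is a standard geometric approximation, not the authors' measurement procedure:

```python
import math

def viewing_angle_deg(lens_pitch_mm, focal_length_mm):
    """Full angle of one viewing zone under the thin-lens approximation:
    theta = 2 * arctan(pitch / (2 * focal length))."""
    return math.degrees(2.0 * math.atan(lens_pitch_mm / (2.0 * focal_length_mm)))

# Prototype values from Table 1: pitch 1.21 mm, focal length 2.42 mm
theta = viewing_angle_deg(1.21, 2.42)  # about 28.1 deg
```

This is comparable to the measured single-projector angles of 25.4 deg (H) × 24.5 deg (V) reported in Sec. 5.2; the measured values are presumably somewhat smaller owing to lens aberrations and alignment tolerances.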

Fig. 1

Difference in how viewing zones of integral 3-D image are formed when a single display device is used: (a) direct-view display and (b) projector.

OE_57_6_061611_f001.png

In the method using the direct-view panel, since the light rays are diffused as described above, an image at the secondary viewing zones is reconstructed in addition to the primary viewing zone. On the other hand, in the method using the projector, an image at the secondary viewing zones is not formed since only the integral 3-D image, which is formed with the corresponding elemental image and elemental lens, is reconstructed. Therefore, a 3-D image formed by one projector is reconstructed only around the direction of projection, and the secondary viewing zone does not appear.

2.2.

Integral Three-Dimensional Display Using Multiple Projectors

By applying the aforementioned features, an optical system using multiple projectors is constructed as shown in Fig. 2. If multiple projectors project images from different directions and the viewing zones formed by each projector are combined, the viewing zone can be enlarged according to the number of projectors. For example, if two projectors are used in the horizontal direction and two projectors are used in the vertical direction, it is possible to reconstruct a 3-D image having twice the viewing angle in both the horizontal and vertical directions. It is necessary for all of the projectors to have the same image size on the focal plane, so the sizes of the 3-D images reconstructed by each projector are equal.

Fig. 2

Integral 3-D display using multiple projectors.

OE_57_6_061611_f002.png

There are two problems to be solved when constructing an integral 3-D display with a wide viewing zone using multiple projectors. The first problem is distortion in the projected image. Owing to projection from an oblique direction and the aberration of the collimating lens, the projected image is distorted on the focal plane, and the 3-D image cannot be reconstructed correctly. The second problem is that the 3-D image becomes discontinuous at the parts where the viewing zones are connected, unless elemental images corresponding to each projection angle are input for each projector. Methods for solving these two problems and reconstructing a natural wide-view integral 3-D image without distortion are described in Secs. 3 and 4, respectively.

3.

Distortion Correction for a Projected Image

3.1.

Image Distortion Caused by Projection

The image projected by the projector is distorted by aberration when it passes through the collimating lens. Moreover, when using a projector that does not have a sufficiently large lens shift or has no lens shift function, it is necessary to project the image to the collimating lens from an oblique direction by tilting the projector, resulting in a trapezoidal distortion in the projected image. When the projected image is distorted on the focal plane of the projected image, the correspondence between the lens array and the elemental image is lost, and it is not possible to display the desired integral 3-D image.21 Figure 3 shows an example of a reconstructed integral 3-D image when projecting from an oblique direction using one projector, a collimating lens, and a lens array. When viewing an image without correcting the projection distortion, the correspondence between the elemental image and the elemental lens is lost, and the desired integral 3-D image cannot be obtained, as shown in Fig. 3(a). By correcting the projection distortion and matching the shapes and positions of the elemental image and elemental lens, it is possible to reconstruct the desired integral 3-D image, as shown in Fig. 3(b).

Fig. 3

Example of a reconstructed integral 3-D image using one projector: (a) before and (b) after correcting for projection distortion.

OE_57_6_061611_f003.png

3.2.

Distortion Correction Method Using Image Processing

To solve this problem, image processing is applied to the elemental images that are input to the projector, and the projection distortion is eliminated. As the image-processing method, a distortion correction method combining projective and affine transformations is applied.19,20 The device setup for projection distortion correction is shown in Fig. 4. A diffuser plate is placed on the focal plane of the projected image, and image correction is performed while viewing the image projected onto the diffuser plate. An image of a triangular mesh is input to the projector and projected onto the diffuser plate. A reference sheet of a triangular mesh of the desired screen size is prepared, and the projected image of the triangular mesh is dynamically changed to match that of the reference sheet using image processing. The combination of the two transformations is expressed as

Eq. (1)

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \frac{1}{H_g x + H_h y + 1}\, A H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},$$

Eq. (2)

$$H = \begin{pmatrix} H_a & H_b & H_c \\ H_d & H_e & H_f \\ H_g & H_h & 1 \end{pmatrix},$$

Eq. (3)

$$A = \begin{pmatrix} A_a & A_b & A_c \\ A_d & A_e & A_f \\ 0 & 0 & 1 \end{pmatrix},$$

where x and y are the coordinates before correction, x' and y' are the coordinates after correction, and H and A are the projective and affine transformation matrices, respectively, both derived from the control points of the triangular mesh. Once the correction has been performed and the transformation matrices derived, they can be applied to all elemental images to reconstruct integral 3-D images without distortion, as long as the optical system is not moved.
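As a concrete illustration, Eq. (1) can be applied to control-point coordinates in a few lines of NumPy. This is a sketch under our own naming, assuming the matrices H and A of Eqs. (2) and (3) have already been derived from the triangular mesh; it is not the authors' implementation:

```python
import numpy as np

def apply_correction(points, H, A):
    """Map control points (x, y) through Eq. (1):
    (x', y', 1)^T = 1 / (Hg*x + Hh*y + 1) * A @ H @ (x, y, 1)^T."""
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # rows (x, y, 1)
    mapped = (A @ H @ homog.T).T                       # apply H, then A
    # The third row of A @ H is (Hg, Hh, 1), so the last component of each
    # mapped point equals the denominator Hg*x + Hh*y + 1 of Eq. (1);
    # dividing by it performs the projective normalization.
    return mapped[:, :2] / mapped[:, 2:3]
```

With H set to the identity, the mapping reduces to the affine transform A alone, which is a quick sanity check on the normalization.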

Fig. 4

Device setup for projection distortion correction.

OE_57_6_061611_f004.png

A flowchart of the projection distortion correction method for multiple projectors is shown in Fig. 5. Distortion correction is performed for each projector, and a correction matrix for each projector is derived. Consequently, the images projected by all projectors are corrected, so they have the same size on the focal plane. After correction, the diffuser plate is removed, and the integral 3-D image is reconstructed by arranging the lens array away from the focal plane of the projected image at a distance equal to the focal length of the elemental lens.

Fig. 5

Flowchart of the projection distortion correction method for multiple projectors.

OE_57_6_061611_f005.png

4.

Continuous Combination of Multiple Viewing Zones Formed by Each Projector

4.1.

Discontinuity Between Different Viewing Zones

Viewers can see a 3-D image while moving through a wide angle by enlarging the viewing zone using multiple projectors. Therefore, it is necessary to smoothly combine the images of multiple viewing zones so that the 3-D image is seen naturally. Each projector must be supplied with elemental images corresponding to its projection angle. Although the projection angle can be roughly calculated from the arrangement of the optical system, in practice a discontinuity due to misalignment of the optical system occurs at the parts connecting different viewing zones of the reconstructed 3-D image. For a more precise combination, we propose a method that carries out dynamic correction while changing elemental images by referring to the reconstructed 3-D image.

4.2.

High-Speed Generation of Elemental Images for Adjustment

By making it possible to render elemental images of a CG model in real time and making adjustments while dynamically changing the elemental images, it is possible to derive the projection angles of multiple projectors quickly and precisely. Therefore, a method for rendering elemental images of the CG model at high speed is applied.22 In this method, when creating elemental images using the CG model, the pixels in the same direction are collectively acquired using virtual cameras, so the calculation is performed efficiently. With this method, it is possible to reduce the number of cameras arranged in the 3-D virtual space to approximately the number of pixels of an elemental image, and it is possible to increase the speed at which elemental images are generated. Figure 6 shows collective light-ray acquisition by virtual cameras for the high-speed generation of elemental images corresponding to each projector. By changing the direction of acquisition of the virtual cameras in the 3-D virtual space, it is possible to generate elemental images according to the projection angle θ of the projector, as shown in Fig. 6. Using this method, elemental images are dynamically generated while minutely changing parameters in the direction of projection, and an adjustment is made so that the 3-D images reconstructed by multiple projectors are continuously combined.
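The bookkeeping behind the collective acquisition can be pictured as a pixel rearrangement: each virtual camera records one ray direction, and pixel (i, j) of the camera for direction (u, v) becomes pixel (u, v) of elemental image (i, j). A toy NumPy sketch under assumed array shapes (names and shapes are ours; the actual method of Ref. 22 renders on the GPU):

```python
import numpy as np

def interleave_views(views):
    """Rearrange directional views into an array of elemental images.

    views has shape (U, V, Nx, Ny): one Nx-by-Ny orthographic view per
    ray direction (u, v), i.e. one virtual camera per direction.  Pixel
    (u, v) of elemental image (i, j) takes its value from pixel (i, j)
    of view (u, v), so each camera fills one pixel of every elemental
    image at once.  Returns an image of shape (Nx*U, Ny*V), tiled as an
    Nx-by-Ny grid of U-by-V elemental images.
    """
    U, V, Nx, Ny = views.shape
    # (U, V, Nx, Ny) -> (Nx, U, Ny, V), then flatten the lens grid
    return views.transpose(2, 0, 3, 1).reshape(Nx * U, Ny * V)
```

Steering the projection angle θ then amounts to changing the set of directions the virtual cameras acquire, without altering this rearrangement step.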

Fig. 6

Collective light-ray acquisition by virtual cameras for the high-speed generation of elemental images corresponding to each projector: projector (a) A and (b) B.

OE_57_6_061611_f006.png

4.3.

Derivation of the Projection Angle According to Each Projector

Figure 7 shows a flowchart for deriving the projection angle. First, with respect to one projector (projector #1), the projection angle is determined from the arrangement of the optical system, and elemental images are created. The 3-D image reconstructed by this projector is used as a reference. Next, for another projector (projector #i), the elemental images are created on the basis of the arrangement of the optical system in the same way. The elemental images of projector #i are dynamically changed so that the positions of the 3-D images reconstructed by the two projectors match, and the projection angle θi of projector #i is derived. The above procedure is repeated for all projectors with projector #1 as a reference, and the projection angles of all projectors are determined.
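The projection angle θi itself is found by visual matching, but the reason a small change in θ maps to a simple, smooth change in the elemental images is geometric: a ray leaving an elemental lens at angle θ originates from a point displaced by f_l·tan θ from the lens axis in the focal plane. A hypothetical helper illustrating this relation (our construction, not part of the paper's procedure):

```python
import math

def elemental_shift_mm(theta_deg, focal_length_mm):
    """In-focal-plane offset, from the elemental-lens axis, of the point
    that emits a ray at angle theta through the lens: f_l * tan(theta).
    Dividing by the pixel pitch on the focal plane converts this offset
    to a pixel shift of the elemental-image content."""
    return focal_length_mm * math.tan(math.radians(theta_deg))
```

Because the relation is smooth and monotonic in θ, minutely varying the projection-direction parameter produces a correspondingly fine displacement of the reconstructed 3-D image, which is what makes the visual adjustment converge.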

Fig. 7

Flowchart of the projection angle derivation method for multiple projectors.

OE_57_6_061611_f007.png

The image displayed in front of the condensing point of light rays is used as the 3-D image to be displayed for adjustment. This is because the amount of displacement is magnified for a 3-D image reconstructed away from the condensing point; adjusting with such an image therefore results in higher precision. Considering the influence of blur in the integral 3-D image,9 the 3-D image is reconstructed at the maximum depth position where the maximum spatial frequency is maintained. A square object without depth is used as an example of an integral 3-D image (Fig. 8). Figure 8 was taken from the overlap part of the viewing zones. Adjustment is dynamically performed using the high-speed elemental-image generation method, so the 3-D images displayed using all the projectors overlap at the same position in the parts connecting the viewing zones. Then, the adjustment parameter of this state is derived as the projection angle for each projector. The adjustment was performed with a visual check. Once the projection angle is determined, as long as the optical system is not moved, elemental images can be created on the basis of the projection angle information when new elemental images are created, and multiple viewing zones can be continuously combined.

Fig. 8

Example of an integral 3-D image for adjustment when combining multiple viewing zones: (a) before and (b) after adjustment.

OE_57_6_061611_f008.png

5.

Experiments and Results

5.1.

Experiment for Verifying the Continuous Combination of Multiple Viewing Zones

First, we conducted an experiment to verify the continuous combination of the viewing zones of multiple projectors as described in Sec. 4. Two projectors were arranged in the horizontal direction, and the experiment was conducted to combine their viewing zones. The projector (DLA-PX1, manufactured by JVCKENWOOD) displays images with a resolution equivalent to 4K by projecting a high-definition image with a half-pixel shift by time division (e-shift technology). After applying the projection distortion correction method described in Sec. 3, the method for adjusting the continuous combination of multiple viewing zones described in Sec. 4 was applied.

As described in Sec. 4, to adjust the projection angle dynamically, a program for generating elemental images at high speed was coded using OpenGL with the OpenGL Shading Language. A computer equipped with an Intel(R) Xeon(R) E5-2687W v4 central processing unit and an NVIDIA Quadro P5000 graphics card was used for the experiment. A 30×30 array of virtual cameras was placed in the 3-D virtual space to generate elemental images with a resolution of 3840×2160 pixels. The frame rates at which elemental images were generated were 23.4 and 19.8 fps when rendering objects with 4 (a square model) and 34,834 (a bunny model) vertices, respectively. Using this program, elemental images can be generated almost in real time. Accordingly, it was possible to derive the projection angle of each projector and generate the corresponding elemental images quickly, precisely, and efficiently.

Figure 9 shows the integral 3-D images when viewed from the left, at the center, and from the right, without and with adjustment of the projection angle, when two projectors are used in the horizontal direction. In the parentheses in Fig. 9, the first and second values represent the angles of the viewing directions in the horizontal and vertical directions, respectively. In Fig. 9, the green ring is reconstructed about 32 mm in front of the condensing point, and the background of the chess board image is reconstructed about 16 mm behind the condensing point. Since it is difficult to combine multiple viewing zones without overlap, depending on the lens aberration, optical arrangement, and optical system specifications, we slightly overlapped the two viewing zones. As shown in Fig. 9(a), when the adjustment for the continuous combination of viewing zones is not applied, double images appear at the central viewpoint because the 3-D images from the two projectors are combined inconsistently. When viewing while moving horizontally, the 3-D image was accompanied by a discontinuity at the part connecting the viewing zones. As shown in Fig. 9(b), a continuous and smooth combination of the viewing zones was realized by precisely deriving the projection angle of each projector using the proposed adjustment method and applying it to the generation of elemental images.

Fig. 9

Integral 3-D images viewed from various directions without or with adjustment of the projection angle when two projectors are used in the horizontal direction: (a) without and (b) with adjustment.

OE_57_6_061611_f009.png

5.2.

Experiment Demonstrating an Enhancement in the Viewing Zone Using Four Projectors

Next, we built a prototype to combine four viewing zones using four projectors, a collimating lens, a lens array, and a computer for correcting the projection distortion and deriving the projection angles. Figure 10 shows the appearance of the prototype and the arrangement of the four projectors, two horizontally and two vertically. A projector with a resolution equivalent to 4K, the same model as in the experiment presented in Sec. 5.1, was used. Since each projector projects toward the center of the lens array, each viewing zone is formed in a direction diagonal to the arrangement of the projector. The total viewing zone is enlarged by combining the four viewing zones. The specifications of the device are listed in Table 1.

Fig. 10

Appearance of the prototype device.

OE_57_6_061611_f010.png

Table 1

Specifications of the prototype.

Lens array          Pitch                        1.21 mm
                    Focal length                 2.42 mm
                    Lens arrangement/lens shape  Square/square
Collimating lens    Focal length                 1000 mm
Projector           Resolution                   Equivalent to 4K (3840×2160)
                    Number of units              Four
3-D image           Resolution                   264 (H) × 148 (V)
                    Measured viewing angles      49.2 deg (H) × 45.4 deg (V) (when using four units)
                                                 25.4 deg (H) × 24.5 deg (V) (when using one unit)
                    Size                         320 mm (H) × 180 mm (V)

First, the method for correcting the projection distortion as described in Sec. 3 was performed by placing a diffuser plate at the focal position of the projector. The control points of the triangular mesh were detected with a visual check. Projected images without distortion were realized for all four projectors by image processing using projective transformation and affine transformation, and their transformation matrices were derived.

Next, the diffuser plate was removed, and the lens array was set away from the focal plane of the projected image at a distance equal to the focal length of the elemental lens. Then, the elemental images were dynamically changed using the high-speed generation method for the elemental images described in Sec. 4, and the desired projection angles for all projectors were derived.

Figure 11 shows an example of an integral 3-D image displayed by the prototype device when viewed from various directions. In Fig. 11, the green ring is reconstructed about 48 mm in front of the condensing point, the red bunny is reconstructed at the condensing point, and the background of the sky image is reconstructed about 48 mm behind the condensing point. It was confirmed that the 3-D image changes according to the observation direction and that the multiple viewing zones are combined continuously and smoothly. When only one projector was used, the measured viewing angles were as narrow as 25.4 deg horizontally and 24.5 deg vertically. On the other hand, when four projectors were used, the measured viewing angles were as wide as 49.2 deg horizontally and 45.4 deg vertically. Compared with the case of only one projector, the viewing angles are thus expanded by a factor of approximately 2 in both the horizontal and vertical directions using four projectors. Regarding the 3-D image display performance of the prototype, the resolution of the 3-D image was 264 (H) × 148 (V), the total number of pixels was 39,072, and the screen size was 320 mm (H) × 180 mm (V).

Fig. 11

Example of an integral 3-D image displayed by the prototype device when viewed from various directions.

OE_57_6_061611_f011.png

As described in Sec. 5.1, we slightly overlapped the parts connecting multiple viewing zones. In the overlapped parts, since light from multiple projectors is multiplexed, the luminance is higher than that in other parts. Therefore, by lowering the luminance value of the pixels of the elemental images corresponding to the connected parts, the luminance throughout the image can be smoothed.
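One simple way to implement this luminance smoothing is to multiply the elemental-image pixels in the overlap band by complementary linear ramps, so that the summed luminance from the two projectors stays flat across the seam. A NumPy sketch; the linear ramp shape is our assumption, as the paper does not specify a weighting:

```python
import numpy as np

def feather_overlap(elemental_img, overlap_cols):
    """Ramp the luminance of the rightmost `overlap_cols` columns from
    1 down to 0.  If the neighbouring projector applies the mirrored
    ramp (0 up to 1) over its left edge, the summed luminance across
    the seam stays flat instead of doubling in the overlapped band."""
    img = np.asarray(elemental_img, dtype=float).copy()
    img[:, -overlap_cols:] *= np.linspace(1.0, 0.0, overlap_cols)
    return img
```

In practice the ramp would be applied to the elemental-image pixels that map into the overlap region of the viewing zones, and any monotonic blending profile that sums to unity would serve equally well.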

Although we used a Fresnel lens as the collimating lens, various types of aberrations appear because the imaging performance of a single Fresnel lens is not sufficient. Using the proposed distortion correction method, the projected image can be transformed precisely into the desired shape, so the distortion due to these aberrations can be sufficiently suppressed. Moreover, by correcting the distortion of each of the RGB images separately, the influence of chromatic aberration can also be corrected.

6.

Conclusion

In this paper, we proposed a method that enlarges the viewing angle of an integral 3-D image using multiple projectors. The distortion in the projected image was precisely corrected by dividing elemental images with a triangular mesh and applying image processing that combines a projective transformation and an affine transformation, and the integral 3-D image was reconstructed without distortion. Furthermore, when the viewing zones formed by each projector were combined, the elemental images were dynamically changed, and the projection angles of the projectors were derived using a high-speed generation method for the elemental images. Multiple viewing zones were continuously and smoothly combined by reconstructing the 3-D image according to each projection direction. Once the distortion correction matrices and the projection angles are determined, the elemental images can be corrected by applying the same parameters to them, unless the optical system is changed. In the experiment, we built a prototype comprising four projectors with a 4K equivalent resolution arranged in a 2×2 array horizontally and vertically. The viewing angles were enlarged by approximately two times compared with those obtained using one projector. This method is not limited to 2×2 units; if the optical system is appropriately designed, the viewing zone can be enlarged according to the number of projectors by further increasing their number.

References

1. 

M. G. Lippmann, “Épreuves, réversibles donnant la sensation du relief,” J. Phys. Theor. Appl., 7 (1), 821 –825 (1908). https://doi.org/10.1051/jphystap:019080070082100 Google Scholar

2. 

Y. Igarashi, H. Murata and M. Ueda, “3-D display system using a computer generated integral photograph,” Jpn. J. Appl. Phys., 17 (9), 1683 –1684 (1978). https://doi.org/10.1143/JJAP.17.1683 Google Scholar

3. 

N. Davies, M. McCormick and L. Yang, “Three-dimensional imaging systems: a new development,” Appl. Opt., 27 (21), 4520 –4528 (1988). https://doi.org/10.1364/AO.27.004520 APOPAI 0003-6935 Google Scholar

4. 

F. Okano et al., “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt., 36 (7), 1598 –1603 (1997). https://doi.org/10.1364/AO.36.001598 APOPAI 0003-6935 Google Scholar

5. 

H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett., 26 (3), 157 –159 (2001). https://doi.org/10.1364/OL.26.000157 OPLEDP 0146-9592 Google Scholar

6. 

B. Javidi, I. Moon and S. Yeom, “Three-dimensional identification of biological microorganism using integral imaging,” Opt. Express, 14 (25), 12096 –12108 (2006). https://doi.org/10.1364/OE.14.012096 OPEXFF 1094-4087 Google Scholar

7. 

J. Arai et al., “Integral three-dimensional television using a 2000-scanning-line video system,” Appl. Opt., 45 (8), 1704 –1712 (2006). https://doi.org/10.1364/AO.45.001704 APOPAI 0003-6935 Google Scholar

8. 

K. Suehiro et al., “Integral 3D TV using ultrahigh-definition D-ILA device,” Proc. SPIE, 6803 680318 (2008). https://doi.org/10.1117/12.766892 PSISDG 0277-786X Google Scholar

9. 

9. H. Hoshino, F. Okano, and I. Yuyama, "Analysis of resolution limitation of integral photography," J. Opt. Soc. Am. A 15(8), 2059–2065 (1998). https://doi.org/10.1364/JOSAA.15.002059

10. T. Yamashita et al., "Progress report on the development of Super Hi-Vision," SMPTE Motion Imaging J. 119(6), 77–84 (2010). https://doi.org/10.5594/J12203

11. J. Arai et al., "Integral three-dimensional television with video system using pixel-offset method," Opt. Express 21(3), 3474–3485 (2013). https://doi.org/10.1364/OE.21.003474

12. R. Martínez-Cuenca et al., "Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system," Opt. Express 15(24), 16255–16260 (2007). https://doi.org/10.1364/OE.15.016255

13. Y. Takaki and N. Nago, "Multi-projection of lenticular displays to construct a 256-view super multi-view display," Opt. Express 18(9), 8824–8835 (2010). https://doi.org/10.1364/OE.18.008824

14. M. Kawakita et al., "3D image quality of 200-inch glasses-free 3D display system," Proc. SPIE 8288, 82880B (2012). https://doi.org/10.1117/12.912274

15. H. Liao et al., "High-quality integral videography using a multiprojector," Opt. Express 12(6), 1067–1076 (2004). https://doi.org/10.1364/OPEX.12.001067

16. J.-Y. Jang et al., "Multi-projection integral imaging by use of a convex mirror array," Opt. Lett. 39(10), 2853–2856 (2014). https://doi.org/10.1364/OL.39.002853

17. Á. Tolosa et al., "Enhanced field-of-view integral imaging display using multi-Köhler illumination," Opt. Express 22(26), 31853–31863 (2014). https://doi.org/10.1364/OE.22.031853

18. Md. A. Alam et al., "Viewing-angle-enhanced integral imaging display system using a time-multiplexed two-directional sequential projection scheme and a DEIGR algorithm," IEEE Photonics J. 7(1), 6900214 (2015). https://doi.org/10.1109/JPHOT.2015.2396904

19. N. Okaichi et al., "Integral 3D display using multiple LCDs," Proc. SPIE 9391, 939114 (2015). https://doi.org/10.1117/12.2077514

20. N. Okaichi et al., "Integral 3D display using multiple LCD panels and multi-image combining optical system," Opt. Express 25(3), 2805–2817 (2017). https://doi.org/10.1364/OE.25.002805

21. M. Kawakita et al., "Projection-type integral 3-D display with distortion compensation," J. Soc. Inf. Disp. 18(9), 668–677 (2010). https://doi.org/10.1889/JSID18.9.668

22. Y. Iwadate and M. Katayama, "Generating integral image from 3D object by using oblique projection," in Proc. of IDW'11, 269–272 (2011).

Biography

Naoto Okaichi received his BS degree in physics from Tokyo Institute of Technology and his MS degree in complexity science and engineering from the University of Tokyo, Tokyo, Japan, in 2006 and 2008, respectively. In 2008, he joined the Japan Broadcasting Corporation (NHK), Tokyo. Since 2012, he has been with the Science and Technology Research Laboratories, NHK, where he has been engaged in research on three-dimensional (3-D) display systems.

Masato Miura received his BS and MS degrees and PhD in computer and systems engineering from Kobe University, Hyogo, Japan, in 2004, 2005, and 2008, respectively. Since 2008, he has been with the Science and Technology Research Laboratories, Japan Broadcasting Corporation (NHK), Tokyo, Japan. His current research interests include 3-D imaging systems and 3-D signal processing.

Hisayuki Sasaki received his BS degree in engineering systems and his MS degree in engineering mechanics from the University of Tsukuba, Japan, in 1999 and 2001, respectively. He joined the Japan Broadcasting Corporation (NHK) in 2001. Since 2006, he has been engaged in research on 3-D television systems at the Science and Technology Research Laboratories, NHK. He was seconded to the National Institute of Information and Communications Technology as a research expert from 2012 to 2016.

Hayato Watanabe received his BS and MS degrees in information and computer science from Keio University, Kanagawa, Japan, in 2010 and 2012, respectively. In 2012, he joined the Japan Broadcasting Corporation (NHK), Tokyo. Since 2015, he has been engaged in research on 3-D imaging systems at the Science and Technology Research Laboratories, NHK.

Jun Arai received his BS and MS degrees and PhD in applied physics from Waseda University, Tokyo, Japan, in 1993, 1995, and 2005, respectively. In 1995, he joined the Science and Technology Research Laboratories, the Japan Broadcasting Corporation (NHK), Tokyo, Japan. Since then, he has been working on 3-D imaging systems.

Masahiro Kawakita received his BS and MS degrees in physics from Kyushu University, Japan, in 1988 and 1990, respectively, and his PhD in electronic engineering from the University of Tokyo, Japan, in 2005. In 1990, he joined the Japan Broadcasting Corporation (NHK), Tokyo. Since 1993, he has been with the Science and Technology Research Laboratories, NHK, where he has been researching applications of liquid-crystal devices and optically addressed spatial modulators, 3-D TV cameras, and display systems.

Tomoyuki Mishina received his BE and ME degrees in electrical engineering from the Tokyo University of Science, Tokyo, Japan, in 1987 and 1989, respectively, and his PhD in engineering from the Tokyo Institute of Technology, Tokyo, Japan, in 2007. He joined the Japan Broadcasting Corporation (NHK), Tokyo, in 1989. Since 1992, he has been engaged in research on a 3-D imaging system with the Science and Technology Research Laboratories, NHK.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Naoto Okaichi, Masato Miura, Hisayuki Sasaki, Hayato Watanabe, Jun Arai, Masahiro Kawakita, and Tomoyuki Mishina "Continuous combination of viewing zones in integral three-dimensional display using multiple projectors," Optical Engineering 57(6), 061611 (14 March 2018). https://doi.org/10.1117/1.OE.57.6.061611
Received: 20 October 2017; Accepted: 22 February 2018; Published: 14 March 2018
KEYWORDS: Projection systems, 3D image reconstruction, 3D image processing, 3D displays, Distortion, Prototyping, Optical engineering
