Poster + Presentation + Paper
Reducing motion to photon latency in multi-focal augmented reality near-eye display
27 March 2021
Conference Poster
Abstract
It is foreseen that the most convenient hardware for depicting augmented reality (AR) will be optical see-through head-mounted displays. Current systems utilize a single focal plane and thus impose a vergence-accommodation conflict on the human visual system, limiting wide acceptance. In this work, we analyze an optical see-through AR head-mounted display prototype with four focal planes operating in a time-sequential mode, thereby mitigating the limitation of single-focal-plane devices. Nevertheless, the optical see-through nature requires a very short motion-to-photon latency so as not to cause noticeable misalignment between the digital content and the real-world scene. The prototype display relies on a commercial visual-SLAM spatial tracking module (Intel RealSense T265), and within this work we analyzed factors improving motion-to-photon latency with the provided hardware setup. Performance analysis of the T265 module revealed slight translational and angular jitter, on the order of <1 mm and <15 arcseconds respectively, and a velocity readout of a few cm/s from a completely still IMU. The experimentally determined motion-to-photon latency and render-to-photon latency were 46±6 ms and 38 ms, respectively. To overcome IMU positional jitter, pose averaging with a variable-width averaging window was implemented; the window size was adjusted based on the instantaneous acceleration and velocity data. For pose prediction, a basic rotational-axis offset model was verified. Using prerecorded head movements, a training model reduced the error between the predicted and the actual recorded pose. The optimization parameters were the corresponding offset values of the IMU's rotational axes, the translational and angular velocities, and the angular acceleration. As expected, the highest weights for the most accurate predictions were observed for the velocities, followed by the angular acceleration; the role of the offset values was not significant. To improve the perceived experience and further reduce motion-to-photon latency, we consider further investigation of simple trained neural networks for more accurate real-time pose prediction, as well as content-driven adaptive image output that overrides the default order of image-plane output in the time-sequential sequence.
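A minimal sketch of the two latency-mitigation steps described in the abstract: pose averaging with a window width that shrinks as head motion grows, and a simple constant-velocity extrapolation of the pose over the measured render-to-photon interval. All names, thresholds, and the window-scaling rule (adaptive_window, MAX_WINDOW, V_FAST, etc.) are illustrative assumptions, not the authors' implementation; the 38 ms figure is the render-to-photon latency reported above.

```python
import numpy as np
from collections import deque

# Illustrative constants - placeholders, not values from the paper.
MAX_WINDOW = 12            # samples averaged when the headset is nearly still
MIN_WINDOW = 1             # no averaging during fast head motion
V_FAST = 0.30              # m/s: speed at which averaging is fully disabled
RENDER_TO_PHOTON = 0.038   # s: measured render-to-photon latency

pose_history = deque(maxlen=MAX_WINDOW)

def adaptive_window(velocity, acceleration):
    """Shrink the averaging window as motion grows, so positional jitter is
    suppressed when still without adding lag during fast movement."""
    speed = np.linalg.norm(velocity) + 0.1 * np.linalg.norm(acceleration)
    scale = max(0.0, 1.0 - speed / V_FAST)
    return max(MIN_WINDOW, int(round(MIN_WINDOW + scale * (MAX_WINDOW - MIN_WINDOW))))

def filtered_position(raw_position, velocity, acceleration):
    """Average the most recent N positions, with N chosen from current motion."""
    pose_history.append(np.asarray(raw_position, dtype=float))
    n = adaptive_window(velocity, acceleration)
    recent = list(pose_history)[-n:]
    return np.mean(recent, axis=0)

def predict_position(position, velocity, dt=RENDER_TO_PHOTON):
    """Constant-velocity extrapolation of the pose over the render-to-photon
    interval, i.e. the simplest form of the prediction discussed above."""
    return np.asarray(position, dtype=float) + np.asarray(velocity, dtype=float) * dt
```

In this sketch the tracker's velocity and acceleration readouts drive both the filtering strength and the prediction horizon; the trained model described in the abstract would replace the constant-velocity step with learned weights on velocity and angular acceleration.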
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Roberts Zabels, Rendijs Smukulis, Ralfs Fenuks, Andris Kučiks, Elza Linina, Kriss Osmanis, and Ilmars Osmanis "Reducing motion to photon latency in multi-focal augmented reality near-eye display", Proc. SPIE 11765, Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650W (27 March 2021); https://doi.org/10.1117/12.2578144
KEYWORDS: Augmented reality, Visualization, Data transmission, Displays, Head-mounted displays, Motion analysis, Solid state electronics
