Paper
7 June 2004
Vision-based navigation in a dynamic environment for virtual human
Yan Liu, Ji-Zhou Sun, Jia-Wan Zhang, Ming-Chu Li
Proceedings Volume 5292, Human Vision and Electronic Imaging IX; (2004) https://doi.org/10.1117/12.527248
Event: Electronic Imaging 2004, San Jose, California, United States
Abstract
Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments, and related applications. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. The behavior model is divided into three modules: vision, global planning, and local planning. Vision is the only channel through which the smart virtual actor obtains information about the outside world. The global and local planning modules then use the A* and D* algorithms to find a path for the virtual human in the dynamic environment. Finally, experiments on our test platform (the Smart Human System) verify the feasibility of this behavior model.
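As a rough illustration of the global planning layer described in the abstract, the following is a minimal grid-based A* sketch in Python, assuming a 4-connected occupancy grid with unit step costs and a Manhattan-distance heuristic. The function name, grid representation, and heuristic are illustrative assumptions, not the implementation used in the paper's Smart Human System.

import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable.

    grid is a 2D list of booleans where True marks a blocked cell.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible on a 4-connected grid with unit costs.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Reconstruct the path by walking parent links back to the start.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue  # stale heap entry; a cheaper route to this cell was found
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

if __name__ == "__main__":
    free, wall = False, True
    grid = [[free, free, free],
            [wall, wall, free],
            [free, free, free]]
    print(astar(grid, (0, 0), (2, 0)))  # path that detours around the blocked row

A D*-style local planner, as named in the abstract, would additionally repair the plan incrementally when the vision module reports changes in the dynamic environment, rather than replanning from scratch.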
© (2004) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yan Liu, Ji-Zhou Sun, Jia-Wan Zhang, and Ming-Chu Li "Vision-based navigation in a dynamic environment for virtual human", Proc. SPIE 5292, Human Vision and Electronic Imaging IX, (7 June 2004); https://doi.org/10.1117/12.527248
KEYWORDS: Visual process modeling, Virtual reality, Data modeling, Motion models, Solid modeling, Databases, Human vision and color perception
