This paper proposes to detect leg posture, walking trajectory, and walking speed during walking in a static indoor environment, which are key to monitoring human walk normality. A small wearable depth camera mounted on the knee serves as the sensor to detect and monitor knee posture, including angle, walking distance, and trajectory. In this method, points of interest (PoI) are determined and tracked in each frame, and the 3D coordinates of those PoI are used to calculate the camera angle and moving distance. These are used as features to train and test a classifier for human walk normality diagnosis.
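The abstract does not specify how the camera angle and moving distance are recovered from the tracked PoI; a common approach for matched 3D point sets is a rigid-transform fit (Kabsch algorithm). The sketch below is a minimal illustration under that assumption, not the paper's actual implementation: it estimates the rotation and translation of the camera between two frames from corresponding PoI coordinates, then reports the rotation angle and displacement.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t with dst ~ R @ src + t
    (Kabsch algorithm) from matched 3D points of interest."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def angle_and_distance(R, t):
    """Camera rotation angle (radians) and moving distance between frames."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return angle, np.linalg.norm(t)
```

With PoI coordinates from consecutive depth frames as `src` and `dst`, the per-frame angles and distances could then be accumulated into the knee-angle and trajectory features the abstract describes.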
When a human expresses an intention toward a 3D point, eye gaze, hand pointing, and head motion generally approach the point in that order, and eye fixation during intention expression is key to locating the intention point. This paper proposes a multimodal fusion of eye gaze, hand pointing, and head motion to determine the state of 3D intention. The method uses human psychovisual knowledge of this eye-gaze, hand-pointing, and head sequence as a pattern to determine the intention mode. In the intention state, the position of the intention point in three-dimensional space is determined primarily from the sequence of eye gaze, hand pointing, and head velocity.
When a blind person starts to cross a crosswalk, he/she may need to track moving objects such as pedestrians and pets in order to avoid collisions. This paper proposes a method of moving-obstacle tracking on a crosswalk for a blind-person navigation system. In the method, the borders of the crosswalk in front of the blind person are first determined from the straight white lines in an intensity image, while a depth image is simultaneously used to detect candidate moving obstacles from differences among neighboring frames. Moving objects are then identified by the height of the candidates, and the positions of the moving objects, which are assumed to be pedestrians and pets, are obtained from the depth image. This position information from previous image frames is finally used to estimate a movement vector based on each moving object's trajectory, and rectangular windows are employed to track the moving objects in the next frame. To evaluate the performance of the proposed method, experiments with pedestrians at a crosswalk were performed, and the results demonstrated the effectiveness of the proposed method.
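Two steps in this pipeline lend themselves to a brief sketch: candidate detection by differencing neighboring depth frames, and movement-vector estimation from an object's recent positions. The code below is an illustrative simplification under assumed details (a fixed depth-difference threshold, a least-squares linear fit for the movement vector), not the authors' exact procedure.

```python
import numpy as np

def moving_object_candidates(prev_depth, curr_depth, thresh=0.05):
    """Boolean mask of pixels whose depth changed between neighboring
    frames; connected regions of this mask are candidate moving obstacles.
    The threshold value here is an assumption, not from the paper."""
    return np.abs(curr_depth - prev_depth) > thresh

def estimate_movement_vector(positions):
    """Least-squares per-frame velocity from an object's recent positions
    (rows = frames, columns = coordinates)."""
    positions = np.asarray(positions, dtype=float)
    t = np.arange(len(positions))
    return np.polyfit(t, positions, 1)[0]  # slope per coordinate axis

def predict_next_position(positions):
    """Extrapolate one frame ahead to place the tracking window."""
    positions = np.asarray(positions, dtype=float)
    return positions[-1] + estimate_movement_vector(positions)
```

The predicted position would anchor the rectangular tracking window in the next frame, with the depth image supplying the object's actual position once the frame arrives.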
The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. A method for facial recognition under varied expressions against neutral face samples of individuals via recognition of expression warping and the use of a virtual expression-face database is proposed. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification by using a process of masking to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU multi-PIE, Cohn–Kanade, and AR expression-face databases, and we find that it provides significantly improved results in terms of face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.