Age verification is an important task in various application contexts, such as controlling access to hotel areas that are off-limits to children and teenagers, to spaces that are dangerous for children, and to public areas during the spread of a virus, among others. Age verification consists in classifying face images into different age groups while handling variations in face appearance caused by occlusion, pose, low resolution, scale, and illumination. This work introduces an access control application based on age verification in an uncontrolled environment. We propose a new two-level age classification method based on deep learning that classifies face images into eight age groups. The two-level classification strategy helps reduce both inter- and intra-age-group confusion. Our experiments were performed on the multi-constrained Adience benchmark. The results illustrate the effectiveness and robustness of the proposed age classification method in an uncontrolled environment.
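The abstract does not detail the architecture, so the following is only a minimal sketch of how a two-level (coarse-then-fine) age-group classifier could be wired together, assuming a PyTorch ResNet-18 backbone and a hypothetical split of the eight Adience groups into three coarse groups; the authors' actual model may differ.

```python
# Sketch of a two-level age-group classifier: a coarse head picks a broad
# age range, then a per-range fine head picks one of the 8 Adience groups.
# The backbone, group split, and head sizes are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical grouping of the 8 Adience age groups into 3 coarse groups.
COARSE_TO_FINE = {0: [0, 1, 2], 1: [3, 4, 5], 2: [6, 7]}

class TwoLevelAgeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # shared 512-D feature extractor
        self.backbone = backbone
        self.coarse_head = nn.Linear(512, len(COARSE_TO_FINE))
        # One fine head per coarse group, predicting its own subgroups.
        self.fine_heads = nn.ModuleList(
            nn.Linear(512, len(fine)) for fine in COARSE_TO_FINE.values()
        )

    def forward(self, x):
        feat = self.backbone(x)
        coarse_logits = self.coarse_head(feat)
        coarse_pred = coarse_logits.argmax(dim=1)
        fine_groups = []
        for i, c in enumerate(coarse_pred.tolist()):
            fine_logits = self.fine_heads[c](feat[i])
            fine_groups.append(COARSE_TO_FINE[c][fine_logits.argmax().item()])
        return coarse_logits, torch.tensor(fine_groups)

model = TwoLevelAgeClassifier()
faces = torch.randn(4, 3, 224, 224)           # a batch of face crops
_, age_groups = model(faces)
print(age_groups)                              # predicted 8-way age groups
```

Routing each sample through a coarse prediction before the fine decision is one plausible way to realize the confusion-reduction idea described in the abstract.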
Using semantic attributes such as gender, clothing, and accessories to describe people's appearance is an appealing modeling approach for video surveillance applications. We propose a mid-level appearance signature based on extracting a list of nameable semantic attributes describing the body under uncontrolled acquisition conditions. Conventional approaches extract the same set of low-level features to learn all semantic classifiers uniformly; their critical limitation is the inability to capture the dominant visual characteristics of each trait separately. The proposed approach extracts low-level features in an attribute-adaptive way by automatically selecting the most relevant features for each attribute separately. Furthermore, relying on a small training dataset would easily lead to poor performance because of the large intraclass and interclass variations. We therefore annotated large-scale people images collected from different person re-identification benchmarks, covering a large attribute sample and reflecting the challenges of uncontrolled acquisition conditions. These annotations were gathered into an appearance semantic attribute dataset that contains 3590 images annotated with 14 attributes. Various experiments show that features carefully designed to learn the visual characteristics of an attribute improve the correct classification accuracy and reduce both spatial and temporal complexity compared with state-of-the-art approaches.
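The abstract does not specify the selection criterion, so the sketch below only illustrates the general attribute-adaptive idea: each attribute gets its own feature subset and its own classifier. It assumes scikit-learn, synthetic feature vectors standing in for real low-level descriptors, and hypothetical attribute names.

```python
# Sketch of attribute-adaptive feature selection: a separate feature subset
# is selected per semantic attribute before training its binary classifier.
# Data, attribute names, and the selection scorer are assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))                  # low-level feature vectors
attributes = {                                   # hypothetical binary labels
    "gender": rng.integers(0, 2, size=500),
    "has_backpack": rng.integers(0, 2, size=500),
}

classifiers = {}
for name, y in attributes.items():
    # Each attribute learns from its own most relevant features.
    clf = make_pipeline(
        SelectKBest(mutual_info_classif, k=40),
        LinearSVC(),
    )
    clf.fit(X, y)
    classifiers[name] = clf

print({name: clf.predict(X[:3]) for name, clf in classifiers.items()})
```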
This paper introduces a novel method for moving object classification in the infrared and visible spectra. The method is based on a data-mining process that combines a set of best features derived from shape, texture, and motion. The proposed method relies either on the visible spectrum or on the infrared spectrum according to the weather conditions (sunny days, rain, fog, snow, etc.) and the time of video acquisition. Experimental studies are carried out to prove the efficiency of our predictive models for classifying moving objects and the originality of our process with intelligent fusion of the VIS and IR spectra.
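As a rough illustration of the spectrum-selection idea, the sketch below uses a simple rule to pick the visible or infrared stream from weather and time metadata, then classifies shape/texture/motion descriptors with a decision tree; the feature set, rule, and class labels are hypothetical and not the paper's exact data-mining process.

```python
# Sketch of VIS/IR spectrum selection followed by moving-object classification.
# The weather/time rule and the synthetic descriptors are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def choose_spectrum(weather: str, hour: int) -> str:
    """Pick the spectrum expected to be most reliable for the scene."""
    if weather in {"fog", "rain", "snow"} or hour < 6 or hour > 20:
        return "infrared"   # poor visibility or night-time
    return "visible"        # clear daylight conditions

rng = np.random.default_rng(1)
# One predictive model per spectrum, trained on shape + texture + motion features.
models = {}
for spectrum in ("visible", "infrared"):
    X = rng.normal(size=(300, 12))               # 12-D descriptor per object
    y = rng.integers(0, 3, size=300)             # e.g. person / vehicle / animal
    models[spectrum] = DecisionTreeClassifier(max_depth=5).fit(X, y)

spectrum = choose_spectrum("fog", hour=22)
sample = rng.normal(size=(1, 12))
print(spectrum, models[spectrum].predict(sample))
```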