We propose a new approach to multi-view subspace learning, termed multi-view latent space learning via adaptive graph embedding (MvSLGE), which learns a latent representation from multi-view features. In contrast to most existing multi-view latent space learning methods, which encode only the complementary information into the latent representation, MvSLGE adaptively learns an adjacency graph that effectively characterizes the similarity among samples and provides additional regularization for the latent representation. To extract neighborhood information from multi-view features, we introduce a novel strategy that constructs one graph for each view. The learned graph is then designed to approximate a weighted centroid of these view-specific graphs, with a different weight assigned to each view. Therefore, the constructed latent representation not only incorporates complementary information from features across multiple views but also encodes the similarity structure of the samples. The proposed MvSLGE model can be solved by the augmented Lagrangian multiplier method with alternating direction minimization. Extensive experiments on a variety of datasets substantiate the superiority of MvSLGE.
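The weighted-centroid graph fusion described above can be sketched as follows. This is a minimal illustration of the general idea, not the authors' exact MvSLGE updates: the unified graph is taken as a weighted average of the per-view graphs, and each view's weight is set inversely proportional to its fit residual, a common auto-weighting heuristic in multi-view graph learning. The function name and update rule are assumptions for illustration.

```python
import numpy as np

def fuse_view_graphs(graphs, n_iter=20, eps=1e-12):
    """Illustrative auto-weighted fusion: alternately update a centroid
    graph S (weighted average of per-view graphs) and the view weights
    alpha (inverse of each view's residual ||S - W_v||_F).

    graphs : list of (n, n) similarity matrices, one per view.
    Returns the fused graph S and the learned view weights alpha.
    """
    n_views = len(graphs)
    alpha = np.full(n_views, 1.0 / n_views)   # start from uniform weights
    S = sum(a * W for a, W in zip(alpha, graphs))
    for _ in range(n_iter):
        # S-step: weighted centroid of the view graphs
        S = sum(a * W for a, W in zip(alpha, graphs))
        # alpha-step: down-weight views that S fits poorly
        residuals = np.array([np.linalg.norm(S - W) for W in graphs])
        alpha = 1.0 / (2.0 * residuals + eps)
        alpha /= alpha.sum()                  # normalize to sum to 1
    return S, alpha
```

In practice, each per-view graph would itself be built from that view's features (e.g., a k-nearest-neighbor similarity matrix), and the fusion would be coupled with the latent representation learning inside the alternating optimization.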