Multi-view latent space learning framework via adaptive graph embedding
14 November 2024
Haohao Li, Zhong Li, Huibing Wang
Abstract

We propose a new approach to multi-view subspace learning, termed multi-view latent space learning via adaptive graph embedding (MvSLGE), which learns a latent representation from multi-view features. In contrast to most existing multi-view latent space learning methods, which encode only the complementary information into the latent representation, MvSLGE adaptively learns an adjacency graph that effectively characterizes the similarity among samples and provides additional regularization for the latent representation. To extract neighborhood information from multi-view features, we introduce a novel strategy that constructs one graph for each view. The learned graph is then designed to approximate the centroid of these view-specific graphs, each assigned a different weight. As a result, the constructed latent representation not only incorporates complementary information from features across multiple views but also encodes the similarity structure of the samples. The proposed MvSLGE model is solved by the augmented Lagrangian multiplier method with alternating direction minimization. Extensive experiments on a variety of datasets substantiate the superiority of MvSLGE.
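The graph-fusion idea described above — build one similarity graph per view, then learn a consensus graph as a weighted centroid of the view-specific graphs — can be sketched as follows. This is a minimal illustration, not the paper's objective: the `knn_graph` construction (Gaussian-weighted kNN), the inverse-distance weight update, and all function names are assumptions chosen for clarity.

```python
import numpy as np

def knn_graph(X, k=5):
    """Symmetric kNN adjacency with Gaussian weights (illustrative choice)."""
    n = X.shape[0]
    # pairwise squared distances between samples (rows of X)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    sigma2 = np.mean(d2) + 1e-12
    W = np.exp(-d2 / sigma2)
    np.fill_diagonal(W, 0.0)
    # keep only the k strongest edges per row
    drop = np.argsort(-W, axis=1)[:, k:]
    for i in range(n):
        W[i, drop[i]] = 0.0
    return (W + W.T) / 2.0  # symmetrize

def fuse_graphs(views, k=5, n_iter=10):
    """Alternately update a centroid graph S and per-view weights alpha.

    The inverse-distance weighting below is a common auto-weighting
    heuristic for multi-view graph fusion, used here as a stand-in for
    the paper's ALM/ADM optimization.
    """
    graphs = [knn_graph(X, k) for X in views]
    alpha = np.ones(len(graphs)) / len(graphs)
    for _ in range(n_iter):
        # centroid graph: weighted average of view-specific graphs
        S = sum(a * G for a, G in zip(alpha, graphs))
        # views whose graph is closer to the centroid get larger weight
        dists = np.array([np.linalg.norm(S - G) for G in graphs])
        alpha = 1.0 / (2.0 * dists + 1e-12)
        alpha /= alpha.sum()
    return S, alpha
```

In this sketch, each view contributes a graph over the same sample set (rows aligned across views), and the fused graph `S` would then serve as the similarity regularizer on the latent representation.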

© 2024 SPIE and IS&T
Haohao Li, Zhong Li, and Huibing Wang "Multi-view latent space learning framework via adaptive graph embedding," Journal of Electronic Imaging 33(6), 063016 (14 November 2024). https://doi.org/10.1117/1.JEI.33.6.063016
Received: 15 May 2024; Accepted: 22 October 2024; Published: 14 November 2024