Multiview Similarity Learning for Robust Visual Clustering

Ao Li, Jiajia Chen, Deyun Chen, Guanglu Sun

Abstract: Multiview similarity learning aims to measure the neighbor relationship between each pair of samples; it has been widely used in data mining and shows encouraging performance in many applications. Nevertheless, existing multiview similarity learning methods have two main drawbacks. On one hand, the comprehensive consensus similarity is learned from fixed graphs learned beforehand from each view separately, which ignores the latent cues hidden in graphs from different views. On the other hand, when the data are contaminated with noise or outliers, the performance of existing methods declines greatly because the original data distribution is destroyed. To address these two problems, a Robust Multiview Similarity Learning (RMvSL) method is proposed in this paper. The contributions of RMvSL include three aspects. First, low-rank representation has shown advantages in removing noise and outliers, which motivates us to introduce data representation with a low-rank constraint to generate clean reconstructed data for robust graph learning in each view. Second, a multiview scheme is established to learn the consensus similarity from dynamically learned graphs across all views; in turn, the consensus similarity propagates the latent relationship information from other views to the learning of each view graph. Finally, these two processes are combined into a unified objective function that alternatingly optimizes the data reconstruction, the view graphs, and the consensus similarity graph, which helps to obtain an overall optimal solution. Experimental results on several visual clustering datasets demonstrate that RMvSL outperforms most existing similarity learning methods and remains highly robust on noisy data.
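The alternating scheme described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: singular value thresholding stands in for the low-rank reconstruction step, a Gaussian-kernel affinity stands in for per-view graph learning, and the consensus is blended back into each view graph by a simple convex combination. All function names and the hyperparameters `tau`, `alpha`, and `sigma` are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: a simple low-rank surrogate
    # (assumption: stands in for the paper's low-rank representation step).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def view_graph(X, sigma=1.0):
    # Gaussian-kernel affinity graph for one view (illustrative choice).
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def rmvsl_sketch(views, tau=0.5, alpha=0.5, n_iter=5):
    # Alternate between (1) low-rank denoising per view, (2) per-view
    # graph learning blended with the consensus, and (3) a consensus
    # update, here simply the mean of the view graphs.
    clean = [svt(X, tau) for X in views]
    S = np.mean([view_graph(Xc) for Xc in clean], axis=0)  # initial consensus
    for _ in range(n_iter):
        # Propagate consensus information back into each view graph.
        graphs = [alpha * view_graph(Xc) + (1.0 - alpha) * S for Xc in clean]
        S = np.mean(graphs, axis=0)
    return 0.5 * (S + S.T)  # enforce symmetry
```

In the actual method, each of these steps would be derived from a single objective function and solved jointly; the sketch only conveys the alternation between view-graph updates and the consensus update.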
