Abstract: In this paper, we propose to learn a powerful Re-ID model using a small amount of labeled data together with abundant unlabeled data, i.e., semi-supervised Re-ID. Such learning makes the Re-ID model more generalizable and scalable to real-world scenes. Specifically, we design a two-stream encoder-decoder structure with shared modules and parameters. The encoder module takes an original person image and its horizontal mirror as a pair of inputs and encodes deep features in which identity and structural information are properly disentangled. Different combinations of the disentangled features are then used to reconstruct images in the decoder module. In addition to the commonly used identity-consistency and image-reconstruction-consistency constraints in the loss function, we design a novel loss that enforces consistent transformation constraints on the disentangled features. It requires no labels, and can therefore be applied to both the supervised and unsupervised learning branches of our model. Extensive results on four Re-ID datasets demonstrate that, with 5/6 of the labeled data removed, our method achieves the best performance on Market-1501 and CUHK03 and comparable accuracy on DukeMTMC-reID and MSMT17.
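As a rough illustration of the pipeline the abstract describes, the PyTorch-style sketch below pairs an image with its horizontal mirror, splits each encoding into an identity code and a structure code, and combines label-free reconstruction and transformation-consistency terms. All names (DisentanglingEncoder, Decoder, label_free_losses), layer shapes, and the exact form of the consistency loss are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the two-stream encoder-decoder described in the abstract.
# Architecture details and the loss formulation below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentanglingEncoder(nn.Module):
    """Encodes an image into an identity code and a structure code (assumed split)."""
    def __init__(self, id_dim=256, st_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for a CNN backbone
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.id_head = nn.Linear(128, id_dim)     # identity (appearance) code
        self.st_head = nn.Linear(128, st_dim)     # structure (pose/shape) code

    def forward(self, x):
        h = self.backbone(x)
        return self.id_head(h), self.st_head(h)


class Decoder(nn.Module):
    """Reconstructs a 3x32x16 image from an (identity, structure) code pair."""
    def __init__(self, id_dim=256, st_dim=256):
        super().__init__()
        self.fc = nn.Linear(id_dim + st_dim, 128 * 8 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, f_id, f_st):
        h = self.fc(torch.cat([f_id, f_st], dim=1)).view(-1, 128, 8, 4)
        return self.deconv(h)


def label_free_losses(encoder, decoder, x):
    """Losses needing no identity labels: reconstruction consistency plus an
    assumed consistent-transformation constraint between an image and its mirror."""
    x_flip = torch.flip(x, dims=[3])              # horizontal mirror stream
    id_a, st_a = encoder(x)
    id_b, st_b = encoder(x_flip)

    # Reconstructions from different combinations of the disentangled codes.
    rec_a = decoder(id_a, st_a)
    rec_cross = decoder(id_b, st_a)               # mirror identity + original structure

    loss_rec = F.l1_loss(rec_a, x) + F.l1_loss(rec_cross, x)
    # Assumed transformation-consistency terms: identity codes stay invariant to the
    # flip, and decoding the mirror's codes should reproduce the mirrored image.
    loss_ct = F.mse_loss(id_a, id_b) + F.l1_loss(decoder(id_b, st_b), x_flip)
    return loss_rec + loss_ct
```

In this toy setting the decoder emits 32x16 images, so e.g. label_free_losses(DisentanglingEncoder(), Decoder(), torch.randn(4, 3, 32, 16)) returns a scalar that could be added to a supervised identity loss on the labeled branch and used alone on the unlabeled branch.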
