Self-supervised Sparse to Dense Motion Segmentation
Amirhossein Kardoost (University of Mannheim)*, Kalun Ho (Fraunhofer ITWM), Peter Ochs (Saarland University), Margret Keuper (University of Mannheim)
Keywords: Motion and Tracking
Abstract:
Observable motion in videos can give rise to the definition of objects moving with respect to the scene. The task of segmenting such moving objects is referred to as motion segmentation and is usually tackled either by aggregating motion information in long, sparse point trajectories, or by directly producing per-frame dense segmentations relying on large amounts of training data. In this paper, we propose a self-supervised method to learn the densification of sparse motion segmentations from single video frames. While previous approaches towards motion segmentation build upon pre-training on large surrogate datasets and use dense motion information as an essential cue for the pixel-wise segmentation, our model does not require pre-training and operates on single frames at test time. It can be trained in a sequence-specific way to produce high-quality dense segmentations from sparse and noisy input. We evaluate our method on the well-known motion segmentation datasets FBMS59 and DAVIS2016.
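The core idea described in the abstract is a per-frame segmentation network supervised only at the sparse pixels covered by point trajectories, with dense masks read out from single frames at test time. The following is a minimal, hypothetical sketch of that sparse-to-dense training scheme (it is not the authors' released model): the network architecture, the function names, and the sequence-specific training loop are illustrative assumptions, and the only paper-specific element is that the loss is evaluated exclusively at sparsely labeled pixels.

```python
# Minimal sketch (assumption, not the authors' code) of densifying sparse
# motion labels: a small per-frame segmentation network is trained on a
# single sequence, with cross-entropy computed only at pixels that carry a
# sparse trajectory label (-1 marks unlabeled pixels).

import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallSegNet(nn.Module):
    """Hypothetical lightweight encoder-decoder predicting per-pixel logits."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        logits = self.decoder(self.encoder(x))
        # Upsample coarse logits back to full image resolution.
        return F.interpolate(logits, size=(h, w), mode="bilinear", align_corners=False)


def sparse_ce_loss(logits, sparse_labels):
    """Cross-entropy restricted to labeled pixels (label -1 = no trajectory)."""
    return F.cross_entropy(logits, sparse_labels, ignore_index=-1)


def train_on_sequence(frames, sparse_label_maps, epochs=50, lr=1e-3):
    """Sequence-specific training.

    frames: float tensor (N, 3, H, W); sparse_label_maps: long tensor (N, H, W)
    with object ids at trajectory points and -1 elsewhere.
    """
    num_classes = int(sparse_label_maps.max().item()) + 1
    net = SmallSegNet(num_classes=num_classes)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        for img, lbl in zip(frames, sparse_label_maps):
            opt.zero_grad()
            loss = sparse_ce_loss(net(img[None]), lbl[None])
            loss.backward()
            opt.step()
    return net  # at test time, argmax over net(frame) yields a dense mask
```

Under these assumptions, inference requires only a single RGB frame: `net(frame[None]).argmax(dim=1)` produces the dense segmentation, with no dense optical flow needed at test time.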