Deep Semantic Matching with Foreground Detection and Cycle-Consistency

Jia-Bin Huang

Abstract

Establishing dense semantic correspondences between object instances remains a challenging problem due to background clutter, significant scale and pose differences, and large intra-class variations. In this paper, we address weakly supervised semantic matching based on a deep network where only image pairs without manual keypoint correspondence annotations are provided. To facilitate network training with this weaker form of supervision, we 1) explicitly estimate the foreground regions to suppress the effect of background clutter and 2) develop cycle-consistent losses to enforce the predicted transformations across multiple images to be geometrically plausible and consistent. We train the proposed model using the PF-PASCAL dataset and evaluate the performance on the PF-PASCAL, PF-WILLOW, and TSS datasets. Extensive experimental results show that the proposed approach performs favorably against the state-of-the-art methods.
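To illustrate the cycle-consistency idea described above, the sketch below shows a minimal two-image cycle-consistency term, assuming each predicted transformation is parameterized as a 2x3 affine matrix and that a foreground probability map is available to down-weight background points. The function names, shapes, and the affine parameterization are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: cycle-consistency loss for geometric matching,
# weighted by an (assumed) foreground mask. Not the paper's code.
import torch

def warp_points(theta, pts):
    """Apply a batch of 2x3 affine transforms to 2-D points.

    theta: (B, 2, 3) affine parameters
    pts:   (B, N, 2) points in normalized coordinates
    returns: (B, N, 2) transformed points
    """
    ones = torch.ones_like(pts[..., :1])
    homog = torch.cat([pts, ones], dim=-1)      # (B, N, 3) homogeneous coords
    return homog @ theta.transpose(1, 2)        # (B, N, 2)

def cycle_consistency_loss(theta_ab, theta_ba, pts_a, fg_weight_a=None):
    """Penalize deviation from identity when mapping A -> B -> A.

    theta_ab, theta_ba: (B, 2, 3) transforms from A to B and B to A
    pts_a:              (B, N, 2) sample points in image A
    fg_weight_a:        optional (B, N) foreground weights (hypothetical)
    """
    pts_b = warp_points(theta_ab, pts_a)         # map A -> B
    pts_a_cycled = warp_points(theta_ba, pts_b)  # map back B -> A
    err = ((pts_a_cycled - pts_a) ** 2).sum(dim=-1)  # per-point squared error
    if fg_weight_a is not None:
        err = err * fg_weight_a                  # suppress background points
    return err.mean()
```

In this sketch, a cycle A -> B -> A that returns each foreground point to its original location incurs zero loss, which is the sense in which the predicted transformations are encouraged to be consistent; the paper additionally enforces consistency across more than two images.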

Publication Details

Date of publication: March 30, 2020

Journal: arXiv

Publication Note: Yun-Chun Chen, Po-Hsiang Huang, Li-Yu Yu, Jia-Bin Huang, Ming-Hsuan Yang, Yen-Yu Lin: Deep Semantic Matching with Foreground Detection and Cycle-Consistency. CoRR abs/2004.00144 (2020)