Learning Loss for Active Learning

Donggeun Yoo et al. — CVPR 2019

More annotated data improves the performance of deep neural networks; the problem is the limited budget for annotation. One solution is active learning, in which a model asks a human to annotate the data it perceives as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks, but most of them are either designed specifically for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple, task-agnostic, and efficient with deep networks. We attach a small parametric module, named the "loss prediction module," to a target network and train it to predict the target losses of unlabeled inputs. This module can then suggest data for which the target model is likely to produce a wrong prediction. The method is task-agnostic because the module is learned from a single loss regardless of the target task. We rigorously validate our method on image classification, object detection, and human pose estimation with recent network architectures. The results demonstrate that our method consistently outperforms previous methods across the tasks.
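The attached module can be sketched as follows. This is a minimal illustrative forward pass, not the authors' implementation: it assumes a multi-branch design in which each intermediate feature map of the target network is global-average-pooled, passed through a small fully connected layer, and the branches are concatenated into a scalar loss estimate. All layer sizes and the random initialization are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def global_avg_pool(feat):
    # feat: (batch, channels, H, W) -> (batch, channels)
    return feat.mean(axis=(2, 3))


class LossPredictionModule:
    """Hypothetical sketch: one FC branch per intermediate feature map,
    concatenated and projected to a scalar predicted loss per input."""

    def __init__(self, feature_dims, hidden_dim=16):
        # One weight matrix per feature map the target network exposes.
        self.W = [rng.standard_normal((d, hidden_dim)) * 0.1 for d in feature_dims]
        # Final projection from the concatenated branches to a scalar.
        self.V = rng.standard_normal((hidden_dim * len(feature_dims), 1)) * 0.1

    def predict(self, features):
        # GAP -> FC -> ReLU for each branch, then concatenate and project.
        branches = [np.maximum(global_avg_pool(f) @ W, 0.0)
                    for f, W in zip(features, self.W)]
        return (np.concatenate(branches, axis=1) @ self.V).squeeze(1)


# Two hypothetical feature maps from different depths of a target network.
module = LossPredictionModule(feature_dims=[16, 32])
feats = [rng.standard_normal((4, 16, 8, 8)), rng.standard_normal((4, 32, 4, 4))]
predicted_losses = module.predict(feats)  # one scalar per input
```

At query time, the unlabeled inputs with the highest predicted losses would be sent for annotation. In the paper the module is trained jointly with the target network; that training objective is omitted here.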

AUTHORS

Minchul Kim1, Jongchan Park1, Seil Na1, Chang Min Park2, Donggeun Yoo1

1Lunit Inc. 2KAIST

PUBLISHED
CVPR 2019
