Xuchao Zhang, Fanglan Chen, Chang-Tien Lu, Naren Ramakrishnan
Measuring the uncertainty of a classifier's predictions is especially important in applications such as medical diagnosis, where limited human resources must be focused on the most uncertain predictions returned by machine learning models. However, few existing uncertainty models attempt to improve the overall prediction accuracy of a text classification task when human resources are involved. In this paper, we propose a novel neural-network-based model that applies a new dropout-entropy method for uncertainty measurement. We also design a metric learning method on feature representations, which boosts the performance of dropout-based uncertainty methods by reducing prediction variance in accurate prediction trials. Extensive experiments on real-world data sets demonstrate that our method achieves a considerable improvement in overall prediction accuracy compared to existing approaches. In particular, our model improves the accuracy from 0.78 to 0.92 when 30% of the most uncertain predictions are handed over to human experts on the "20NewsGroup" data.
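The core idea behind dropout-based uncertainty can be sketched as follows: run several stochastic forward passes with dropout kept active, average the resulting class-probability vectors, and score each document by the entropy of that mean distribution; the highest-entropy predictions are the ones handed over to human experts. This is a minimal illustration of that general recipe, not the paper's exact model; the function name `dropout_entropy` and the toy probability vectors are assumptions for the example.

```python
import math

def dropout_entropy(mc_probs):
    """Entropy (in nats) of the mean predictive distribution over
    T Monte Carlo dropout samples.

    mc_probs: list of T class-probability vectors, one per stochastic
    forward pass with dropout left enabled at test time (assumed input
    format for this sketch).
    """
    num_samples = len(mc_probs)
    num_classes = len(mc_probs[0])
    # Average the T sampled distributions class by class.
    mean_p = [sum(sample[c] for sample in mc_probs) / num_samples
              for c in range(num_classes)]
    # Predictive entropy of the averaged distribution; skip p == 0 terms.
    return -sum(p * math.log(p) for p in mean_p if p > 0)

# Toy example: a confident prediction (samples agree on one class)
# yields lower entropy than an uncertain one (samples disagree).
confident = [[0.97, 0.02, 0.01]] * 10
uncertain = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.3, 0.3, 0.4]]
assert dropout_entropy(confident) < dropout_entropy(uncertain)
```

Ranking documents by this score and routing the top 30% to human annotators corresponds to the triage setting evaluated in the abstract; the metric learning component described in the paper additionally shapes the feature space so that the sampled distributions vary less on correct predictions.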
- Date of publication: July 17, 2019
- Cornell University
- Publication note: Xuchao Zhang, Fanglan Chen, Chang-Tien Lu, Naren Ramakrishnan: Mitigating Uncertainty in Document Classification. CoRR abs/1907.07590 (2019)