Yali Bian, Michelle Dowling, Chris North

Abstract

Semantic interaction (SI) attempts to learn the user's cognitive intents as they directly manipulate data projections during sensemaking activities. For text analysis, prior implementations of SI have used common data features, such as bag-of-words representations, for machine learning from user interactions. Instead, we hypothesize that features derived from deep learning word embeddings will enable SI to better capture the user's subtle intents. However, evaluating these effects is difficult. SI systems are usually evaluated with a human-centered qualitative approach, observing the utility and effectiveness of the application for end users. This approach has drawbacks in terms of replicability, scalability, and objectivity, which makes it difficult to conduct convincing comparative experiments between different SI models. To tackle this problem, we explore a quantitative, algorithm-centered analysis as a complementary evaluation approach, simulating users' interactions and calculating the accuracy of the learned model. We use these methods to compare word-embedding features to bag-of-words features for SI.
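
To make the simulation idea concrete, the Python sketch below illustrates one possible algorithm-centered evaluation loop; it is a minimal, hypothetical example under our own assumptions, not the implementation from the paper. A simulated analyst "drags" a few documents of a known class together, a toy SI model learns feature weights from that interaction, and accuracy measures how well the weighted space pulls the remaining documents of that class toward the dragged group. The function names (simulate_drag, learn_weights, interaction_accuracy), the toy corpus, and the feature-weighting rule are all illustrative; the feature matrix X could equally hold bag-of-words vectors or averaged word-embedding vectors, enabling a like-for-like comparison.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer


def simulate_drag(labels, target, n_moved=3, seed=0):
    """Pick documents of the target class, as if the user grouped them by dragging."""
    rng = np.random.default_rng(seed)
    candidates = np.flatnonzero(labels == target)
    return rng.choice(candidates, size=min(n_moved, len(candidates)), replace=False)


def learn_weights(X, moved):
    """Toy SI update: upweight features over-represented in the dragged documents."""
    shared = X[moved].mean(axis=0)
    overall = X.mean(axis=0) + 1e-8
    w = shared / overall
    return w / (np.linalg.norm(w) + 1e-8)


def interaction_accuracy(X, labels, target, moved):
    """Among the closest remaining documents to the dragged group's centroid in the
    weighted space, return the fraction that truly belong to the target class."""
    w = learn_weights(X, moved)
    Xw = X * w
    centroid = Xw[moved].mean(axis=0)
    rest = np.setdiff1d(np.arange(len(labels)), moved)
    dists = np.linalg.norm(Xw[rest] - centroid, axis=1)
    k = int((labels[rest] == target).sum())  # number of target documents left to recover
    top = rest[np.argsort(dists)[:k]]
    return float((labels[top] == target).mean())


if __name__ == "__main__":
    docs = ["stock market shares rise", "bank interest rates fall",
            "team wins the final match", "coach praises star player",
            "quarterly earnings beat forecast", "league title race tightens"]
    labels = np.array([0, 0, 1, 1, 0, 1])  # 0 = finance, 1 = sports (toy ground truth)

    # Bag-of-words (TF-IDF) features; averaged word-embedding vectors could be
    # substituted here to compare the two feature spaces under the same simulation.
    X = TfidfVectorizer().fit_transform(docs).toarray()

    moved = simulate_drag(labels, target=1, n_moved=2)
    print("simulated drag on documents:", moved)
    print("accuracy of learned model:", interaction_accuracy(X, labels, 1, moved))

Repeating this loop over many simulated interactions and both feature representations yields the kind of quantitative, replicable comparison the abstract describes, without requiring a user study for each model variant.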

People

Yali Bian


Chris North


Publication Details

Date of publication:
July 31, 2020
Journal:
CoRR (Computing Research Repository, arXiv)
Publication note:

Yali Bian, Michelle Dowling, Chris North: Evaluating Semantic Interaction on Word Embeddings via Simulation. CoRR abs/2007.15824 (2020)