Zhiyang Xu, Lifu Huang


Data scarcity and imbalance have been the main factors hindering progress in event extraction (EE). In this work, we propose a self-training with gradient guidance (STGG) framework that consists of (1) a base event extraction model, which is first trained on existing event annotations and then applied to large-scale unlabeled corpora to predict new event mentions, and (2) a scoring model that takes in each predicted event trigger and argument, as well as their path in the Abstract Meaning Representation (AMR) graph, and estimates a probability score indicating the correctness of the event prediction. The new event predictions, along with their correctness scores, are then used as pseudo-labeled examples to improve the base event extraction model, with the magnitude and direction of its gradients guided by the correctness scores. Experimental results on three benchmark datasets, including ACE05-E, ACE05-E+, and ERE-EN, demonstrate the effectiveness of the STGG framework on the event extraction task, with up to 1.9 F-score improvement over the base event extraction models. Our experimental analysis further shows that STGG is a general framework: it can be applied to any base event extraction model and improves its performance by leveraging broad unlabeled data, even when high-quality AMR graph annotations are not available.
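The core idea — letting a correctness score control both the magnitude and the direction of the gradient from a pseudo-labeled example — can be illustrated with a minimal sketch. This is not the authors' exact formulation; the mapping from score to signed weight (`2 * score - 1`) and the single-example cross-entropy setup are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def guided_gradient(logits, pseudo_label, score):
    """Gradient of cross-entropy w.r.t. logits for one pseudo-labeled
    example, scaled by a correctness score in [0, 1].

    The (hypothetical) mapping weight = 2*score - 1 means a likely-correct
    prediction (score near 1) reinforces the pseudo label, while a
    likely-wrong one (score near 0) flips the gradient direction and
    pushes probability mass away from it."""
    p = softmax(logits)
    grad = p.copy()
    grad[pseudo_label] -= 1.0          # standard d(CE)/d(logits)
    weight = 2.0 * score - 1.0         # signed magnitude from the score
    return weight * grad

logits = np.array([2.0, 0.5, 0.1])
g_hi = guided_gradient(logits, 0, 0.9)  # trusted prediction
g_lo = guided_gradient(logits, 0, 0.1)  # likely-wrong prediction
```

With a high score, the gradient on the pseudo-label logit is negative (descent increases it); with a low score, the sign flips, so the same update discourages the prediction instead.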

Publication Details

Date of publication:
May 25, 2022
Cornell University
Publication note:

Zhiyang Xu, Lifu Huang: Improve Event Extraction via Self-Training with Gradient Guidance. CoRR abs/2205.12490 (2022)