Leveraging syntactic parsing to improve event annotation matching

Colruyt, C., De Clercq, O., & Hoste, V.
Proceedings of the First Workshop on Aggregating and Analysing Crowdsourced Annotations for NLP
Association for Computational Linguistics (ACL)
EMNLP 2019 (Hong Kong)


Detecting event mentions is the first step in event extraction from text, and annotating them is a notoriously difficult task. Evaluating annotator consistency is crucial when building datasets for mention detection. When event mentions are allowed to cover many tokens, annotators may disagree on their span, so two overlapping annotations may refer either to the same event or to different events.
This paper explores different fuzzy matching functions which aim to resolve this ambiguity. The functions extract the sets of syntactic heads present in the annotations, use the Dice coefficient to measure the similarity between sets and return a judgment based on a given threshold. The functions are tested against the judgments of a human evaluator and a comparison is made between sets of tokens and sets of syntactic heads. The best-performing function is a head-based function that is found to agree with the human evaluator in 89% of cases.
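The matching procedure described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the head-extraction step is stubbed out (the paper uses a syntactic parser to obtain heads), and the threshold value of 0.8 is an arbitrary placeholder, not a figure from the paper.

```python
def dice(a: set, b: set) -> float:
    """Dice coefficient between two sets: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # two empty sets are trivially identical
    return 2 * len(a & b) / (len(a) + len(b))

def same_event(heads_a, heads_b, threshold=0.8):
    """Judge two overlapping annotations as referring to the same event
    when the Dice similarity of their syntactic-head sets meets the
    threshold. (threshold=0.8 is a placeholder, not from the paper.)"""
    return dice(set(heads_a), set(heads_b)) >= threshold

# Hypothetical example: two annotators mark overlapping spans for the
# same mention, differing only in modifiers but sharing the head token.
ann_1 = {"attack"}             # heads of "the brutal attack"
ann_2 = {"attack"}             # heads of "attack on the village"
ann_3 = {"attack", "bombing"}  # heads of a span covering two events

print(same_event(ann_1, ann_2))  # shared head -> same event
print(same_event(ann_1, ann_3))  # partial overlap -> depends on threshold
```

Comparing sets of syntactic heads rather than raw token sets makes the judgment robust to span disagreements over modifiers and function words, which is the comparison the paper evaluates against a human judge.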