Result detail
BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual statements with deep pre-trained language representation models
Dočekal Martin, Ing., UPGM (FIT)
Jon Josef, Ing.
Smrž Pavel, doc. RNDr., Ph.D., UPGM (FIT)
This paper describes BUT-FIT's submission at SemEval-2020 Task 5:
Modelling Causal Reasoning in Language: Detecting Counterfactuals. The
challenge focused on detecting whether a given statement contains a
counterfactual (Subtask 1) and on extracting both the antecedent and
consequent parts of the counterfactual from the text (Subtask 2). We
experimented with various state-of-the-art language representation
models (LRMs) and found RoBERTa to perform best in both subtasks. We
achieved first place in both exact match and F1 for Subtask 2 and
ranked second in Subtask 1.
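As an illustration, Subtask 1 amounts to binary sequence classification with a pre-trained LRM. The minimal Python sketch below assumes the Hugging Face transformers library and the roberta-base checkpoint; the label mapping and the example sentence are hypothetical, and the paper's actual fine-tuning setup (hyperparameters, ensembling) is not reproduced here.

import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Load a pre-trained RoBERTa encoder with a 2-way classification head.
# In practice the head must first be fine-tuned on the Subtask 1 training
# data; an untrained head yields arbitrary predictions.
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)
model.eval()

# Hypothetical input statement to classify.
statement = "If the weather had been better, the match would not have been cancelled."
inputs = tokenizer(statement, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed label mapping for illustration: 1 = counterfactual, 0 = not.
prediction = logits.argmax(dim=-1).item()
print("counterfactual" if prediction == 1 else "not counterfactual")

Subtask 2 (antecedent/consequent extraction) can analogously be cast as span prediction over the same encoder, but no sketch of that is attempted here.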
counterfactual, counterfactual reasoning, BERT, RoBERTa, ALBERT, causal reasoning, what-if, SemEval, classification, extraction
@inproceedings{BUT168151,
author="Martin {Fajčík} and Martin {Dočekal} and Josef {Jon} and Pavel {Smrž}",
title="BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual statements with deep pre-trained language representation models",
booktitle="Proceedings of the Fourteenth Workshop on Semantic Evaluation",
year="2020",
pages="437--444",
publisher="Association for Computational Linguistics",
address="Barcelona (online)",
isbn="978-1-952148-31-6",
url="https://www.aclweb.org/anthology/2020.semeval-1.53/"
}
Large-scale information extraction and the use of game principles (gamification) for acquiring new languages based on the "wisdom of crowds" (crowdsourcing), MŠMT, INTER-EXCELLENCE - INTER-COST Subprogramme, LTC18006, start: 2018-06-01, end: 2021-02-28, completed