Result Details
dfkinit2b at CheckThat! 2025: Leveraging LLMs and Ensemble of Methods for Multilingual Claim Normalization
Vykopal Ivan, Bc.
Kula Sebastian
Chikkala Ravi Kiran
Skachkova Natalia
Yang Jing
Solopova Veronika
Schmitt Vera
Ostermann Simon
The rapid spread of misinformation on social media across languages presents a major challenge for fact-checking efforts. Social media posts are often noisy, informal, and unstructured, with irrelevant content, making it difficult to extract concise, verifiable claims. To address this, the CLEF 2025 CheckThat! Shared Task on Multilingual Claim Extraction and Normalization focuses on transforming social media posts into normalized claims: short, clear, and check-worthy statements that capture the essence of potentially misleading content. In this paper, we investigate several approaches to this task, including parameter-efficient fine-tuning, prompting large language models (LLMs), and an ensemble of methods. We evaluate our approaches in two settings: a monolingual setting, where training and validation data are provided, and a zero-shot setting, where no training data is available for the target language. Our approaches achieved first place in 6 out of 13 languages in the monolingual setting and ranked second or third in the remaining languages. In the zero-shot setting, we achieved the highest performance across all seven languages, demonstrating strong generalization to unseen languages.
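The abstract mentions prompting LLMs as one of the explored approaches. Below is a minimal, illustrative sketch of what prompting for claim normalization could look like, assuming a generic instruction-tuned model from Hugging Face; the model name, prompt wording, example post, and generation settings are assumptions for illustration only, not the authors' actual setup.

```python
# Illustrative sketch: prompting an instruction-tuned LLM to normalize a noisy post.
# Model name, prompt, and settings are assumptions, not the paper's exact configuration.
from transformers import pipeline

# Any multilingual, instruction-tuned model could be substituted here.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

post = (
    "BREAKING!!! they don't want you to know this but the new towers are "
    "being switched on tonight and it's NOT safe, share before it gets deleted!!"
)

prompt = (
    "Rewrite the following social media post as one short, clear, "
    "check-worthy claim. Remove emotional language and irrelevant details.\n\n"
    f"Post: {post}\n"
    "Normalized claim:"
)

# Greedy decoding keeps the normalized claim short and deterministic.
output = generator(prompt, max_new_tokens=40, do_sample=False)
print(output[0]["generated_text"])
```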
Fact-Checking, Claim Normalization, Claim Extraction, Multilingual NLP
@inproceedings{BUT198001,
author="{} and Ivan {Vykopal} and {} and {} and {} and {} and {} and {} and {}",
title="dfkinit2b at CheckThat! 2025: Leveraging LLMs and Ensemble of Methods for Multilingual Claim Normalization",
year="2025",
publisher="Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2025)",
address="Madrid",
url="https://ceur-ws.org/Vol-4038/paper_62.pdf"
}