Result Details
CV-Probes: Studying the interplay of lexical and world knowledge in visually grounded verb understanding
How do vision-language (VL) transformer models ground verb phrases (VPs), and do they integrate contextual and world knowledge in this process? We introduce the CV-Probes dataset, containing image-caption pairs involving verb phrases that require both social knowledge and visual context to interpret (e.g., "beg"), as well as pairs involving verb phrases that can be grounded based on information directly available in the image (e.g., "sit"). We show that VL models struggle to ground VPs that are strongly context-dependent. Further analysis using explainable AI techniques indicates that such models may not pay sufficient attention to the verb token in the captions. Our results suggest a need for improved methodologies in VL model training and evaluation. The code and dataset will be available at
https://github.com/ivana-13/CV-Probes.
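As a rough illustration of the kind of probing the abstract describes, the minimal sketch below scores a context-dependent caption against a directly groundable one for the same image using an off-the-shelf CLIP image-text matching model. The model checkpoint, image path, and captions are illustrative assumptions, not the paper's actual setup; CV-Probes evaluates VL transformers with its own dataset and protocol.

```python
# Minimal sketch (not the authors' code): compare image-text matching
# scores for a caption whose verb needs world knowledge ("begging")
# versus one groundable from visible content alone ("sitting").
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("street_scene.jpg")  # hypothetical image file
captions = [
    "A man is begging on the street.",  # requires social/world knowledge
    "A man is sitting on the street.",  # groundable from the image alone
]

inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)
# logits_per_image has shape (num_images, num_captions); softmax turns
# the matching logits into a distribution over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")
```

Under the paper's hypothesis, such a probe would assign less reliable matching scores to the context-dependent caption than to the directly groundable one.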
Keywords: multimodal models, grounding, verb phrases, physical and social grounding, understanding of verbs, probing
@inproceedings{BUT199796,
author="Ivana {Beňová} and Michal {Gregor} and {}",
title="CV-Probes: Studying the interplay of lexical and world knowledge in visually grounded verb understanding",
year="2025",
pages="4425--4433",
isbn="1069-7977",
url="https://escholarship.org/content/qt3h83566r/qt3h83566r.pdf?v=lg"
}