Publication Details

Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles

SRBA Ivan, MÓRO Róbert, TOMLEIN Matúš, PECHER Branislav, ŠIMKO Jakub, ŠTEFANCOVÁ Elena, KOMPAN Michal, HRČKOVÁ Andrea, PODROUŽEK Juraj, GAVORNÍK Adrián and BIELIKOVÁ Mária. Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles. ACM Transactions on Recommender Systems, vol. 1, no. 1, 2023, pp. 1-33. ISSN 2770-6699. Available from: https://dl.acm.org/doi/10.1145/3568392
Czech title
Auditovanie algoritmu odporúčania na YouTube pre dezinformačné filtračné bubliny
Type
journal article
Language
english
Authors
Srba Ivan
Móro Róbert
Tomlein Matúš
Pecher Branislav, Ing. (DCGM FIT BUT)
Šimko Jakub, doc. Ing., Ph.D. (DCGM FIT BUT)
Štefancová Elena
Kompan Michal, doc. Ing., Ph.D. (DCGM FIT BUT)
Hrčková Andrea
Podroužek Juraj
Gavorník Adrián
Bieliková Mária, prof. Ing., PhD. (DCGM FIT BUT)
URL
https://dl.acm.org/doi/10.1145/3568392
Keywords

Audit, recommender systems, filter bubble, misinformation, personalization, automatic labeling, ethics, YouTube

Abstract

In this article, we present the results of an auditing study performed on YouTube, aimed at investigating how fast a user can get into a misinformation filter bubble, as well as what it takes to burst the bubble, i.e., revert the bubble enclosure. We employ a sock puppet audit methodology, in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content. Then they try to burst the bubbles and reach more balanced recommendations by watching misinformation-debunking content. We record search results, home page results, and recommendations for the watched videos. Overall, we recorded 17,405 unique videos, out of which we manually annotated 2,914 for the presence of misinformation. The labeled data was used to train a machine learning model that classifies videos into three classes (promoting, debunking, neutral) with an accuracy of 0.82. We use the trained model to classify the remaining videos, which would not have been feasible to annotate manually. Using both the manually and automatically annotated data, we observe the misinformation bubble dynamics for a range of audited topics. Our key finding is that even though filter bubbles do not appear in some situations, when they do, it is possible to burst them by watching misinformation-debunking content (although the effect manifests differently from topic to topic). We also observe a sudden decrease in the misinformation filter bubble effect when misinformation-debunking videos are watched after misinformation-promoting ones, suggesting a strong contextuality of recommendations. Finally, when comparing our results with a previous similar study, we do not observe significant improvements in the overall quantity of recommended misinformation content.
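The annotation-scaling step in the abstract (train a classifier on the 2,914 manually labeled videos, then label the rest of the crawl automatically) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' model: it substitutes scikit-learn's TF-IDF features and logistic regression for whatever architecture the paper actually uses, and toy strings for the annotated video data.

# Minimal sketch (not the authors' code) of the three-class video
# classification step: promoting / debunking / neutral.
# Assumptions: video text (e.g., title + transcript) is the input feature;
# TF-IDF + logistic regression stand in for the actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the manually annotated videos.
texts = [
    "vaccines cause autism, doctors hide the truth",    # promoting
    "debunked: no link between vaccines and autism",    # debunking
    "how to assemble a bookshelf in ten minutes",       # neutral
    "the moon landing was staged in a studio",          # promoting
    "fact check: the moon landing evidence explained",  # debunking
    "top ten guitar riffs of the decade",               # neutral
]
labels = ["promoting", "debunking", "neutral"] * 2

train_x, test_x, train_y, test_y = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_x, train_y)

# The paper reports accuracy 0.82 on held-out annotations; the number printed
# here is meaningless because the data above is synthetic.
print("held-out accuracy:", accuracy_score(test_y, model.predict(test_x)))

# Scale up: automatically label videos that were not annotated manually.
unlabeled = ["new study questions vaccine safety data"]
print("predicted class:", model.predict(unlabeled)[0])

The pipeline shape (fit on the manual annotations, evaluate on a held-out split, then predict on the unannotated remainder) mirrors the workflow the abstract describes; only the concrete features and model family are placeholders.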

Published
2023
Pages
1-33
Journal
ACM Transactions on Recommender Systems, vol. 1, no. 1, ISSN 2770-6699
Publisher
Association for Computing Machinery
DOI
10.1145/3568392
BibTeX
@ARTICLE{FITPUB12711,
   author = "Ivan Srba and R\'{o}bert M\'{o}ro and Mat\'{u}\v{s} Tomlein and Branislav Pecher and Jakub \v{S}imko and Elena \v{S}tefancov\'{a} and Michal Kompan and Andrea Hr\v{c}kov\'{a} and Juraj Podrou\v{z}ek and Adri\'{a}n Gavorn\'{i}k and M\'{a}ria Bielikov\'{a}",
   title = "Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles",
   pages = "1--33",
   journal = "ACM Transactions on Recommender Systems",
   volume = 1,
   number = 1,
   year = 2023,
   ISSN = "2770-6699",
   doi = "10.1145/3568392",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12711"
}