"Bias in AI autocomplete suggestions leads to attitude shift on societal issues"
by Williams-Ceci, Sterling; Jakesch, Maurice; Bhat, Advait; Kadoma, Kowe; Zalmanson, Lior; Naaman, Mor (2024)
Abstract
AI technologies such as Large Language Models (LLMs) are increasingly used to make suggestions to autocomplete text as people write. Can these suggestions impact people’s writing and attitudes? In two large-scale preregistered experiments (N=2,582), we expose participants who are writing about important societal issues to biased AI-generated suggestions. The attitudes participants expressed in their writing and in a post-task survey converged towards the AI’s position. Yet, a majority of participants were unaware of the AI suggestions’ bias and their influence. Further, awareness of the task or of the AI’s bias, e.g. warning participants about potential bias before or after exposure to the treatment, did not mitigate the influence effect. Moreover, the AI’s influence is not fully explained by the additional information provided by the suggestions.
Themes
Algorithms