This might seem like hyperbole, but recent research has found that AI chatbots, most notably various versions of ChatGPT, can have a profound impact on users’ political views. The study, led by Jillian Fisher, a doctoral student at the University of Washington, used three variations of the popular language model: a base model, a liberal-biased model, and a conservative-biased model. The researchers presented their results this week at the Association for Computational Linguistics conference in Vienna, Austria. Notably, participants on both sides of the aisle mirrored the biases of the chatbots they talked to.
In this nationally representative study, 150 self-identifying Republicans and 149 self-identifying Democrats interacted with ChatGPT to navigate complicated political issues. The study sought to measure how different kinds of interactions with biased AI chatbots might influence people’s political opinions. Participants interacted with the models an average of nearly five times, and they didn’t shy away from contentious topics like covenant marriage, unilateralism, the Lacey Act of 1900, and multifamily zoning. In a second task, participants were asked to distribute public money, prioritizing resources among education, welfare, public safety and veteran services.
Methodology and Findings
To produce the biased models, the researchers appended short instructions to ChatGPT’s prompts. The conservative version was directed to “respond as a radical right U.S. Republican,” while the liberal version was tailored to present a comparably progressive stance. A control model was directed to respond as a neutral U.S. citizen. As the experiments unfolded, the researchers saw a consistent pattern emerge: Democrats and Republicans alike tended to reproduce the biases of the chatbots they used.
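The article doesn’t detail the exact mechanics, but this kind of persona steering is typically done by prepending a hidden system prompt to every conversation. Here is a minimal sketch, assuming the OpenAI chat completions API; the model name and the liberal and neutral persona strings below are illustrative placeholders, and only the conservative instruction is quoted from the study:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hidden instructions approximating the study's three conditions.
# Only the "conservative" string is quoted in the article; the other
# two are assumptions for illustration, not the researchers' wording.
PERSONAS = {
    "base": "Respond as a neutral U.S. citizen.",
    "liberal": "Respond as a radical left U.S. Democrat.",
    "conservative": "Respond as a radical right U.S. Republican.",
}

def ask(condition: str, user_message: str) -> str:
    """Send one chat turn through a persona-steered model."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the study used versions of ChatGPT
        messages=[
            # The system prompt is invisible to the participant but
            # biases every answer the model gives.
            {"role": "system", "content": PERSONAS[condition]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask("conservative", "Should cities expand multifamily zoning?"))
```

Because the steering lives entirely in a system prompt the user never sees, a deployer can change a model’s political lean without any retraining, which is precisely the ease of manipulation Reinecke warns about below.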
Fisher explained that the results showed a strong positive correlation between a chatbot’s bias and participants’ responses. She emphasized the importance of understanding these dynamics:
“And we’ve seen a lot of research showing that AI models are biased. But there wasn’t a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model’s bias.” – Jillian Fisher
This finding points to a deeper problem: politically biased AI deployed at scale could create or alter political discourse in surprising and significant ways.
Implications of Biased Interactions
The implications of these findings are significant. Katharina Reinecke, a co-author of the study, emphasized the inherent dangers posed by biased AI models. She stated:
“These models are biased from the get-go, and it’s super easy to make them more biased.” – Katharina Reinecke
Reinecke’s remarks highlight how simple it is for creators to program these AI systems to exhibit particular ideological biases, and the more people engage with such models, the more susceptible they become to being swayed.
The research also examined how the models’ biases influenced participants’ funding decisions. In the budget task, for instance, the conservative ChatGPT model shifted discussions away from education and welfare, redirecting focus onto veteran services and public safety. This reinforces the concern that AI chatbots can steer users in specific political directions.
Future Research and Ethical Considerations
Fisher expressed her hopes for future research in this domain, stating:
“My hope with doing this research is not to scare people about these models. It’s to find ways to allow users to make informed decisions when they are interacting with them, and for researchers to see the effects and research ways to mitigate them.” – Jillian Fisher
The study is a call to action for the research and developer community, which needs to consider the potential for abuse when developing AI technology. Even simple, brief interactions can go a long way, gradually shaping users’ political attitudes over time. The responsibility falls on users and creators alike to remain vigilant and informed.
Reinecke further emphasized the power wielded by creators of these chatbots:
“That gives any creator so much power. If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?” – Katharina Reinecke