A new study highlights growing concern over the use of artificial intelligence (AI) in welfare decisions, particularly its effect on Universal Credit in the UK. The study, led by Mengchen Dong of the Max Planck Institute for Human Development, explored how benefit recipients and the general public perceive this shift toward automation.
The research focused on two distinct populations, recipients of Universal Credit and non-recipients, with the aim of achieving balanced representation in the findings. About a fifth of the non-student adult respondents were on social benefits. This sample provides rich insight into how AI affects welfare decision-making on the ground.
Study Design and Findings
In one experiment, researchers presented participants with detailed, lifelike scenarios modeled on real-world welfare decisions made under uncertainty. Each scenario offered a choice between two processing options: one handled by human administrators, which entailed a longer waiting time, and the other by AI, which promised quicker decisions. The AI option, however, raised the chance of an improper rejection by 5–30%.
In the UK, most participants decisively rejected the AI systems in favor of human case workers. This preference held even when both options offered the same speed and accuracy. The finding raises deep questions about accountability and the supposed objectivity of AI in sensitive welfare settings.
“If the perspectives of vulnerable groups are not actively taken into account, there is a risk of wrong decisions with real consequences—such as unjustified benefit withdrawals or false accusations.” – Jean-François Bonnefon
These results demonstrate, for the first time, that people generally report feeling safer when their cases are decided by human adjudicators. This tracks with growing public wariness about AI's role in sensitive areas of public service.
The Broader Context of AI in Welfare Systems
The study's insights come at a time when several cities and countries are exploring AI applications within their welfare systems. Amsterdam, for example, piloted an AI program called Smart Check, intended to flag possible instances of welfare fraud. The initiative was met with considerable opposition from advocacy organizations, legal academics, and researchers who challenged its ethical underpinnings.
Earlier this year, Amsterdam announced it was suspending the Smart Check program after an evaluation found critical flaws in its design and execution. The episode underscores the challenges cities face in integrating technology into public administration while ensuring fairness and accountability.
Mengchen Dong of the Max Planck Institute for Human Development highlighted the dangers of excluding minority voices from policy decisions.
“There is a dangerous assumption in policy-making that the average opinion represents the reality of all stakeholders.” – Mengchen Dong
This underscores the importance of understanding the distinct experiences of different demographic subgroups, particularly the people most affected by welfare policy decisions.
Ethical Considerations and Future Implications
Researchers such as Mengchen Dong are tackling the difficult ethical and bias challenges raised by AI adoption in welfare systems. They argue that these technologies must be integrated safely and responsibly, with vulnerable populations made a top priority so that they are protected from the unfair outcomes automated decision-making systems can produce.
The study reveals a pressing need for policymakers to engage diverse community voices when designing AI systems for public administration. By taking this approach, they can address concerns about fairness and transparency head on and reduce the risk of incorrect judgments.