Consider a simple thought experiment. Imagine you’ve just read yet another headline about artificial intelligence replacing truck drivers, radiologists, customer service agents. You feel a shift in your chest; not quite panic, but something like it. The insecurity settles in. You find yourself doubting whether democratic institutions can really protect you from this. You start disengaging from political discussions about technology. You skip voting in local elections. Why bother if the system can’t save your job anyway?
Now here’s what’s genuinely unsettling: that shift might be happening to you based on something that isn’t really happening yet.
Researchers at Ludwig-Maximilians-Universität München and the University of Vienna have just published findings that suggest a troubling disconnect between AI’s actual impact on labour and what people believe that impact to be. Their work, published this month in the Proceedings of the National Academy of Sciences, shows that widespread perceptions of AI as a job killer are actively corroding people’s faith in democracy (and doing so even though AI has barely touched the labour market). Worse still, the belief itself seems to trigger the damage, independent of any real economic change. It’s a sort of self-fulfilling prophecy, except what’s being fulfilled isn’t the job losses; it’s the collapse of democratic participation.
The team began by asking a straightforward question: what do Europeans actually think about AI and jobs? They pulled data from 37,079 respondents across 38 countries, a snapshot of public opinion from 2021. The results were remarkably consistent. In most European countries, the prevailing view was that artificial intelligence destroys more jobs than it creates. How widespread was that view? The average response was 3.16 on a 5-point scale, well above the neutral midpoint. “The actual impact of artificial intelligence on the labour market is still limited,” says Armin Granulo, from the LMU Munich School of Management. “Nevertheless, many people primarily perceive artificial intelligence as replacing human labour. This perception is remarkably stable and particularly widespread in economically developed countries.”
Think about that for a moment. The damage, such as it is, comes not from what’s happening but from what people believe is happening. And they believe it in the rich world, in the places we might assume would be most equipped to adapt to technological change.
But here’s where the research gets darker. Those perceptions didn’t just sit there in people’s heads like harmless misconceptions. They correlated with something measurable: lower satisfaction with democracy, less engagement in political discussions about technology, reduced participation in civic processes. Correlation doesn’t prove causation, though. So the researchers went further.
They ran experiments. In the UK, they showed 1,202 nationally representative participants one of two scenarios: either a future where AI eliminates more jobs than it creates, or one where it creates more jobs than it eliminates. Same setup, same questions, just different framing. The results were stark. Those who imagined AI as a job killer reported significantly greater erosion of trust in democratic institutions. They expressed lower willingness to engage politically with future AI developments. The effect was what researchers call a “very large” effect size. In the US, a separate group of 1,200 respondents showed the same pattern. The belief triggered the response, regardless of political orientation or prior attitudes toward technology.
“When people feel that artificial intelligence replaces human labour, they express doubts about the political system; they are less satisfied with democracy and its institutions,” explains Christoph Fuchs from the University of Vienna. The phrasing matters. People don’t just think less about democracy in abstract terms. They feel doubt. Trust erodes. Engagement withers.
Consider the implications. Here’s a general-purpose technology that will legitimately reshape economies over the coming decades. Its actual trajectory depends partly on technical innovation, yes, but also on public engagement with policy, regulation, governance. We need people in the democratic process, shaping how AI develops and is deployed. Yet the narrative about AI (the pervasive story that it’s primarily a job killer) is actively driving people away from that democratic process. It’s like we’ve engineered a situation where the anxiety about the technology prevents the very engagement that might govern it wisely.
Worse, the narrative itself may be self-reinforcing. Andreas Raff, also at Vienna, notes that “the very way we talk about artificial intelligence as a society can influence democratic attitudes. If public debates focus heavily on job losses, this can have unintended side effects for democracy.” Which they do, of course. The job-loss angle sells newspapers. It triggers engagement. It’s emotionally resonant. But that very engagement in discussing the threat seems to undermine the democratic systems we’d need to manage it.
The timing of this research matters. Democratic legitimacy is already declining across established democracies. Trust in institutions was already fragile. Populism was already rising. And now we’re layering onto that a technology (a genuinely transformative one) that people perceive as a fundamental threat to their economic security. That perception, regardless of its accuracy, is making them retreat from democratic participation just when they need to be most engaged.
Yet the researchers aren’t entirely pessimistic. Their experiments hint at something crucial: people’s beliefs about AI aren’t fixed. They’re malleable. The trajectory of AI, people seemed to understand when shown an alternative narrative, isn’t predetermined. It can be shaped. “Our experiments suggest that people’s beliefs about artificial intelligence are not fixed,” Granulo says. “These beliefs could be changed through targeted communication that highlights that the trajectory of artificial intelligence (and its impact on labour) is not predetermined, but can be shaped through democratic choices.”
That’s the opening. Not through denial of genuine economic anxieties (those matter, and they’re partly justified). But through honest communication about the fact that the future isn’t written. The story doesn’t have to be one of technological inevitability where ordinary people are merely acted upon. It could be about deliberate choice, collective governance, democratic shaping of how this technology gets developed and deployed.
There’s a warning in this research, certainly. “Our findings are a warning signal at a time when democratic legitimacy is declining in many established democracies and when democratic influence on the development of artificial intelligence is critical,” Fuchs emphasises. But there’s also, oddly, something like hope buried in the data. The very malleability of belief means intervention is possible. If perceptions can be shaped (if people can be shown genuinely that their political engagement matters for steering AI development), then the catastrophic feedback loop might be interrupted.
But that requires something almost unfashionable: faith in the power of honest, intentional communication. It requires recognising that the narratives we tell about technology aren’t neutral. They’re active forces that shape behaviour, erode institutions, trigger withdrawals from civic life. And it requires, perhaps most importantly, acknowledging that what we choose to emphasise about AI’s future (whether we lead with job losses or with opportunities for democratic governance) is itself a choice. A choice that carries consequences.
“If we want to strengthen democratic participation in the development of artificial intelligence,” the researchers conclude, “we must take people’s perceptions of its economic consequences seriously (and actively help shape them).” That’s perhaps the most vital sentence in their entire study. Not because it offers easy answers, but because it locates the problem precisely where it actually exists: not in the technology itself, but in the stories we’re telling about it, and in our willingness to tell different ones.
The irony, then, is this. We’re anxious about AI destroying jobs we haven’t lost yet. That anxiety is destroying something more immediate: our willingness to participate in the systems that might actually protect us. And the technology itself (the actual, real transformation that’s coming) recedes into the background whilst we grapple with the perception of it. The job killer that might not come is more powerful, right now, than the actual technology. And that’s a problem that no amount of algorithmic improvement can solve.
Study link: https://www.pnas.org/doi/epdf/10.1073/pnas.2523508123
ScienceBlog.com has no paywalls, no sponsored content, and no agenda beyond getting the science right. Every story here is written to inform, not to impress an advertiser or push a point of view.
