Despite artificial intelligence becoming increasingly mainstream since ChatGPT’s 2022 debut, Americans view AI scientists with more suspicion than they do climate researchers or scientists in general.
This negativity stems primarily from widespread concern that AI research is producing dangerous unintended consequences, according to new research from the University of Pennsylvania’s Annenberg Public Policy Center.
The study, published in PNAS Nexus, surveyed thousands of Americans between 2023 and 2025 to measure public trust across different scientific fields. What emerged was a clear hierarchy of credibility, with AI scientists ranking at the bottom.
Unintended Consequences Drive Skepticism
“Our research suggests that AI has not been politicized in the U.S., at least not yet,” says lead author Dror Walter, an associate professor of digital communication at Georgia State University. Yet that absence of politicization hasn’t translated into public confidence.
The researchers used a comprehensive framework called Factors Assessing Science’s Self-Presentation (FASS) to evaluate public perceptions across five key areas: credibility, prudence, unbiasedness, self-correction, and benefit. AI scientists scored lowest on nearly every measure.
Most telling was the “unintended consequences” metric, where AI received dramatically lower scores than other fields. On a five-point scale, AI scientists averaged just 2.26 points in 2024 and 2.33 in 2025, compared to 2.99 for climate scientists and 2.85-2.93 for scientists generally.
Beyond Politics: A Different Kind of Distrust
Unlike climate science, which has been deeply politicized along partisan lines, AI skepticism cuts across political boundaries. The study found that political ideology explained only 2-7% of variance in AI perceptions, compared to 31% for climate science and 17-20% for science generally.
This pattern holds significant implications for how scientific communities might address public concerns. Where climate science faces ideological resistance, AI confronts something different: widespread anxiety about technological risks that transcends party lines.
The research also revealed differences in how Americans consume information about scientific fields. Media exposure was far less predictive of AI perceptions than of attitudes toward climate science or science generally, suggesting public attitudes toward AI may be forming through different channels entirely.
The Funding Disconnect
Perhaps most intriguing was how these negative perceptions translated, or didn’t, into funding preferences. While Americans expressed more trust in climate scientists and general scientists, this didn’t automatically mean they opposed AI research funding. The study found that traditional predictors of science-funding support explained much less variance for AI research than for other fields.
Political ideology, typically a strong predictor of science funding attitudes, showed no significant relationship with AI research support. This suggests Americans may separate their concerns about AI scientists from their views on whether such research should continue.
Key Differences Across Scientific Fields:
- Trustworthiness: AI scientists scored 2.97, compared to 3.56-3.60 for general scientists and 3.62 for climate scientists
- Value alignment: Only 2.67 for AI versus 3.19-3.22 for general science and 3.31 for climate science
- Unbiasedness: AI scientists rated 2.79, compared to 3.24-3.31 for other fields
Time Reveals Persistent Concerns
Researchers tracked perceptions over time to determine whether familiarity might breed acceptance. The answer was no. Between 2024 and 2025, as AI applications became more common in daily life, public skepticism toward AI scientists remained virtually unchanged.
This persistence suggests deeper structural concerns rather than simple fear of the unknown. Americans aren’t just worried about new technology; they’re specifically concerned about the scientists developing it and their ability to manage risks responsibly.
“The public unease about AI’s potential to create unintended consequences invites transparent, well-communicated ongoing assessment of the effectiveness of self or governmental regulation of AI,” Walter noted.
As AI continues reshaping society, these findings highlight a critical challenge: building public trust in the scientific community driving these changes may require different strategies than those used for traditional scientific communication.