Every generation of scientists starts from zero. They are born not knowing anything, spend two decades absorbing what came before, and only then, if they are lucky and determined, begin to push against the edges of the known. A doctor may log 25 years of training before practising independently. A research biologist might spend a decade on a single protein. This slowness, this grinding biological pace, is not a flaw in the system. It is, arguably, the system. Each researcher who earns their way into the frontier has had their intuitions shaped by failure and uncertainty and the accumulated friction of really hard problems. And that, Hyunjin Shim thinks, is precisely what AI cannot replicate and what we are in danger of discarding.
Shim is a bioengineer and assistant professor at California State University, Fresno, where her lab works on diagnostics and therapies for drug-resistant infections. Writing in the Journal of Medical Internet Research, she has laid out what she considers a longer-term and largely overlooked risk in the current rush to embed AI in scientific research and education: not that the tools will fail, but that they will succeed in exactly the wrong ways.
The core asymmetry she identifies is almost embarrassingly simple once stated. Human knowledge resets with every generation, rebuilt painstakingly through decades of study and practice. AI knowledge does not reset. It accumulates, persists, and expands at a rate with no biological equivalent. Since generative AI began its rapid ascent around 2019, the gap between those two curves has widened considerably, and most of the conversation has focused on what AI can do rather than on what that gap implies for the kind of thinking humans bring to hard problems.
Chasing the Shiny Tool
Shim’s most pointed example involves her own field. Antimicrobial resistance is, by most credible accounts, one of the genuinely urgent crises in global public health: bacteria are evolving resistance to small-molecule antibiotics faster than new ones can be brought to market, and the market incentives for developing them are structurally broken. The obvious need, she argues, is for entirely new strategies, high-risk bets on radically different approaches to combating resistant pathogens. What is actually happening, fuelled in part by AI enthusiasm, is a surge of research into high-throughput screening and de novo design of yet more small molecules. Faster output, same approach. AI is optimising the pipeline that needs to be replaced, not the replacement itself.
This is what she means by the diversion of “shiny tools.” The problem isn’t that AI-assisted drug screening is useless. Some of it is genuinely valuable. The problem is that investment and attention are finite, and when a technology makes one kind of research dramatically easier and faster, there is a powerful incentive to concentrate there. The harder, stranger, riskier work gets quietly defunded. And because generative AI systems are fundamentally pattern-matching engines trained on existing data, they are, almost by definition, oriented toward the known. They predict from what has been; they do not hallucinate the genuinely new in any scientific sense.
Shim calls the result “monocultures of knowing,” a reduction in the diversity of thought and ideas that can weaken the production of scientific knowledge. The agricultural analogy is apt. A monoculture is extraordinarily productive under normal conditions and catastrophically fragile under novel ones. When the familiar approaches stop working, and in science they always eventually do, you want a field full of eccentrics with different methods, not a field full of highly optimised versions of the same approach.
What Education Is Actually For
The implications for higher education are, if anything, even more vertiginous. Shim notes that AI systems can absorb and synthesise the core principles of most traditional curricula in a fraction of the time a human student requires. This raises a question that most universities are studiously avoiding: if transferring established knowledge is something AI does extraordinarily well, what, precisely, is the 25-year education of a doctor or research scientist for?
Her answer is not that education becomes irrelevant; it is that education needs to reorient toward what AI cannot yet do reliably: identifying the right questions rather than answering the obvious ones, thinking laterally across disciplines, exercising the judgment that comes from genuine understanding rather than pattern matching, and developing the interpersonal and ethical capacities that any real application of science eventually requires. Some educators, confronted with the more immediate problem of simply knowing whether students have done their own thinking, are reverting to oral exams and handwritten assessments. That is probably the least interesting response to the challenge, though perhaps the most understandable.
There is something ironic, Shim observes, in how closely AI integration’s effect on education mirrors its effect on research. In both cases, the incentive structure rewards speed and volume; in both cases, the things that get squeezed out are the slower, harder-to-quantify capacities that produce breakthroughs rather than increments.
What Remains Distinctly Human
Shim is careful not to frame her argument as anti-AI. The tools are genuinely useful, she acknowledges, and attempts to simply prohibit them from classrooms or research pipelines are unlikely to accomplish much. The concern is dependency without oversight, and the particular shape that dependency takes when the tool being depended on is one that excels at averaging the past rather than imagining the future. The solution she gestures toward is preservation, not prohibition: maintaining human-centred pathways for knowledge generation that remain distinct from, and capable of scrutinising, AI systems.
Whether universities and research funders will find that argument compelling in the current climate is, to put it mildly, unclear. There is a great deal of money and prestige chasing AI integration right now, and arguments for slower, more varied, more human-mediated approaches to knowledge tend not to attract venture capital. But Shim’s underlying point has a kind of biological weight to it. Evolution, she notes, is powerful precisely because it is diverse and redundant and full of paths that look unpromising until, abruptly, they aren’t. A system that optimises away that redundancy becomes more efficient and more brittle at the same time. Which is fine, right up until it isn’t.
Frequently Asked Questions
Is AI actually making scientists less creative, or is this just speculation?
Shim’s argument is structural rather than anecdotal: because generative AI systems are trained on existing data, they are oriented toward patterns already present in the scientific literature, and investment tends to follow whatever tools make research faster and easier. Whether this is already measurably reducing scientific diversity is hard to say, but the dynamics she describes (concentrated funding, optimised pipelines, pattern-reinforcing outputs) are real and already visible in fields like antibiotic research.
Why does it matter if AI is doing more scientific work if the results are still useful?
The concern isn’t the quality of individual outputs but the diversity of approaches across a whole research landscape. Science advances partly through the accumulation of incremental results and partly through unexpected breakthroughs that come from unconventional thinking; a system that efficiently produces more of the former while crowding out the conditions for the latter may look productive and be strategically fragile at the same time.
What should universities actually do differently to prepare students for an AI-saturated world?
Shim’s suggestion is that education should shift toward cultivating the capacities AI currently handles poorly: identifying which problems are worth solving, exercising judgment under genuine uncertainty, working across disciplines, and developing the interpersonal and ethical skills that real-world application of science demands. That is a significant reorientation from curricula built primarily around knowledge transfer, and it is not yet clear how most institutions intend to make that shift.
Could AI training on AI-generated data make the monoculture problem worse over time?
Research published in Nature in 2024 found that AI models trained recursively on AI-generated data tend to collapse toward less diverse outputs, a phenomenon sometimes called model collapse. If the scientific literature itself becomes increasingly shaped by AI-assisted writing and AI-assisted research, the training data for future AI systems may become progressively less varied, potentially amplifying exactly the monoculture dynamic Shim describes.
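For readers who want intuition for that finding, the statistical core of model collapse can be shown with a toy simulation: fit a simple model to data, generate new data from the fit, refit, and repeat. Sampling error compounds across generations, and diversity, once lost, is never replenished. The minimal Python sketch below illustrates the general mechanism (it is an illustration, not code from the Nature study or from Shim's paper; the function name and parameters are invented for the example):

import random
import statistics

# Toy "model collapse" loop: each generation fits a Gaussian to
# samples drawn from the previous generation's fitted Gaussian.
# Sampling error perturbs the fitted spread each round, and because
# lost diversity is never recovered, sigma tends to drift toward
# zero over many generations.

def collapse_demo(generations=100, n_samples=20, seed=1):
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" distribution
    for gen in range(1, generations + 1):
        # Draw training data from the current model, then refit to it.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        if gen % 20 == 0:
            print(f"generation {gen:3d}: fitted sigma = {sigma:.3f}")

collapse_demo()

In a typical run the fitted sigma has shrunk markedly by generation 100. The published result is, loosely, the high-dimensional analogue of this compounding narrowing, which is why recursively AI-shaped training data is a plausible amplifier of the monoculture dynamic.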
