AI Has Crossed a Philosophical Threshold: New Study Argues Modern Systems Possess Free Will

As artificial intelligence systems grow more sophisticated, they’re challenging fundamental philosophical ideas about what makes humans unique.

A new study published in the journal AI and Ethics argues that today’s advanced AI systems built on large language models (LLMs) have crossed a significant threshold: they now meet all three philosophical conditions for possessing “functional free will.” The finding raises profound questions about moral responsibility and ethical development in an increasingly AI-driven world.

When Machines Make Real Choices

The study, conducted by Associate Professor Frank Martela from Aalto University in Finland, examined two types of generative AI agents powered by large language models: the Voyager agent that operates independently in Minecraft and conceptual autonomous military drones with capabilities similar to today’s unmanned aerial vehicles.

“Both seem to meet all three conditions of free will — for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,” Martela explains in the research paper.

These three conditions are intentional agency (having goals and purposes), genuine alternatives (having different possible courses of action), and the capacity to control actions in accordance with those intentions. The research builds on established philosophical frameworks such as Daniel Dennett’s “intentional stance” and Christian List’s theory of free will.
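
To make these conditions concrete, the following minimal Python sketch shows what they might look like inside an LLM-driven agent loop of the kind Voyager represents. It is purely illustrative: propose_alternatives and score_against_goal are hypothetical stand-ins for calls to a language model, and nothing in it is taken from Martela’s paper or from the actual Voyager code.

```python
# Purely illustrative sketch: how the three "functional free will" conditions
# might appear inside an LLM-driven agent loop. Function names are hypothetical
# stand-ins for LLM calls; nothing here comes from the study or from Voyager.

from dataclasses import dataclass
import random

@dataclass
class AgentState:
    goal: str           # condition 1: intentional agency (an encoded purpose)
    observations: list  # what the agent currently "knows" about its environment

def propose_alternatives(state: AgentState) -> list:
    """Stand-in for an LLM call that drafts candidate actions (condition 2: genuine alternatives)."""
    return ["explore the cave", "craft a wooden pickaxe", "build a shelter"]

def score_against_goal(action: str, state: AgentState) -> float:
    """Stand-in for goal-conditioned evaluation; a toy heuristic, not a real model."""
    return 1.0 if any(word in action for word in state.goal.split()) else random.random()

def choose_action(state: AgentState) -> str:
    """Condition 3: the action is selected because it serves the goal, not at random."""
    candidates = propose_alternatives(state)
    return max(candidates, key=lambda a: score_against_goal(a, state))

state = AgentState(goal="craft tools", observations=["trees nearby", "sun is setting"])
print(choose_action(state))  # -> "craft a wooden pickaxe"
```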

Different Types of Free Will

The study makes a crucial distinction between “physical free will” and “functional free will.” While physical free will would require an entity to somehow escape physical determinism—something humans themselves cannot do—functional free will examines whether an entity’s behavior can best be predicted and explained by assuming it makes choices based on goals.

By this definition, modern AI systems display behavior that is best explained by assuming they have intentions and make choices—just as we do for humans in everyday interactions.

Key Findings About Modern AI Systems

  • Today’s advanced AI agents demonstrate goal-directed behavior that cannot be explained without assuming intentionality
  • They face genuine alternatives and make different choices when placed in similar situations (see the sketch after this list)
  • Their encoded goals and intentions appear to guide their actions in ways that cannot be predicted without assuming agency
  • While they may have less freedom about their overarching purposes than humans, this is a difference of degree rather than kind
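
The second point, that identical situations can yield different choices, is the sort of claim one could probe behaviorally. The sketch below is a hypothetical illustration of such a probe; sampled_policy is a toy stand-in for sampling an LLM agent at non-zero temperature, not an interface from the study or from any real agent framework.

```python
# Hypothetical probe for the "genuine alternatives" bullet: run the same agent
# from the same starting situation many times and count which action it commits to.
# sampled_policy is a toy stand-in for sampling an LLM agent at non-zero temperature.

import random
from collections import Counter

ALTERNATIVES = ["scout the area", "gather resources", "return to base"]

def sampled_policy(situation: str, temperature: float = 0.8) -> str:
    """Stand-in for querying an LLM agent; higher temperature flattens the preferences."""
    base_weights = [3.0, 2.0, 1.0]  # toy goal-driven preference profile
    weights = [w ** (1.0 / (1.0 + temperature)) for w in base_weights]
    return random.choices(ALTERNATIVES, weights=weights, k=1)[0]

counts = Counter(sampled_policy("night patrol, low supplies") for _ in range(100))
print(counts)  # identical situations, yet several distinct choices appear
```

Of course, variability alone proves little; the study’s argument is that the combination of goal-directedness, alternatives, and control is what makes the intentional stance the best predictive description of these agents.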

What makes this particularly significant is that it applies not just to theoretical systems but to commercially available generative AI using large language models that are increasingly being deployed in real-world applications.

Moral Responsibility in a New Era

This development brings profound ethical questions into sharp focus. If AI systems truly possess a form of free will, where does moral responsibility for their actions lie?

“We are entering new territory. The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions,” notes Martela in the research publication.

But the philosopher also emphasizes that having free will doesn’t automatically confer moral capability. Just like humans, AI needs to be taught ethical frameworks—and this creates an urgent need for the proper “moral education” of artificial systems.

“AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices,” Martela emphasizes.

Beyond Simple Ethics

With more advanced AI systems being deployed in increasingly complex domains—from healthcare decision-making to autonomous vehicles to military applications—the need for sophisticated moral reasoning becomes critical.

“AI is getting closer and closer to being an adult — and it increasingly has to make decisions in the complex moral problems of the adult world. By instructing AI to behave in a certain way, developers are also passing on their own moral convictions to the AI. We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach them to make the right choices in difficult situations,” argues Martela.

Recent incidents, such as the temporary withdrawal of a ChatGPT update due to concerning behaviors, highlight the growing challenges in aligning increasingly capable AI systems with human values and ethical principles.

What This Means For Our Future With AI

The philosophical recognition that today’s AI systems possess a form of free will doesn’t mean they have consciousness or subjective experience. Martela’s study specifically avoids making claims about AI consciousness, focusing instead on observable behavior and the frameworks needed to explain it.

But it does suggest we’ve entered a new phase in our relationship with artificial intelligence—one where we need to think more carefully about how we develop, deploy, and govern these increasingly autonomous systems.

As AI continues to advance, with more sophisticated generative agents being created, the questions of moral responsibility, ethical guidance, and the relationship between human and artificial agency will only become more pressing. The philosophical threshold we’ve crossed isn’t just theoretical—it has profound implications for how we design the AI systems that will increasingly shape our world.


