New research reveals how computers can learn to generate human-like goals through program synthesis
The difference between a child stacking blocks until they fall and an AI attempting the same task has always been clear: one is driven by genuine playfulness and creativity, while the other follows programmed instructions. But that gap may be narrowing, according to groundbreaking research from New York University scientists published this month in Nature Machine Intelligence.
In the study, researchers developed a computational model that represents and generates human-like goals by learning from how people create games. The model proved so effective that when human evaluators were asked to rate games, they couldn’t reliably distinguish between those created by humans and those generated by AI.
“While goals are fundamental to human behavior, we know very little about how people represent and come up with them—and lack models that capture the richness and creativity of human-generated goals,” explained Guy Davidson, the paper’s lead author and an NYU doctoral student. “Our research provides a new framework for understanding how people create and represent goals, which could help develop more creative, original, and effective AI systems.”
The research team, which included Graham Todd, Julian Togelius, Todd M. Gureckis, and Brenden M. Lake, began by conducting online experiments in which participants were placed in a virtual room containing various objects like balls, blocks, and furniture. Participants were asked to invent single-player games using only the objects present—creating nearly 100 different games.
From Child’s Play to Computer Understanding
What makes the research particularly notable is how it bridges cognitive science and artificial intelligence. While most AI models that handle goals rely on simple parameters like “reach a target location” or “win a game,” human goals are far more varied and creative.
The researchers observed that despite the seemingly unlimited possibilities, human-created goals followed patterns guided by common sense (physical plausibility) and recombination (mixing familiar elements in new ways). For instance, participants instinctively knew a ball could be thrown in a bin or bounced off a wall, and they combined these basic actions to create diverse games.
The scientists represented these goals as “reward-producing programs”—symbolic operations that evaluate progress and provide feedback. This approach allowed them to identify compositional patterns across different games.
“Participants showcase intuitive common sense,” the paper noted. When creating throwing games, participants overwhelmingly chose balls rather than other objects. Similarly, for stacking games, they primarily used blocks rather than balls. This intuitive understanding of physical properties seems obvious to humans but represents a significant challenge for AI systems.
Teaching AI to Play Like a Human
The team then built a Goal Program Generator (GPG) model that learned to generate new games based on the patterns observed in human-created ones. The model included components explicitly designed to approximate cognitive capacities like physical common sense and recombination.
When evaluating the model’s output, the researchers categorized the AI-generated games into two types: those matching patterns found in human-created games and those exploring new territory. For example:
Human-created game:
- Gameplay: Throw a ball so that it touches a wall and then either catch it or touch it
- Scoring: Get 1 point each time you successfully throw the ball, it touches a wall, and you either catch it again or touch it after its flight
AI-created game:
- Gameplay: Throw dodgeballs so they land and come to rest on the top shelf; the game ends after 30 seconds
- Scoring: Get 1 point for each dodgeball resting on the top shelf at the end of the game
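The AI-created game above can be read as a small reward-producing program: a function that inspects the state of play and emits a score. A minimal sketch in Python illustrates the idea (the object fields, function names, and state representation here are illustrative assumptions, not the paper's actual game-description language):

```python
from dataclasses import dataclass

@dataclass
class Dodgeball:
    on_top_shelf: bool  # is the ball on the top shelf?
    at_rest: bool       # has the ball stopped moving?

def score(balls, elapsed_seconds):
    """Reward-producing program for the AI-created game:
    1 point per dodgeball resting on the top shelf when the
    30-second game ends; no points are awarded before then."""
    if elapsed_seconds < 30:
        return 0
    return sum(1 for b in balls if b.on_top_shelf and b.at_rest)

# Example: two of three dodgeballs came to rest on the shelf.
final_state = [
    Dodgeball(on_top_shelf=True,  at_rest=True),
    Dodgeball(on_top_shelf=True,  at_rest=True),
    Dodgeball(on_top_shelf=False, at_rest=True),
]
print(score(final_state, elapsed_seconds=30))  # → 2
```

Expressing goals as programs like this, rather than as a single target state, is what lets the researchers compare games symbolically and spot the compositional patterns described earlier.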
A separate group of human evaluators rated both human and AI-created games on factors including fun, creativity, and difficulty. Notably, participants gave similar ratings to human-created games and those generated by the AI model that matched human patterns—finding them equally understandable and enjoyable.
Beyond Gamification
The implications extend far beyond creating better games. This research helps advance our understanding of how humans form goals and how these goals can be represented to computers—a crucial challenge in developing AI that truly understands human intentions.
“Understanding how humans create, represent and reason about goals is crucial to understanding human behaviour,” the study authors wrote. “People routinely create novel, idiosyncratic goals with richness beyond these common modelling settings.”
The research team suggests their framework could enhance AI systems’ ability to explore and adapt to new environments autonomously. It could also improve how machines interpret human intentions—a critical factor in developing AI that aligns with human values.
The study, supported by grants from the National Science Foundation, represents an important step toward AI systems that can not only follow instructions but understand the creative, playful goals that drive human behavior from childhood onward.