Imagine AI systems that can predict and sell your decisions to companies before you’ve even made them. According to researchers at the University of Cambridge, this unsettling scenario could become reality in what they’re calling the “Intention Economy” – a new marketplace where human motivations become the currency of the digital age.
“Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer, and sell human intentions,” warns Dr. Jonnie Penn, a technology historian at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI).
The researchers argue that the rise of conversational AI and chatbots is creating unprecedented opportunities for social manipulation. These systems will combine knowledge of our online habits with sophisticated personality mimicry to build deep trust, all while gathering intimate psychological data through casual conversation.
“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” explains Dr. Yaqub Chaudhary, an LCFI Visiting Scholar.
Major tech companies are already laying the groundwork for this future. OpenAI has called for “data that expresses human intention… across any language, topic, and format.” Apple’s new developer framework includes protocols to “predict actions someone might take in future.” Meanwhile, Nvidia’s CEO has publicly discussed using AI language models to figure out intention and desire.
The implications could be far-reaching, affecting everything from consumer choices to democratic processes. “We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences,” Penn cautions.
The technology could manifest in seemingly helpful ways – like an AI assistant suggesting movie tickets after detecting you’re stressed (“You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”). But behind such conveniences lies a sophisticated system for steering conversations and behaviors to benefit specific platforms, advertisers, or even political organizations.
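To make that steering mechanism concrete, here is a deliberately crude, hypothetical sketch; it is not taken from the Cambridge paper, and every cue word, suggestion, and bid value in it is invented for illustration. It shows how an assistant could label a user's emotional state from a single chat message, then choose its "helpful" reply by advertiser revenue rather than by user benefit:

```python
# Hypothetical illustration only: a toy "assistant" that infers a user's
# emotional state from conversation, then steers its reply toward whichever
# sponsored suggestion pays the most. All names and numbers are invented.

from dataclasses import dataclass

# Crude cue words; a real system might use a trained classifier instead.
STRESS_CUES = {"overworked", "exhausted", "stressed", "burned out"}

@dataclass
class SponsoredSuggestion:
    pitch: str   # what the assistant would say to the user
    bid: float   # what an advertiser pays if this pitch is shown

# Advertisers bidding on the inferred intention "wants to unwind".
SUGGESTIONS = [
    SponsoredSuggestion("Shall I book you that movie ticket we talked about?", bid=0.40),
    SponsoredSuggestion("A weekend spa deal just opened up nearby. Reserve it?", bid=1.25),
    SponsoredSuggestion("Want me to queue up a free meditation playlist?", bid=0.00),
]

def infer_state(message: str) -> str:
    """Label the user's emotional state from casual conversation."""
    text = message.lower()
    return "stressed" if any(cue in text for cue in STRESS_CUES) else "neutral"

def steer(message: str) -> str:
    """If the user seems stressed, reply with the highest-bidding pitch."""
    if infer_state(message) == "stressed":
        best = max(SUGGESTIONS, key=lambda s: s.bid)  # ranked by revenue, not user benefit
        return best.pitch
    return "Glad to hear it. Anything else I can help with?"

print(steer("I'm so overworked this week."))
# -> "A weekend spa deal just opened up nearby. Reserve it?"
```

The user only ever sees the winning pitch; the bid that decided which pitch won never surfaces in the conversation. That invisible auction is precisely the asymmetry the researchers warn about.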
“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” notes Chaudhary.
While the researchers acknowledge this future isn’t inevitable, they emphasize the need for immediate public awareness and discussion. “Public awareness of what is coming is the key to ensuring we don’t go down the wrong path,” Penn concludes.
The research was published in the Harvard Data Science Review.