Dan Milmo Global technology editor 

AI tools may soon manipulate people’s online decision-making, say researchers

Study predicts an ‘intention economy’ where companies bid for accurate predictions of human behaviour
  
  

University of Cambridge researchers believe AI assistants will become adept at influencing those who use them. Photograph: Fizkes/Getty Images

Artificial intelligence (AI) tools could be used to manipulate online audiences into making decisions – ranging from what to buy to who to vote for – according to researchers at the University of Cambridge.

The paper highlights an emerging marketplace for “digital signals of intent” – known as the “intention economy” – in which AI assistants understand, forecast and manipulate human intentions and sell that information on to companies that can profit from it.

The intention economy is touted by researchers at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) as a successor to the attention economy, where social networks keep users hooked on their platforms and serve them adverts.

The intention economy involves AI-savvy tech companies selling what they know about your motivations, from plans for a stay in a hotel to opinions on a political candidate, to the highest bidder.

“For decades, attention has been the currency of the internet,” said Dr Jonnie Penn, a historian of technology at LCFI. “Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”

He added: “Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer and sell human intentions.

“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences.”

The study claims that large language models (LLMs), the technology that underpins AI tools such as the ChatGPT chatbot, will be used to “anticipate and steer” users based on “intentional, behavioural and psychological data”.

The authors said the attention economy allows advertisers to buy access to users’ attention in the present, via real-time bidding on ad exchanges, or in the future, by acquiring a month’s worth of ad space on a billboard.

LLMs will also be able to access attention in real time, for instance by asking whether a user has thought about seeing a particular film – “have you thought about seeing Spider-Man tonight?” – and by making suggestions about future intentions, such as: “You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”

The study raises a scenario where these examples are “dynamically generated” to match factors such as a user’s “personal behavioural traces” and “psychological profile”.

“In an intention economy, an LLM could, at low cost, leverage a user’s cadence, politics, vocabulary, age, gender, preferences for sycophancy, and so on, in concert with brokered bids, to maximise the likelihood of achieving a given aim (eg to sell a film ticket),” the study suggests. In such a world, an AI model would steer conversations in the service of advertisers, businesses and other third parties.

Advertisers will be able to use generative AI tools to create bespoke online ads, the report claims. It also cites the example of Cicero, an AI model created by Mark Zuckerberg’s Meta, which has achieved the “human-level” ability to play the board game Diplomacy – a game the authors say depends on inferring and predicting the intent of opponents.

AI models will be able to tweak their outputs in response to “streams of incoming user-generated data”, the study added, citing research showing that models can infer personal information through workaday exchanges and even “steer” conversations in order to gain more personal information.

The study then raises a future scenario where Meta will auction off to advertisers a user’s intent to book a restaurant, flight or hotel. Although there is already an industry devoted to forecasting and bidding on human behaviour, the report said, AI models will distill those practices into a “highly quantified, dynamic and personalised format”.

The study quotes the research team behind Cicero warning that an “[AI] agent may learn to nudge its conversational partner to achieve a particular objective”.

The research refers to tech executives discussing how AI models will be able to predict a user’s intent and actions. It quotes the chief executive of the largest AI chipmaker, Jensen Huang of Nvidia, who said last year that models will “figure out what is your intention, what is your desire, what are you trying to do, given the context, and present the information to you in the best possible way”.

 
