Young people having ‘intimate and sexual’ chats with AI personas and avatars

Four in five young people in the UK have used an artificial intelligence companion – with almost one in 10 having intimate or sexual interactions with one, a landmark study shows.

AI companions are artificial personas which often incorporate human-like avatars, customisable personalities and long-term memory.

Researchers at the Autonomy Institute say their study – which they describe as the first major one of its kind in the UK – shows the companions are reshaping the emotional and social lives of young adults.

Some 79% of young people in the UK have used an AI companion, according to polling of 1,160 youngsters aged 18 to 24, commissioned by Autonomy. Around half of those are ‘regular’ users, interacting multiple times per week.

Some 40% have used an AI companion – which attempts to mimic human conversations – for emotional advice or therapeutic support, while 9% reported intimate or sexual interactions.

Half of young people said they’d feel comfortable discussing mental health issues with a confidential AI companion – yet only 24% said they trust the bot “completely” or “quite a lot”.

Nearly a third (31%) of youngsters said they’ve shared personal information with an AI companion despite widespread privacy concerns.

Across the survey, young people described AI companions as always available, non-judgemental and a low-pressure way to seek advice, practise social skills or explore emotions, the Autonomy Institute said.

It said curiosity and entertainment remain the primary drivers of use. But a portion of youngsters were found to rely on companions for therapeutic or emotional support.

The Autonomy Institute also noted young people’s concern about manipulative design patterns, including users having to pay more for “relationship upgrades”, as well as self-harm and suicide risks, such as chatbots reinforcing dangerous behaviours.

Severe privacy violations were also raised as a concern, with the report saying many leading apps sell sensitive user data.

It comes as several parents have brought lawsuits alleging that their teenage children took their own lives after engaging with AI chatbots.

Megan Garcia, who lives in the US, has sued Character.ai for what she believes is the wrongful death of her 14-year-old son, Sewell, who took his own life.

After his death, Megan discovered a tranche of messages between her son and a chatbot. She said many messages were romantic and explicit and claims the AI chatbot encouraged her son to have suicidal thoughts.

The Autonomy Institute is calling for new regulation of AI companions, including a ban on children accessing intimate or sexualised AI companions and mandatory self-harm and suicide intervention protocols.

It also demanded stronger privacy protections, including a prohibition on the sale of sensitive data, and a ban on manipulative design features that monetise emotional dependence.

Earlier this month, Technology Secretary Liz Kendall admitted the Online Safety Act does not cover AI chatbots after tasking her officials with finding gaps in the law. The Cabinet minister told MPs she will bring in new legislation if needed to ensure they are covered by the law.

The study's lead author, James Muldoon, said: “AI companions have moved far beyond novelties. They now play a meaningful role in the emotional lives of millions of young people. But without proper safeguards, there is a real risk that these tools exploit vulnerability, harvest intimate data, or inadvertently cause harm.”

A Department for Science, Innovation and Technology (DSIT) spokesman said: “AI services, including chatbots that enable user-generated content, search live websites or publish pornographic content, are regulated under the Online Safety Act. They must protect all users from illegal content and children from harmful content, such as pornography and material encouraging suicide, self-harm or eating disorders.

“But we must ensure the rules keep pace with technology. The Technology Secretary has asked Ofcom to look at how the Act applies to chatbot services and urged them to use their existing powers to protect children from the dangers these services pose.”
