Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram

Kate Ruane, director of the Center for Democracy and Technology's free expression project, says most major technology platforms now have policies prohibiting the nonconsensual distribution of intimate images, with many of the biggest agreeing to principles to tackle deepfakes. "I would say that it's actually not clear whether nonconsensual intimate image creation or distribution is prohibited on the platform," Ruane says of Telegram's terms of service, which are less detailed than those of other major tech platforms.

Telegram's approach to removing harmful content has long been criticized by civil society groups, with the platform historically hosting scammers, extreme right-wing groups, and terrorism-related content. Since Telegram CEO and founder Pavel Durov was arrested and charged in France in August in relation to a range of potential offenses, the company has started to make some changes to its terms of service and to provide data to law enforcement agencies. It did not respond to WIRED's questions about whether it specifically prohibits explicit deepfakes.

Execute the Harm

Ajder, the researcher who discovered deepfake Telegram bots four years ago, says the app is almost uniquely positioned for deepfake abuse. “Telegram provides you with the search functionality, so it allows you to identify communities, chats, and bots,” Ajder says. “It provides the bot-hosting functionality, so it’s somewhere that provides the tooling in effect. Then it’s also the place where you can share it and actually execute the harm in terms of the end result.”

In late September, several deepfake channels started posting that Telegram had removed their bots. It is unclear what prompted the removals. On September 30, a channel with 295,000 subscribers posted that Telegram had "banned" its bots, but it then shared a new link for users to access the bot. (The channel was removed after WIRED sent questions to Telegram.)

“One of the things that’s really concerning about apps like Telegram is that it is so difficult to track and monitor, particularly from the perspective of survivors,” says Elena Michael, the cofounder and director of #NotYourPorn, a campaign group working to protect people from image-based sexual abuse.

Michael says Telegram has been "notoriously difficult" to engage with on safety issues, but notes there has been some progress from the company in recent years. However, she says the company should be more proactive in moderating and filtering out content itself.

“Imagine if you were a survivor who’s having to do that themselves, surely the burden shouldn’t be on an individual,” Michael says. “Surely the burden should be on the company to put something in place that’s proactive rather than reactive.”
