London24NEWS

Former Meta worker ‘doesn’t think tech giant cares’ if young lives are lost

A former senior Meta employee has said he doesn’t think the tech giant cares if more young lives are lost due to harms on its social media platforms.

Whistleblower Arturo Béjar told the Mirror the company, which owns Facebook, Instagram and WhatsApp, has “extraordinary” technical expertise and could easily stop recommending harmful content to young people – but chooses not to.

He said if Facebook founder Mark Zuckerberg woke up tomorrow and decided to make Instagram “as free of self-harm content as we can make it, then it would happen within a few months”.

Mr Béjar, who worked at Meta as Director of Engineering between 2009 and 2015 and again as a contractor between 2019 and 2021, said: “Just imagine, they created this product that billions of people use, that opens in milliseconds and gives you a personally curated list of things that they think you’ll find interesting.

“Don’t tell me that company cannot make it so you can’t find a choking challenge or that you don’t recommend grim content to a 13-year-old.”

Mr Béjar worked at Meta as Director of Engineering between 2009 and 2015 (SOPA Images/LightRocket via Getty Images)

Asked if he thinks Meta cares if more people die because of harm they encounter on their platforms, he said: “I think they don’t. I know that is a really hard thing for me to say, but I just go by their actions.”

Mr Béjar said he was a senior staff member at Meta who would often work with Mr Zuckerberg. He said that during his time there, whenever one of its platforms crashed, there were emergency meetings to find out what went wrong and how to learn from it.

The ex-employee, who recently met Ian Russell, the dad of 14-year-old Molly, who took her own life after viewing harmful content online, said: “That’s what should happen in the company each time a teenager dies because of something that the product facilitated. It doesn’t.

“When Molly committed suicide and it became clear that it was through this sort of firehose of self-harm content that she had got through Instagram, Instagram should have – and I believe even Mark himself – should have invited Ian to the campus to discuss what we can do so this doesn’t happen again. I mean that’s the decent, responsible thing to do.”

Mr Russell told us last week “almost astonishingly, and really quite tragically, too little has changed” since his daughter died in 2017. He said strengthening the Online Safety Bill was a “matter of life and death”.

He said: “It’s a matter of urgency for the next Government that they pursue strengthening the Bill and repairing problems that have been identified, so that we have a chance of keeping up with the pace of change that is set by the tech industry, which is lightning fast. I would call on [Labour Leader] Keir Starmer, if it is to be him. I’d call on [Prime Minister] Rishi Sunak, if it is to be him.

“Whoever is the next Prime Minister, I’d call on them not to neglect this and to finish the job that has been started, because it would be really, really tragic to leave this half done.” Mr Russell said four young people are dying by suicide every week on average, adding: “Four people like Molly, four families like mine, four sets of school friends like Molly’s school friends.”

Media regulator Ofcom last week proposed new rules for social media firms to make their sites safer for children. If tech companies fail to comply with the guidance, they could be fined up to 10% of their global turnover or have their services blocked in the UK.

Mr Béjar, who testified to Congress about Meta in November, said he believes the threat of blocking Meta’s products from app stores is one of the key ways to force the company to make its platforms safer. He said: “I think that they care more about their products being available and how they’re perceived as a company than they do about the lives that are getting lost right now.”

During his first stint at Meta, Mr Béjar developed tools to help children deal with bullying and harassment, but when he returned in 2019, he said, they had been removed from the product. On rejoining he worked in the Instagram wellbeing team, and by then his daughter was old enough to use the app. He said his 14-year-old received “both unwanted advances over DMs [direct messages] and pretty awful misogyny” not long after creating an account.

Mr Béjar said he struggled to get Instagram to set goals for stopping kids receiving unwanted messages, comments or images. He came up with a simple solution: a button that lets users say when content is harmful and unwanted. An account flagged more than three times gets a warning; the people who continue after that, Mr Béjar said, are the “predators” Meta should want to eradicate from the platform.
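As a rough illustration of the mechanism Mr Béjar describes, the sketch below shows how such a flag-and-warn threshold might work in Python. Every name and the exact threshold here are hypothetical assumptions for illustration only; this is not Meta’s actual implementation.

```python
from collections import defaultdict

# Hypothetical sketch of the flag-and-warn rule Mr Béjar describes:
# a button lets users flag content as harmful and unwanted, and an
# account flagged more than three times receives a warning.
WARN_THRESHOLD = 3

flag_counts = defaultdict(int)  # account id -> flags received so far
warned = set()                  # accounts that have already been warned

def record_flag(account_id: str) -> str:
    """Record one 'harmful and unwanted' flag and return the action taken."""
    flag_counts[account_id] += 1
    if flag_counts[account_id] <= WARN_THRESHOLD:
        return "none"
    if account_id not in warned:
        warned.add(account_id)
        return "warn"      # the "tap on the shoulder"
    return "escalate"      # still flagged after a warning: candidate for removal
```

Accounts that reach “escalate” are the repeat offenders – the “predators” – Mr Béjar says the company should want off the platform.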

He said he built the technology and found 50-75% of people changed their behaviour when given a “tap on the shoulder”. But Meta would not introduce the feature in 2021, he said, and he suggested it has yet to do so today, leading him to leave the company and blow the whistle.

He said: “They know the harm that teens are experiencing. And I can say that authoritatively because I made sure that they knew through the process that I followed and the data that I gave them, and they chose not to act on it, right, and they still choose not to act on it.”

He later conducted research in which he set up brand-new Instagram accounts registered as a 13-year-old, which had not interacted with or posted anything, and found they were recommended a “tremendous amount” of harmful content.

It included a short video, called a Reel, of what appeared to be a seven- or eight-year-old girl lip-syncing to the lyrics “he like it when I bend on over and I arch my back”, which had 150,000 views.

Another post recommended was a woman saying she wanted a group of men to forcibly have sex with her. Mr Béjar said: “In my mind, it shows a complete disregard for creating a safe environment for a 13-year-old boy or girl. And it’s not necessary – it’s not like they’re lacking in content to recommend. They don’t really need to be recommending self-harm or pornographic content.”

He added: “The company has the technology to do this. In general, you don’t see spam as a judgement call on the part of the company. They have the technology, the most advanced technology in the world, for detecting and not recommending spam. Well, that’s what they can apply to self-harm content.”

Asked if Meta is more interested in profit than safety, Mr Béjar said: “I don’t know what’s in Mark’s heart.” In January, Mr Zuckerberg gave evidence to the US Senate, where he stood up and apologised to bereaved parents whose children’s deaths were linked to online harms.

“No one should have to go through the things that your families have suffered and this is why we invested so much and are going to continue industry-leading efforts to make sure that no one has to go through the things that your families have had to suffer,” he said. Mr Béjar added: “In all of his responses, it was ‘we do more than anybody else and basically what we do is enough’. And behind him, there was like 30 or 50 proofs that they’re not doing enough and there’s so much more they could do.”

Meta spokesman Andy Stone said: “We strongly disagree with Mr Béjar’s assertions. In fact, many of his suggestions for Instagram are already in place, including showing comment warnings to discourage people from making hurtful comments, restricting teens from seeing content related to self-harm and eating disorders, and automatically removing accounts that repeatedly display suspicious behaviour.

“We are deeply committed to helping keep teens safe online and have developed more than 50 tools, features and resources over the past decade to do just that.” Meta is also proposing a framework for federal legislation in the US that would require parental approval for under-16s’ app downloads.