Google AI flags dad who had photos of his child’s groin infection on his phone to share with doctors
A father was locked out of his Google Photos account after storing images of his child’s infected groin that he intended to share with doctors.
The photos were flagged as potential child sexual abuse material by artificial intelligence (AI), triggering a police investigation, according to The New York Times.
This incident occurred in February 2021 when some doctors’ offices were still closed due to the COVID-19 pandemic, so consultations were taking place virtually.
Named only as Mark, the concerned parent ended up losing access to his emails, contacts, photos and even his phone number, and his appeal was denied.
It was not until December that year that the San Francisco Police Department found that the incident ‘did not meet the elements of a crime and that no crime occurred.’
This highlights the complications of using AI technology to identify abusive digital material, an approach currently employed by Google, Facebook, Twitter and Reddit.
Google scans images and videos uploaded to Google Photos using its Content Safety API AI toolkit, released in 2018.
This AI was trained to recognise ‘hashes’, or unique digital fingerprints, of child sexual abuse material (CSAM).
As well as matching hashes to known CSAM on a database, it is able to classify previously unseen imagery.
The tool then prioritises those it thinks are most likely to be deemed harmful and flags them to human moderators.
Any illegal material is reported to the National Center for Missing and Exploited Children (NCMEC), which liaises with the appropriate law enforcement agency, and the material is removed from the platform.
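In broad strokes, that pipeline can be sketched as follows. This is an illustrative outline only, not Google's actual code: the hash set, the classifier stub and the threshold are placeholders standing in for the Content Safety API's real fingerprint database, trained model and review tooling.

```python
import hashlib
from pathlib import Path

# Illustrative only: hex digests standing in for a database of known-abuse
# fingerprints (real systems use perceptual hashes rather than plain SHA-256).
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def classifier_score(path: Path) -> float:
    """Stub for the ML step that scores previously unseen imagery.
    A production system would run a trained model here."""
    return 0.0


def triage(path: Path, review_queue: list, threshold: float = 0.9) -> None:
    """Queue a file for human review if it matches a known fingerprint
    or the classifier score exceeds the threshold."""
    if file_digest(path) in KNOWN_HASHES:
        review_queue.append((path, "hash match"))
    elif classifier_score(path) >= threshold:
        review_queue.append((path, "classifier flag"))


review_queue: list = []
for upload in Path("uploads").glob("*.jpg"):  # hypothetical upload folder
    triage(upload, review_queue)

# Anything in review_queue would go to human moderators, who decide whether
# the material should be reported to NCMEC and removed from the platform.
```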
Google spokesperson Christa Muldoon told The Verge: ‘Our team of child safety experts reviews flagged content for accuracy and consults with paediatricians to help ensure we’re able to identify instances where users may be seeking medical advice.’
In 2021, Google reported 621,583 cases of CSAM to the NCMEC’s CyberTipLine, which then alerted authorities to over 4,260 potential new child victims.
A Google spokeswoman told The New York Times that the company only scans personal images after the user takes ‘affirmative action’, which includes backing up their material on Google Photos.
Last year, stay-at-home dad Mark noticed swelling in his toddler’s genital area, so he immediately contacted his healthcare provider.
A nurse asked him to send over photos of the infected region so a doctor could review them ahead of a video conference.
Mark took some photos using his Android smartphone, which were automatically backed up to his Google cloud.
The doctor prescribed the boy antibiotics, which cleared up the swelling. However, two days after taking the images, Mark received a notification informing him that his Google accounts had been locked.
According to The New York Times, the reason for the action was the presence of ‘harmful content’ that was ‘a severe violation of Google’s policies and might be illegal.’
After the AI flagged the photos, a human content moderator for Google would have reviewed them to confirm they met the definition of CSAM before escalating the incident.
Mark tried to appeal the decision but Google denied the request, leaving him unable to access any of his data, and blocked from his mobile provider Google Fi.
It wasn’t until months later that he was informed that the San Francisco Police Department had closed the case against him.
The incident is an example of why critics regard the monitoring of data stored on personal devices or in the cloud for CSAM as an invasion of privacy.
Jon Callas, a director of technology projects at the Electronic Frontier Foundation, called Google’s practices ‘intrusive’ in a statement to The New York Times.
He said: ‘This is precisely the nightmare that we are all concerned about.
‘They’re going to scan my family album, and then I’m going to get into trouble.’
In April, Apple announced it was rolling out its Communication Safety tool in the UK.
The tool – which parents can choose to opt in or out of – scans images sent and received by children in Messages for nudity and automatically blurs them.
It initially raised privacy concerns when it was announced in 2021, but Apple has since reassured users that it does not have access to photos or messages.
‘Messages uses on-device machine learning to analyse image attachments and determine if a photo appears to contain nudity,’ it explained.
‘The feature is designed so that Apple doesn’t get access to the photos.’
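Apple’s description amounts to a local check followed by a blur, which the short sketch below illustrates. It is not Apple’s implementation: appears_to_contain_nudity is a placeholder for an on-device model, and the Pillow-based blur stands in for the treatment Messages applies, the point being that the image never has to leave the device.

```python
from PIL import Image, ImageFilter


def appears_to_contain_nudity(image: Image.Image) -> bool:
    """Placeholder for an on-device ML model; always returns False here.
    In the pattern Apple describes, this check runs entirely on the device."""
    return False


def prepare_attachment_for_display(path: str) -> Image.Image:
    """Blur the image before display if the local check flags it."""
    image = Image.open(path)
    if appears_to_contain_nudity(image):
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image
```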