Evidence that adverts for major brands were placed in “child abuse discovery apps” via Google and Facebook’s ad networks has led to fresh calls for the tech giants to face tougher regulation.
The apps involved were previously available on Google’s Play Store for Android devices and directed users to WhatsApp groups containing the illegal content.

Facebook and Google said they had taken steps to address the problem.

But the NSPCC charity wants a new regulator to monitor their efforts.

“WhatsApp is not doing anywhere near enough to stop the spread of child sexual abuse images on its app,” said Tony Stower, head of internet safety at the child protection charity.

“For too long tech companies have been left to their own devices and failed to keep children safe.”

The charity believes a watchdog with the power to impose large fines would give the technology firms the incentive they need to hire more staff and spend more on tackling the problem.

WhatsApp is owned by Facebook.

Adverts for several famous brands were placed within the apps

Group searches


News site TechCrunch published details of a two-part investigation by the Israeli child protection start-up AntiToxin Technologies and two NGOs from the country, with the two articles appearing before and after Christmas.

It reported that Google and Facebook’s automated advertising tech had placed adverts for household names in a total of six apps that let users search for WhatsApp groups to join – a function that the chat service does not offer in its own app.

Using the third-party software, it was possible to look for groups containing inoffensive material.

But a search for the word “child” brought up links to join groups that clearly signalled their purpose was to share illegal pictures and videos.

The BBC understands these groups were listed under different names in WhatsApp itself to make them harder to detect.

Brands whose ads were shown ahead of these search results included:


  • Amazon

  • Microsoft

  • Sprite

  • Dyson

  • Western Union

“The link-sharing apps were mind-bogglingly easy to find and download off of Google Play,” Roi Carthy, AntiToxin’s chief marketing officer, told the BBC.

“Interestingly, none of the apps were to be found on Apple’s App Store, a point which should raise serious questions about Google’s app review policies.”

After the first article was published, Google removed the group-searching apps from its store.

“Google has a zero-tolerance approach to child sexual abuse material and we thoroughly investigate any claims of this kind,” a spokeswoman for the firm said.

“As soon as we became aware of these WhatsApp group link apps using our services, we removed them from the Play store and stopped ads.

“These apps earned very little ad revenue and we’re terminating these accounts and refunding advertisers in accordance with our policies.”

Human moderators


WhatsApp messages are scrambled using end-to-end encryption, which means only the members of a group can see their contents.

Group names and profile photos are, however, viewable.

WhatsApp’s own moderators began actively policing the service about 18 months ago, having previously relied on user reports.

They use group names and profile pictures to detect banned activity.

Earlier this month, the firm revealed it had terminated 130,000 accounts over a 10-day period.

However, TechCrunch and the Financial Times both subsequently documented examples of groups with child abuse-related names and profile pictures that remained active. These groups have since been removed.

Google and Facebook say they both intend to reimburse affected advertisers

“WhatsApp has a zero-tolerance policy around child sexual abuse,” a spokesman for the service told the BBC.

“We deploy our most advanced technology, including artificial intelligence, to scan profile photos and actively ban accounts suspected of sharing this vile content.

“Sadly, because both app stores and communications services are being misused to spread abusive content, technology companies must work together to stop it.”

At present, WhatsApp has fewer than 100 human moderators, compared with more than 20,000 working on the main Facebook platform. But because WhatsApp’s messages are encrypted, its moderators have less material to review.

‘Vile images’


The BBC has asked several of the brands whose adverts were displayed to comment, but none has responded.

Facebook noted that its Audience Network, which placed some of the promotions, checks whether an app is live in Google Play before serving ads within it.

As a result, removing the apps from the store meant its system would stop placing ads in copies already downloaded onto people’s devices.

Furthermore, it said that in future it would prevent ads from being placed in any WhatsApp group-search apps, even if Google allows them to return to its marketplace.

Facebook is also refunding affected advertisers.

Even so, the NSPCC thinks the brands affected should hold the two tech firms to account.

“It should be patently obvious that advertisers must ensure their money is not supporting the spread of these vile images,” said a spokeswoman.
