Google says its AI uses machine learning through deep neural networks to process images intelligently, helping content reviewers sort through images more quickly and detect CSAM automatically. If flagged content is confirmed to be illegal, the AI can remove it. In its announcement, Google said it will offer the model to NGOs free of charge, and industry partners will be able to access it through the Content Safety API. Susie Hargreaves, CEO of the Internet Watch Foundation, said the group is eager to start using the technology. “We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material. By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”

Faster Speeds

The tool makes finding CSAM content online considerably easier. Google says reviewers will be able to find 700 percent more CSAM content than they previously could in the same timeframe: in the time it takes to check one piece of content manually, the AI can review seven. Across image collections running into the thousands, this could be a powerful tool. “Identifying and fighting the spread of CSAM is an ongoing challenge, and governments, law enforcement, NGOs and industry all have a critically important role in protecting children from this horrific crime.”
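Google has not published the internals of the classifier, but the throughput gain described above rests on a familiar pattern: a model assigns each image a risk score, and human reviewers work through the queue from highest risk to lowest rather than in arrival order. The sketch below illustrates that prioritization idea only; the scoring function, score values, and image IDs are all invented for illustration and are not part of the Content Safety API.

```python
# Illustrative sketch of classifier-assisted review prioritization.
# A (mock) classifier scores each image; reviewers then see the
# highest-risk images first instead of reviewing in arrival order.
import heapq

def build_review_queue(scores: dict[str, float]) -> list[str]:
    """Return image IDs ordered from highest to lowest classifier score."""
    # Negate scores so Python's min-heap pops the highest-risk image first.
    heap = [(-score, image_id) for image_id, score in scores.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# Mock classifier output: probability that an image needs urgent review.
mock_scores = {"img_001": 0.12, "img_002": 0.97, "img_003": 0.54}
print(build_review_queue(mock_scores))  # → ['img_002', 'img_003', 'img_001']
```

In practice, a reviewer working from such a queue spends time on likely matches first, which is one plausible way a team could action several times more material in the same number of review hours.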
