Google partners with a nonprofit to fight nonconsensual images on the web
Alphabet Inc.’s Google plans to start working with StopNCII, a nonprofit that helps prevent the spread of nonconsensual images online. For advocates, the move marks a significant, albeit overdue, step in the search giant's efforts against image-based abuse.
StopNCII’s technology lets victims of image-based abuse create digital fingerprints, or hashes, of intimate images. These hashes are then shared with partner platforms, including Facebook, Instagram, Reddit and OnlyFans, which use them to block reuploads of the images without requiring anyone to view or report them.
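The workflow above can be sketched in a few lines of Python. This is a simplified illustration, not StopNCII’s actual system: StopNCII uses perceptual hashing, which tolerates resizing and re-encoding, whereas the SHA-256 exact match below is only a stand-in for the concept, and all function names here are hypothetical.

```python
import hashlib

def hash_image(image_bytes: bytes) -> str:
    """Compute a digital fingerprint of an image's bytes.

    Illustrative only: a cryptographic hash requires an exact byte
    match. StopNCII's real system uses perceptual hashes, which
    survive minor edits to the image.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# The hash is computed on the victim's own device; only the hash is
# shared with partner platforms — the image itself never leaves.
blocklist = {hash_image(b"<intimate image bytes>")}

def should_block(upload: bytes) -> bool:
    # A partner platform checks each upload's hash against the
    # shared blocklist before the content goes live.
    return hash_image(upload) in blocklist
```

Because only hashes circulate, no moderator at any partner platform ever needs to see the original image to block its reupload.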
Google announced the partnership at the NCII summit on Wednesday at Google’s London office, where it hosted StopNCII’s parent charity, SWGfL. For victims, “knowing that their content doesn't appear in search — I can't begin to articulate the impact that this has on individuals,” said David Wright, SWGfL’s chief executive officer, in an interview.
Google will not immediately appear on StopNCII’s official partner list. A Google spokesperson said the company is currently testing the technology and expects to begin using the hashes “over the next few months.” Adopting the hash-matching technology would be a significant change requiring the company to evolve its processes and infrastructure over time, the spokesperson added.
Google's comparatively slow pace in adopting the technology has drawn criticism. StopNCII launched in late 2021, building on detection tools used at Meta. Facebook and Instagram were among the early partners; TikTok and Bumble joined in December 2022; and Microsoft integrated the system into its Bing search engine in September 2024, nearly a year ahead of Google. When asked explicitly why it was not yet adopting StopNCII’s technology, Google told UK lawmakers in April 2024 that it had “policy and practical concerns about the interoperability of the database” and therefore could not yet participate.
Some advocates say Google’s move doesn’t go far enough. “It’s a step in the right direction,” said Adam Dodge, founder of advocacy group End Technology-Enabled Abuse. “But I think this still puts a burden on victims to self-report.” A company with Google’s resources, he argued, could go further in taking nonconsensual images down without requiring victims to create hashes.
Missing from Google’s announcement is any mention of AI-based nonconsensual imagery, or deepfakes. StopNCII’s system relies on hashes of known images, meaning it cannot preemptively block newly generated AI deepfakes. “If it's a synthetic or an entirely different image, the hash is not going to trap it,” Wright said.
In 2023, Bloomberg found that Google Search was the top traffic driver to websites hosting deepfakes, or sexually explicit AI-generated pornography. Since then, the company has taken some steps to downrank such content and reduce its visibility in search.