Google's Sneaky Solution to AI Misinformation

By Elliot Chen | August 30, 2023

Google releases SynthID, a permanent, invisible watermark to identify AI-generated images and guard against misinformation.

Google is upping the ante in its fight against misinformation with SynthID, a new technology that embeds an invisible, permanent watermark in computer-generated images. The watermark, effectively a hidden identifier, survives even after the original image undergoes alterations like color changes or added filters. Google announced the tool on Tuesday in an official blog post.

SynthID is embedded directly into images created by Imagen, Google's cutting-edge text-to-image generator. The tool can also scan incoming images and determine whether they were created by Imagen, returning one of three certainty levels: detected, not detected, or possibly detected.
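Google has not published SynthID's internals or API, but a three-level output maps naturally onto a thresholded detector score. The sketch below is purely illustrative: the `classify_confidence` function, its thresholds, and the raw score are assumptions made for this example, not part of SynthID.

```python
from enum import Enum


class WatermarkResult(Enum):
    """The three certainty levels SynthID reports, per Google's announcement."""
    DETECTED = "detected"
    NOT_DETECTED = "not detected"
    POSSIBLY_DETECTED = "possibly detected"


def classify_confidence(score: float) -> WatermarkResult:
    """Hypothetical binning of a raw detector score into the three levels.

    The 0.9 / 0.1 cutoffs are invented for illustration; Google has not
    disclosed how SynthID's detector scores are thresholded.
    """
    if score >= 0.9:
        return WatermarkResult.DETECTED
    if score <= 0.1:
        return WatermarkResult.NOT_DETECTED
    return WatermarkResult.POSSIBLY_DETECTED


print(classify_confidence(0.95))  # WatermarkResult.DETECTED
print(classify_confidence(0.5))   # WatermarkResult.POSSIBLY_DETECTED
```

The middle band matters: a watermark weakened by heavy edits may no longer be confidently detectable, which is presumably why Google exposes a "possibly detected" verdict rather than forcing a binary answer.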

Google stated, "While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations." A beta version of SynthID has already rolled out to select customers of Vertex AI, Google's platform for developers working with generative AI. The tech giant also noted that SynthID, developed jointly by DeepMind and Google Cloud, will continue to evolve and may expand to other Google products or even third-party offerings.

As concerns about deepfakes and doctored images grow, Google's new offering seeks to blunt the reality distortion they cause. Alarmingly realistic AI-generated images of Pope Francis sporting a puffer jacket and former President Donald Trump being arrested have gone viral, spurring tech companies to develop methods for identifying manipulated content.

With SynthID, Google joins the startups and major tech corporations pursuing similar anti-misinformation solutions. Companies like Truepic and Reality Defender exemplify the high-stakes fight against the fabrication of reality.

In its battle against manipulated content, Google had previously taken a different route from the Coalition for Content Provenance and Authenticity (C2PA), which is led in part by Adobe and focuses on digital watermarking. In May, Google launched a tool called 'About this Image' that gives users background information on indexed images, such as when an image first appeared and where else it can be found online.

Alongside this initiative, Google said that every AI-generated image it creates will carry a distinctive markup in its original file, providing context if the image turns up on another website or platform.
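Google has not specified the markup format, but embedding context in a file generally means writing metadata fields. The snippet below is a generic illustration using Pillow's PNG text chunks; the file paths and the `ai_generated_by` key are invented for this example and are not Google's actual schema.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Open an existing image (path is illustrative).
img = Image.open("generated.png")

# Write a provenance note into the PNG's metadata. The key and value are
# made up for illustration; Google has not published its markup schema.
meta = PngInfo()
meta.add_text("ai_generated_by", "example-image-model")
img.save("generated_tagged.png", pnginfo=meta)

# Any downstream site can read the note back without touching the pixels.
print(Image.open("generated_tagged.png").text)  # {'ai_generated_by': ...}
```

Unlike SynthID's pixel-level watermark, metadata like this is trivially stripped when a file is re-encoded or screenshotted, which is why the two approaches complement rather than replace each other.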

However, these technical antidotes may not be enough given how quickly AI technology is advancing. OpenAI, the company behind DALL-E and ChatGPT, has acknowledged that its own attempt to detect AI-generated writing is "imperfect" and should be weighed cautiously.
