GOOGLE DEEPMIND TEAM UNVEILS A TOOL TO WATERMARK AND DETECT AI-GENERATED IMAGES


Sun 03 September 2023:

As artificial intelligence continues to advance at a rapid pace, post-apocalyptic visions of killer robots may seem increasingly plausible. But AI does not have to reach anything like human consciousness to wreak havoc.

In a world where media literacy is low and trust in institutions is lacking, simply altering an image via AI could have devastating repercussions.


A team of researchers at Google is hoping to prevent this with a new tool that uses AI to combat the proliferation of fake, AI-altered images.

Google has released a new tool for watermarking and identifying AI-generated images. It embeds a digital watermark directly into an image’s pixels, imperceptible to the human eye but detectable by software for identification.
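
The precise technique behind SynthID has not been published, but the general idea of hiding a signal in pixel values can be illustrated with a classic spread-spectrum watermark: add a faint pseudorandom pattern keyed by a secret seed, then look for it later by correlation. The Python sketch below is a toy illustration of that concept only, not SynthID’s method; the seed, strength and threshold are arbitrary example values.

# Toy illustration only: a classic spread-spectrum watermark, where a faint
# pseudorandom pattern keyed by a secret seed is added to the pixel values
# and later detected by correlation. This is NOT SynthID's actual method
# (which uses trained neural networks and is not publicly documented);
# SEED, STRENGTH and the detection threshold are arbitrary example values.
import numpy as np

SEED = 42        # hypothetical secret key shared by embedder and detector
STRENGTH = 2.0   # perturbation amplitude in 0-255 pixel units; visually negligible

def _pattern(shape):
    # The keyed +/-1 pattern; identical for embedding and detection.
    rng = np.random.default_rng(SEED)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image):
    # Add the faint keyed pattern to the pixels and clip back to the valid range.
    marked = image.astype(np.float64) + STRENGTH * _pattern(image.shape)
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(image, threshold=0.5):
    # Correlate the image with the keyed pattern; a score near 1 means "marked",
    # a score near 0 means "unmarked".
    residual = image.astype(np.float64)
    residual -= residual.mean()
    score = (residual * _pattern(image.shape)).mean() / STRENGTH
    return score > threshold

if __name__ == "__main__":
    original = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image
    print(detect(embed(original)))  # True: watermark present
    print(detect(original))         # False: no watermark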

Google DeepMind, Google’s AI research division, has released a beta version of a new tool called “SynthID” in collaboration with Google Cloud.


“While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information — both intentionally or unintentionally,” Google DeepMind said in a blogpost.

“Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation,” it added.

According to DeepMind, the watermark remains in place even after modifications such as adding filters, changing colours, and compressing the image.

To build the tool, DeepMind trained two AI models together on a “diverse” set of images: one to apply the watermark and one to identify it.

However, SynthID cannot identify watermarked images with complete certainty. Instead, it distinguishes between images that might contain a watermark and those that are highly likely to contain one.

“SynthID isn’t foolproof against extreme image manipulations, but it does provide a promising technical approach for empowering people and organisations to work with AI-generated content responsibly. This tool could also evolve alongside other AI models and modalities beyond imagery such as audio, video, and text,” Google said.
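
DeepMind has not disclosed how the two models are built or trained, but the general pattern it describes, an embedder and a detector optimised together so the watermark stays nearly invisible yet detectable, can be sketched roughly as follows. The networks, loss weights and dummy data are hypothetical stand-ins, not SynthID’s implementation.

# Hypothetical sketch only: DeepMind has not published SynthID's architecture,
# losses or training data. This shows the general pattern the post describes,
# a watermark "embedder" and a "detector" trained jointly so that the mark is
# nearly invisible (small pixel change) yet reliably detectable. All network
# shapes, loss weights and the dummy data below are invented for illustration.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Adds a small learned perturbation to the input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image):
        return torch.clamp(image + 0.02 * self.net(image), 0.0, 1.0)

class Detector(nn.Module):
    """Outputs a logit for how likely the image is to carry the watermark."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, image):
        return self.net(image)

embedder, detector = Embedder(), Detector()
optimiser = torch.optim.Adam(
    list(embedder.parameters()) + list(detector.parameters()), lr=1e-4
)
bce = nn.BCEWithLogitsLoss()

for _ in range(10):                              # toy training loop
    images = torch.rand(8, 3, 64, 64)            # stand-in for a real image dataset
    marked = embedder(images)
    logits = detector(torch.cat([marked, images]))
    labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])
    # The detection loss pushes the detector to tell marked from unmarked images;
    # the MSE term keeps the watermarked image close to the original.
    loss = bce(logits, labels) + 10.0 * nn.functional.mse_loss(marked, images)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()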

The tool offers three confidence levels for interpreting watermark detection results. If a digital watermark is detected, it is likely that part of the image was generated by Imagen, Google’s text-to-image model.
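
Google has not published the score thresholds behind those levels, so the short snippet below simply shows how a raw detection score might be mapped onto three bands of the kind described; the cut-offs are invented for the example.

# The cut-off values here are invented; Google has not published the thresholds
# SynthID uses. This only illustrates mapping a raw detection score onto three
# confidence bands of the kind described above.
def confidence_level(score):
    if score >= 0.9:
        return "Digital watermark detected - likely generated with Imagen"
    if score >= 0.5:
        return "Watermark possibly present - result inconclusive"
    return "No digital watermark detected"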

The company stated that the tool will be integrated into more Google products and made available to third parties in the near future.

SOURCE: INDEPENDENT PRESS AND NEWS AGENCIES
