Google Now Lets You Verify AI-Generated Videos in the Gemini App

Updated 19 December 2025 11:46 AM

What the new check actually does

The Gemini app can now inspect a video that a user uploads and tell whether Google AI helped create or edit it. It looks for an invisible watermark called SynthID that Google bakes into media made by its own models, including Veo.

The check can flag AI in different layers of the clip. It can report whether SynthID is present in the visuals, the audio, or both, and it can narrow the finding to specific time ranges within the video.
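To make that layered output concrete, the sketch below shows one possible way to represent a per-layer result in Python. The structure, field names, and values are purely illustrative assumptions for this article; they are not part of any real Gemini or SynthID API.

from dataclasses import dataclass

# Hypothetical representation of a layered SynthID check result.
# Field names and values are illustrative only, not a real Gemini or SynthID API.
@dataclass
class SynthIDSegment:
    layer: str        # "visuals" or "audio"
    start_sec: float  # start of the flagged range within the clip
    end_sec: float    # end of the flagged range within the clip

@dataclass
class SynthIDCheck:
    segments: list[SynthIDSegment]

    def summary(self) -> str:
        if not self.segments:
            return "No Google AI (SynthID) watermark detected."
        return "; ".join(
            f"SynthID detected in {s.layer} between {s.start_sec:.0f} and {s.end_sec:.0f} seconds"
            for s in self.segments
        )

# Example mirroring the kind of answer described later in the article.
result = SynthIDCheck(segments=[
    SynthIDSegment("visuals", 5, 12),
    SynthIDSegment("audio", 30, 40),
])
print(result.summary())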

This matters because many AI tools let people mix real footage with generated elements. A video might pair a real background with an AI-generated face, or a real video track with an AI-generated voice-over, and users need clarity about which part is synthetic.

How people use it in the app

To run a check, a user opens the Gemini app and uploads a video file within the supported limits. They can then ask Gemini whether any part of the clip was created using Google AI, or pose a similar natural-language question.

Gemini processes the file and scans every frame and audio segment for SynthID. When it finds a match, the app responds with a short explanation, for example: SynthID detected in visuals between 5 and 12 seconds and in audio between 30 and 40 seconds.

If Gemini does not find SynthID, the answer states that it did not detect Google AI-generated content in the video. That does not prove the clip is human-made; it only means there is no watermark from Google’s own systems.
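For readers who reach Gemini through the developer API rather than the consumer app, a roughly equivalent upload-and-ask flow might look like the sketch below, written against the google-genai Python SDK. Whether API-served models run the same SynthID check as the app, the model name, and the exact prompt are assumptions here, and SDK call signatures can vary between versions.

import time
from google import genai

# Sketch only: the article covers the consumer Gemini app; whether the
# developer API performs the same SynthID check is an assumption, as is
# the model name below.
client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Upload the clip (the article cites limits of roughly 100 MB and 90 seconds).
video = client.files.upload(file="clip.mp4")

# Uploaded files are processed asynchronously; poll until the file is ready.
while video.state.name == "PROCESSING":
    time.sleep(2)
    video = client.files.get(name=video.name)

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model name
    contents=[video, "Was any part of this video created or edited with Google AI?"],
)

# Remember: no SynthID found only means no Google watermark was detected,
# not that the clip is human-made.
print(response.text)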

Technical limits and scope

The verification feature is designed for short media, not full-length films. For now it supports files up to around 100 MB and clips that run roughly 90 seconds or less, which matches common sizes for social posts, stories, and short ads.
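As a practical illustration, a creator could pre-check a clip against those rough limits before uploading. The thresholds below simply mirror the approximate figures quoted above, and ffprobe (part of FFmpeg) is used here as one common way to read a clip’s duration; it is not part of Gemini.

import os
import subprocess

MAX_BYTES = 100 * 1024 * 1024  # roughly the ~100 MB limit quoted above
MAX_SECONDS = 90               # roughly the ~90 second limit quoted above

def clip_duration_seconds(path: str) -> float:
    # Read the container duration with ffprobe (requires FFmpeg installed).
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

def within_limits(path: str) -> bool:
    size_ok = os.path.getsize(path) <= MAX_BYTES
    duration_ok = clip_duration_seconds(path) <= MAX_SECONDS
    return size_ok and duration_ok

print(within_limits("clip.mp4"))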

It works on videos that were generated or edited with Google’s AI models, such as Veo, where SynthID is embedded by default at the time of creation. If a video was made with another company’s model that does not embed SynthID, the Gemini check cannot identify it as AI, even if it is fully synthetic.

The system also depends on the watermark surviving later edits. In theory, heavy cropping, re-encoding, or hostile attempts to strip or corrupt the watermark could weaken detection, though Google says SynthID is designed to resist common transformations.

Where and for whom it is available

Google has rolled this out in the same countries and languages where the Gemini app already runs. Users need a Google Account and must agree to standard Gemini terms to access video upload and analysis features.

This video verification tool follows an earlier update that added similar checks for images generated by Google AI. Both features sit inside the same consumer-facing app, which means people do not have to visit a separate website or lab interface to verify content.

Why this move matters now

Short AI videos are increasingly common in social feeds, political ads, and influencer posts. Deepfake-style clips can spread quickly across platforms, while ordinary viewers often have no easy way to tell whether a piece of media is synthetic.

By adding a built-in check, Google is trying to put provenance information closer to users and to creators who already rely on Gemini and Veo for content. It does not stop misuse on its own, and it only detects Google’s watermark, but it still pushes the ecosystem toward clearer labels for AI-generated media.

Disclaimer:

The video verification feature in the Gemini app identifies content generated by Google's AI models, such as Veo, by detecting the SynthID watermark. It does not guarantee that content is fully human-made if no AI is detected. The tool is limited to shorter videos and works only on media created or edited using Google's models.

