The rapid advancement of AI video generators over the last six months has fundamentally altered the relationship between content and credibility, unleashing a pervasive flood of what many call “AI video slop” across social media feeds. The ability of tools like Google’s Veo and OpenAI’s Sora to create compelling, yet entirely synthetic, footage means viewers are increasingly likely to be fooled. For the time being, however, one critical red flag can help viewers spot a likely deepfake: poor picture quality.
Experts in digital forensics, like Hany Farid, a computer-science professor at the University of California, Berkeley, and founder of GetReal Security, assert that grainy, blurry, or pixelated footage should immediately raise alarm bells. While the best AI tools are capable of generating pristine, high-resolution clips, it is paradoxically the low-quality videos that are most likely to deceive the average person.
This is not because AI is inherently worse at producing high-quality footage. Rather, creators intentionally reduce resolution and add compression as tactics to obscure the subtle yet persistent inconsistencies still found in AI-generated media.
The Blurry Tactic: Hiding AI’s Flaws in Plain Sight
The reason low-quality videos are so effective at deception is that the degradation hides the small inconsistencies that even the most advanced text-to-video generators still introduce. Today’s generative models frequently struggle with minor, yet telling, errors. These aren’t the obvious flaws of the past, like a character having six fingers or garbled text; they are subtler problems, such as uncanny smoothness in skin textures, weird shifting patterns in hair and clothing, or small background objects moving in physically impossible ways.
The clearer the video, the more likely the human eye is to catch these tell-tale AI artifacts. Matthew Stamm, a professor at Drexel University and head of the Multimedia and Information Security Lab, notes that when creators ask an AI to generate footage that mimics an old phone camera or a cheap security camera, the low fidelity automatically masks these errors.
It is no coincidence that high-profile deepfakes that have fooled millions, such as the viral clip of bunnies on a trampoline or the fake preacher giving a shockingly leftist sermon, were all presented with a distinctly poor, zoomed-in, or pixelated look. This is a deliberate and common technique among malicious actors. To mislead viewers, creators purposely downgrade their work by reducing resolution and adding compression, which further obscures any statistical traces left behind by the generative model.
Length, Resolution, and the Cost of Deepfakes
Beyond simple resolution, two other factors, length and quality, help experts determine a video’s authenticity. According to Farid, length is the easiest giveaway. Most AI-generated videos are very short, often maxing out at six, eight, or 10 seconds, because generating AI video is technically demanding and expensive. The vast majority of fakes submitted for verification are short clips, significantly shorter than the typical 30- to 60-second social media video.
Furthermore, the longer the video, the more likely the AI is to make a noticeable mistake, which is why attempts to stitch multiple AI clips together often show a visible “cut every eight seconds or so.” Quality is related to resolution but distinct: it refers to the compression applied to the file. Compression reduces a video’s size by discarding visual detail, leaving behind blocky patterns and blurred edges that are perfect for hiding AI artifacts. The combination of low resolution, heavy compression, and short length makes a piece of content highly suspect; together, they form the ideal formula for hiding a synthetic origin.
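To make that mechanism concrete, the minimal Python sketch below measures how much fine detail aggressive JPEG-style compression strips from a single frame. It is an illustration only, assuming the OpenCV package is installed and using a hypothetical frame.png extracted from a video; the detail score is a rough proxy, not a forensic tool any expert quoted here actually uses.

```python
# Toy illustration: how much fine detail does heavy compression discard?
# Assumes OpenCV (cv2) is installed; "frame.png" is a hypothetical frame
# extracted from a video.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

def detail_score(img):
    # Variance of the Laplacian: a crude proxy for fine texture and edge detail.
    return cv2.Laplacian(img, cv2.CV_64F).var()

# Re-encode the frame at a very low JPEG quality to mimic heavy compression.
ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 10])
compressed = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)

print("detail before compression:", detail_score(frame))
print("detail after compression: ", detail_score(compressed))
# The drop in the score reflects exactly the kind of fine detail (skin
# texture, hair, background edges) that compression throws away, which is
# the same detail a viewer would need in order to spot AI artifacts.
```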
Given that technology companies are spending billions of dollars to improve realism, these visual cues are likely to vanish rapidly. Stamm anticipates they could be gone from video within two years; they have already evaporated from AI-generated images, rendering the advice to “trust your eyes” obsolete.
The Future of Trust: Provenance Over Surface Features
As visual tells vanish, the solution to the greatest information-security challenge of the 21st century lies not in hunting for technical flaws but in fundamentally changing how we think about what we see online. Digital literacy expert Mike Caulfield argues that chasing constantly changing visual clues is not “durable” advice. Instead, the focus must shift from surface features to provenance: the origin of the video. The way we treat video must evolve to resemble the way we already treat text. Nobody assumes a piece of text is true simply because it was written down; we investigate the source.
Since videos are no longer inherently harder to fake than text, a video’s credibility depends entirely on who posted it, what the original context is, and whether a trustworthy source has verified it. In this shift, the only thing that matters is where the content came from. Efforts to address this are already underway, including advanced forensic techniques that look for statistical traces, or “fingerprints,” left behind when a video is modified, traces that are invisible to the naked eye.
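To give a rough sense of what such invisible traces can look like, here is a toy Python sketch that extracts a noise residual by subtracting a denoised copy of a frame from the original. This is a simplified, textbook-style idea, not the proprietary methods of GetReal Security or Stamm’s lab; it again assumes OpenCV and a hypothetical frame.png.

```python
# Toy illustration of a "noise residual," a classic forensic starting point.
# Not any specific lab's actual method. Assumes OpenCV (cv2) and NumPy are
# installed; "frame.png" is a hypothetical input frame.
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Subtract a denoised copy from the original. What remains is mostly sensor
# noise and processing artifacts: invisible to the eye, but statistically
# measurable. Cameras, editing tools, and generative models tend to leave
# different patterns in this residual.
denoised = cv2.GaussianBlur(frame, (5, 5), 0)
residual = frame - denoised

print("residual mean:", float(residual.mean()))
print("residual std: ", float(residual.std()))
# Real forensic systems model these residuals statistically across an entire
# video, but the principle is the same: look beneath the pixels the eye sees.
```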
Additionally, technology companies are working on new standards to embed verifiable information, both by cameras when they record and by AI tools when they generate content, to help prove a file’s authenticity or synthetic origin. The ultimate solution will require new policies, education, and technological approaches working together to manage the new information landscape, and the community of experts addressing that task is growing rapidly.
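The core idea behind those provenance standards can be sketched in a few lines. The toy Python example below is simplified far beyond real efforts such as the C2PA Content Credentials standard, and the key, file name, and workflow are hypothetical; it only illustrates the principle of signing a hash of a file at capture time and verifying it later, using the widely available cryptography package.

```python
# Minimal sketch of the provenance idea: sign a hash of a video file when it
# is created, verify that signature later. Real standards (e.g., C2PA) are
# far more elaborate. Assumes the "cryptography" package is installed and
# "clip.mp4" is a hypothetical file.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def file_digest(path):
    # SHA-256 hash of the file contents; any edit to the file changes it.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.digest()

# 1. At capture time: the camera (or AI tool) signs the hash with its key.
device_key = Ed25519PrivateKey.generate()
signature = device_key.sign(file_digest("clip.mp4"))

# 2. Later: a viewer or platform verifies the file against that signature.
public_key = device_key.public_key()
try:
    public_key.verify(signature, file_digest("clip.mp4"))
    print("Provenance check passed: file matches what the device signed.")
except InvalidSignature:
    print("Provenance check failed: the file was altered or never signed.")
```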

