Jessica Dine
Policy Analyst, Open Technology Institute and Wireless Future, New America
An ad by Senate Republicans shows a Democratic candidate’s dramatic reading of his own past social media posts. On blurry security-camera footage, a dog pushes an infant out of the path of a falling chandelier. A clip of a showdown between Superman and The Boys’ Homelander makes you do a double take and a quick Google search for a crossover that doesn’t exist.
You’re scrolling on TikTok, but something seems off. None of this content is real.
From political campaigns embracing the use of synthetic media to everyday users wielding powerful generative-AI tools, America is being dragged relentlessly into a new reality: one where anyone can generate fake videos and images that spread quickly over social media with just a click, and where the voice you’re talking to might not be real—even if you’re communicating in real time.
The ability to spread fake content isn’t new. Airbrushing and photo editing have existed for decades, and videos have long been used to spread propaganda and lies. But today, deepfakes can be anywhere and everywhere online. When George Orwell warned of a world in which we are told to reject the evidence of our eyes and ears, did he envision one where that evidence might itself be a lie?
The rapid diffusion of advanced deepfake technology threatens our privacy, political stability, and democratic norms by casting doubt on everything we see online. Deepfake fraud and scam attempts have surged, with certain already vulnerable groups most at risk. One report found that deepfake fraud caused over $200 million in financial losses in the first quarter of 2025 alone. Fakes showing unsavory or illegal behavior can be used to unjustly accuse individuals of all kinds of crimes. And women are disproportionately at risk of having damaging deepfaked pornographic images created of them without their consent.
Schools and other institutions are struggling to address this rise in deepfaked non-consensual intimate imagery, which makes up the majority of deepfakes online. Federal agencies and governing bodies have released guidelines on digital identity proofing, authentication, and detection technologies. And there’s still plenty of terrain left unexplored.
On social media, requests to help identify deepfakes have soared as the average person’s online feed becomes clogged with AI-generated “slop.” Every day it becomes harder to recognize deepfaked content in our everyday lives. One McAfee survey found that most people don’t think they could tell the difference between a real and cloned voice, and in practice, people are poor at identifying deepfaked content. As the technology matures, the lingering forensic tells are being iterated away.
So we can’t guarantee a positive ID of a digital fake—but nor can we afford to throw up our hands. Practicing basic cybersecurity and online hygiene still goes a long way: Protect your passwords, use multi-factor authentication, avoid suspicious links, and don’t share sensitive information online. It’s worth looking for common visual cues and, above all, maintaining healthy skepticism. Although institutions will need to create guardrails around fair and safe AI use, there are steps individuals can take to guard themselves and their loved ones online.
1) Examine content carefully. Although AI-generated deepfakes without visual tells are on the rise, it’s still worth checking for the obvious signs. MIT suggests looking out for odd lighting without a clear source and for unusual shines. Most deepfakes are of human faces, and AI models still often struggle with the finer details of human hair and skin texture. A guide by the American Bankers Association and the FBI notes blurry or distorted facial features, strange lip movements or blinking, or audio-visual mismatches as potential signs of deepfaked media. Sometimes backgrounds are nonsensical, text is unintelligible, or the depiction defies the normal rules of physics. Sometimes it’s unlikely that anything could have been photographed from the angle shown.
Research shows that while automated detectors outperform people at identifying faked images, people still outperform detectors at spotting deepfaked videos—and that people get better at identifying deepfakes when they’re primed for it, either with an explanation or hands-on practice. That’s where deepfake detection sites can come in handy to help you practice and improve. Deepfake detection software can also analyze geometric inconsistencies or visual noise in an image or video to try to assess whether it’s real.
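To make the “visual noise” idea concrete, here is a toy sketch in Python. It is purely illustrative, not any real detector’s method: it estimates local high-frequency noise in each block of a grayscale image and flags blocks whose noise level departs sharply from the image-wide norm, one rough sign that a region may have been spliced in or generated separately. The function names and thresholds are this sketch’s own assumptions.

```python
# Toy illustration of one signal forensic tools examine: whether
# sensor noise is statistically uniform across an image. Spliced or
# generated regions often carry different noise characteristics.
# Simplified sketch only -- not a production deepfake detector.

def block_noise_levels(pixels, block=8):
    """Estimate local noise per block of a grayscale image (a 2D list
    of 0-255 values) via the mean absolute Laplacian, a rough
    high-frequency residual."""
    h, w = len(pixels), len(pixels[0])
    levels = []
    for by in range(0, h - block, block):
        row = []
        for bx in range(0, w - block, block):
            total, count = 0.0, 0
            for y in range(by + 1, by + block - 1):
                for x in range(bx + 1, bx + block - 1):
                    lap = (4 * pixels[y][x] - pixels[y - 1][x]
                           - pixels[y + 1][x] - pixels[y][x - 1]
                           - pixels[y][x + 1])
                    total += abs(lap)
                    count += 1
            row.append(total / count)
        levels.append(row)
    return levels

def suspicious_blocks(levels, factor=3.0):
    """Flag blocks whose noise departs sharply from the image-wide
    median: a possible (not conclusive) sign of local tampering."""
    flat = sorted(v for row in levels for v in row)
    median = flat[len(flat) // 2]
    return [(by, bx) for by, row in enumerate(levels)
            for bx, v in enumerate(row)
            if median > 0 and (v > factor * median or v < median / factor)]
```

Real forensic tools rely on far more sophisticated statistics (sensor pattern noise, compression artifacts, frequency-domain fingerprints), but the underlying principle is the same: look for regions that don’t statistically match their surroundings.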
2) Pay attention to context clues. Caution still goes a long way. Receiving an unexpected phone call? Stop and think before picking up if you don’t recognize the number. Run suspicious images through a reverse image search. Does something just feel off or too good to be true? It probably is.
Some kinds of communication—like political content or hot-button issues—should be given an especially close look. An example is the current war with Iran, where cinematic AI videos showing the destruction of U.S. troops and allies have been deployed as propaganda tools. Assess the kind of response the content is meant to evoke. Is there an urgent appeal meant to stoke your emotions or impose a ticking clock for action? Scams, deepfaked or not, often rely on your fear and split-second decision-making—so give it some thought before you react or repost.
3) Verify sources and claims if you can. Nothing’s trustworthy in a vacuum, but there’s always another source to try. See a shocking political video? Check if other credible outlets are reporting the same thing, and trace back their sources. Get an unexpected, alarming text? Try to contact the sender on another platform. Come up with code words with close family and friends. Before you respond to that phone call or voicemail with an urgent financial ask from a friend, try to confirm it’s really them through text, email, or in person. Be wary of spending a cent on a big-ticket item, like a car or rental property, before viewing it in person.
4) Know what to do if you’ve been targeted. Deepfakes are easy to fall for by design—and some kinds of abuse, like non-consensual nudes, can take place without any interaction at all. If you’re the victim of a cybercrime, you can file a report at the Internet Crime Complaint Center (IC3) run by the FBI. If you find deepfaked intimate content online, reach out to the platform or site owner; many have terms of service that may require them to take the content down. Report crimes to law enforcement, and seek legal advice—including from local legal aid organizations—to understand which laws in your location can protect you. Other organizations, like the Cyber Civil Rights Initiative, exist to support you if you’re not sure where to go.
As the technology develops, so does its potential for error and fraud. Systems will change, some more quickly than others, and individuals will have to adapt.
It’s time for us all to adapt to a reality where visual or audio evidence can’t always serve as proof of anything—by using our judgment, practicing good digital hygiene, and always remaining skeptical in a world where the truth can be credibly faked.