Mother Jones illustration; Alejandro Barba/Unsplash
Less than a day after President Donald Trump falsely suggested that Ilhan Omar had staged an attack on herself, the images started to circulate. In AI-generated fake photos that soon flooded both X and Facebook, the Minnesota representative is depicted posing next to the man who invaded a town hall meeting and sprayed apple cider vinegar on her from a syringe. In the AI-generated images, Omar and the man are both smiling; in some, the congresswoman is brandishing a wad of cash, presumably to suggest that she bribed her attacker.
It’s easy to establish not just that these photos are fake, but how: one widely circulated image simply replaces a woman Omar’s attacker posed with in a separate Facebook photo with the congresswoman. And while the pictures were cartoonish and strained credulity—would someone engaged in a conspiracy with their attacker really pose with him holding a fistful of bribe money?—they worked, in two distinct senses. A false narrative soon took broad hold on the right that Omar had planned the attack on herself; the fake photos have often been used on social media as further “proof” that the event wasn’t real. But even when the images weren’t taken to be definitively real, they were still effective at creating a useful amount of uncertainty about what might actually be true, and at discouraging people from trying to find out.
“People have a very difficult time figuring out what is real and what is true.”
Dmytro Iarovyi is an associate professor at the Kyiv School of Economics who studies disinformation, propaganda, and “disinformation resilience.” Iarovyi, who is also a researcher at Vytautas Magnus University and a visiting scholar at Harvard, explains that the “sustained experience of living through disinformation changes people’s capacity to participate meaningfully in democratic life… In fact, it’s one of the major tasks of modern disinformation—not to persuade people in something, yet to discourage them, turn them into passive, tired, exhausted mob.”
In the United States, a strategic lawsuit against public participation, or SLAPP suit, is one that is filed to silence one’s critics or scare journalists away from covering a story. What we’re seeing now could be termed “strategic memes against public participation”—images designed to confuse, sow doubt, and chill public engagement with political issues.
Take, for instance, a discussion that ensued under a Facebook post about Omar’s assault from Ted Howze, a failed 2020 GOP California congressional candidate whose party support faded after he was found to have made bigoted posts against Black people and Muslims. “The attack was a staged production,” Howze wrote above a fake photo of Omar. “Don’t fall for it.”
A few people in Howze’s comments pushed back, noting that the image appeared to be AI generated. A majority believed it to be real. But a third camp simply wasn’t sure, asking where the photo had come from or seeking other contextualizing information not readily available in an unhinged Facebook comments section.
“Interesting,” one person wrote, “so hard to tell with all the abilities to add and change anything in a photo nowadays.”
“I think the first pic is fake,” another chimed in. “She is wearing the same sweater. BUT the others might be real.”
Fake images now attach themselves to virtually every global news event. Take, for instance, a spate of AI images claiming to depict Jeffrey Epstein, either showing him alive and well in 2026 or pictured with people we don’t know him to have associated with. One image shared on X by an obscure YouTuber claimed to show the dead convicted sex criminal walking in Tel Aviv; Hebrew speakers pointed out that road signs in the image were gibberish, among several tells that the image was fake. Nonetheless, the tweet has been viewed over 3 million times.
But that Epstein picture, where nonsense text immediately points to the image being false, is increasingly an exception, warns Georgetown University’s Renée DiResta, a social media researcher and globally recognized expert on propaganda and disinformation.
In the last few months, DiResta says, when it comes to AI-generated photos and audio, “We have crossed the threshold of it being virtually impossible for people to tell just with the human eye whether something is real or fake.”
The net effect, she adds, “is that people have a very difficult time figuring out what is real and what is true. And those are not always the same thing.” Something “real” could be a genuine photo in a false context; for instance, a picture of an Amazonian fire whose caption claims it depicts Los Angeles. Images that are “true,” DiResta explains, are both real and contextualized accurately.
By themselves, fake images can already do real harm. A recent study by researchers at the University of Hong Kong and Vanderbilt University found that susceptibility to fake news increases when false headlines are paired with “realistic” but fake photos. Other research from the University of New South Wales shows that people tend to overestimate their ability to identify AI-generated faces.
“Researchers thought the risk profile for this stuff was going to be foreign bad actors.”
But a deeper issue is what the public does with the knowledge that a photo might not be real. In a painfully circular irony, more and more people are using Grok or other AI tools to attempt to figure out if AI-generated images are real. But AI models are simply bad at verifying photos, and at giving appropriate context even when they correctly note that an image is fake.
That’s happened with these photos of Omar. When asked by one user, Grok correctly noted a “pattern of edited images” targeting the congresswoman, but was unable to contextualize other parts of the fake image. “The photo appears to be edited or AI-generated,” Grok responded. “Searches show no credible evidence of a real image of Rep. Ilhan Omar wearing an IDF headscarf.” But that last sentence contains a revealing error: Grok did not even “read” the image correctly, as the letters on Omar’s scarf in the faked image don’t say “IDF” (meaning Israeli Defense Forces) but “DFF,” a watermark used by a Twitter account called “Dumb F#ck Finder,” which claims to expose “dumbfucks” by creating fake photos and getting people to fall for them.
The Trump administration, despite its constant, angry denunciations of what it considers to be “fake news,” has emerged as a prominent player in this disturbing blurring of visual truth and fiction. During this winter’s immigration crackdown in Minnesota, the White House published a doctored photo of the arrest of activist Nekima Levy Armstrong, who participated in a demonstration at a St. Paul church where one of the pastors reportedly leads an ICE field office. The altered photo showed Armstrong, who is African-American, wailing as she’s taken into custody, with tears running down her face; it also appeared to slightly darken her skin.
In this case, had then-Homeland Security Secretary Kristi Noem not shared the unaltered photo of Armstrong in an earlier post, it would have been difficult to tell that the image was fake. As the Electronic Frontier Foundation pointed out, it isn’t known if the move was a first for the White House.
“This incident raises the question of whether the Trump Administration feels emboldened to manipulate other photos for other propaganda purposes,” the digital rights organization wrote at the time. “Does it rework photos of the President to make him appear healthier, or more awake? Does it rework military or intelligence images to create pretexts for war? Does it rework photos of American citizens protesting or safeguarding their neighbors to justify a military deployment?”
The response from the White House was also notable; when asked by a journalist if the photo was manipulated, White House spokesperson Abigail Jackson responded by posting a poorly formulated meme on X mocking “debunkers” and the very idea of fact-checking. Deputy communications director Kaelan Dorr responded with a statement that said, in part, “The memes will continue”—deliberately blurring photo-realistic disinformation with the idea of a meme.
Indeed, the White House didn’t learn its lesson. Weeks later, it posted an AI-generated TikTok video showing Team USA hockey star Brady Tkachuk calling Canadians “maple-syrup-eating fucks.” (Tkachuk was forced to clarify that the video wasn’t real, saying at a press conference, “I’m not in control of any of those accounts. I know that those words would never come out of my mouth. So, I can’t do anything about it.”)
According to DiResta, it used to be that “disinformation researchers thought the risk profile for this stuff was going to be foreign bad actors,” like troll accounts linked to the Russian or Chinese governments. “When it became clear that the U.S. government itself was doing this to own its domestic enemies, that was alarming.”
The Trump administration visibly uses AI in forms that DiResta calls “obvious political propaganda”—for instance, the October 2025 video showing Donald Trump dumping shit on protesters from a fighter jet.
There’s a difference, DiResta says, between such images where “no one is fooled” and ones that are genuinely meant to be manipulative. The latter kind, she says, are “contributing to the trust breakdown and reiterating that the U.S. government can’t be trusted.” They also can play a part in what she calls the “firehose of falsehood model,” where fake information and propaganda are launched at people with overwhelming and disorienting volume.
Iarovyi, who is Ukrainian, says that after years of Russian attacks, his home country now recognizes how disinformation “targets morale, trust, cohesion, and the credibility of institutions under stress.”
“Democratic life assumes at least a minimal shared picture of reality.”
In the United States—and in totalitarian societies like China and Iran—it’s reasonable to expect what he calls “truth decay” tactics, Iarovyi says: “Not just individual falsehoods. The strategic product is uncertainty, polarization, and distrust—conditions that make collective action harder.”
“When disinformation is constant,” Iarovyi says, “the everyday cost of knowing what’s going on rises. People spend more time verifying the basics, or they stop trying… It reduces meaningful participation because democratic life assumes at least a minimal shared picture of reality.”
Constant exposure to disinformation, by contrast, he says, can produce both cynicism and disengagement. “A high-volume, repetitive environment (especially when messages contradict each other) doesn’t need to persuade you of a specific lie,” he explains. “It can persuade you that truth is inaccessible, so politics becomes vibes, identity, and tribe. This is why the ‘flood the zone’ logic works: it produces exhaustion and withdrawal, not just misbelief.”
The primary aims of images like the ones targeting Omar and Armstrong are, of course, to harass, demean and discredit political opponents. But they have a secondary effect: to overwhelm the internet with unusable, false, unstable information, and then to mock, as Jackson did, the idea of finding out what’s true at all.
This type of bad information can make it genuinely difficult for people to figure out what’s real and what’s worth engaging with. And as AI-generated images like these become increasingly convincing, a new danger is emerging: that when people don’t know what is AI and what isn’t, they will distrust everything they see equally.
At that point, DiResta says, people can start to believe that “nothing is true and everything is possible,” a phrase coined by journalist Peter Pomerantsev in his 2015 book about working in Russian TV news.
The main risk in that circumstance, DiResta says, “is that you see trust fragment along very partisan lines. This has already happened to an extent. People come to believe something is true or not based on who says it.” The most successfully manipulative fake images, DiResta adds, convince people to share them quickly—before their brains can do the work of assessing whether they’re real.
“If they find it plausible and really believe that Ilhan Omar is a Somali agent, or here illegally, or in cahoots with a false flag attack, and they believe that, then they want people to know,” she says. “They believe they’re being righteous by sharing that… They won’t say, ‘Let me go and search for disconfirming evidence.’ That’s not an innate behavior.”
“In the moments where it matters most,” DiResta says, “you’ll see people do the least amount of checking.”
“The technology is developing so fast,” Katie Sanders, editor-in-chief of PolitiFact, told the Texas Tribune. “What I suspect can happen is that it makes people more skeptical of what they see. If you feel that you can’t believe what you are seeing, you might be inclined to not believe anything.”
“Don’t treat disinformation as a temporary ‘media trend’ that will pass.”
Previous research has shown that exposure to fake news can lower trust in media and raise trust in government. A 2020 study from the Harvard Kennedy School found that it can also have a direct impact on whether people vote or engage in civic activity at all. “Public confidence in political institutions affects civic and electoral behavior, with distrustful citizens more likely to sit out an election or vote for a populist candidate,” the researchers noted. “While in some cases concerns about poor government may lead to citizen mobilization, high levels of cynicism and mistrust can cause people to withdraw from participating in politics.” This adds to what we already know about how conspiracy theories and fake news affect people’s desire to engage civically: a 2014 study by University of Kent researchers, for instance, showed that exposure to conspiracy theories about the government’s involvement in significant world events—in that case, the death of Princess Diana—“reduced participants’ intentions to engage in politics, relative to participants who were given information refuting conspiracy theories,” the researchers wrote.
A lack of civic participation and an inability to distinguish truth from fiction only benefits autocratic leaders. “If we don’t know what is happening in the world, if we do not have common reference points, there is no way to decide how to vote, whom to vote for, even how to have an opinion on what is going on,” warned Fred Ritchin, the dean emeritus of the International Center of Photography School, in a recent interview with the influential photobook publisher Aperture. “As many have previously warned us, including Hannah Arendt, there is now a widening path leading to autocratic governments when citizens become confused enough, essentially disarmed, so that a dictator emerges who will make the decisions for them.”
In some ways, this situation isn’t new: state-backed disinformation has always created confusion about what truths are knowable, with the hope that people set aside their own critical capacities and instead put their trust in a strongman leader. In 2015, journalist Adrian Chen first reported on the Internet Research Agency, a Russian government-linked troll farm where workers spent all day sowing disinformation about fake events: a “toxic fume” release in Louisiana, for instance, or an outbreak of Ebola in Atlanta.
But when Chen considered the troll farm again in 2016—and spoke to Russian activists about its actual effects—he realized that their aim was “not to brainwash,” he wrote, “but to overwhelm social media with a flood of fake content, seeding doubt and paranoia, and destroying the possibility of using the Internet as a democratic space.”
Today, we find ourselves in a disturbingly similar situation. But the solutions, Iarovyi says, are complex. “Debunking is good, but it’s not sufficient,” he says. “Resilience is built when societies lower the spread capacity of bad information and raise the verification capacity of ordinary people—without outsourcing everything to heroic fact-checkers.”
Iarovyi’s research for an organization called the Baltic Engagement Centre for Combatting Information Disorders suggests that many people in Latvia, Estonia, and Lithuania, countries with long experience fighting government disinformation, have learned tactical lessons: “They don’t treat disinformation as a temporary ‘media trend’ that will pass. That mindset changes the work—it pushes you toward long-term capacity building and institutional routines, not just reactive debunks.”
The United States, meanwhile, is still at the beginning of its journey toward understanding how state-backed disinformation, including increasingly realistic fake images, will affect our politics and our national life. In the meantime, fake images—some promoted by the federal government itself—clutter our feeds, eroding our sense of how the world really looks and of whether we can trust the evidence of our own eyes.
