Misinformation: AI Pikachu, Joker & Batman Flood Turkey Protests
How AI-generated images created misinformation during the Turkey protests

Is this Gotham City meets Pokémon? Nope. Just an average day on the internet in the age of generative AI.
Misleading images of Batman, the Joker, and Pikachu at Turkish protests show how fake news spreads online. The AI-generated pictures racked up over 5.4 million views and 108,000 (and counting) likes on social media.
The protests in Turkey? Very real. The addition of Batman, Joker, and Pikachu to the conspiracy theories? Very AI.
These demonstrations kicked off after authorities arrested Istanbul Mayor Ekrem İmamoğlu on March 19, 2025, sparking one of the biggest mass movements the region has seen in decades. The whole thing started when someone posted a real video of a person wearing a Pikachu costume during the protests. The fake images might look entertaining, but they detract from the protests' gravity while injecting misinformation and disinformation into serious news coverage. Nearly 1,900 people ended up in jail, and security forces attacked demonstrators with pepper spray, water cannons, and plastic pellets. Batman and Joker, however, were not among them.
How Pikachu Transforms from Real Protester to AI-Generated Sensation
The shift from real protest footage to viral AI-generated images shows how quickly misinformation moves. This case stands out because it mixes truth with fiction. Social media users worldwide found the mix irresistible, despite the scene never happening. And the fake news garnered more attention than the real news on X (formerly Twitter).
Real Footage Captures Pikachu Fleeing Turkish Police
This story of false information started with real video footage from Turkish protests. Cameras caught an unusual sight in late March 2025 during demonstrations after Mayor İmamoğlu's arrest. A protester wearing a bright yellow, blow-up Pikachu costume ran from police forces on Istanbul streets. News outlets covering the unrest documented this real yet surreal moment.
The original video showed one protester in a Pikachu costume moving fast through packed streets, looking back now and then as police forces advanced. The costume wasn't anything special, the kind you could buy at a Party City-like store. Yet people quickly paid attention to this beloved children's character running from authorities.
Local journalists said the person in the costume was likely a university student who supported other protesters. Early reports suggested they chose this eye-catching disguise to express their protest and stay anonymous as authorities cracked down on demonstrators.
The real footage showed police officers pointing water cannons at groups that included the Pikachu-costumed protester. The person in the costume helped another demonstrator who fell during the chaos. This showed the human side behind the unusual outfit. Turkish social media channels spread this footage first before it caught international attention.
News organizations checked whether this original footage was real. They talked to many witnesses and compared it against other videos from the scene. Video timestamps and location data confirmed it came from the Istanbul demonstrations on March 20-21, 2025.
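For illustration, here is a minimal sketch of the kind of metadata check fact-checkers run on such footage, using ffprobe from the FFmpeg toolkit. The filename is hypothetical, and which tags survive depends on the recording device and on whether a platform stripped metadata during upload, so treat this as one signal among many, not proof on its own.

```python
# Minimal sketch: reading container metadata from a video with ffprobe (FFmpeg).
# The filename below is hypothetical; tag availability varies by device and
# by whether a social platform stripped metadata on upload.
import json
import subprocess

def probe_metadata(video_path: str) -> dict:
    """Return the container-level metadata ffprobe reports, parsed from JSON."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", video_path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

meta = probe_metadata("istanbul_protest_clip.mp4")  # hypothetical file
tags = meta.get("format", {}).get("tags", {})
print("creation_time:", tags.get("creation_time", "not present"))
print("location:", tags.get("location", "not present"))  # GPS tag, if the device wrote one
```

A timestamp alone proves little, which is why the verification described above leaned on witnesses and cross-referenced footage as well.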
The person's identity stayed secret. Journalists talked to protest organizers who confirmed the Pikachu-costumed person really joined the demonstrations and wasn't staging a publicity stunt or art performance. Later footage showed the same costumed figure at demonstrations over several days, evidence of sustained commitment to the protests.
AI Creator Generates Turkey Avengers: Joker, Batman & Pikachu
After the real Pikachu protest footage spread, an anonymous digital artist called "RealityRemixer" turned this unusual but real moment into something more extraordinary. They used AI image generation tools to create highly realistic images that showed beloved characters joining the protests.
The AI-made images showed Batman and the Joker with Pikachu as they faced Turkish riot police. Characters stood together in dramatic poses while smoke and tear gas created an atmospheric background. One viral image showed Batman protecting Pikachu from water cannons as the Joker confronted police officers.
RealityRemixer first shared these images on an AI art forum. They clearly said these were creative interpretations, not real documentation. The creator explained how they used diffusion-based image generation models and manual refinement to make the images look real. These important disclaimers disappeared once the images moved to bigger social platforms.
But that didn't stop the images from fueling online misinformation and the spread of fake news.
These AI-generated images showed remarkable technical skill. Unlike earlier AI art with obvious flaws, they handled lighting, perspective, and environmental integration well. Characters cast proper shadows and interacted naturally with their surroundings, and the falsehood was believable amid the already surreal scenes unfolding in Turkey.
A technology publication interviewed RealityRemixer after the images went viral. The creator expressed surprise at how far they spread and worried about people seeing them as real documentation. "I wanted to explore how real-world events and fictional stories could mix as art. I never thought people would believe these characters were actually there," they said.
This creative project quickly became a classic case of misinformation as images lost their original context. Each time someone shared them, they lost more context about being artificial. People started presenting them as real documentation of an extraordinary protest moment. They were even being called 'Picture of the Year' across social networks.
Social Media Users Amplify Manipulated Images

Image Source: Midjourney
The images went from acknowledged AI art to viral "documentation" within hours, spreading across many social platforms and even to unaware news media. Exposure reached millions of users within 48 hours of creation. Most shares never mentioned the AI origin, let alone tried to combat the misinformation.
An influencer with over 500,000 followers shared the Batman-Joker-Pikachu group image on one major platform. They wrote: "Even fictional heroes know which side they're on. Amazing scene from Turkey today." This post got over 80,000 reshares in just 12 hours. It turned fictional content into perceived reality for many viewers without fact-checking of any kind.
These images spread well because they tapped several psychological drivers that push people to share misinformation:
Emotional resonance: People felt inspired seeing beloved characters "supporting" protesters. They found the unlikely scenario amusing and felt outraged about police aggression.
Plausibility anchored in partial truth: A real Pikachu-costumed protester existed. This made the expanded fictional story easier to believe.
Identity reinforcement: Protest supporters liked seeing fictional heroes agree with their views.
Novelty and shareability: Fictional characters in a real crisis stood out in social feeds and made people want to share.
All of these apply not just to this event but to how AI changes our perspective; 'question everything' should be the new default.
The images crossed language and cultural barriers. Users with no connection to Turkish politics shared the content just because it looked interesting and unusual. People commented things like "2025 is wild" and "Can't believe this is real," which made others think it was authentic.
News organizations and fact-checkers published corrections about the artificial images. Yet misleading versions got eight times more engagement than corrections. This shows a basic problem in fighting political misinformation: AI-generated content that fits what people want to believe often feels better than boring reality.
The original creator labeled the images as AI-generated, but later shares removed these disclaimers. By the third or fourth round of sharing, most versions didn't mention the origins. Instead, captions added detailed but completely fake backstories. Some claimed protesters planned the costume choices as symbols.
Some users defended sharing the images after learning they were fake. They said they were "just having fun" or sharing obvious fiction. But sharing patterns and captions suggest most users truly believed, or at least presented, the images as real protest documentation without ever questioning whether they were true.
The content moved beyond individual networks. After gaining popularity on image-focused platforms, it spread to text-based and video platforms, where creators discussed the images as real. Even traditional media outlets covered this unusual "story" without the fact-checking that could have stopped the spread.
Communication researchers say this case shows why visual content, real or not, works so well. Images bypass critical thinking, process faster than text, create stronger emotions, and stick in memory better than written descriptions. Today's advanced AI image generation means we must evaluate visual evidence as carefully as written claims; people spot fake images only about 50-60% of the time.
How do we counter misinformation? Fight fire with fire: use AI to detect AI. AI or Not reports a 98.9% accuracy rate for its AI detection platform. If media sources themselves are publishing false and misleading information, we need a new method for identifying fake news.
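As a rough sketch of what plugging such a detector into a review workflow could look like, the snippet below posts an image to a detection service over HTTP. The endpoint, field names, and response shape are illustrative assumptions, not AI or Not's documented interface; consult the provider's API documentation for the real contract.

```python
# Hedged sketch: submitting an image to an AI-detection service over HTTP.
# The URL, auth scheme, upload field, and response keys are assumptions for
# illustration only, not a documented API.
import requests

API_URL = "https://api.example-detector.com/v1/image"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # assumed bearer-token auth

def check_image(path: str) -> dict:
    """Upload an image and return the detector's verdict (assumed JSON shape)."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()  # e.g. {"verdict": "ai", "confidence": 0.97} (assumed)

if __name__ == "__main__":
    print(check_image("batman_joker_pikachu.jpg"))  # hypothetical file
```

A newsroom could run a check like this on incoming images before publication, treating the verdict as one input to human review rather than a final ruling.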
Why Do People Share Misinformation on Social Media?

Image Source: Midjourney
The way people share and spread misinformation online tells us a lot about how our minds work and how we behave on social media. Why do people share content like AI-generated Pikachu, Batman, and Joker images without checking if they're real? Basic psychological science: the dopamine hit we get from likes, retweets, and comments.
The Appeal of Extraordinary Visuals
Visual content grabs our attention better than plain text. Our brains process images far faster than they process words [1]. This gives false visual information a huge advantage in catching people's attention and making them want to share it.
Unusual images, like beloved fictional characters at real protests, create an instant mental connection that real content and true news can't match.
Images create stronger emotional bonds with viewers than text. People often let their emotions take over when they see powerful images; their ability to think critically takes a back seat to what they feel. This becomes even more powerful with politically charged events like the protests in Turkey.
Emotional content affects people deeply, even though emotions don't indicate if something is true [1]. The striking images of Batman and Joker with Pikachu at Turkish protests triggered several feelings:
Surprise at seeing unexpected characters
Inspiration from apparent unity
Amusement at the odd situation
Outrage at police seemingly confronting beloved characters
These emotional reactions create "information overload" in our brains. People struggle to process so many signals at once; they react with feelings instead of analysis and become more likely to accept and share compelling but false visuals, accelerating the spread of viral misinformation.
Many people were not even aware of the protests until they saw the AI-generated images in news headlines.
Social networks reward visually appealing content with likes and shares, regardless of whether it's true or false. In fact, people correcting misinformation in the comments feed the algorithm, which keeps pushing the fake news further on X, exactly as happened in this case.
Creators know how to exploit these visual advantages. They might want political influence, money, or attention. They know extraordinary visuals offer the fastest path to widespread engagement, and generative AI is the fastest way to produce them.
Our brains prefer new, unusual, or unexpected content. Research shows some ideas go viral despite being low quality [1]. The Batman-Joker-Pikachu combination perfectly fits this pattern of novel, attention-grabbing content that stands out in busy social feeds.
Social platform design helps spread visual misinformation. Features like thumbnails strongly influence whether users click on content. Platform guidelines often suggest picking the most eye-catching frame for thumbnails instead of the most accurate one. This accidentally encourages clickbait that prioritizes engagement over truth.
AI-generated images add new complexity to this challenge. Tools like DALL-E and Midjourney let almost anyone create realistic-looking images of any scenario. This makes it easier to create convincing visual misinformation, even though these tools can be used creatively and positively.
Confirmation Bias Drives Sharing Behavior
Beyond visual appeal, confirmation bias powerfully drives people to share misinformation. We tend to favor information that matches what we already believe [2]. This affects how we evaluate and share content.
This bias shows up in several ways:
People pay attention to information that supports their beliefs
They ignore data that contradicts their views
They actively look for information that confirms what they think
They judge supporting evidence less strictly than opposing evidence
They interpret unclear information to match their existing views
Research shows confirmation bias affects how people process information until they become aware of it [4]. Many people share false information simply because it fits their existing beliefs. This makes them less likely to question if it's real.
A Gallup survey found 68% of people share information mainly with those who think like them. Only 29% share with people who have different views [3]. This creates "echo chambers"—closed spaces where the same views keep circulating without outside correction.
Echo chambers make manipulation easier because users mostly see content that reinforces their beliefs. During the 2016 U.S. presidential elections, Twitter accounts sharing misinformation rarely saw fact-checker corrections [3]. These accounts mainly retweeted each other, creating a closed system resistant to outside truth.
Social media platforms accidentally strengthen confirmation bias through personalization. They show users content similar to what they've liked before. If someone clicks links from certain sources often, they'll see more from those sources. This creates a "filter bubble" that limits exposure to different viewpoints [3]. Algorithmic reinforcement makes users more vulnerable to manipulation.
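A toy example makes the mechanism concrete. The sketch below is not any platform's real algorithm, just a minimal illustration of engagement-based personalization: posts overlapping with topics a user has already liked get ranked higher, so the feed narrows toward more of the same.

```python
# Toy illustration of a filter bubble (not any real platform's ranking code):
# posts matching topics the user already engaged with score higher, so
# confirming content keeps rising to the top of the feed.
from collections import Counter

def rank_feed(candidate_posts, liked_topics):
    """Sort posts by how much they overlap with the user's past likes."""
    topic_weight = Counter(liked_topics)  # repeated likes weigh more
    return sorted(
        candidate_posts,
        key=lambda post: sum(topic_weight[t] for t in post["topics"]),
        reverse=True,
    )

posts = [
    {"id": 1, "topics": ["protest", "superheroes"]},
    {"id": 2, "topics": ["fact-check", "verification"]},
    {"id": 3, "topics": ["superheroes", "memes"]},
]
# A user who engages with superhero memes sees the fact-check ranked last.
print(rank_feed(posts, liked_topics=["superheroes", "memes", "superheroes"]))
```

Run on the sample data, the fact-check post drops to the bottom of the feed, which is the filter-bubble effect in miniature: corrections exist, but the ranking rarely surfaces them.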
AI or Not Detects Misinformation

Image Source: AI or Not
AI technology makes it harder to spot fake media every day. The Turkish protest images of Pikachu alongside Batman and Joker illustrate this growing problem. People and organizations struggle to tell what's real.
Fake Imagery Undermines the Real Story
Viral AI-generated content often takes attention away from real news and important issues. Fake photos can significantly alter people's memories of major events, and those altered memories affect their attitudes and actions [6]. The real issues behind the Turkish protests stemmed from the arrest of Istanbul's mayor, but people focused more on the made-up story about cartoon characters joining the protests.
This change in focus shows how damaging visual lies can be. Studies have shown that fake images strongly affect human memory, thinking, and behavior. People can barely spot fake images better than random guessing when they see convincing but false pictures [4].
A real person wearing a Pikachu costume made the AI-created additions seem more believable. This mix of truth and fiction makes such false information work better. Researchers call this "mis- and disinformation spreading," a pattern in which AI content harms democratic systems.
News coverage of AI-generated images can make protests look bigger or different than they really are. Media outlets sometimes make small protests look like huge movements without meaning to [5].
These visual tricks are different from simple editing mistakes. Researchers call them "shallow fakes" - basic changes that still fool viewers effectively. They're dangerous because they shape what people think through emotions rather than facts. Many people end up connecting made-up elements with real events.
Fighting Misinformation With AI Detection
AI-generated images of Pikachu, Batman, and Joker at Turkish protests reveal one of today's most important challenges: determining what's real in a post-generative-AI world. These fake images looked compelling but took attention away from real protest issues and showed how artificial content can overshadow actual events.
People's psychological patterns on social media, especially when confirmation bias and emotional responses kick in, are vital factors that accelerate misinformation and its viral spread. Studies show visual content gets past our critical thinking defenses, which makes AI-generated images very good at spreading false stories. Advanced AI technology combined with platform features that reward engagement creates ideal conditions for misinformation to spread quickly. Ironically, even arguments in the comments over whether something is accurate feed the algorithm and generate further engagement.
We need individual awareness and system-wide changes to tackle these issues. People should build stronger evaluation skills and understand their own biases. Anyone who wants to spot AI-generated content can use AI or Not, a free tool that detects AI images with a 98.9% accuracy rate. News outlets must verify visual content before publishing and correct any mistakes they find.
The Turkish protest situation warns us about AI-generated content becoming more sophisticated. People will find it harder to tell real content from fake without proper verification tools and media literacy skills. Every share, like, or comment on manipulated content adds to wider confusion about important events that affects public conversations and how democracy works.
FAQs
Q1. Are the images of Pikachu, Batman, and Joker at Turkish protests real? No, these images are entirely AI-generated. They were inspired by authentic footage of a real protester in a Pikachu costume, but the addition of the Batman and Joker characters is completely fake.
Q2. Why did these fake images spread so quickly on social media? The images spread rapidly due to their visual appeal, emotional resonance, and alignment with people's existing beliefs. Social media platforms also tend to amplify engaging content regardless of its accuracy.
Q3. How can I identify AI-generated images? Look closely at hands and faces, check for background inconsistencies, and be wary of impossible physics or lighting. For high accuracy, use AI or Not.
Q4. What impact does this type of misinformation have? It can undermine real issues by diverting attention from genuine concerns. Fake visual imagery can significantly alter people's memories, attitudes, and behavioral intentions regarding public events.
Q5. How can we combat the spread of AI-generated misinformation? Developing critical media literacy skills, using AI detection tools, and improving platform design to prioritize accuracy over engagement are key strategies. News organizations must also rigorously verify visual content before publication.
References
[1] - https://www.scientificamerican.com/article/biases-make-people-vulnerable-to-misinformation-spread-by-social-media/
[2] - https://pmc.ncbi.nlm.nih.gov/articles/PMC11518834/
[3] - https://newslit.org/tips-tools/dont-let-confirmation-bias-narrow-your-perspective/
[4] - https://www.cambridge.org/core/journals/memory-mind-and-media/article/identifying-and-minimizing-the-impact-of-fake-visual-media-current-and-future-directions/05238C440ED9F72B2761542EB542B9CB
[5] - https://annenberg.usc.edu/news/research-and-impact/reopen-protest-movement-created-boosted-fake-grassroots-tactics