AI or Not

    Celebrity Deepfakes: How AI Brad Pitt Ruined a Woman's Life

    Deepfakes can ruin lives: how an AI-generated Brad Pitt was used to scam $850,000 and even caused a woman to divorce her husband. What can you do to detect these deepfakes?

    Deepfake Brad Pitt

    Love is blind. Everyone wants to believe that special someone is out there for them, and only them. And, as many have wondered, or hoped, maybe that special someone is right in front of them... on their TV screen.

    Go into this tale with caution and not judgment. This was a vulnerable person whose life is now in shambles due to artificial intelligence. Hopefully one day, she'll be able to speak out about this. But for now, maybe we can all learn about the dangers of bad actors using the latest technology to create deepfakes.

    From Instagram to Deepfake Disaster: A Victim's Journey

    It began innocently enough in February 2023, when Anne, a 53-year-old French interior designer, decided to share moments from her skiing holiday in the French Alps on Instagram. Little did she know that this simple act of joining social media would lead to a devastating chain of events that would ultimately cost her not only her life savings but also her marriage and mental well-being.

    The first message came from someone claiming to be Jane Etta Pitt, Brad Pitt's mother, with the message: "My son needs a woman like you."

    Within 24 hours, another account emerged, this one claiming to be Brad Pitt himself. The scammers, armed with generative AI technology, began flooding Anne's inbox with love poems, compliments about her resilience, and promises of a future together. They sent fake selfies of "Pitt"—casual bedroom photos, gym shots, even images of him holding handwritten notes with her name. They used deepfake technology to generate lifelike images and fake videos of the Hollywood star.

    Let me guess what you're thinking right now: how can someone fall for this? Put yourself in Anne's shoes for a second: a recent arrival on social media, perhaps unfamiliar with the latest technology (generative adversarial networks that can create fake images and video), and overcome with emotion from incredibly thoughtful gestures.

    It can happen to anyone.

    A Deepfake Love Story

    At this point, things moved from the internet to IRL.

    [Image: collage of AI-generated Brad Pitt photos]

    "Brad," all of a sudden, needed €9,000 for customs fees. The story was simple enough – he had sent her gifts, tokens of his affection, but they were being held up at customs. No problem!

    This is where the scam took a drastic turn: counterfeit Brad Pitt was battling kidney cancer. The scammers sent images showing Pitt in hospital beds, looking frail yet maintaining his characteristic charm. These deepfake images, created with generative AI, were used to squeeze more money out of their unsuspecting victim.

    But wouldn't you think the Brad Pitt is, like, rich af?!

    Well, according to these fraudsters, all his accounts, and assets, were frozen due to ongoing divorce proceedings with Angelina Jolie. This detail, based on public knowledge of Pitt's actual divorce, added further credibility to the plot.

    First comes deepfakes, then comes a marriage proposal. The promise of a future with Brad Pitt led Anne to make the most drastic decision of her life: divorcing her wealthy husband. The heart wants what the heart wants.

    In what would prove to be the final act of this tragic romance, Anne transferred nearly her entire divorce settlement – €775,000, or $850,000 – to the scammers.

    The Emotional Impact of Deepfake Fraud on Victims

    [Image: Brad Pitt deepfake flagged by a detection tool]

    The scam ended in June 2024, but the damage was irreversible. Anne lost her husband, home, savings, and reputation. Friends distanced themselves, mocking her “gullibility.” Online trolls labeled her “France’s most desperate woman.” Humiliated and traumatized, she even attempted suicide three times.

    The emotional devastation that deepfake fraud can cause serves as a sobering reminder that behind every scam statistic lies a human story of loss. Needless to say, this person's life will never be the same.

    Multimodal Deepfake Attacks: Text, Image, Audio, Video

    Combining modalities creates fake content that seems all too real:

    • Text: AI-generated poems and personalized messages

    • Images: photorealistic deepfake photos

    • Video: fake footage that looks the same as genuine video at a glance

    • Audio: voice clones trained on the target's actual voice

    It starts with text-based communication, which allows scammers to establish a connection through casual messages. AI language models can now generate highly personalized, contextually appropriate responses that mirror human conversation patterns. In Anne's case, these messages likely displayed intimate knowledge of her interests, responded to her emotional cues, and maintained consistent personality traits that aligned with her perception of Brad Pitt.

    Images serve as the first visual proof of authenticity. Scammers can now generate high-quality, photorealistic images on demand, creating "evidence" that props up the story. The fake hospital photos of "Brad Pitt" demonstrated this, showing him in convincing medical settings that supported the overall narrative. Scary!

    Modern voice cloning technology can reproduce a person's voice with just 20-30 seconds of sample audio, which is readily available for public figures like Brad Pitt. This technology can generate new speech in the target's voice, making phone calls or voice messages feel intimate and personal.

    When victims can see their celebrity love interest moving, talking, and expressing emotions in real-time or near-real-time videos, their guard really comes down. Modern deepfake pipelines can map facial expressions, lip movements, and body language convincingly enough to create videos that pass casual inspection.

    The real synergy comes when you combine all the modalities: each interaction reinforces the others, creating a consistent and comprehensive deception. When a victim receives a text message about feeling unwell, followed by a convincing photo from a hospital bed, then a voice message expressing gratitude for their concern, and finally a video showing emotional vulnerability – the combined effect can override rational skepticism.

    How To Detect Deepfakes

    Detecting deepfakes yourself is more important than ever. Here's what to look for:

    1. Visual Cues: examine images and videos for inconsistencies. Look for unnatural skin textures (too smooth?), mismatched or missing shadows, and other kinds of irregular lighting.

    2. Facial Features: really look at the eyes and mouth movements. In some deepfakes, blinking may look off, or lip movements may not match up perfectly with the audio.

    3. Audio Distortions: listen for strange speech patterns, robotic cadence, or background noise that doesn't match the setting.

    4. Context: is the premise believable to begin with? If a video or audio clip seems out of character, it might be worth a Google search.
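To make cue #2 concrete, here is a minimal, hypothetical sketch of a blink-rate check. It assumes you already have per-frame "eye openness" scores from some facial landmark detector (not shown here); the threshold and the typical human blink-rate range of roughly 8–25 blinks per minute are illustrative assumptions, not authoritative values. Early deepfakes were notorious for subjects who barely blinked at all.

```python
def count_blinks(eye_openness, threshold=0.2):
    """Count blink events in a sequence of per-frame eye-openness
    scores (0.0 = fully closed, 1.0 = fully open).
    A blink starts when the score drops below the threshold."""
    blinks = 0
    closed = False
    for score in eye_openness:
        if score < threshold and not closed:
            blinks += 1        # eye just closed: start of a new blink
            closed = True
        elif score >= threshold:
            closed = False     # eye reopened
    return blinks

def blink_rate_suspicious(eye_openness, fps, low=8.0, high=25.0):
    """Flag a clip whose blink rate (blinks per minute) falls outside
    an assumed typical human range. A rate near zero is a classic
    tell of older deepfake generators."""
    minutes = len(eye_openness) / fps / 60.0
    rate = count_blinks(eye_openness) / minutes
    return rate < low or rate > high
```

For example, a 30-second clip at 30 fps in which the subject blinks only once works out to 2 blinks per minute, well below the assumed human range, so the clip would be flagged. Real detectors combine many such cues (lighting, lip sync, frequency artifacts) rather than relying on any single heuristic.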

    But sometimes that's not enough; instead of showing up to a fight ill-equipped, you may need AI tools of your own.

    Let's keep this story from repeating itself. And remember: "The first rule of deepfake detection is: you do not talk about deepfake detection."
