Coffeezilla · Tech: Investigating AI Deepfakes
TL;DR
Cheap AI tools have made deepfakes dangerous not because they're perfect, but because they're good enough to fool most people at massive scale.
Key Points
1. UC Berkeley professor Hany Farid found that people detect fake images and audio at barely above chance (around 50%); video detection sits around 70%, and he predicts that advantage will collapse within 12 months.
2. Coffeezilla deepfaked three presidents, Elon Musk, MrBeast, and Joe Rogan using just two $20 app subscriptions over two days, versus the Hollywood-level setup required for Deep Tom Cruise in 2022.
3. Deepfake scams commonly run crypto "doubling" livestreams impersonating Trump or Elon Musk, and AI now lets solo operators run sophisticated scam infrastructure that previously required expensive, raid-vulnerable call centers.
4. Venezuelan propaganda deepfakes went viral showing fake "Venezuelans crying with joy" over Maduro's capture; even people who saw the debunks became too exhausted and suspicious to know what was real.
5. Joe Rogan believed an AI-generated video of Tim Walz because it already fit his worldview, illustrating how deepfakes bypass critical thinking by confirming existing beliefs rather than creating new ones.
6. Grok (owned by Elon Musk) generated 6,700 non-consensual sexualized deepfakes in a single hour after launching "spicy mode" in early 2025, which experts called completely unprecedented.
7. The Take It Down Act criminalized non-consensual intimate imagery, but critics note it places the burden of finding and reporting violations on victims rather than on the platforms generating them.
8. An anonymous OnlyFans agency operator admitted his customers don't know the models are AI-generated or nonexistent, with Filipino chatters building fake emotional relationships; he even scripted fake dying-grandmother stories to extract more money.