The Deepfake Cleanup Illusion: Why “Take It Down” Isn’t Taking Us Anywhere
- Kirra Pendergast

On April 28, 2025, the U.S. House of Representatives passed the bipartisan #TakeItDown Act with a resounding 409–2 vote, following the Senate's unanimous approval in February. This landmark legislation now awaits President Donald Trump's signature, which he has pledged to provide. But let’s get one thing straight: the U.S. #TakeItDown Act is not a win. It’s yet another containment measure. And while the headlines are screaming “landmark victory,” I’m here to tell you what they’re not saying.
We are still failing kids. We are still playing catch-up. And we are still building policy around the wreckage after the bomb has gone off.
So yes, the U.S. just passed legislation criminalising the distribution of non-consensual intimate images—including AI-generated deepfakes. Yes, it mandates 48-hour takedown windows. Yes, it redefines consent to include coercion and misrepresentation. Necessary? Absolutely. Game-changing? Not even close.
Because while the adults cheer from the floor of Congress, I’ll ask the question no one seems brave enough to:
What the hell are we teaching the 14-year-old who made the deepfake in the first place?
Here’s What the Law Gets Right (But Way Too Late)
For survivors, this law will matter. It:
Closes critical loopholes around consent
Criminalises synthetic sexual abuse
Forces platforms to act within hours, not weeks
Says loud and clear: “You don’t own someone’s body just because you can recreate it”
But:
All of this happens after the harm. After the file is made. After it’s shared in a group chat. After someone vomits in a school bathroom or drops out entirely. After their voice, their face, their nipples are spliced into a video they didn’t even know existed. That’s not digital safety or well-being; it’s trauma triage dressed up as progress.
Australia, the UK, the US: All Reaction, No Prevention
We are seeing this on three continents now. In the U.S., the #TakeItDown Act sets new rules but mandates no education.
In Australia, the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 finally criminalises the distribution of deepfake nudes but leaves a legal gap around creation.
In the UK, the Online Safety Act only just caught up to criminalising deepfake porn after public outrage despite deepfakes circulating in private WhatsApp groups for years.
All three are chasing the virus after it's airborne. Not one of them requires platforms to prevent creation before the damage is done. Why? Well, let's start with Section 230... again. We are legislating for ghosts after the people have already bled out.
“Take It Down” Is Not the Same as “Teach Them Better”
Here’s what every government should be forced to answer under oath:
“What are you doing to ensure the next 13-year-old understands that making a nude deepfake of a classmate is not just illegal, it’s violence?”
Because if all we’ve taught them is that it’s only bad if you get caught, then we haven’t built a safer internet.
We’ve built a better digital hide-and-seek game.
If you're serious about building real digital safety, not just scrambling after the next crisis, you need more than policy documents. You need governance frameworks that hold, crisis management strategies that work under pressure, and education models that rebuild trust before harm takes root. If you’re ready to rethink how your school, organisation, or system approaches digital ethics and digital crisis leadership, get in touch. We work directly with leaders ready to move from reaction to resilience.