
Deepfake Abuse: What Happens Next (and What You Can Do)

  • Jan 23
  • 4 min read

If you’ve been online lately, you’ve probably felt it: AI is getting smarter, faster, and easier to use. Most of that is exciting: better productivity tools, better search, better creativity.

But there’s a darker trend growing quickly across Ireland and the UK: deepfakes and AI-generated sexual images being made and shared without consent.

This week, Ireland made headlines after the Tánaiste said proposals are expected to go to Cabinet to tackle non-consensual AI-generated sexual images. And while the legal details are still developing, the bigger point is clear: this is no longer seen as a niche “tech issue”; it’s being treated as a serious public safety and harm issue.

So let’s break it down in plain language: what’s happening, why it’s accelerating, what laws could look like, and, most importantly, what you should do if it happens to you or someone you care about.


When people hear “deepfake,” they often picture a fake celebrity video. But a lot of the harm now is happening with images, not videos.

This can include:

  • AI-generated nude or sexual images where a person’s face is inserted

  • “Undressing” apps that generate fake explicit images from a normal photo

  • Fake accounts using AI images to humiliate, harass, or impersonate someone

  • Threats to share fake sexual content unless the victim pays money or complies (blackmail)

And here’s the truly unsettling part: you don’t have to send private photos for this to happen. A selfie, a group photo, or a profile picture can be enough.

This is why governments in both Ireland and the UK are under pressure to respond quickly. There are a few big reasons this is exploding now:

1) The tools are “one-click easy.” A few years ago, creating convincing fake content required technical skills. Now it can be done with a phone app and a clear image of someone’s face.

2) It spreads faster than it can be removed. Even when something is taken down, it may already be downloaded, reposted, or shared privately in group chats. The emotional impact hits immediately and the clean-up takes much longer.

3) It targets ordinary people, not just public figures. This is no longer confined to celebrities. People are being targeted because they’re visible: students, professionals, teachers, people dating online, or anyone with public social media profiles.

4) Shame is being weaponised. A lot of these cases rely on one thing: victims feeling too embarrassed to speak up. That silence gives abusers more power.

Ireland’s move signals momentum, and the UK has been evolving its approach too, through online safety regulation and criminal law. While the details vary, legal changes usually land in a few practical areas:

Clear criminal offences - Laws that explicitly cover creating, sharing, or threatening to share non-consensual sexual deepfakes.

Platform responsibility - Pressure on platforms to act faster, prevent re-uploads, and treat reports as urgent safety issues (not just “content complaints”).

Enforcement and penalties - This is the big one. Rules need real consequences, and victims need clear reporting routes that actually work.

Victim support - Not just punishment for offenders but support for people affected, including guidance on evidence, reporting, and takedown processes.

If this happens to you, here’s what to do immediately. First, take a breath. If someone created fake sexual content involving you, you have done nothing wrong. This is abuse.

1) Don’t negotiate with the person threatening you. If it’s blackmail (“pay or I’ll send this”), negotiating usually escalates the situation. The goal is control, not truth.

2) Collect evidence (even if you want to look away). As difficult as it is, evidence matters. Capture:

  • screenshots of the content (or posts/messages showing it)

  • usernames, profile links, group names

  • dates and times

  • the exact platform and where it appeared

If it’s video content, record your screen showing the content playing, plus the account name.

3) Report it on the platform straight away. Use whatever category is closest: harassment, bullying, impersonation, or non-consensual intimate imagery. If multiple people report it, takedown can sometimes happen faster.

4) Tell someone you trust. This is where people freeze, but it’s one of the most important steps. Tell a friend, partner, colleague, or family member. You shouldn’t handle this alone.

5) Report to authorities if there are threats or stalking. If you’re being harassed, blackmailed, or threatened:

  • In Ireland, contact An Garda Síochána (under Coco's Law)

  • In the UK, contact your local police (101 for non-emergency, 999 in immediate danger)

You can also seek advice from victim support organisations, and in some situations legal advice can help with takedown and documentation.


You shouldn’t have to “live like a ghost” online, but there are practical steps that reduce risk:

  • Enable 2-factor authentication (2FA) on email and social accounts

  • Review privacy settings (especially who can DM, tag, or follow you)

  • Remove unknown followers and lock down older public posts

  • Be cautious with high-resolution public headshots (LinkedIn included)

  • Search your name occasionally to catch impersonation early

These won’t stop every bad actor, but they make you harder to target.


AI deepfake abuse isn’t harmless trolling. It can affect reputations, careers, relationships, and mental health, and it often hits people who never saw it coming. Ireland and the UK taking it seriously is progress. But right now, the best defence is awareness, fast action, and refusing to let shame do the abuser’s job.


If it happens to you, speak up early and get support quickly!



“Created to empower businesses to use AI with confidence, clarity, and impact”
© 2025 Business of AI Club. All rights reserved.