Can an app that promises to “undress” a photo change how we think about privacy and harm? This article looks back at how ClothOff rose from a niche tool to a headline-making service over the past year.
Reporting has placed the app at the center of two major incidents — in Almendralejo, Spain, and at Westfield High School, New Jersey — as concern about deepfake pornography grew. Operators stayed anonymous, even using distorted voices and a reportedly AI-generated CEO to hide identities.
At a high level, “nudify” technology uses artificial intelligence to generate fake nude versions of ordinary photos in seconds. That speed and scale let images spread far faster than earlier manipulation methods. The result: more victims and a bigger public outcry.
This article is not a how-to. It is a reporting-driven, accountability-focused look at how the app works, how images move online, who may run the service, and how hidden payment routes keep it running.
Key Takeaways
- The piece traces ClothOff’s rapid rise and why it matters now.
- It explains “nudify” AI tools and how they scaled abuse.
- Two headline cases show harm is global and affects everyday people.
- The article maps infrastructure, alleged operators, and money flows.
- This is an accountability-focused investigation, not a guide.
What ClothOff Is and How the App Works Across Websites
What looks like one simple app often runs across dozens of near-identical websites and mirror domains. The service presents itself as a fast, consumer-facing tool that can turn ordinary photos into explicit outputs using AI.
How users interact: visitors land on a ClothOff website, see an “over 18” click-through, and are shown sexualized examples without meaningful age or ID checks. A 60 Minutes segment and researchers noted the gate is a simple click-through, not real verification.

At a basic level, users upload a photo and the app returns a generated nude image. The first use is often free; after that, credit packs cost roughly $2–$40 depending on the site. The Guardian reported about £8.50 for 25 credits.
Features like a “poses” option can fabricate more explicit scenes than the original image implied. Platform text warns about consent and says processing minors is impossible, yet reporting documents incidents involving minors.
- Multiple ClothOff sites make takedowns and payment blocking harder.
- Low-cost credits and fast automation increase the risk of widespread abuse.
“An 18+ click-through is not the same as real age verification.”
ClothOff and the Human Impact of Nonconsensual Deepfake Pornography
A single generated photo can ripple through communities, triggering panic, shame, and legal action.
The Almendralejo case in Spain: schoolgirls targeted in WhatsApp groups
In Almendralejo, a mother discovered a realistic explicit fake of her 14-year-old daughter circulating in a school WhatsApp group.
The images looked so lifelike that the girls involved suffered panic attacks and refused to attend school; some also faced blackmail attempts and public bullying.
Westfield High School in New Jersey: lawsuits and community fallout
In the United States, students at Westfield High School faced similar attacks, with generated nude images shared among peers.
That incident prompted a civil lawsuit and helped push bipartisan attention to deepfake pornography and protections for minors.
How content spreads: social media posts, accounts, and reuploads to adult content sites
Once the images exist, time works against victims. Copies are reposted to social media, circulated through anonymous burner accounts, and reuploaded to adult content sites.
- Private chats → peer-to-peer sharing
- Social media “before/after” posts
- Reuploads to pornographic platforms and searchable archives
| Spread Vector | Typical Actors | Impact |
|---|---|---|
| WhatsApp groups | Classmates, peers | Immediate local bullying |
| Social media | Public & burner accounts | Wider visibility, viral risk |
| Adult sites | Porn platforms, reupload bots | Long-term searchable archives |
“The realism makes it feel real to victims, even when everyone knows it’s fabricated.”
Who Runs ClothOff: What Investigations Found About Names, Emails, and a Fake CEO
Investigations traced a tangled web of accounts, emails, and travel posts designed to obscure who actually runs the service.
Anonymity by design
Operators used deliberate methods to avoid accountability: distorted voices, minimal ownership disclosures, and shifting domains made it hard for regulators to follow up.
Belarus-linked names
Published reporting linked two names to operations: Dasha Babicheva and Alaiksandr Babichau. Investigators caution that these ties rest on screenshots and message trails, not direct proof of legal responsibility.
Recruitment and mirrored sites
Recruitment ads directed applicants to an AI-Imagecraft email. A near-duplicate site, A-Imagecraft, listed Babichau and accepted the same login credentials, suggesting shared infrastructure.

Telegram clues and travel overlaps
Telegram posts used display names like “Al.” Travel timestamps from Macau and Hong Kong overlapped with posts on an account tied to Babichau, a circumstantial link investigators noted.
Developer and company traces
Reports say a developer named Alexander German uploaded site code to GitHub and later deleted it. Links to the gaming marketplace GGSel were reported; GGSel denied involvement.
Press contacts and media response
Journalists were given a press contact email for questions. Responses were limited or evasive, and one episode involved an AI-generated CEO voice, a stark illustration of how easily identity can be manufactured.
Why names matter: without clear operator identities, enforcement, takedowns, and victim support become harder, and the same network can reappear under new names.
Following the Money: Redirect Sites, Payment Workarounds, and Shifting Business Addresses
Behind the checkout buttons, payments are routed through disguised storefronts that mask what customers actually buy. This model turns sexualized imagery into a repeat-purchase business, with small credit packs driving quick revenue.
How the redirect trick works: a customer clicks to buy credits on one website, but the card or Google Pay charge posts under a different site’s name. That second site often poses as a harmless shop — flowers, lessons, or digital goods — to keep mainstream processors like PayPal from flagging transactions.
Payment processors and takedowns
PayPal says it bans offending merchants and closes redirect accounts when found. But new storefronts appear quickly, so enforcement becomes a repeating cycle.
London shell signals
Investigations traced some payments to a London-registered company called Texture Oasis. Reporters noted staff lists copied from elsewhere and lifted website text, red flags suggesting a shell company.
Shifting names and misdirected addresses
Sites repeatedly swap the company names and addresses listed in their footers. When journalists contacted the firms named there, those companies often denied any link and were soon replaced by new names. A Buenos Aires address tied to “Grupo Digital” led journalists to an unrelated office; staff there said they had no connection.
“When name, site, and address keep changing, tracing harm and holding a business to account becomes much harder.”
ClothOff said its holding company oversees multiple businesses and cited NDAs for not disclosing owners. That claim does not provide a verifiable way for victims, journalists, or regulators to follow the money.
| Mechanism | Effect | Why it matters |
|---|---|---|
| Redirect storefront | Charge appears under a different name | Masks the true website and business |
| Swapped footer names | Confuses contact attempts | Blocks accountability |
| False addresses | On-the-ground misdirection | Stalls investigation |
Conclusion
What this article shows is simple: over the past year deepfakes moved from headline events to an everyday threat that can target ordinary people with a single photo.
The pattern is clear. A quick website or app entry point, repeatable outputs, and a shifting network of sites and companies let content spread fast and stay online for a long time.
Victims face real harm—reputation damage, school disruption, and anxiety that images will never fully disappear. Investigations surfaced names and company clues, but ownership stays opaque. That opacity is part of the business model.
If you see explicit material on social media, don’t repost. Document, report to platforms, and contact support services like the Cyber Civil Rights Initiative (US) or the UK Revenge Porn Helpline. As AI tools evolve, transparency, enforcement, and survivor-centered resources will shape whether the next year brings stronger safeguards or more scalable abuse.
