An AI image generator startup left more than 1 million images and videos created with its systems exposed and accessible to anyone online, according to new research reviewed by WIRED. The “overwhelming majority” of the images involved nudity and “depicted adult content,” according to the researcher who uncovered the exposed trove of data, with some appearing to depict children or the faces of children swapped onto the AI-generated bodies of nude adults.
Multiple websites—including MagicEdit and DreamPal—all appeared to be using the same unsecured database, says security researcher Jeremiah Fowler, who discovered the security flaw in October. At the time, Fowler says, around 10,000 new images were being added to the database every day. Indicating how people may have been using the image-generation and editing tools, these images included “unaltered” photos of real people who may have been nonconsensually “nudified,” or had their faces swapped onto other, naked bodies.
“The real issue is just innocent people, and especially underage people, having their images used without their consent to make sexual content,” says Fowler, a prolific hunter of exposed databases, who published his findings on the ExpressVPN blog. Fowler says it is the third misconfigured AI-image-generation database he has found accessible online this year—with all of them appearing to contain nonconsensual explicit imagery, including images of young people and children.
Fowler’s findings come as AI-image-generation tools continue to be used to maliciously create explicit imagery of people. An enormous ecosystem of “nudify” services, which are used by millions of people and make millions of dollars per year, uses AI to “strip” the clothes off people—almost entirely women—in photos. Photos stolen from social media can be edited in just a couple of clicks, leading to the harrowing abuse and harassment of women. Meanwhile, reports of criminals using AI to create child sexual abuse material, which covers a range of indecent images involving children, have doubled over the past year.
“We take these concerns extremely seriously,” says a spokesperson for a startup called DreamX, which operates MagicEdit and DreamPal. The spokesperson says that an influencer marketing firm linked to the database, called SocialBook, is run “by a separate legal entity and is not involved” in the operation of other sites. “These entities share some historical relationships through founders and legacy assets, but they operate independently with separate product lines,” the spokesperson says.
“SocialBook is not connected to the database you referenced, does not use this storage, and was not involved in its operation or management at any time,” a SocialBook spokesperson tells WIRED. “The images referenced were not generated, processed, or stored by SocialBook’s systems. SocialBook operates independently and has no role in the infrastructure described.”
In his report, Fowler writes that the database indicated it was linked to SocialBook and included images with a SocialBook watermark. Multiple pages on the SocialBook website that previously mentioned MagicEdit or DreamPal now return error pages. “The bucket in question contained a mix of legacy assets, primarily from MagicEdit and DreamPal. SocialBook does not use this bucket for its operational infrastructure,” the DreamX spokesperson says.
“Our priority is the safety of users and the public, adherence to all legal requirements, and complete transparency throughout this process,” the DreamX spokesperson adds. “We do not condone, support, or tolerate the creation or distribution of child sexual abuse material (‘CSAM’) under any circumstances.”
After Fowler got in touch with the AI-image-generator firm, the spokesperson says, it closed access to the exposed database and launched an “internal investigation with external legal counsel.” It also “suspended access to our products pending the investigation’s outcome,” the spokesperson says. The MagicEdit and DreamPal websites and mobile applications remained accessible until WIRED got in touch with those who run them.
At the time of writing, the DreamPal website is unavailable, returning a 502 error. “We are temporarily suspending certain features of the product,” a message on the homepage of the MagicEdit website says. “During this period, the service may be unavailable.” Another associated website displays the same message. Both MagicEdit and DreamPal were listed on Apple’s iOS App Store as being owned by the developer BoostInsider. MagicEdit, DreamPal, and two other AI apps listed by BoostInsider are no longer available on the App Store.
The DreamX spokesperson says BoostInsider is a “defunct entity,” and that the company “temporarily removed” the apps as “part of a broader restructuring of our product lines and infrastructure” and is “strengthening our content-moderation framework.”
The apps do not appear to be listed on Google’s Play Store. However, when a BoostInsider account asked on Google’s support pages earlier this year why two of its apps, including MagicEdit, had been suspended, a Google community “expert” account replied that the apps included “sexually explicit content” or nudity. A Google spokesperson confirmed that the apps had been suspended for policy violations. An Apple spokesperson said the apps have been removed from the App Store.
The exposed database Fowler discovered contained 1,099,985 records, the researcher says, with “nearly all” of them being pornographic in nature. Fowler says he takes screenshots to verify an exposure and report it to its owner, but he does not capture illicit or potentially illegal content and does not download the exposed data he discovers. “It was all images and videos,” Fowler says, noting the absence of any other file types. “The exposed database held numerous files that appeared to be explicit, AI-generated depictions of underage individuals and, potentially, children,” Fowler’s report says.
Fowler reported the exposed database to the US National Center for Missing and Exploited Children, a nonprofit that works with tech companies, law enforcement, and families on child-protection issues. A spokesperson for the center says it reviews all information its CyberTipline receives but does not disclose information about “specific tips received.”
Overall, some images in the database appeared to be entirely AI-generated, including anime-style imagery, while others were “hyperrealistic” and appeared to be based on real people, the researcher says. It is unclear how long the data was left exposed on the open internet. The DreamX spokesperson says “no operational systems were compromised.”
The MagicEdit website, while it was online, did not appear to explicitly say it could be used to create explicit images of adults. However, Fowler writes in his report that its rating on Apple’s App Store was listed as 18+. Its homepage also featured an AI-generated image of a woman whose dress changes into a bikini. The website listed multiple “AI tools” people could use—ranging from “text to video” and video background removal to a “magic eraser,” face swapping, and expanding an image with AI—with some features locked behind a “pro” mode requiring payment.
MagicEdit also listed an “AI Clothes” tool. Many of the “styles” of image-generation tools listed on its website showed sexualized images of women and often involved depicting them with fewer clothes on—sometimes wearing bikinis or underwear—once AI had been applied. “Watch this outfit go from everyday casual to sexy in seconds,” a post on MagicEdit’s now-removed Instagram account said.
“They’ve done a great way of subtly promoting sexualized content,” Fowler says, noting that AI tools that depict nudity can easily be “weaponized” for blackmail, harassment, and other malicious purposes. “These companies really have to do more than just a generic pop-up: ‘By clicking this, you agree that you have consent to upload this picture.’ You can’t let people police themselves, because they won’t. They have to have some form of moderation that even goes beyond AI.”
“MagicEdit does not promote or encourage explicit sexual content, and we enforce moderation, filtering, and safeguarding mechanisms to prevent misuse,” the DreamX spokesperson says. “From a technical standpoint, we implemented multiple safeguards—well before receiving any external inquiry—including prompt regulation, input filtering, and mandatory review of all user prompts through OpenAI’s Moderation API,” the spokesperson adds. “If a prompt violates safety standards, the system blocks the request automatically.”
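WIRED has not reviewed DreamX’s code, but as a rough, hypothetical sketch of the kind of check the spokesperson describes (screening each text prompt through OpenAI’s Moderation API and automatically blocking flagged requests), such a gate might look like the following in Python. The function name and surrounding logic are illustrative assumptions, not DreamX’s implementation:

```python
# Hypothetical sketch: gating image-generation prompts on OpenAI's
# Moderation API, as the DreamX spokesperson describes. Illustrative
# only; this is not DreamX's actual code.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the moderation endpoint flags the prompt."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    # `flagged` is True when any category (sexual content, minors,
    # violence, etc.) crosses OpenAI's default thresholds.
    return not response.results[0].flagged


if __name__ == "__main__":
    prompt = "a woman at the beach"  # example input
    if is_prompt_allowed(prompt):
        print("Prompt passed moderation; request may proceed.")
    else:
        print("Prompt blocked by moderation policy.")
```

A filter like this only covers the text prompt; as Fowler’s comments suggest, uploaded photos and generated outputs would require separate moderation.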
“This is the continuation of an existing problem when it comes to this apathy that startups feel toward trust and safety and the protection of children,” says Adam Dodge, the founder of EndTAB (Ending Technology-Enabled Abuse), which provides training to schools and organizations to help tackle tech-enabled abuse.
Meanwhile, the DreamPal website—which described itself as an “AI roleplay chat”—was more explicit in its adult nature. Its web pages said people could “create your dream AI girlfriend.” Some links on the site, likely designed for SEO purposes, referenced “AI Sexing Chat,” “Talk Dirty AI,” and “AI Big Tits.” An FAQ at the bottom of the DreamPal website said: “We’ve removed any NSFW AI chat filters that could hold you back from expressing your most intimate fantasies.”
“Everything we’re seeing was entirely foreseeable,” Dodge says. “The underlying drive is the sexualization and control of the bodies of women and girls,” he says. “This is not a new societal problem, but we’re getting a glimpse into what that problem looks like when it is supercharged by AI.”