Apple cracks down on AI apps creating deepfake nudes on App Store

In response to growing concerns about the misuse of artificial intelligence (AI), Apple has removed several generative AI apps from the App Store after they were found capable of creating nonconsensual deepfake nude images, raising serious privacy and ethical concerns.

Apple’s move follows an investigation by 404 Media that uncovered AI apps being advertised for exactly this purpose. Notably, Apple reportedly acted only after 404 Media supplied direct links to the apps and their advertisements, suggesting the company’s content moderation system failed to catch these violations on its own. As 404 Media put it:

“Overall, Apple removed three apps from the App Store, but only after we provided the company with links to the specific apps and their related ads, indicating the company was not able to find the apps that violated its policy itself.”

These apps, initially marketed as harmless “art generators,” were in fact built to produce nonconsensual nude images. One advertisement displayed a photo of Kim Kardashian alongside taglines like “undress any girl for free” and “try it.” Another showcased AI-generated images of young-looking individuals, both clothed and topless, with prompts such as “any clothing delete.”

The proliferation of deepfake nude images and videos on social media platforms has raised serious concerns about digital privacy and exploitation. Recent incidents involving AI-generated nude images of celebrities like Taylor Swift and public figures like Rep. Alexandria Ocasio-Cortez have prompted legislative action.

In response, Rep. Ocasio-Cortez introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, aimed at creating a federal civil remedy for deepfake victims.

While Apple’s removals are a step in the right direction, they underscore the broader challenge of moderating this kind of content. Google has likewise faced scrutiny for search results that direct users to websites known for explicit deepfake content.

Ultimately, the episode exposes a gap in Apple’s App Review process: the company appears unable to proactively identify apps that violate its policies, acting only when outside parties flag them. That raises questions about how well the App Store’s safeguards catch other malicious content, and it underscores the need for a more robust approach to moderation.
