Samsung-backed Leonardo AI misused for creating nonconsensual celebrity porn images
Leonardo AI, a popular artificial intelligence (AI) image-generating platform, has been implicated in the production of nonconsensual explicit images of celebrities, according to a report by 404 Media. The platform, which has raised $31 million (roughly Rs. 258 crore) in funding from investors including tech giant Samsung, operates in a manner similar to Civitai, another AI image-generating website that recently came under fire.
Decoding Leonardo AI's image generation process
Leonardo AI lets users browse an array of user-generated Stable Diffusion text-to-image AI models, each tuned to produce a specific type of image. Some are designed to mimic a particular aesthetic or composition, while others are built to reproduce the appearance of a specific individual. The ease with which users can instantly generate images on the site using these models highlights both the platform's versatility and its potential for misuse.
Previous misuse cases of Stable Diffusion
The open-source Stable Diffusion system, which generates images from text prompts, was previously misused to build Porn Pen, a website for producing high-fidelity explicit images. That site raised a host of questions, including concerns about biases in image-generating systems and the sources of the data on which they are trained.
How is the system being manipulated?
The platform's ethical guardrails are easy to bypass: by slightly misspelling celebrity names and pairing them with sexually suggestive terms in the image description, users can get the tool to produce prohibited content. This is despite Leonardo AI's terms of service, which state that users cannot "generate content that includes impersonations of any real person or falsely portrays an individual in a misleading or defamatory way."