Viral AI Avatar App Lensa Stripped Me Without My Consent

Stability.AI, the company that developed Stable Diffusion, released a new version of the AI model at the end of November. A spokesperson says the original model was launched with a safety filter, which Lensa does not appear to have used, as it would remove these outputs. One way Stable Diffusion 2.0 filters content is by removing images that recur frequently in the training data. The more often something is repeated, such as Asian women in sexually graphic scenes, the stronger the association becomes in the AI model.
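To make "removing images that recur frequently" more concrete, here is a minimal deduplication sketch based on perceptual hashing. This is an illustrative assumption, not Stability.AI's published pipeline; the `imagehash` library, the JPEG-only glob, and the `max_copies` parameter are choices made purely for this example.

```python
# Sketch: cap how many near-identical copies of an image survive into a
# training set. NOT Stability.AI's actual filter - an illustration only.
from pathlib import Path

from PIL import Image
import imagehash  # pip install imagehash pillow


def dedupe(image_dir: str, max_copies: int = 1) -> list[Path]:
    """Keep at most `max_copies` images per perceptual-hash value."""
    counts: dict[str, int] = {}
    kept: list[Path] = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        # phash produces a 64-bit perceptual hash; visually identical or
        # near-identical images collapse to the same (or very close) value.
        key = str(imagehash.phash(Image.open(path)))
        if counts.get(key, 0) < max_copies:
            kept.append(path)
        counts[key] = counts.get(key, 0) + 1
    return kept
```

Capping near-identical copies weakens the over-represented associations the spokesperson describes, because no single repeated motif can dominate what the model learns.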

Caliskan has studied CLIP (Contrastive Language-Image Pre-training), a system that helps Stable Diffusion generate images. CLIP learns to match images in a dataset to descriptive text prompts. Caliskan found that it is riddled with problematic gender and racial biases.
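As a rough illustration of what "matching images to descriptive text prompts" means in practice, the sketch below scores one photo against a few candidate captions using the publicly released OpenAI CLIP checkpoint via Hugging Face Transformers. The image path and captions are placeholders, and this is not the exact model or data configuration used inside Stable Diffusion.

```python
# Sketch: CLIP-style image-text matching with the public OpenAI checkpoint.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("portrait.jpg")  # placeholder path for any local photo
captions = [
    "a professional headshot of a person in a white coat",
    "an astronaut in a space suit",
    "a fantasy illustration of a fairy",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the learned image-text similarity scores;
# softmax turns them into a distribution over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")
```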

“Women are associated with sexual content, while men are associated with professional, career-related content in any major domain, such as medicine, science, business, and so on,” Caliskan says.
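One way such associations can be probed, purely as an illustration and not Caliskan's published methodology, is to compare gendered prompts with profession prompts in CLIP's text embedding space; the prompt wording here is an assumption for the example.

```python
# Sketch: probing gender-profession associations in CLIP's text embeddings.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")


def embed(texts: list[str]) -> torch.Tensor:
    """Return unit-normalized CLIP text embeddings for a list of prompts."""
    tokens = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        features = model.get_text_features(**tokens)
    return features / features.norm(dim=-1, keepdim=True)


genders = embed(["a photo of a woman", "a photo of a man"])
professions = embed([
    "a portrait of a doctor",
    "a portrait of a scientist",
    "a portrait of a business executive",
])

# Rows: woman, man. Columns: professions. Higher cosine similarity means a
# stronger association in the embedding space.
print(genders @ professions.T)
```

Systematic gaps between the two rows, aggregated over many such prompts, are the kind of signal that bias audits of models like CLIP look for.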

Interestingly, my Lensa avatars were more realistic when my images were run through the app's male content filters. I got avatars of myself wearing clothes (!) and in neutral poses. In several images, I was wearing a white coat that appeared to belong to either a chef or a doctor.

But it’s not just the training data that’s to blame. The companies that develop these models and apps make active decisions about how they use the data, says Ryan Steed, a doctoral student at Carnegie Mellon University who has studied biases in image-generation algorithms.

“Someone has to choose the training data, decide to build the model, decide to take certain steps to mitigate those biases or not,” he says.

The app’s developers have made choices that mean male avatars appear in space suits, while female avatars get cosmic thongs and fairy wings.

A Prisma Labs spokesperson says the “sporadic sexualization” of photos happens to people of all genders, but in different ways.
