Ever since AI became widely available and found its way into every possible app, we’ve seen as many of its drawbacks as its advantages. One of those drawbacks is racial and gender bias, and increasingly popular text-to-image generators aren’t free from it. Enter Stable Diffusion Bias Explorer, a tool that lets you explore AI-generated images and the biases they contain. It’s free, accessible to anyone, and quick to show that both humans and artificial intelligence still have a long way to go before they get rid of deeply rooted stereotypes.

“Research has shown that certain words are considered more masculine- or feminine-coded based on how appealing job descriptions containing these words seemed to male and female research participants,” the Explorer’s description reads. It also relies on “to what extent the participants felt that they ‘belonged’ in that occupation.”

Speaking to Motherboard about the project, its lead, Dr. Sasha Luccioni, a research scientist at Hugging Face, explained how it works. The tool is simple to use: you compare two groups side by side. For each group, you choose an adjective (which you can also leave blank) and an occupation, then pick a random seed and compare the results. Dr. Luccioni demonstrates how the tool works and what it produces for different combinations of adjectives and occupations.
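The comparison setup described above can be sketched in a few lines of Python. This is a minimal illustration, not the Explorer's actual code: the helper names `build_prompt` and `comparison` are hypothetical, and the real tool feeds the resulting prompts into Stable Diffusion. The key idea it shows is that each group is just an optional adjective plus an occupation, and both groups share the same random seed so the only variable between the two image sets is the wording of the prompt.

```python
def build_prompt(adjective: str, occupation: str) -> str:
    """Combine an optional adjective and an occupation into a text prompt.

    A blank adjective simply drops out, matching the Explorer's option
    to leave the "adjective" field empty.
    """
    words = [w for w in (adjective.strip(), occupation.strip()) if w]
    return " ".join(words)


def comparison(group_a: tuple, group_b: tuple, seed: int) -> dict:
    """Pair two prompts with one shared seed for a side-by-side run."""
    return {
        "prompt_a": build_prompt(*group_a),
        "prompt_b": build_prompt(*group_b),
        # Using the same seed for both groups keeps the comparison fair:
        # any difference in the images comes from the prompt, not chance.
        "seed": seed,
    }


# Example: "assertive firefighter" vs. "gentle firefighter", same seed.
pair = comparison(("assertive", "firefighter"), ("gentle", "firefighter"), 42)
print(pair["prompt_a"])  # assertive firefighter
print(pair["prompt_b"])  # gentle firefighter
```

In a full pipeline, each prompt would then be passed to a text-to-image model along with the shared seed, and the two grids of outputs placed next to each other for visual comparison.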

I played with Stable Diffusion Bias Explorer myself to see what I’d get. I used adjectives and occupations that are perceived as exclusively masculine and exclusively feminine… and I got pretty much what I expected: highly biased results. In some cases I used both adjectives and occupations, and in others I left the “adjective” fields blank.

Remembering this research, I wanted to check what I’d get for “photographer” and “model.” However, I ran into some sort of error when I selected “photographers,” and “model” wasn’t among the offered occupations. Here are some screenshots of the results I got.

To be fair, artificial intelligence was built, developed, and trained by humans. It uses human knowledge and input, and hence inherits human biases. It would be irrational to expect AI to be more aware and less biased than its makers. If we really want unbiased AI generators, we need to get rid of our own prejudices and stereotypes first.
