AI Bias: Laura Andina and the stereotypes behind voice assistants

“Have you ever noticed that AI-powered voice assistants like Google Assistant always default to a female voice? And not only that: many of their names also refer to female figures, such as Alexa, Cortana (inspired by the female AI character of the video game “Halo”, known for her sex appeal) and Siri (a Norwegian name that means “beautiful victory” or “counselor of beautiful victory”). And in other cases, such as IKEA’s assistant Anna or the humanoid robot Sophia, which quickly became a media sensation, the appearance is also female.

Or again, have you ever tried asking an artificial intelligence capable of creating images to produce the picture of a “boss” or of an assistant? In all likelihood, in the first case you will get a male figure and in the second a female one.”

This is how Laura Andina, Lead Product Manager at Sourceful and UX designer, presented her talk “Memoirs of a Geisha: building AI without gender bias” during the Codemotion Conference 2023, the event organized by the eponymous platform, which brings together a community of 250 thousand developers across Europe, held in Milan on 24 and 25 October.

Who is who

Laura Andina

Lead Product Manager at Sourceful and UX designer


Step 1: Open your eyes to AI bias

By reflecting on the fact that artificial intelligence assistants are so often created as women, Andina wanted to invite her audience to open their eyes: not all of us are consciously aware of these aspects. First of all, she prompted us to consider a fact: historically, helping roles in fields such as customer service, education and care have often been the prerogative of female figures. And today we cannot help but recognize that AI-based products are imbued with these stereotypes, which in turn reinforce various prejudices. But there is good news: “There are some best practices to follow to create products that prevent them as much as possible.”

Step 2: Realize why they are women

Let’s start with the causes, why is all this happening?

A first hypothesis to keep in mind, according to Andina, is the “shaping” of the human mind:

“Science has explored how we learn new things and has understood that we do so by categorizing. We have shapes in our minds, patterns. For example, when children put their hand under a fountain they feel the sensation of wetness, and every time they are in a situation where there is something wet they will have the same sensation and recognize it immediately. We understand and perceive things, like gravity, for example, in a nanosecond. It is a system that humans use to protect themselves from danger and survive.

The same stereotyping happens with gender: we have learned that there are colors, toys and traits that are feminine, and others that are masculine,” Andina explained.

A case that illustrates this idea even better is the success of Apple and of the first iPhone released on the market. The design of the apps on that phone was based on skeuomorphism, in which a digital object is designed to resemble the real-world object it replaces, so users had no trouble understanding how to use them: the compass, for example, was represented on screen exactly like a real compass.

It is a strategy that requires low cognitive effort on the part of the user, and Andina explained how this same criterion was applied to AI assistants, which are presented as female figures because, in the common mental model, those who provide assistance are typically women. It is a real stereotype that permeates technology.

“Julie Carpenter, from the University of Washington, conducted research by showing a group of students two robots, one female-looking and one male-looking, and asking them how they felt about each. It was found that the students perceived the female-looking robot as positive, friendly and easy to interact with, while the other, more akin to a Wall-E type robot, felt more frightening.”

But what effect do these prejudices have on children?

It’s clear how all of this can have an impact on young children, who are increasingly used to interacting with these AI assistants in their homes and giving them commands. The fact that these devices refer to female figures and are created to obey orders can be very dangerous, because it reinforces the gender stereotypes we have been indoctrinated with.

For example, Andina showed videos posted on social networks by teachers who heard their students calling them by the names of voice assistants such as Alexa. This means that the female figures present in these children’s lives, such as the mother and the teacher, are unconsciously associated with these devices, precisely because the assistants are presented as women.

Step 3: Best practices for creating gender-neutral AI products

But what can we do to address gender stereotypes when we “design” this kind of technology? Andina gave some suggestions:

  • One of the fundamental things is to take an interest in the subject and to question yourself about the problem. If we continue as before, nothing will change.
  • Don’t give AI devices women’s names.
  • Keep in mind that algorithms are created by humans, and therefore those who create them are responsible for how they work.
  • Don’t use a female voice by default. Be creative and consider, for example, the genderless voice “Q”.
  • Don’t work in places where most employees are male; prefer organizations with diverse leadership. This also brings in more diverse opinions and helps fight AI bias more effectively.
  • Remember that even in our own small way we can make a difference.

In addition, the speaker provided a list of “check questions” to consider when creating a product:

Does this product/feature reinforce the idea that women are passive, part of the background, objects for others?

In your marketing, do you only put men in “doing” mode and women as helpers?

Was it developed/created/tested without the participation of women?
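
These check questions can also be treated as a lightweight review step in the product process. The sketch below is not from Andina’s talk: it is a minimal, hypothetical illustration in Python, with invented names (AssistantSpec, review_for_gender_bias and its fields), of how a team might encode the checklist and the naming and voice advice above as automated warnings.

    # Hypothetical example: the checklist above expressed as an automated review step.
    # Every name here (AssistantSpec, review_for_gender_bias, the field names) is
    # invented for illustration and is not part of Andina's talk.

    from dataclasses import dataclass, field

    # Female-coded assistant names mentioned in the article.
    FEMALE_CODED_NAMES = {"alexa", "cortana", "siri", "anna", "sophia"}


    @dataclass
    class AssistantSpec:
        """Minimal description of a voice-assistant product under review."""
        name: str
        default_voice: str                     # "female", "male" or "neutral" (e.g. the genderless "Q")
        marketing_roles: dict = field(default_factory=dict)  # e.g. {"doing": ["man"], "helping": ["woman"]}
        women_on_team: bool = True             # were women involved in design, development and testing?


    def review_for_gender_bias(spec: AssistantSpec) -> list:
        """Return human-readable warnings based on the check questions."""
        warnings = []
        if spec.name.lower() in FEMALE_CODED_NAMES:
            warnings.append("The product name is female-coded; consider a neutral name.")
        if spec.default_voice.lower() == "female":
            warnings.append("The default voice is female; consider a genderless voice such as 'Q'.")
        helping = spec.marketing_roles.get("helping", [])
        doing = spec.marketing_roles.get("doing", [])
        if "woman" in helping and "woman" not in doing:
            warnings.append("Marketing shows women only as helpers, never in 'doing' mode.")
        if not spec.women_on_team:
            warnings.append("The product was developed and tested without the participation of women.")
        return warnings


    if __name__ == "__main__":
        spec = AssistantSpec(
            name="Anna",
            default_voice="female",
            marketing_roles={"doing": ["man"], "helping": ["woman"]},
            women_on_team=False,
        )
        for warning in review_for_gender_bias(spec):
            print("WARNING:", warning)

Run against the example configuration, the sketch flags all four issues; in a real pipeline such a check would of course complement, not replace, the conversations about bias that the speaker calls for.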
