
Misty Community Forum

Racial Bias in Perception of Robots

The following study raises some interesting thoughts about how we humans perceive the world around us.

Perhaps it is not surprising, but according to the study, people change their own behavior based on the color of the robot with which they are interacting.

The article mentions that the study arose because the researchers noticed a fact about the robots currently in the world: “have you ever wondered why you rarely see a brown or black robot?” We at Misty Robotics recently surveyed our backers, i.e., those who purchased a Misty II robot, which comes in matte white or matte black, and one of the questions posed was “which color do you want to receive?” 42.8% of backers chose matte black, while 57.2% chose matte white. The internal hardware of both robots is identical. When Misty II ships and is in the hands of our backers, it will be interesting to see whether usage, skills developed, interactions, or reception of our robots in the world differ based solely on color.

For example, I can imagine that a developer who wanted to design a robot to perform nocturnal skills stealthily might prefer a robot that is less noticeable to the human eye, which might lead them to purchase a black Misty II instead of a white one.

What were your reasons for choosing the color of the Misty II you purchased? Was it based solely on appearance or personal preference? Was it motivated by a skill that you thought would be better performed, or better received, by a robot of one color over another? Does having a robot of one particular color provide some perceived advantage to you?


I chose matte white on the basis that my partner said “it looks cuter”.
I think having a robot in your home is thought of in the same way as a pet (at least by those who aren’t actually working on it).
For some reason, white is more endearing for the design of Misty II than black. I suppose other kinds of connotations could play into this decision-making process (i.e., good vs. evil, which we see everywhere: BB-8 vs. BB-9E).
Another factor that came into play was the emotions portrayed through the eyes/eyebrows on Misty’s visor. We felt they were more pronounced and effective on the white design (again, based on its more endearing look).


I initially chose black and then switched to white. I was told by a Misty staff person that at night, in dim light, a black Misty is easier to trip over than a white one.
Here is a related question on facial recognition. Some facial recognition apps work much better on non-Hispanic Caucasians than on other racial groups. Part of this seems to be because those systems were trained on far more examples from that group than from others. Has Misty II’s face recognition model been trained to identify people of different racial and gender backgrounds with equal ease?

Since we are not doing our own initial face training, the bias you mentioned is still likely present in the way Misty recognizes faces, especially in more dimly lit areas. We are aware of it but do not yet know how best to address it.


Ehh, I don’t agree with the “shooter bias” approach to testing this theory; it seems like a bias was introduced by the approach itself. But that’s just me.

As for the issue with facial recognition, I get annoyed when I hear or read articles about “bias” in machine learning or AI, as if the machine were deciding to be racist, or as if the data scientists and developers were intentionally leaving out a proper representation of a group in the dataset (or doing so out of “unconscious bias,” as some like to say). The truth of the matter is that it comes down to a balanced dataset. If a particular label is underrepresented in the set, you will see less accuracy for it; it is a matter of probability, because that is how classifiers work. It also depends on what features are used as inputs: the less unique they are, the more generic or inaccurate the output will be. Another challenge is gathering the data while keeping it balanced. In many cases, the people building the system are just trying to get it working with whatever limited dataset they have, perhaps expecting to balance it out later as users upload more varied data. (rant over :stuck_out_tongue: )
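To illustrate the imbalance point above with a deliberately toy sketch (this is not Misty's actual pipeline, just synthetic labels and a degenerate model): a classifier evaluated on a 90/10 class split can report high overall accuracy while completely failing the minority class, which is why per-class accuracy matters.

```python
# Toy illustration: overall accuracy hides minority-class failure.
labels = ["A"] * 90 + ["B"] * 10   # imbalanced ground truth: 90% class A
predictions = ["A"] * 100          # degenerate model that always predicts "A"

overall = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
acc_a = sum(p == y for p, y in zip(predictions, labels) if y == "A") / 90
acc_b = sum(p == y for p, y in zip(predictions, labels) if y == "B") / 10

print(overall)  # 0.9 -- looks decent
print(acc_a)    # 1.0 -- majority class is perfect
print(acc_b)    # 0.0 -- minority class is never recognized
```

The same arithmetic applies to a face recognizer trained mostly on one demographic: aggregate benchmarks can look fine while one group sees near-zero recall.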

I pick things if I think they look “cool,” and what makes something “cool” is made up of a million things I probably can’t even explain. Heck, look at my avatar! :smiley: Most people think it’s the ugliest thing, but for me (although it is ugly) it represents something I think is cool, aka a robot-building Martian.
Misty has LEDs and a screen, and I’d hope it wouldn’t just be hanging out in the middle of the room to trip over at night. It’s like picking a color for a house or a car, I think: it can serve function or personality.

Seeing faces in the dark is a whole other thing that is hard regardless, depending on the sensor; again, if more low-light examples were included in the face training, that could be improved.

I don’t think the data scientists doing the training were biased or racist. I think they were using the samples that were conveniently available to them, which in terms of numbers skewed significantly toward lighter skin.

Interesting. Something to think about as we design even the aesthetic of a robot.

Hard to give that a “like,” but thanks for sharing. Good info.

I wonder if that bias would hold if the robots were not anthropomorphic?