
Misty Community Forum

Ethical Robot Behavior

Like the trolley problem for autonomous cars (Trolley problem - Wikipedia), robot ethics are a design consideration. Humans can become emotionally invested in robots and a considerable amount of trust can be placed in them. Here’s an editorial that suggests that conscious choices need to be made in robot design, so that robot interaction doesn’t lead to manipulation and exploitation.

How should rules for robot behavior be implemented and who decides what those rules should be? How do you design an algorithm that allows a robot to prioritize functional tasks against opinions and feelings of humans with whom the robot interacts?
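One way to make the first question concrete is to encode behavior rules as an ordered list of constraints checked from highest to lowest priority before any action is taken, with functional tasks deliberately ranked below human-facing concerns. Everything below (the rule names, the `harm_risk` threshold, the dictionary-based action format) is a hypothetical sketch of one possible design, not an established approach:

```python
# A minimal sketch: behavior rules as an ordered priority list.
# Rules earlier in the list override later ones, so functional goals
# only matter once safety and consent rules are satisfied.
# All names and thresholds here are illustrative assumptions.

def make_rule(name, permits):
    return {"name": name, "permits": permits}

RULES = [
    make_rule("do-no-harm",      lambda action: action.get("harm_risk", 0) < 0.1),
    make_rule("respect-opt-out", lambda action: not action.get("subject_opted_out", False)),
    make_rule("complete-task",   lambda action: True),  # functional goals come last
]

def permitted(action):
    """An action is allowed only if every higher-priority rule permits it."""
    for rule in RULES:
        if not rule["permits"](action):
            return False, rule["name"]
    return True, None

ok, violated = permitted({"harm_risk": 0.3})
print(ok, violated)  # False 'do-no-harm' -- the safety rule rejects the action
```

Of course, this only pushes the hard question down a level: someone still has to decide the ordering of the rules, which is exactly the "who decides" problem above.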


This article does an excellent job of prompting thought in general

I do have to note, in particular, that the attacks on the SF animal shelter’s robot are part of a larger, very painful situation, which the article does suggest, to their credit. Coming from a local perspective, I mostly see the robot in SF as a victim of generalized anger that is looking for ways to express itself. There has been a wave of actual violence against tech shuttle buses, for example, but no one is suggesting that bus technology is a problem.

The two sides are:

  1. As the article states, many organizations like the animal shelter are concerned, because people are becoming wary of even visiting SF, due to the oppressive amounts of human misery in the streets: SF tourist industry struggles to explain street misery to horrified visitors

  2. Enormous amounts of anger in SF against the gentrifying influence of tech, which is often seen as being at fault for the human misery.

Lest anyone think I’m unsympathetic to the homeless, I’m definitely not. The situation in SF is an ongoing and worsening human tragedy. But the anger toward this robot was really disproportionate to its behavior and use.

I’m pretty fascinated by humanity’s reaction to change - and how we can have such different reactions. People in Sweden, per the New York Times, are accepting of robots, while it seems more people in the US fear them. Not sure what that says about the macro environment in the two countries…

I also find myself wondering: did lots of horse and carriage tenders vandalize Henry Ford’s inventions? Did we try to tear down early electric lamps?

I do think one of the important distinctions about Knightscope in particular is that the robot is foisted upon all of the public without large-scale public opt-in, whereas robots in the office or home are deployed in private domains, where far fewer people need to opt in.


the robot is foisted upon all of the public without large-scale public opt-in, whereas robots in the office or home are deployed in private domains, where far fewer people need to opt in.

That’s a powerful distinction and a good one to make conscious. When robots interact with humans, it makes sense that we humans want the ability to choose the interaction. We don’t invite people who we don’t know into our homes – we select our guests.


This article presents the prima facie duties (seven ethical considerations proposed by the late Scottish philosopher David Ross) as a possible basis for robot ethics.

Quote: “The prima facie duties, according to Ross, are fidelity, reparation, gratitude, non-maleficence (doing no harm, or the least possible harm to obtain a good), justice, beneficence and self-improvement. But these duties often conflict, and when that happens, there’s no established set of rules for deciding which value trumps another.”

They go on to question whether ethics is really what humans desire for robots. They claim that maybe all we really care about is that robots behave in a way that fits human expectations and values. Perhaps being ethical is overkill? Is it sufficient if robots align with our values and comply with our societal norms?
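As a thought experiment, Ross's duties could be encoded as a weighted scoring function over candidate actions. The duty names come from the quote above, but the weights, scores, and security-robot scenario below are entirely hypothetical: as the quote notes, Ross offered no fixed rule for resolving conflicts, so any concrete numbers are a design choice rather than a fact about ethics.

```python
from dataclasses import dataclass

# Ross's seven prima facie duties, as listed in the quoted article.
DUTIES = ["fidelity", "reparation", "gratitude", "non-maleficence",
          "justice", "beneficence", "self-improvement"]

@dataclass
class Action:
    name: str
    # Score per duty in [-1, 1]: +1 strongly upholds it, -1 strongly violates it.
    scores: dict

def rank_actions(actions, weights):
    """Rank candidate actions by weighted duty scores.

    `weights` encodes one *possible* resolution of duty conflicts;
    different weightings yield different "ethical" robots.
    """
    def total(a):
        return sum(weights.get(d, 0.0) * a.scores.get(d, 0.0) for d in DUTIES)
    return sorted(actions, key=total, reverse=True)

# Hypothetical example: weight non-maleficence highest, safety-first style.
weights = {"non-maleficence": 3.0, "justice": 2.0, "beneficence": 1.0}
options = [
    Action("block path to stop a shoplifter", {"justice": 0.8, "non-maleficence": -0.5}),
    Action("alert a human guard instead",     {"justice": 0.5, "non-maleficence": 0.2}),
]
print(rank_actions(options, weights)[0].name)  # alert a human guard instead
```

Under these weights the lower-risk option wins, which arguably illustrates the article's point: the "ethics" lives entirely in the weights, i.e., in whether the robot's behavior fits human expectations and values.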


In other posts on this forum, we’ve discussed the value of robot life, and we’ve learned that some people place the value of robot life below the value of animal life. Some people who are replaced by robots at work will hate them. Others will fear them, seeing them as harbingers of the singularity. And still others will rejoice in their arrival, as robots take over the mundane, repetitive tasks that humans dislike doing.

Given the variance in current human attitudes toward robots, robots placed in the world will face very unequal treatment amid this diversity of opinion.

I enjoyed reading this article from MIT, which touches upon how to design robots to handle adversity. It mentions how small changes in the design of behaviors for the robot can impact how the robot is perceived and treated by humans.

Designing robots to be polite and to follow social norms, and considering human-robot interaction throughout the design process, go a long way toward acceptance.


To bring these topics full circle, I’d like to add that in addition to being polite, following norms, and being considerate in their interactions with humans, robots should also be kind and gentle with animals.

My robot might be a lot more brilliant and generally awesome than my chicken, but one is vastly easier to reconstruct when damaged than the other. So, I need my robot to respect my dopey bird, despite the latter’s limitations. :robot: :chicken:
