
Misty Community Forum

Should a person say "please" and be respectful in other ways when giving a robot a command?

I’m sure many of you have heard NPR announcers instructing listeners to command Alexa to play a particular NPR station.
Similarly, Anki’s Vector orients to you when you say “Hey Vector”. Once it’s ready, you give Vector an appropriate command.

In neither of these cases is there any need or expectation to be polite, or to see the entity being addressed as anything but a thing to be commanded.

After a brief search, I found that Amazon has begun to address this issue with Alexa, with the focus on kids rather than adults. Here is the link. Note that I don’t agree with everything said in this analysis, but I very much appreciate its thoughtfulness and how fully it addresses the issue for this age group.

I’m also curious whether Misty Robotics staff have thought about this issue, and if so, what possibilities they are considering and whether an encouragement of politeness would apply to all users regardless of age.

And since Misty is designed to have a personality, should one component of her personality be a differential reaction at least some of the time to whether she is being treated politely?

I’ll bet @Dan has thought about this, and maybe @michael has, too.

For my part, I think commanding our assistants (digital or otherwise) has lasting effects on how we think about them. I have no empirical data on this, but my bet is that the kinder we are with robots and AI, and the more empathy* we show them, the more forgiving we are with them (and maybe with ourselves!).

*This seems like a great topic for debate: can one (or should one) feel empathy for things that cannot themselves feel?


I’ll share my experience with Vector. I find that I relate very differently to Vector now that I’ve started saying “Eeh Vector” rather than “Hey Vector”, along with adding “please” or some other appropriate phrase to my “requests”. Doing this makes our interactions feel more like transactions than commands that I issue.
It may help that I’m choosing to do this, and I wouldn’t mind some reinforcement from Vector :grinning:

I don’t have an Alexa, but when I give voice commands to Siri (or when I use “Okay, Google”), the feeling is still more like interacting with an application than having a conversation or requesting something from a person. I know the syntax and diction to use to achieve the results I want, and a more conversational approach wouldn’t be effective. This is especially true with Siri, who is exceptionally bad at even hearing what I’m saying, let alone interpreting my language and doing something meaningful with it.

When we get to a point with language processing tech where we can talk to robots/voice assistants the same way we talk to humans (think chaining commands together, speaking in run-on sentences, accidentally leaving out words or using the wrong word, talking in shorthand, etc.), then it makes more sense to me to start thinking about politeness in tone and word choice. Until then, my mental model of these interactions is that I’m using my voice as a tool to accomplish a goal. I want to learn how to use the systems effectively, and I’m not convinced that saying “please” and “thank you” is a part of that (yet).

BUT all of the above is in reference to voice assistants that weren’t really designed to have unique personalities. Where Siri and Alexa are tools, Vector/Misty are (or can be) robot companions. If robot/AI personalities display convincing emotional responses to human language and behavior (does the robot seem happy when I use kind words, and sad when I don’t?), then I think it is natural for us to speak to them with the same consideration of tone and word choice that we use when we give commands or make requests of other people.


I’m at ICMI (International Conference on Multimodal Interaction) this week, and just yesterday this paper was presented:

It’s on what happens when a digital voice assistant rebukes people for impolite behavior and requires questions/commands to be phrased politely.

During Q&A, I asked the presenter a question based on @BoulderAl’s observation above: if the wake word used to activate the system has politeness baked in, would users be more polite, i.e., not need rebuking? He agreed that it’s a reasonable hypothesis, but there’s been no research to back it up. Yet.