
Misty Community Forum

How You Speak to a Robot Can Affect Your User Experience

This recent article explores the idea that the actual words you use to express yourself (your desires, commands, etc.) to a robot will impact your overall experience with it, as well as the robot's performance, output, and the results of your interaction. To phrase it differently: if you have a bad experience with a robot, it is (at least partially) your own fault.

https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-not-to-order-water-from-a-robot-waiter

To improve your experience with robots, the article suggests keeping the following in mind:

  • phrase direct commands plainly; don't make them overly polite or frame them as (optional) requests
  • don't expect a robot to infer your intentions from imprecise speech, because robots aren't as good as humans at extrapolation (yet)
  • be literal and precise when you speak to a robot if you really want it to understand you and do what you want
  • simple words like "please" and "can" can completely alter how the robot perceives/interprets the question you are asking or the statement you are making

For robot designers, these are important lessons to remember when designing how our robots interact with humans. Our robots should be able to detect imprecise speech (high uncertainty or a lack of clarity) and ask clarifying questions when necessary. More broadly, their interaction with humans should be modeled on how humans interact with each other, rather than on a command-response model in which discrete phrases trigger robot actions.
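As a rough illustration of that design idea, here is a minimal sketch of a confidence-based clarification loop: when the speech/NLU pipeline returns a low-confidence interpretation, the robot asks a follow-up question instead of acting. All names here (`Interpretation`, `respond`, the threshold value) are hypothetical, not part of any real robot SDK.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    """A hypothetical result from a speech/NLU pipeline."""
    intent: str        # e.g. "bring_water"
    confidence: float  # 0.0 - 1.0

# Below this threshold, the robot asks for clarification instead of acting.
CONFIDENCE_THRESHOLD = 0.7

def respond(interp: Interpretation) -> str:
    """Act on a confident interpretation; otherwise ask a clarifying question."""
    if interp.confidence >= CONFIDENCE_THRESHOLD:
        return f"Executing: {interp.intent}"
    # Imprecise or overly polite phrasing ("could you maybe...") tends to
    # lower NLU confidence; asking beats guessing wrong.
    return f"Did you want me to '{interp.intent}'? Please confirm."

print(respond(Interpretation("bring_water", 0.92)))
print(respond(Interpretation("bring_water", 0.41)))
```

The point of the sketch is the branch, not the numbers: a real system would tune the threshold empirically and phrase the clarifying question from the ambiguous part of the utterance.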

When you speak to the personal assistants or robots in your life (Alexa, Google Home, Siri, Cortana, Jibo, etc.), do you speak to them differently than you do to humans? If so, why do you think you modulate your language, tone, voice, or whatever when speaking to robots/assistants/devices? Thinking back to your first interactions with these devices, did you modulate yourself from the very first interaction, or was it a progression over time? What prompted you to start speaking differently to your devices than you do to other humans? Do you change the actual words you use when speaking to a device (as compared to the word choices you make when speaking with humans)? Is the change conscious or subconscious (automatic)?


I saw this come across. Nice work on resurfacing with additional context.

I used to speak in a more clipped style to Alexa but now speak more or less normally. Well, except that I am pretty rude to Alexa. No pleases or thank yous. And I ridicule her for not being able to Google stuff. I should probably get a Google Home.


At first I spoke to Siri in a really constrained way, like I was typing moves into Zork or some other text-based game.

As I became more confident she would get what I was saying, I began speaking more naturally. Now, when I discover she doesn’t parse what I’m saying, I back up a little and try simpler or different variations.

I do forget that Siri's capabilities evolve. So if there's a case where I'm speaking somewhat artificially, I often forget to try a more natural request again until months later, when I hear my husband or child successfully doing so.


Oops! I wasn’t aware of the preexisting conversation around this article that was posted under the title “When we ask robots to do things, should they do what we say, or what we mean?” :no_mouth:

You can check out more discussion on this article under that related post!
