
Misty Community Forum

Friendly Robots

A few interesting articles came out today touching on the importance of the social aspect of human-robot interaction:

An interview with Anca Dragan of UC Berkeley: Keeping Robots Friendly: Meet The Woman Teaching AI About Human Values, in which she talks about how the interaction between a robot learner and a human teacher/designer can be used to improve the robot’s objective. Perhaps one way to avoid the paperclip apocalypse that @Ben seems so worried about.

A discussion of some work from the University of New South Wales: Social robot set to revolutionise workplace experience, with a telling quote: “Our main goal is to make a companion for humans… I think one of the differentiators will be understanding the person’s emotional requirements and acting not in a physical way, but in a subtle way that facilitates positive arousal… we want to create a heartful robot,” Dr. Thapliya said.

And an interview with Guy Hoffman of Cornell: Building a Robotic Colleague With Personality, in which he points out that “Robots have the capacity to affect our behavior emotionally in that they’re using a physical body, they’re sharing space with us, they’re moving in our surroundings.”

All touch on the need for robots to be more than just functional automata, doing exactly what they’re told. Instead, the robots of the future need to think about and interact with humans as humans, and relate to them accordingly.


What is the paperclip apocalypse?

Something @Ben mentioned in this post: “I am maybe a little too excited for this game”

It’s this game:
http://www.decisionproblem.com/paperclips/

Based on this idea: the “paperclip maximizer” thought experiment, in which an AI given the harmless-sounding objective of making paperclips pursues it so single-mindedly that it converts everything else (including us) into paperclips.


And something I wrote about here:

I have clicked buttons in the paperclip game for more hours than I care to acknowledge.


This harks back to the “Empowerment As Replacement for the Three Laws of Robotics” paper we read some time ago (published in Frontiers in Robotics and AI).

I liked that paper’s approach of producing a “friendly” robot (though friendliness was not the explicit goal) by giving the robot a vested interest in empowering (i.e., increasing the options available to) both itself and the human. That seemed less likely to produce unintended consequences than more rigid rules.
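For anyone who hasn’t read the paper: as I understand the formalism (my paraphrase, so check the paper for the exact notation), empowerment is the channel capacity from an agent’s next n actions to the sensor state it observes n steps later, i.e. how much the agent’s choices can influence what it subsequently perceives:

$$
\mathfrak{E}(s_t) = \max_{p(a_t^n)} I\left(A_t^n ;\, S_{t+n} \mid s_t\right)
$$

The “friendly” twist is to have the robot act so as to keep both its own empowerment and the human’s empowerment high, rather than to obey hand-written rules.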


To me, the empowerment paper by Salge & Polani, like Asimov’s ‘Three Laws of Robotics,’ is more about imbuing a robot with a set of ethics than about friendliness. You can be friendly yet lack a set of ethics, or have a set of ethics yet behave in an unfriendly manner; the two are not necessarily correlated. To phrase it differently, just because someone isn’t trying to harm or kill you doesn’t mean that person is your friend. Similarly, someone can behave in a friendly manner toward you and still be working toward your demise.


A follow-up article on the UNSW work (http://www.tahawultech.com/lifestyle/robots-team-value-tired-toys/) raises an interesting point about the lack of true long-term studies in human-robot interaction:

“There are very few social robots that have ever been implemented within a working environment or any environment really so we don’t have long term research data about what happens”

I’m super excited about the possibility of our platform being a robust, off-the-shelf, affordable robot that gets out and interacts with hordes of people over long time periods.

I’ll also share this quotation on the importance of variability (but not complete randomness):

“You want an element of surprise. You don’t want this kind of totally subservient, happy robot. The question is: when do you introduce the behavioural element of surprise? You don’t want it to be distracting; you don’t want it to be something that looks like a fault or isn’t appropriate.”
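To make that concrete, here’s a toy sketch of one way to get variability without complete randomness (entirely my own invention, not from the article; the behavior names are hypothetical): weight a behavior pool so the robot is mostly predictable, with rare low-probability surprises.

```python
import random

# Hypothetical behavior pool for greeting a customer. The weights keep the
# robot mostly predictable; the rare entries supply the "element of surprise"
# without making it look like a fault.
GREETING_BEHAVIORS = [
    ("wave",         0.45),  # expected, pleasant default
    ("nod",          0.35),
    ("happy_chirp",  0.15),
    ("little_dance", 0.04),  # rare: delightful, not distracting
    ("peekaboo",     0.01),  # very rare: genuine surprise
]

def pick_greeting(rng: random.Random = random) -> str:
    """Sample one greeting behavior, weighted so surprises stay infrequent."""
    names, weights = zip(*GREETING_BEHAVIORS)
    return rng.choices(names, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Over ten greetings the robot is mostly consistent, occasionally novel.
    print([pick_greeting() for _ in range(10)])
```

A real implementation would also gate the surprises by context (don’t startle someone mid-task), which is exactly the “when do you introduce it” question in the quote.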

This example of a robot arm serving drinks as a barista with personality highlights some of the challenges in developing a robot that interacts with humans.

  • What does personality look like? “But they all have names, because they have personalities — well, as much of a personality as a mechanical arm can have. They are programmed to be efficient first, and friendly second; to busily fulfill orders and then wave to customers picking up drinks. Often, says Blum, the customers will wave back.”

  • Which gestures elicit further human interaction? “We have these different sort of personality traits that invite people to hang out with the robot a little bit longer and see what it has to offer,” said Blum. Its gestures “invite this interaction, which creates warmth and a more human interaction.”

  • How do we lengthen engagement time? “But the people who design them also want the humans to like them, and to like them, they have to get people to spend more time with them.”

  • How do we maintain/prolong delight and novelty? “There’s a very strong novelty effect in human-robot interaction, which is that people are most uniformly delighted by robots when they first encounter them,” said Henny Admoni, an assistant professor who focuses on human-robot interaction at the Robotics Institute at Carnegie Mellon University. “How quickly that delight drops off kind of depends on the robot and what they’re doing.”

  • How do we manage expectations? “The more we humanize them [robots], the more people think they can do.”


+1000000 on managing expectations. The article goes on to say: “If a robot has eyes, people think it can see. If a robot waves, people think it can sense your presence. And that can lead to elevated customer expectations, something that can hurt a business in the service industry.” There are many behaviors/actions/etc. that a robot could perform that might come off as ‘cute’ or ‘endearing’ but imply a greater capability than the robot actually has, leading to disappointment later.
