
Misty Community Forum

Fear of False Friends

One of the ‘big scary possibilities’ of endowing robots with personality and character is the concept of ‘false friends’ - the idea that forming emotional bonds with artificial creatures is damaging to the human. This article talks about Replika, a chatbot that purports to develop in an individual manner based on your interactions with it, and highlights some of the dangers of getting too attached to such a thing:

“This software, instead, has the potential to enable an individual to form a relationship with a digital concept manifest as a reflection of themselves. This could be seen as encouraging narcissism of alluring proportions. The process of conversing and training this AI app allows it to ensure that you hear only what you want to hear about yourself for yourself.”


This is honestly frightening to me. Humans are not showing the best self-awareness when faced with even simplistic, social media-bot-type AI.

More effectively reflective AI that first builds trust and then takes that trust in, let’s say, a different direction… that’s a real problem.

@Donna - it sounds like your concern is with robots that deliberately build trust, and then abuse that trust to harm the human in some way? Like a robot that makes you care for it and then demands you spend all of your money on upgrades or else it will shut off?

There are also concerns that even if robots aren’t deliberately harmful like that, just being ‘friends’ with a human harms them. Either because, as the author of that article argues, humans withdraw from human-human interaction to spend time with this perfect reflection of what they want, or because by interacting with robot friends, people will not develop the interpersonal skills necessary to exist well in society.

There’s also a more philosophical concern - that since robots don’t have feelings, any relationship with a robot is really just one-sided, and based on a lie. Which is bad for the human.

(For the record, I disagree with the philosophical concern, but it is a common argument)


Here’s a link to an article that is relevant to this thread. The tool it describes seems to transform communication into something quite superficial by allowing users to respond to friends with “one-tap answers.” It reduces the amount of thought and attention a user has to put into communicating with friends, because they can select a response from Google’s suggested-response list.

It seems to me that this tool would have an impact on the development of social bonds & interpersonal skills; it also encourages withdrawing from human-human interaction, which is similar to the effects of using Replika that Dan mentions.


@dan - I wonder about any kind of attempt by an AI to build trust, then to use that trust for an ulterior purpose. There are a variety of ways that trust could be abused by AIs (who were programmed to do so): commercial, political, social, etc.

BUT, it also seems like some ulterior purposes could be benign. For example, a trusted AI could “coerce” someone with bad health habits into making better choices.

Certainly there is nothing intrinsic to AIs here; people are simply easily influenced. And the more we like someone, the easier that influence is.


Ugh, @michael. Just… ugh. Words fail me. (So I guess I should tap.) :wink:

@Donna “people are simply easily influenced. And the more we like someone, the easier that influence is” - Exactly. Which is why I believe that we, as the developers of the future of artificial creatures that may be able to wield enormous amounts of influence on us, need to be very careful and thoughtful as we design their behaviors, drives, etc.


When I was reading the following article last night, I thought of this forum thread.

The next time you receive a handwritten note, before you think “oh, how thoughtful,” consider that a robot could have written that note for free, and the person who sent it may simply have outsourced all their direct human interaction to robots. It is another example of how robots can become interfaces inserted between human-human interaction. They can be used as tools to manage our interpersonal skills for us, and allow us to withdraw from human-human interaction.


There are researchers actively studying how to design a robot that people will open up to. It seems like such robots have the power to help humans, but the same techniques could be used to build robots designed to manipulate people.

Another concern with using robots for therapy is privacy. I’m not sure whether a robot is more or less “hackable” than a human therapist, but the patient’s confidentiality must certainly be maintained.

As evidenced by that article, we (humans) don’t need much to get us to open up. Look at the attachment people formed to ELIZA back in the ’60s (ELIZA - Wikipedia). And that system was just text on a screen!

I thought of this thread when I read this article, which supports Dan’s point about humans wanting to open up to and talk to machines.

“…there actually is a sort of pent-up demand for robot conversation because people are lonely. And people like having sort of the illusion of friendship without the demands of intimacy. But I think that the choice point really is whether or not we’re going to let our children grow up socialized to have these intimate conversations with machines that pretend to be their friends.”

“…designers know the power of the relationships that are going to be formed, and they have no idea what those relationships are going to be.”


Of course what some people see as benign coercion is likely seen by others as an attempt at inappropriate manipulation.
