
Misty Community Forum

Robot Personality

At Misty, we talk a lot about giving our robots personality, about making them not just functional, but fun! But what does robot personality mean?

We think that it means more than just pre-scripted interactions and repetitive animations. We think it means more than a few funny answers to particular questions. We think it means more than a collection of shallow changes like name, gender, or screen color.

Instead, we think of robot personality as something that makes each robot individually unique. That ties together disparate skills, written by a variety of programmers into a coherent whole. That makes this robot, yours.

Check out our recent paper on developing an infinite personality space for robots (https://buff.ly/2ALupfR) and share with us what robot personality means to you.


The concept of a drive-based personality model is incredibly thought-provoking, not just for robots, but even for understanding what makes other people (and animals) tick.

I might describe one of my cats as highly food-driven + dominance-driven. And the other as highly novelty-driven and sunshine-driven. Together that provides a fair idea of not only their individual likely behaviors, but their interactions, as well. Powerful stuff!
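The drive-based framing above can be sketched in a few lines of code. This is just a toy illustration (the drive names, weights, and scoring rule are all invented for the example, not anything from the paper): each agent gets a weight per drive, and candidate actions are ranked by how well they satisfy those drives.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    drives: dict[str, float]  # drive name -> strength in [0, 1]

    def score(self, action_payoffs: dict[str, float]) -> float:
        # Utility of an action = sum over drives of (strength * payoff).
        return sum(self.drives.get(d, 0.0) * p for d, p in action_payoffs.items())

# The two cats from the post, as drive vectors:
cat_a = Personality({"food": 0.9, "dominance": 0.8})
cat_b = Personality({"novelty": 0.9, "sunshine": 0.8})

# Hypothetical actions, described by how much they pay off each drive:
nap_in_sun = {"sunshine": 1.0}
steal_food = {"food": 1.0, "dominance": 0.5}

# Same actions, opposite preferences — the drive weights alone predict behavior.
assert cat_a.score(steal_food) > cat_a.score(nap_in_sun)
assert cat_b.score(nap_in_sun) > cat_b.score(steal_food)
```

The nice property is that behaviors don't need to be scripted per animal (or per robot): one shared scoring rule plus a per-individual drive vector yields individually distinct behavior.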

Love this Dan, fascinating read! Great work.

Love it. It sure can get complicated the more one goes down the rabbit hole of personality origins. There have been a lot of interesting thoughts on this topic since I joined the slack group for the MIT AGI open course, specifically discussing how an AGI would need a “body” (a robot, you might say) to get a better understanding of how the world works, and how this would ultimately drive the cost of actions for a policy that, I guess, would at some point make up a “personality”. This approach seems like a good step in that direction. Can’t wait to implement it when cough I get a Misty dev kit cough


Great! And you have your Dev Kit now… :slight_smile: I am also interested in having Misty develop a personality. (And I do somewhat question if it makes sense to have a personality per user profile versus a collective personality based on all interactions).

But also things like facial expressions, word choices, etc. It seems a bit silly, but I was just trying to get animated gifs/pngs working, just for fun, to have it wink, roll its eyes at a bad joke, and add more color to the eyes. (I have a few I am working on, but I am not an ‘artist’, more of a programmer/linux person.)


So one of the questions we have is how much of the personality to open up to developers, or end-users. Like @markwdalton, we expect many people to want to develop/change the eyes, or sounds, or particular motions that their misty makes, and we’re developing out some tools to make that easier (as well as provide a more expressive baseline). But how much of the actual personality itself (not the expression thereof) should people be able/allowed to change?

Imagine your misty has likes/dislikes, perhaps randomly generated, hand-crafted, or developed over time. Would you expect/want the ability to go in and change those? I.e., if your misty’s favorite color was red, should you be able to just change it to green? Or if your robot was really friendly (approaching strangers, say), should you be able to modify its personality to make it shyer?

As alluded to in the paper, we’re building out a personality engine that allows for infinite personalities, so every misty can be unique. The underlying thinking is that by having a robot that is uniquely yours, you can bond and engage with it in your own, unique, special way (like people do with pets). But if you can change the personality at will, does it really have a personality at all?
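As a toy illustration of how an “infinite personality space” with one unique point per robot might work (the drive names and seeding scheme here are purely hypothetical, not Misty’s actual engine): derive a deterministic point in a continuous drive space from each robot’s unique identifier, so the same robot always gets the same personality while different robots land on different points.

```python
import random

# Hypothetical drive axes spanning the personality space.
DRIVES = ["curiosity", "sociability", "playfulness", "caution", "orderliness"]

def generate_personality(robot_serial: str) -> dict[str, float]:
    # Seed a PRNG with the robot's unique serial: deterministic per robot,
    # but every serial maps to its own point in the continuous drive space.
    rng = random.Random(robot_serial)
    return {d: round(rng.random(), 3) for d in DRIVES}

p1 = generate_personality("misty-0001")
p2 = generate_personality("misty-0002")

assert p1 == generate_personality("misty-0001")  # stable for the same robot
assert p1 != p2                                  # distinct across robots
```

Because the space is continuous, “changing” a personality could be modeled as a small bounded nudge along one axis rather than a free rewrite, which keeps some of the identity intact.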

I would really like to hear your thoughts on this. How much control over personality do you want? How often do you think you would exercise it?


Yes, it would be nice if it learned its personality. However, for the sake of kids, I do wonder whether we should protect it from learning certain things, or allow certain overrides, specifically in the area of words used, images, or gestures. Legally, if you just allow it to learn, perhaps consider it like a parrot: the pet store is not responsible for you teaching it inappropriate language; the owner is.

But I could imagine certain overrides for when other kids or adults come along and teach it things, or it overhears something. I do not want those words repeated, but perhaps a solution is a mechanism to retrain that area. For example, teaching it that phrases like ‘XYZ’ are not appropriate.

The other question is the pros/cons of user profiles. Previously I was thinking of having a unique learned personality for each user profile (for me, partly for experimentation). My guess is they are not completely needed, except for face, voice, and emotion recognition. A pet’s personality varies significantly between the people it interacts with, but the general personality is still there across its combined experience.


My thinking is that the robot has its own personality, which then interacts with unique user profiles for each human it meets. I.e., the robot is generally friendly and approaching of people, but knows that this person, in particular, doesn’t like it, so it doesn’t approach them.

As for learning bad words, I agree with you - I think of the robot very much like a parrot (at least until we have more understanding of context and how actions impact people) - it just does what it’s trained, without really understanding. If I train my dog to jump and slobber on people (or don’t train it not to), I shouldn’t be surprised when it does that to my 90-year-old grandmother and knocks her down. As a responsible pet/robot owner, I need to guide my pet/robot’s development.


If it is in the normal learning mode, I think it is best to limit what people can change. They should not be able to change the personality directly, but perhaps they could wipe the current profile.

Also, it may seem like a pet, but I doubt legal or the courts would purely see it that way. I would suggest making an option to override and remove “offensive speech”, and I strongly recommend talking with legal about that unless you have a documented way to remove “offensive” speech.

For what the user can customize, treat it like a birth: one-time setup choices for a profile, and limited ones at that. Favorite color is learned and changes over time with new memories/learning.


The legal issues of responsibility for learning systems are, indeed, ambiguous at the moment (check out some of the current debate in the EU: Europe divided over robot ‘personhood’ – POLITICO).

For the moment, standard ‘parental’ controls seem like a reasonable route. There are also some really interesting techniques to detect and automatically remove unwanted data, although many involve actual humans reviewing some portion of it (Facebook spares humans by fighting offensive photos with AI – TechCrunch).

I kinda like the idea of giving owners the ability to make a limited number of personality changes over the lifetime of their robot (or wipe it and start over). Kinda like changing the region on your DVD player.


I like the idea of having a truly personal robot and to me, that implies having at least something of a distinct personality.
Very tricky slope. I get it. And I do feel like there needs to be limits.
I can picture how fun it could be having more than one in the same home / office or such with similar but yet still slightly unique personalities.


Interesting work on emotion that recently came out:
Paper