
Q. User Experience Researchers/Designers at Misty?


#1

In general, are there User Experience Researchers/Designers at Misty? What are some of the things you've looked into? (Or would want to look into?)

Someone to design and evaluate the usability of the companion app interface? Or to research what emotions humans can read from those expressive eyes? It can be both fun and informative! I asked my coworkers for their interpretations of some of the faces and got some fun responses, especially for the groggy one! Anyone else have similar thoughts or experiences?

I’m a UX researcher/designer at heart so I had to ask…


#2

Hi Wendy, we have a product team with industrial designers and prototype engineers who design and iterate on how Misty looks, and a human-computer interaction team led by HCI engineers and researchers who develop skills for Misty so she can interact with humans in a lively way. Feel free to post the feedback you and your coworkers have here; we would like to see how people perceive Misty's ability to interact emotionally and how we could improve her :)


#3

That’s good to hear.
I can dig into our Slack conversations more, but few people recognized that "happy.png" was meant to look happy, and most were unclear about the groggy face's purpose. Of course, this feedback is based on eyes seen out of context, and even if the eyes are initially confusing, they could still be easy to learn. A sketch of how I'd run that kind of check in context is below.
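
For anyone who wants to gather this kind of feedback in context, here's a minimal sketch of a labeling session that cycles the faces on the robot itself. It assumes Misty's REST endpoint for displaying an image (`POST /api/images/display` with a `FileName` parameter), plus placeholder filenames and a placeholder IP address; adjust all of those for your own robot and firmware.

```python
# Sketch: cycle Misty's expression images and record what a viewer
# thinks each one conveys, so the eyes are judged on the robot
# rather than as standalone image files.
import requests

ROBOT_IP = "10.0.0.42"                       # hypothetical robot address
FACES = ["happy.png", "groggy.png",          # assumed filenames; use
         "sad.png", "surprised.png"]         # whatever is on your robot

def show_face(file_name):
    """Ask the robot to display one of its stored expression images."""
    # Assumed endpoint; check your firmware's API docs.
    resp = requests.post(
        f"http://{ROBOT_IP}/api/images/display",
        json={"FileName": file_name, "Alpha": 1},
        timeout=5,
    )
    resp.raise_for_status()

def run_labeling_session():
    """Show each face and collect a free-text interpretation of it."""
    labels = {}
    for face in FACES:
        show_face(face)
        # Deliberately don't show the filename, so names like
        # "happy.png" can't bias the viewer's answer.
        labels[face] = input("What emotion does this face convey? ")
    return labels

if __name__ == "__main__":
    for face, label in run_labeling_session().items():
        print(f"{face}: {label}")
```

Hiding the filename during the prompt keeps labels from being biased by names like "happy.png", which is the same out-of-context problem in miniature.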


#4

Hi Wendy,

Having universally understood 'expressions' is actually a fascinating area of research. The nuances between 'happy', 'elated', 'content', and other broadly positive emotions are pretty fluid across users without context. I would highly encourage anyone/everyone with an interest to dig into expressivity in robots and how it is interpreted by humans!

A few of us attended the HRI (Human-Robot Interaction) Conference and there was an interesting paper that explored some of what you’re talking about:

“Multimodal Expression of Artificial Emotion in Social Robots Using Color, Motion and Sound.”

Hope that's helpful 🙂


#5

Hey Chris. Exactly. I'm glad you brought up academic resources.

I was at HRI too, and the same page you linked includes our lab's work, "Characterizing the Design Space of Rendered Robot Faces," which is slightly more relevant to this topic. Alisa systematically examined how screen-rendered robot faces affect people's perceptions of a robot's characteristics. The fact that it was one of the best-paper nominees, along with the existence of Diana's multimodal study, shows that the HRI community is actively considering all things related to expression.
All the more reason why I wanted to know how the Misty team developed and designed these faces.