This differs (slightly) from sentiment analysis (Sentiment analysis - Wikipedia), where the primary input is language, written text or transcribed speech, rather than visual (camera) data.
Regardless of the mechanism by which human mood, emotion, or sentiment is derived, I'm wondering: would it be helpful for your Misty robot to be equipped with such technology?
If you had the ability to query your robot, for example through an SDK or API, to return the current emotion of any faces in view and/or the mood or sentiment of any voices currently speaking, would you use it? How would you incorporate human emotion, mood, or sentiment into the skills you build for your Misty robot? Can you think of any skills that you simply couldn't build if your Misty robot lacked emotion-sensing capability?
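To make the idea concrete, here's a rough sketch of what querying such a capability might look like. To be clear, this is purely illustrative: the `/api/perception/emotions` endpoint and its response shape are invented for this post; only the LED endpoint (`POST /api/led`) is, as far as I know, part of Misty's actual REST API.

```python
import requests

ROBOT_IP = "10.0.0.42"  # replace with your robot's local IP

def get_face_emotions(robot_ip: str) -> list:
    """Query a hypothetical emotion endpoint: one record per face in view."""
    # NOTE: /api/perception/emotions is invented for this sketch; no such
    # endpoint exists in Misty's REST API today.
    resp = requests.get(f"http://{robot_ip}/api/perception/emotions", timeout=5)
    resp.raise_for_status()
    # Assumed response shape:
    # [{"faceId": "Alice", "emotion": "happy", "confidence": 0.87}, ...]
    return resp.json()

def react_to_strongest_emotion(robot_ip: str) -> None:
    """Tint Misty's chest LED to match the most confident emotion reading."""
    faces = get_face_emotions(robot_ip)
    if not faces:
        return
    best = max(faces, key=lambda f: f["confidence"])
    colors = {"happy": (0, 255, 0), "sad": (0, 0, 255), "angry": (255, 0, 0)}
    red, green, blue = colors.get(best["emotion"], (255, 255, 255))
    # POST /api/led is a real Misty II REST endpoint (ChangeLED).
    requests.post(f"http://{robot_ip}/api/led",
                  json={"red": red, "green": green, "blue": blue},
                  timeout=5)

if __name__ == "__main__":
    react_to_strongest_emotion(ROBOT_IP)
```

In practice I'd expect something like this to be event-driven (similar to Misty's existing FaceRecognition events) rather than polled, but polling keeps the sketch short.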
Have you ever used any of these technologies? Do you have a favorite “emotion recognition” API?