Great thoughts, Chris! This is actually a pretty advanced topic. I'm assuming you mean that the nonverbal gestures the robot exhibits carry specific meanings, and that you get meaning misalignment when the robot verbally says one thing but gestures another.
For instance, giving a thumbs up while saying “no” creates dissonance because you are sending two contradictory signals (a yes and a no). The implication is that the gestural meaning needs to be coordinated with the verbal meaning in both time and space: the gesture should be directed at the addressee of the utterance, and its execution should be aligned with the TTS system so the robot doesn’t say “No” and then, five seconds later, lift its arm and give a thumbs up.
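To make the timing point concrete, here’s a minimal sketch of scheduling a gesture stroke to coincide with a specific word. The `WordTiming` structure and the gesture names are hypothetical stand-ins for whatever word-onset timestamps your TTS stack actually exposes:

```python
# Sketch: align a gesture stroke with a TTS word onset.
# WordTiming is a hypothetical structure; real TTS systems expose
# word-level timing in various ways (e.g. SSML marks or callbacks).
from dataclasses import dataclass

@dataclass
class WordTiming:
    word: str
    onset_s: float     # seconds from start of the utterance
    duration_s: float

def schedule_gesture(timings, target_word, gesture, lead_s=0.2):
    """Return (gesture, start_time) so the stroke peaks on the target word."""
    for t in timings:
        if t.word.lower() == target_word.lower():
            # Start slightly before the word so the stroke lands on it.
            return (gesture, max(0.0, t.onset_s - lead_s))
    return None  # word not found: better to skip the gesture than misalign it

timings = [WordTiming("no", 0.1, 0.4), WordTiming("thanks", 0.6, 0.5)]
plan = schedule_gesture(timings, "no", "head_shake")
```

The key design choice is the fallback: if the word can’t be located in the timing stream, the sketch drops the gesture entirely, since a late gesture communicates the wrong thing.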
Nonverbal communication becomes even harder once you start thinking about different morphologies. The Paro robot, for instance, has no hand or thumb, so you’ll have to find a different way to communicate acknowledgement nonverbally. This means that for every novel robot that is created, its body movements need to be explored to see what it can communicate, and its communicated meaning is limited to just what its body is capable of expressing.
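One way to think about this morphology problem is as a per-robot mapping from communicative intent to gesture. The repertoire below is purely illustrative (the robot names and gesture labels are made up), but it shows the structural point: each body only gets the intents it can physically express, and missing entries force a fallback to another channel.

```python
# Hypothetical mapping from communicative intent to robot-specific gestures.
# Each morphology is only given intents its body can physically express.
GESTURE_REPERTOIRE = {
    "humanoid": {"acknowledge": "thumbs_up", "refuse": "head_shake"},
    "paro":     {"acknowledge": "head_nod_and_chirp"},  # no hands, no thumb
}

def gesture_for(robot, intent):
    """Look up a gesture for this body; None means fall back to speech only."""
    return GESTURE_REPERTOIRE.get(robot, {}).get(intent)

gesture_for("paro", "acknowledge")  # a body-appropriate acknowledgement
gesture_for("paro", "refuse")       # None: this morphology can't express it
```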
I have a few recommendations on authors, though not a single book. I like Michael Argyle’s work, Adam Kendon’s work, and a lot of the work in pragmatics is relevant. Two books that have stayed with me are:
Horn, L., & Ward, G. (2008). “The handbook of pragmatics”
Brown, G., & Yule, G. (1983). “Discourse analysis”.
Moving this body of knowledge into robotics is incredibly challenging for the reasons you speak of. We just don’t have the data to learn these gestures and their meanings, so we usually animate them instead. There are, as of yet, no great animation tools for robots, which means we’ve had to author motions by hand or repurpose general-purpose tools like Maya or Blender.
Other challenges are usually unforeseen. You probably want the robot to look at you while it talks to you, and to use its body to refer to things in the environment. All of this is usually done by hand, and if you do it wrong, your robot is probably miscommunicating its intentions.
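Even the “look at you” part hides real work. As a toy illustration of what’s involved, here’s a simplified pan/tilt computation for pointing a head at a 3D target; it ignores neck kinematics, joint limits, and smoothing, all of which a real robot needs:

```python
# Toy gaze sketch: pan/tilt angles (radians) to aim a head at a 3D point.
# Simplified: no neck kinematics, joint limits, or motion smoothing.
import math

def look_at(head_xyz, target_xyz):
    dx = target_xyz[0] - head_xyz[0]
    dy = target_xyz[1] - head_xyz[1]
    dz = target_xyz[2] - head_xyz[2]
    pan = math.atan2(dy, dx)                  # rotation about the vertical axis
    tilt = math.atan2(dz, math.hypot(dx, dy)) # elevation toward the target
    return pan, tilt

# Head at 1.2 m looking at a face 1 m ahead and 0.3 m higher.
pan, tilt = look_at((0.0, 0.0, 1.2), (1.0, 0.0, 1.5))
```

Deictic reference (pointing at an object while mentioning it) is the same computation aimed at the referent instead of the addressee, which is why getting the geometry and the timing right matters so much.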
Just my take on this whole problem. I could go on, but I’m looking forward to hearing from Ross, who is an expert in proxemics, one area of nonverbal communication.