Well, according to Dr. Albert Mehrabian’s studies (http://www.nonverbalgroup.com/2011/08/how-much-of-communication-is-really-nonverbal), communication is 7% verbal, 38% intonation, and 55% non-verbal (facial expressions, gestures, etc.). Of course, there is some disagreement with his work (http://www.spring.org.uk/2007/05/busting-myth-93-of-communication-is.php), though the dispute is probably more over the exact percentages than over the overall assessment that most communication is conveyed by more than words.
As such, I think any time a robot interacts with a human, it’s very beneficial to augment verbal communication with non-verbal cues. For robots that don’t have to interact with humans, it’s probably fine to omit facial features and body language entirely.
However, there can be a downside to adding such non-verbal communication. People already expect robots not to be very expressive, so if they aren’t, there isn’t any loss in perception. But if a robot starts using expressions and body language and gets them wrong, it could communicate something completely unintended to the person it’s trying to communicate with.
For example, if an expressionless robot delivers bad news to a patient in a hospital, there is no loss in the experience, since the person already expected the robot to be cold and “robot-like.” However, if the robot regularly uses expressions and human-like body language and then delivers bad news with the wrong expression (say, a smirk and a posture that conveys disrespect), it would severely impact the experience for the person. This is an extreme (and somewhat contrived) example, but I’m sure there are other similar scenarios.