Like the trolley problem for autonomous cars (see the Wikipedia entry on the trolley problem), robot ethics is a design consideration. Humans can become emotionally invested in robots and place considerable trust in them. Here's an editorial suggesting that conscious choices need to be made in robot design so that human-robot interaction doesn't lead to manipulation and exploitation.
How should rules for robot behavior be implemented, and who decides what those rules should be? How do you design an algorithm that lets a robot weigh its functional tasks against the opinions and feelings of the humans it interacts with?
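To make that last question concrete, here is a minimal sketch (hypothetical names and values, not any standard robotics API) of one way a scheduler could trade off task urgency against an estimate of human discomfort. The point is not the code itself but the `comfort_weight` parameter: choosing its value is exactly the kind of design decision the questions above are asking about, and someone has to decide it.

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    urgency: float           # 0.0 (can wait) .. 1.0 (critical)
    human_discomfort: float  # 0.0 (no objection) .. 1.0 (strong objection)


def task_score(task: Task, comfort_weight: float = 0.5) -> float:
    """Higher score means do sooner. comfort_weight encodes how much a
    human's objection should discount the task's functional urgency."""
    return task.urgency - comfort_weight * task.human_discomfort


tasks = [
    Task("deliver medication", urgency=0.9, human_discomfort=0.1),
    Task("vacuum the living room", urgency=0.3, human_discomfort=0.8),
]

# Order tasks by the combined score; a different comfort_weight can
# reorder them, which is where the ethical judgment enters the design.
for t in sorted(tasks, key=task_score, reverse=True):
    print(f"{t.name}: score={task_score(t):.2f}")
```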