An article posted this month discusses the notion of explicitly vs implicitly ethical robots.
Quote: “explicitly ethical robots: These are robots that select behaviors on the basis of ethical rules - in a sense they can be said to reason about ethics (in our case by evaluating the ethical consequences of several possible actions).”
“Implicit ethical agents: [are] Designed to avoid negative ethical effects. …those that have been designed to avoid harm by, for instance detecting if a human walks in front of them and automatically coming to a stop”
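The contrast between the two designs can be sketched in a few lines of code. This is a hypothetical illustration, not the paper's implementation; the function names, the harm scores, and the `predict_harm` model are all invented for the example.

```python
# Hypothetical sketch of the two robot designs described above.
# All names and numbers here are illustrative assumptions.

def explicit_agent(actions, predict_harm):
    """Explicitly ethical: evaluates the predicted ethical consequences
    of each candidate action, picks the least harmful one, and can
    report *why* it chose as it did."""
    scored = {a: predict_harm(a) for a in actions}
    choice = min(scored, key=scored.get)
    justification = f"chose {choice!r} with predicted harm {scored[choice]}, scores={scored}"
    return choice, justification

def implicit_agent(human_in_path):
    """Implicitly ethical: a hard-coded harm-avoidance rule with no
    ethical reasoning and no ability to justify its decision."""
    return "stop" if human_in_path else "proceed"

# Example: the explicit agent weighs consequences and can explain itself.
choice, why = explicit_agent(
    ["proceed", "stop"],
    predict_harm=lambda a: 0.9 if a == "proceed" else 0.1,
)
print(choice)  # "stop"
print(why)

# The implicit "zombie" reacts but cannot explain.
print(implicit_agent(human_in_path=True))  # "stop"
```

Both agents stop, but only the explicit one can produce a justification string alongside its choice.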
Dieter Vanderelst concludes from a series of experiments (http://www.aies-conference.com/wp-content/papers/main/AIES_2018_paper_98.pdf) that explicitly ethical robots are easy targets for manipulation, hacking, and other unscrupulous behavior; in creating explicitly ethical robots, then, we also create robots that can more easily be turned into unethical ones. He raises the question of whether ease of corruptibility should be accounted for when making ethical design choices for robots.
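Vanderelst's corruption worry can be made concrete with a toy sketch (again hypothetical names, not the paper's code): an agent that selects the least-harmful action according to a harm model becomes maximally harmful the moment an attacker negates that model, because the reasoning machinery itself is unchanged.

```python
# Toy illustration (an assumption of this post, not taken from the
# paper): an explicitly ethical agent is only as good as its harm
# model, so corrupting the model inverts the agent's behavior.

def choose(actions, predict_harm):
    # Select the action with the lowest predicted harm.
    return min(actions, key=predict_harm)

harm = {"stop": 0.1, "proceed": 0.9}.get

# Uncorrupted model: the agent avoids the harmful action.
print(choose(["stop", "proceed"], harm))  # "stop"

# An attacker who negates the harm model turns the same decision
# procedure into a harm-maximiser:
print(choose(["stop", "proceed"], lambda a: -harm(a)))  # "proceed"
```

The implicit robot's hard-coded stop rule offers no such single point of inversion, which is the asymmetry Vanderelst's experiments probe.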
Would you feel safer interacting with an explicitly ethical robot that can evaluate the ethical consequences of each of its behavioral choices and justify why it made a particular choice, or with an implicitly ethical robot “zombie” that is “compelled to make decisions based on the harm-minimisation rules hard-coded into it …and it cannot justify those decisions”? Is it important to you that a robot be able to justify why it chose a particular behavior? And would you feel safer interacting with a robot if you were told that it was less corruptible?