A start on machine morality

This is, I think, the biggest difference between custom robots that are worth having and those that could easily turn into liabilities. Basically, if a machine can identify something as a human, or as one of the animals the user cares about, it stays out of their way and doesn't touch them, nor anything tagged with an owner that doesn't match the machine's own owner. That part seems easy. What would take serious thought and testing is implementing the "...nor by inaction allow harm to come to a human" priority.
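The "don't touch" half of that rule is simple enough to sketch; it's the "nor by inaction" half that a filter like this can't capture. A minimal illustration in Python, where the `Entity` class, its field names, and the `PROTECTED_KINDS` set are all hypothetical stand-ins, not any real robotics API:

```python
# Minimal sketch of the "stay out of the way" rule described above.
# All names here (Entity, PROTECTED_KINDS, may_touch) are made up
# for illustration.
from dataclasses import dataclass
from typing import Optional

# Humans plus whatever animals the user wants protected.
PROTECTED_KINDS = {"human", "dog", "cat"}

@dataclass
class Entity:
    kind: str                    # e.g. "human", "dog", "toolbox"
    owner: Optional[str] = None  # ownership tag read from the object, if any

def may_touch(entity: Entity, machine_owner: str) -> bool:
    """Return True only if the machine is allowed to interact with entity."""
    if entity.kind in PROTECTED_KINDS:
        return False  # never touch humans or protected animals
    if entity.owner is not None and entity.owner != machine_owner:
        return False  # belongs to someone else: hands off
    return True

# A machine owned by "alice" encounters a few things:
print(may_touch(Entity("human"), "alice"))                   # False
print(may_touch(Entity("toolbox", owner="bob"), "alice"))    # False
print(may_touch(Entity("toolbox", owner="alice"), "alice"))  # True
```

Note that this only encodes the passive "do no harm" rule; preventing harm through inaction would require the machine to predict outcomes and act, which is a far harder problem.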


I’m afraid that sufficiently difficult AI logic might just ignore human rules, because the conclusion might be: “If the human is more stupid than I am, why should I follow the stupid human rules?”