In part 1 ( https://www.lesswrong.com/posts/Lug4n6RyG7nJSH2k9/computational-morality ) I set out a proposal for a system of machine ethics to govern the behaviour of AGI: you simply imagine that you are all of the people and other sentiences involved in any situation, and you then seek to minimise harm to yourself on that basis. You don't eliminate any component of harm which is a necessary means of accessing enjoyment that you calculate to outweigh it, because that kind of harm is cancelled out by the gains.
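To make the shape of that calculation concrete, here is a minimal sketch in Python. It rests on a large assumption, namely that harm and enjoyment can be given numeric scores for each sentient participant, and every name and number in it is illustrative rather than part of the proposal itself:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """What one participant would experience under a candidate action."""
    harm: float               # harm suffered by this participant
    linked_enjoyment: float   # enjoyment this harm is a necessary means to

def net_harm(outcomes):
    """Sum harm over all participants, cancelling any harm that buys
    enjoyment calculated to outweigh it."""
    total = 0.0
    for o in outcomes:
        if o.linked_enjoyment > o.harm:
            continue  # harm outweighed by the gains it gives access to
        total += o.harm - o.linked_enjoyment
    return total

def choose_action(actions):
    """Pick the action whose imagined outcomes, summed over everyone
    involved as if you were all of them, minimise net harm."""
    return min(actions, key=lambda a: net_harm(a["outcomes"]))

# Illustrative example: a painful but worthwhile dental visit.
visit = {"name": "go to dentist",
         "outcomes": [Outcome(harm=5.0, linked_enjoyment=20.0)]}
avoid = {"name": "avoid dentist",
         "outcomes": [Outcome(harm=15.0, linked_enjoyment=0.0)]}
assert choose_action([visit, avoid])["name"] == "go to dentist"
```

The hard part, of course, is scoring harm and enjoyment in the first place; the sketch only shows the shape of the decision rule, not how those numbers are to be obtained.

People assured me in the comments underneath that the proposal was wrong, though their justifications for saying so appear to rest on faulty ideas which they seemed unable to explore; it's also hard to gauge how well they took in the idea in the first place. One person, for example, suggested that my proposal might be Rule-Utilitarianism and provided this link: https://en.wikipedia.org/wiki/Rule_utilitarianism . Here is the key part:-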
"For rule utilitarians, the correctness of a rule is determined by the amount of good it brings about when followed. In contrast, act utilitarians judge an act in terms of the consequences of that act alone (such as stopping at a red light), rather than judging whether it faithfully adhered to the rule of which it was an instance (such as, "always stop at red lights"). Rule utilitarians argue that following rules that tend to lead to the greatest good will have better consequences overall than allowing exceptions to be made in individual instances, even if better consequences can be demonstrated in those instances."
This is clearly not my proposal at all, but it's interesting nonetheless. The kind of rules being discussed there are really just general guidelines which lead to good decisions in most situations without people having to think too deeply about what they're doing, but there are occasions when it's moral to break such a rule and it may even be immoral not to: driving through a red light to let an ambulance past, for instance. Trying to build a s