Reminder: Morality is unsolved
Here is a game you can play with yourself, or with others:

a) You have to decide on a moral framework that can be explained in detail, to anyone.
b) It will be implemented worldwide tomorrow.
c) Tomorrow, every single human on Earth, including you and everyone you know, will also have their life randomly swapped with someone else's.

This means that you are operating under the veil of ignorance: you should make sure that the morality you decide on is beneficial whoever you turn out to be, once it takes effect.

Multiplayer: the first to convince all other players wins.
Single player: if you play alone, you just need to convince yourself.

Good luck!

Morality is unsolved

Let me put this another way: did your mom ever tell you to be a good person? Do you ever feel that you sometimes fail at that task? Yes? In your defense, I doubt anybody ever told you exactly what a good person is, or what you should do to become one.

Morality is a famously unsolved problem, in the sense that we don't have any ethical framework that is complete and consistent, and that everyone can agree on. It may even be unsolvable. We don't have a universally accepted set of moral rules to start from either.

An important insight here is that the disagreements often end up being about whom the rules should apply to. For example, if you say that everyone should have equal rights of liberty, the question is: who is "everyone"? If you say "all persons", you have to define what a person is. Do humans in a coma count? Elephants? Sophisticated AIs? Where do you draw the line? And if you start having different rules for different "persons", then you don't have a consistent and complete framework, but a patchwork of rules, much like our current mess(es) of judicial systems.

We also don't understand metaethics well. Here are two facts about what the situation is actually like right now:

a) We are currently at a stage where we want and believe different things, some of which are fundamentally at odds with each other
Actually, there is an important point to make here. We didn't get so lucky, of course.
Rather, CoT was the logical next step, because, as you just explained, it is more powerful to build chains of reasoning than to optimize every single output ad infinitum.
We will get far with only an LLM, but never reach SI, precisely because LLMs lack integrated learning and continuous state.
An LLM designing the start of a new SI-seed architecture paradigm is one (speculative) thing, but neither whole-models nor...