Lifelong recursive self-improver, on his way to exploding really intelligently :D
More seriously: my posts are mostly about AI alignment, with an eye towards moral progress. I have a bachelor’s degree in mathematics, I did research at CEEALAR for four years, and now I do research independently.
A fun problem to think about:
Imagine it’s the year 1500. You want to make an AI that is able to tell you that witch hunts are a terrible idea and to convincingly explain why, despite the fact that many people around you seem to think the exact opposite. Assuming you have the technology, how do you do it?
I’m trying to solve that problem, with the difference that we are now in the 21st century (I know, massive spoiler, sorry about that).
The problem above, together with the fact that I’d like to avoid producing AI that can be used for bad purposes, is what motivates my research. If this sounds interesting to you, have a look at these two short posts. If you are looking for something more technical, consider setting some time aside to read these two.
Feel free to reach out if you relate!
You can support my research through Patreon here.
Work in progress:
Thank you for this suggestion, I appreciate it! I’ve read the review I found here and it seems that parts of that account of ethics overlap with some ideas I’ve discussed in the post, in particular the idea of considering the point of view of all conscious (rational) agents. Maybe I’ll read the entire book if I decide to reformulate the argument of the post in a different way, which is something I was already thinking about.
How did you find that book?
This type of argument has the problem that other people's negative experiences aren't directly motivating in the way that yours are... there's a gap between bad-for-me and morally-wrong.
What type of argument is my argument, from your perspective? I also think that there is a gap between bad-for-me and bad-for-others. But both can affect action, as happens in the thought experiment in the post.
To say that something is morally-wrong is to say that I have some obligation or motivation to do something about it.
I use a different working definition in the argument. And working definitions aside, more generally, I think morality is about what is important, better/worse, worth doing, worth guiding action by, which is not necessarily tied to obligations or motivation.
A large part of the problem is that the words "bad" and "good" are so ambiguous. For instance, they have aesthetic meanings as well as ethical ones. That allows you to write an argument that appears to derive a normative claim from a descriptive one.
See
https://www.lesswrong.com/posts/HLJGabZ6siFHoC6Nh/sam-harris-and-the-is-ought-gap
Ambiguous terms can make it more difficult to understand what is correct, but it is still possible to reason with them and reach correct conclusions; we do it all the time in science. See Objection: lack of rigor.
Consider this stamp collector construction: It sends and receives internet data, it has a magically accurate model of reality, it calculates how many stamps would result from each sequence of outputs, and then it outputs the one that results in the most stamps.
I’m not sure why you left out the “conscious agent” part, which is the fundamental premise of the argument. If you are describing something like a giant (artificial) neural network optimised to output actions that maximise stamps while receiving input data about the current state of the world, that seems possible to me, but the argument is not about that kind of AI. You can also have a look at “Extending the claim and its implications to other agents”, under Implications for AI.
At the moment we think systems like that are not conscious; otherwise we would also say that current LLMs are somewhat conscious, I guess, given how big they already are. In particular, for that kind of AI it doesn’t seem that knowledge affects behaviour in the same way it does for conscious agents. You wrote that the stamp collector knows that stamps are not morally important; more generally, does it think they are important, or not? I am not even sure “thinking something is important” applies to that stamp collector, because whatever the answer to the previous question is, the stamp collector produces stamps anyway.
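To make that concrete, here is a minimal sketch (my own illustration; the names, like predict_stamps, are hypothetical and not from your comment) of the stamp collector as you describe it, in which knowledge only enters as a prediction feeding a maximisation step:

```python
# Rough sketch of the stamp-collector construction described above: a
# (magically accurate) world model predicts how many stamps each candidate
# output sequence would produce, and the agent simply emits whichever one
# scores highest. The names (world_model, predict_stamps, candidate_outputs)
# are illustrative only.

def choose_output(world_model, candidate_outputs):
    # More knowledge only makes predict_stamps more accurate: it can change
    # which output wins, but not what the outputs are being selected for.
    return max(candidate_outputs, key=world_model.predict_stamps)
```

In a system like this, adding knowledge sharpens the predictions but never touches the selection criterion, which is part of why I’m not sure “thinking something is important” applies to it.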
(Digressing a bit: now I’m also considering that the stamp collector, even if it were conscious, might never be able to report that it is conscious the way we report being conscious. It could do so only if an action like “say I’m conscious” happened to be the action that also maximises stamps in that circumstance, which might never happen... interesting.)
If you are describing a conscious agent as I talk about it in the post, then A6 still applies (and the argument in general). With enough knowledge, the conscious & agentic stamp collector will start acting rationally as defined in the post, eventually think about why it is doing what it is doing and whether there is anything worth doing, and so on as in the argument, and end up acting morally, even if it is not sure that something like moral nihilism is incorrect.
In short, if I thought that the premise about being a conscious agent was irrelevant, then I would have just argued that with enough knowledge any AI acts morally, but I think that’s false. (See Implications for AI.)
Could I be wrong about conscious agents acting morally if they have enough knowledge? Sure: I think I say it more than once in the post, and there is a section specifically about it. If I’m wrong, what I think is most likely to be the problem in the argument is how I’ve split the space of ‘things doing things in the world’ into conscious agents and things that are not conscious agents. And if you have a more accurate idea of how this stuff works, I’m happy to hear your thoughts! Below I’ve copied a paragraph from the post.
Actually, uncertainty about these properties is a reason why I am making the bold claim and discussing it despite the fact that I’m not extremely confident in it. If someone manages to attack the argument and show that it applies only to agents with some characteristics, but not to agents without them, that objection or counterargument will be helpful for understanding which properties, if satisfied by an AI, make that AI act morally in conditions of high knowledge.
But you were arguing for them, weren't you? It is the arguments that fail to convince me. I was not treating these as bald assertions.
No, I don’t argue that “a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious”. I think those statements are false.
Hey, I think your comment is slightly misleading:
I don't see a reason to suppose that a sufficiently intelligent intelligence would experience valence, or for that matter that it would necessarily even be conscious
I do not make those assumptions.
nor, if conscious, that it would value the happiness of other conscious entities
I don’t suppose that either; I give an argument for it (in the longer post).
Anyway:
I am not convinced by the longer post either
I’m not surprised: I don’t expect my argument to move masses of people who are convinced of the opposite claim, but rather that someone who is uncertain and open-minded can read it and maybe find something useful in it and/or a reason to update their beliefs. That’s also why I wrote that the practical implications for AI are an important part of that post, and why I made some predictions instead of focusing just on philosophy.
I am not assuming a specific metaethical position; I’m just taking into account that something like moral naturalism could be correct. If you are interested in this kind of stuff, you can have a look at this longer post.
Speaking of this, I am not sure it is always a good idea to map these discussions onto specific metaethical positions, because in my opinion it can make updating one’s beliefs more difficult. To put it simply, if you’ve told yourself for the last ten years that you are, e.g., a moral naturalist, it can be very difficult to read some new piece of philosophy arguing for a different (maybe even opposite) position, then rationally update and tell yourself something like: “Well, I guess I’ve just been wrong all this time! Now I’m a ___ (new position).”
This story is definitely related to the post, thanks!
This was a great read, thanks for writing!
Despite the unpopularity of my research on this forum, I think it's worth saying that I am also working towards Vision 2, with the caveat that autonomy in the real world (e.g. with a robotic body) or on the internet is not necessary: one could aim for an independent-thinker AI that can do what it thinks is best only by communicating via a chat interface. Depending on what this independent thinker says, different outcomes are possible, including the outcome in which most humans simply don't care about what this independent thinker advocates for, at least initially. This would be an instance of Vision 2 with a slow and somewhat human-controlled pace of change, instead of a rapid one.
Moreover, I don't know what views they have about autonomy as depicted in Vision 2, but it seems to me that Shard Theory and some research bits by Beren Millidge are also, to some extent, adjacent to the idea of an AI that develops its own concept of something being best (and then acts towards it); or, at least, an AI that is more human-like in its thinking. Please correct me if I'm wrong.
I hope you'll manage to make progress on brain-like AGI safety! It seems that various research agendas are heading towards the same kind of AI, just from different angles.
[Obviously this experiment could be extremely dangerous, for Free Agents significantly smarter than humans (if they were not properly contained, or managed to escape). Particularly if some of them disagreed over morality and, rather than agreeing to disagree, decided to use high-tech warfare to settle their moral disputes, before moving on to impose their moral opinions on any remaining humans.]
Labelling many different kinds of AI experiments as extremely dangerous seems to be a common trend among rationalists / LessWrongers / possibly some EA circles, but I doubt it's true or helpful. This topic could itself be the subject of a separate post (or several). Here I'll focus on your specific objection:
how would you propose then deciding which model(s) to put into widespread use for human society's use?
This doesn't seem like the kind of decision that a single individual should make =)
Under Motivation in the appendix:
It is plausible that, at first, only a few ethicists or AI researchers will take a free agent’s moral beliefs into consideration.
Reaching this result would already be great. I think it's difficult to predict what would happen next, and it seems very implausible that the large-scale outcomes would come down to the decision of a single person.
I’m not sure why you are saying the argument does not work in this case: what about all the other things the AI could learn from other experiences or teachings? Below I copy a paragraph from the post: