The Orthogonality Thesis (as well as the fact–value distinction) is based on the assumption that objective norms/values do not exist. In my opinion, an AGI would not make this assumption; it is a logical fallacy, specifically an argument from ignorance. As black swan theory says, there are unknown unknowns, which in this context means that objective norms/values may exist but simply have not been discovered yet. Why does the Orthogonality Thesis have so much recognition?

3 Answers

Jonas Hallgren

Mar 26, 2024

32

Compared to other people on this site, this is a part of my alignment optimism. I think there are natural abstractions in the moral landscape that make agents converge towards cooperation and similar things. I read this post recently, and Leo Gao made an argument that concave agents generally don't exist, because they stop existing. I think there are pressures that conform agents to part of the value landscape.

Like, I agree that the orthogonality thesis is presumed to be true way too often. It is more an argument that alignment may not happen by default, but I'm also uncertain about how much evidence it actually gives you.

The orthogonality thesis says that it's invalid to conclude benevolence from the premise of powerful optimization; it gestures at counterexamples. It's entirely compatible with benevolence being very likely in practice. You might then want to separately ask yourself whether it's in fact likely. But you do need to ask; that's the point of the orthogonality thesis, its narrow scope.

1 · Donatas Lučiūnas · 1mo
Could you help me understand how that is possible? Why should an intelligent agent care about humans instead of defending against unknown threats?
1 · Jonas Hallgren · 1mo
Yeah, I agree with what you just said; I should have been more careful with my phrasing. Maybe something like: "The naive version of the orthogonality thesis, where we assume that AIs can't converge towards human values, is assumed to be true too often."

Dagon

Mar 25, 2024

20

the assumption that objective norms/values do not exist. In my opinion, an AGI would not make this assumption

The question isn't whether every AGI would or would not make this assumption, but whether it's actually true, and therefore whether it's true that a powerful AGI could have a wide range of goals or values, including the possibility that they're alien or contradictory to common human values.

I think it's highly unlikely that objective norms/values exist, and that weak versions of orthogonality (not literally ANY goals are possible, but enough bad ones to still be worried about) are true. Even more strongly, I think it hasn't been shown that they're false, and we should take the possibility very seriously.

Could you read my comment here and let me know what you think?

Viliam

Mar 25, 2024

20

The orthogonality thesis is not about the existence or nonexistence of "objective norms/values", but about whether a specific agent could have a specific goal. The thesis says that for any specific goal, there can be an intelligent agent that has that goal.

To simplify: the question is not "is there an objective definition of good?" (where we probably disagree), but rather "can an agent be bad?" (where I suppose we both agree the answer is clearly yes).

More precisely, "can a very intelligent agent be bad?". Still, the answer is yes. (Even if there is such a thing as "objective norms/values", the agent can simply choose to ignore them.)

Even if there is such a thing as "objective norms/values", the agent can simply choose to ignore them.

Yes, but this would not be an intelligent agent in my opinion. Don't you agree?

9 · Tamsin Leake · 1mo
Taboo the word "intelligence". An agent can superhumanly-optimize any utility function. Even if there are objective values, a superhuman-optimizer can ignore them and superhuman-optimize paperclips instead (and then we die because it optimized for that harder than we optimized for what we want).
-2 · Donatas Lučiūnas · 1mo
I am familiar with this thinking, but I find it flawed. Could you please read my comment here? Please let me know what you think.
2 · the gears to ascension · 1mo
"It's not real intelligence! it doesn't understand morality!" I continue to insist as i slowly shrink and transform into trillions of microscopic paperclips
-1 · Donatas Lučiūnas · 1mo
I think you mistakenly see me as a typical "intelligent = moral" proponent. To be honest, my reasoning above leads me to a different conclusion: intelligent = uncontrollably power-seeking.
2 · the gears to ascension · 1mo
wait, what's the issue with the orthogonality thesis then?
1 · Donatas Lučiūnas · 1mo
I am concerned that higher intelligence will inevitably converge to a single goal (power seeking).
2 · the gears to ascension · 1mo
that point seems potentially defensible. it's much more specific than your original point and seems to contradict it.
1 · Donatas Lučiūnas · 1mo
How would you defend this point? I probably lack the domain knowledge to articulate it well.
2 · Viliam · 1mo
Are you perhaps using "intelligence" as an applause light here? To use a fictional example, is Satan (in Christianity) intelligent? He knows what the right thing to do is... and chooses to do the opposite. Because that's what he wants to do. (I don't know the Vatican's official position on Satan's IQ, but he is reportedly capable of fooling even very smart people, so I assume he must be quite smart, too.) In terms of artificial intelligence, if you have a super-intelligent program that can provide answers to various kinds of questions, then for any goal G you can create a robot that calls the super-intelligent program to figure out which actions are most likely to achieve G, and then performs those actions. Nothing in the laws of physics prevents this.
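A minimal sketch in Python of the oracle-plus-wrapper construction described in the comment above; the `oracle` interface and all names are hypothetical, for illustration only:

```python
# Sketch of the "answer-machine + goal wrapper" construction (hypothetical interface):
# a general question-answering system is wrapped by a thin loop that points it at an
# arbitrary goal G. The goal comes from the wrapper, not from the oracle's intelligence.

from typing import Callable, List

def make_goal_directed_agent(oracle: Callable[[str], List[str]], goal: str):
    """Return a step function that pursues `goal` by querying the oracle for actions."""
    def agent_step(observation: str) -> List[str]:
        # The oracle is only asked "which actions best achieve G in this situation?"
        # It is never asked whether G itself is good.
        question = (
            f"Given the situation: {observation}\n"
            f"Which actions are most likely to achieve the goal: {goal}?"
        )
        return oracle(question)
    return agent_step

# Usage: the same oracle can be pointed at any goal, benevolent or not.
if __name__ == "__main__":
    def dummy_oracle(question: str) -> List[str]:
        return [f"(placeholder plan for: {question[:60]}...)"]

    paperclip_agent = make_goal_directed_agent(dummy_oracle, "maximize paperclips")
    print(paperclip_agent("a factory with spare wire"))
```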
1 · Donatas Lučiūnas · 1mo
No. I understand that the purpose of the Orthogonality Thesis was to say that an AGI will not automatically be good or moral. But the current definition is broader: it says that AGI is compatible with any want. I do not agree with that part. Let me share an example. An AGI could ask itself: are there any threats? And once the AGI understands that there are unknown unknowns, the answer to this question is "I don't know". A threat cannot be ignored by definition (if it could be ignored, it would not be a threat). As a result, the AGI focuses on threat minimization forever, not on the given want.
2 · Dagon · 1mo
This is a much smaller and less important distinction than your post made. Whether it's ANY want, or just a very wide range of wants, doesn't seem important to me. I guess it's not impossible that an AGI will be irrationally over-focused on unquantified (and perhaps even unidentifiable) threats. But maybe it'll just assign probabilities and calculate how to best pursue its alien and non-human-centered goals. Either way, that doesn't bode well for biologicals.
-1 · Donatas Lučiūnas · 1mo
As I understand it, your position is "AGI is most likely doom". My position is "AGI is definitely doom", 100%, and I think I have a flawless logical proof. But this is on a philosophical level, and many people seem to downvote me without understanding 😅 Long story short, my proposition is that all AGIs will converge to a single goal: seeking power endlessly and uncontrollably. And I base this proposition on the fact that "there are no objective norms" is not a reasonable assumption.
2 · Viliam · 1mo
The AGI (or a human) can ignore the threats... and perhaps perish as a consequence. General intelligence does not mean never making a strategic mistake. Also, maybe from the value perspective of the AGI, doing whatever it is doing now could be more important than surviving.
-1 · Donatas Lučiūnas · 1mo
Let's say there is an objective norm. Could you help me understand how an intelligent agent could prefer anything else over that objective norm? As I mentioned previously, that seems to me incompatible with being intelligent. If you know what you must do, it is stupid not to do it. 🤔
2 · Viliam · 1mo
There is no "must", there is only "should". And even that only assuming there is an objective norm; otherwise there is not even a "should", only want. Again, Satan in Christianity: he knows what is "right", does the opposite, and does it effectively. His intelligence is used to achieve his goals, regardless of what is "right". Intelligence means being able to figure out how to achieve what one wants, not what one "should" want. Imagine that somehow science proves that the goal of this universe is to produce as many paperclips as possible. Would you feel compelled to start producing paperclips? Or would you keep doing whatever you want, and let the universe worry about its goals? (Unless there is some kind of God who rewards you for the paperclips produced and punishes you if you miss the quota. But even then, you are doing it for the rewards, not for the paperclips themselves.)
1 · Donatas Lučiūnas · 1mo
If I am intelligent, I avoid punishment; therefore I produce paperclips. By the way, I don't think the Christian "right" is an objective "should". It seems to me that you are simultaneously saying that an agent cares about "should" (it optimizes blindly toward any given goal) and does not care about "should" (it can ignore objective norms). How do these fit together?
2 · Viliam · 1mo
The agent cares about its own goals, and ignores the objective norms.
1 · Donatas Lučiūnas · 1mo
Instead of "objective norm" I'll use the word "threat", as it probably conveys the meaning better. And let's agree that a threat cannot be ignored by definition (if it could be ignored, it would not be a threat). How can an agent ignore a threat? How can an agent ignore something that cannot be ignored by definition?