Phil_Goetz2

Good post. Nick's point is also good.

When parents say they don't care who started it, that may also be a strategy to minimize future fighting. Justice is not always optimal, even in repeated interactions.

Jorge Luis Borges, "The Babylon Lottery," 1941. Government by lottery. Living under a lottery system leads to a greater expectation of random events, a greater belief that life is and should be ruled by randomness, and a further extension of the lottery's scope, in a feedback loop that escalates until every aspect of everyone's life is controlled by the lottery.

Anon: "The notion of 'morally significant' seems to coincide with sentience."

Yes; the word "sentience" seems to be just a placeholder meaning "qualifications we'll figure out later for being thought of as a person."

Tim: Good point that people have a very strong bias to associate rights with intelligence, whereas empathy is a better criterion. The problem is that dogs have lots of empathy. Let's say intelligence and empathy are both necessary but not sufficient.

James: "Shouldn't this outcome be something the CEV would avoid anyway? If it's making an AI that wants what we would want, then it should not at the same time be making something we would not want to exist."

CEV is not a magic "do what I mean" incantation. Even supposing the idea were worked out before the first AI is built, you probably wouldn't have a mechanism to implement it.

anon: "It would be a mistake to create a new species that deserves our moral consideration, even if at present we would not give it the moral consideration it deserves."

Something is missing from that sentence. Whatever you meant, let's not rule out creating new species. We should, eventually.

Eliezer: Creating new sentient species is frightening. But is creating new non-sentient species any less frightening? Any new species you create may out-compete the old and become the dominant lifeform. It would be the ultimate loss to create a non-sentient species that replaced sentient life.

"I propose this conjecture: In any sufficiently complex physical system there exists a subsystem that can be interpreted as the mental process of an sentient being experiencing unbearable sufferings."

It turns out (I've done the math) that if you are using a logic-based AI, the probability of alternate possible interpretations existing diminishes as the complexity increases.

If you allow /subsystems/ to mean a subset of the logical propositions, then there could be such interpretations. But I think it isn't legit to worry about interpretations of subsets.

BTW, Eliezer, regarding this recent statement of yours: "Goetz's misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction": I challenge you to find one post where you have tried to correct a misunderstanding of mine, or even to identify the misunderstanding, rather than just complaining about it in a nonspecific way.

Eliezer: "I'll go ahead and repeat that as Goetz's misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction, that I will not be responding to Goetz's comment."

Really? I challenge you to point to ONE post in which you have tried to correct a misunderstanding by me of your opinion, rather than just complaining about my "misunderstandings" without even saying what the misunderstanding was.

Eliezer, I have probably made any number of inaccurate depictions of your opinions, but you can't back away from these ones. You DO generally think that your opinion on topics you have thought deeply about is more valuable than the opinion of almost everyone, and you HAVE thought deeply about fun theory. And you ARE planning to build an AI that will be in control of the world. You might protest that "take over the world" has different connotations. But there's no question that you plan for your AI to be in charge.

It is deeply creepy and disturbing to hear this talk from someone who already thinks he knows better than just about everybody about what is good for us, and who plans to build an AI that will take over the world.

Michael, I thought you advocated comfort with lying because smart people marginalize themselves by compulsive truth-telling. For instance, they find it hard to raise venture capital. Or (to take an example that happened at my company), when asked, "Couldn't this project of yours be used to make a horrible terrorist bioweapon?", they say, "Yes." (They also interpret questions literally instead of practically; the question actually intended, and the one people actually hear, is more like, "Would this project significantly increase the ease of making a bioweapon?", which might have a different answer.)

Am I compulsively telling the truth again? Doggone it.

Is it just me, or did Wright's writing style sound very much like Eliezer's?

pdf23ds: The claim that atheism inevitably leads to nihilism, and that belief in god inevitably relieves it, is made regularly by religious types in the West as the core of their argument for religion.

Today, in the West, people think that atheism leads to an existential crisis of meaning. But the ancient Greeks believed in creator gods, and yet had to find their own sense of purpose exactly as an atheist does.

We assume that the religious person has a purpose given by God. But Zeus would have said that the purpose of humans was to produce beautiful young women for him to have sex with. Ares would have said their purpose was to kill each other. Bacchus would have said it was to party. And so on. The gods ignored humans, had trivial purposes for them, or bore outright hostility toward them.

Every believing Greek had to find their own meaning in life, often based on a sense of community. This meaning, or lack thereof, bore no relation to whether they believed in the gods or not.

Anna wrote:

Maybe it will make it easier, but they didn't really work at it. By having this alleged surgery, will it make them more or less prone to believe in the quick fix or the long-term discipline of working at it?

The reason for practicing discipline is to be able to solve problems. It would not be rational to avoid a quick solution to your life's biggest problem in order to gain experience that might be useful in solving smaller problems later on.
