I meant no disrespect. (Eliezer has 661 posts on OB.) I do appreciate your direction/correction. I didn't mean to take a stand against.

(Sigh.) I have no positions, no beliefs, prior to what I might learn from Eliezer.

So the idea is that a unique, complex thing need not have an appreciation for another unique complexity? Unless appreciating unique complexity has a mathematical basis.

brynema, “disrespect” isn’t at all the right axis for understanding why your last couple of comments weren’t helpful. (I’m not attacking you here; LW is an unusual place, and understanding how to usefully contribute takes time. You’ve been doing well.) The trouble with your last two comments is mostly:

  1. Comments on LW should aspire to rationality. As part of this aspiration, we basically shouldn’t have “positions” on issues we haven’t thought much about; the beliefs we share here should be evidence-based best-guesses about the future, not clothes to

...
AnnaSalamon (11y, 4 points): If we want to talk usefully about AI as a community, we should probably make a wiki page that summarizes or links to the main points. And then we should have a policy in certain threads: "don't comment here unless you've read the links off of wiki page such-and-such". brynema's right that we want newcomers in LW, and that newcomers can't be expected to know all of what's been discussed. But it is also true that we'll never get discussions off the ground if we have to start all over again every time someone new enters.

The mind-killer

by Paul Crowley · 1 min read · 2nd May 2009 · 160 comments

23


Can we talk about changing the world? Or saving the world?

I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it. So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make. Given that existential risk is also one of the major themes of Overcoming Bias and of Eliezer's work, it's striking that we don't talk about it more here.

One reason, of course, was the bar until yesterday on talking about artificial general intelligence; another is the many who state outright that they are not concerned about their contribution to humanity. But I think a third is that many of the things we might do to address existential risk, or other issues of concern to all humanity, get us into politics, and we've all had too much of a certain kind of online argument about politics that descends into a stale rehashing of talking points and point-scoring.

If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening. How can we use what we discuss here to be able to talk about politics without spiralling down the plughole?

I think it will help in several ways that we are largely a community of materialists and expected utility consequentialists. For a start, we are freed from the concept of "deserving" that dogs political arguments on inequality, on human rights, on criminal sentencing and so many other issues; while I can imagine a consequentialism that valued the "deserving" more than the "undeserving", I don't get the impression that's a popular position among materialists, because of the Phineas Gage problem. We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.

For example, framed this way, inequality of wealth is neither justice nor injustice. The consequentialist defence of the market recognises that, because of the diminishing marginal utility of wealth, today's unequal distribution of wealth has a cost in utility compared to the same wealth divided equally (a cost that we could in principle measure, given a wealth/utility curve), and goes on to argue that the total extra output resulting from this inequality more than pays for it.

However, I'm more confident of the need to talk about this question than I am of my own answers. There's very little we can do about existential risk that doesn't have to do with changing the decisions made by public servants, businesses, and/or large numbers of people, and all of these activities get us straight into the world of politics, as well as the world of going out and changing minds. There has to be a way for rationalists to talk about it and actually make a difference. Before we start to talk about specific ideas to do with what one does in order to change or save the world, what traps can we defuse in advance?
