LESSWRONG

wdmacaskill

Comments (sorted by newest)

How hard to achieve is eutopia?
wdmacaskill · 1mo

Thanks! I think these questions do matter, though it's certainly exploratory work, and at the moment tractability is still unclear. I talk about some of the practical upshots in the final essay.

Should we aim for flourishing over mere survival? The Better Futures series.
wdmacaskill · 1mo

Thanks. On the EA Forum, Ben West points out the clarification: I'm not assuming x-risk is lower than 10%; in my illustrative example I suggest 20%. (My own views are a little lower than that, but not an OOM lower, especially given that this % is about locking in near-0-value futures, not just extinction.) I wasn't meaning to place a lot of weight on the superforecasters' estimates.

Actually, because I ultimately argue that Flourishing is only 10% or less, for this part of the argument to work (i.e., for Flourishing to be greater in scale than Surviving), I only need x-risk this century to be less than 90%! (Though the argument gets a lot weaker the higher your p(doom).)

The point is that we're closer to the top of the x-axis than we are to the top of the y-axis.
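To spell the arithmetic out (a minimal sketch in my own notation, not taken from the essay): let p be the probability that we avoid locking in a near-0-value future this century, and let v be the expected value of the future conditional on avoiding that, as a fraction of the best feasible future. Then, roughly:

    value of Surviving   ≈ (1 - p) · v    (the gain from raising p to 1)
    value of Flourishing ≈ p · (1 - v)    (the gain from raising v to 1)

and p · (1 - v) > (1 - p) · v simplifies to p > v. So if v is 10% or less, Flourishing is larger in scale whenever p > 10%, i.e. whenever x-risk in this lock-in sense is below 90%. On that reading, being "closer to the top of the x-axis than the y-axis" is just the condition p > v.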


Preparing for the Intelligence Explosion
wdmacaskill · 6mo

Ah, by the "software feedback loop" I mean: "At the point of time at which AI has automated AI R&D, does a doubling of cognitive effort result in more than a doubling of output? If yes, there's a software feedback loop - you get (for a time, at least) accelerating rates of algorithmic efficiency progress, rather than just a one-off gain from automation." 

I see now why you could understand "RSI" to mean "AI improves itself at all over time". But even so, the claim would still hold - even if (implausibly) AI gets no smarter than human-level, you'd still get accelerated tech development, because the quantity of AI research effort would increase at a growth rate much faster than the quantity of human research effort.
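To make that threshold concrete, here is a toy numerical sketch (my own simplification for illustration; the parameter r and the dynamics are assumptions, not anything from the essay or the comment above): take S to be the level of algorithmic efficiency, suppose automated research effort scales with S once AI R&D is automated, and let the rate of progress scale as effort^r, so that a doubling of effort multiplies the rate of progress by 2^r. Then r > 1 is the "more than a doubling of output" case, and it is the case where the proportional growth rate keeps rising rather than fading:

```python
# Toy illustration of the "software feedback loop" threshold. My own
# simplification, not a model from the essay: S is algorithmic efficiency,
# automated research effort is assumed to scale with S, and the absolute
# rate of progress is effort**r, so doubling effort multiplies the rate of
# progress by 2**r.

def proportional_growth_rates(r, steps=30, dt=0.05):
    """Track the proportional growth rate of S under dS/dt = effort**r, effort = S."""
    S, rates = 1.0, []
    for _ in range(steps):
        rate = S ** r            # absolute rate of algorithmic progress
        rates.append(rate / S)   # proportional growth rate, S**(r - 1)
        S += dt * rate           # simple Euler step
    return rates

for r in (0.7, 1.0, 1.3):
    rates = proportional_growth_rates(r)
    if rates[-1] > rates[0] * 1.01:
        trend = "accelerating (software feedback loop)"
    elif rates[-1] < rates[0] * 0.99:
        trend = "slowing (one-off boost that fades)"
    else:
        trend = "roughly constant"
    print(f"r = {r}: growth rate {rates[0]:.2f} -> {rates[-1]:.2f}, {trend}")
```

In this toy setup, r below 1 still gives a boost from automation, but the growth rate drifts back down - the "one-off gain" case - while r above 1 keeps pushing it up.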

Preparing for the Intelligence Explosion
wdmacaskill · 6mo

There's definitely a new trend towards custom-website essays. Forethought, though, is a website for lots of research content (like Epoch), not just PrepIE.

And I don't think it's because people are getting more productive from reasoning models - AI was helpful for PrepIE, but more like a 10-20% productivity boost than a 100% boost, and I don't think AI was used much for SA, either.

Preparing for the Intelligence Explosion
wdmacaskill · 6mo

Thanks - appreciate that! It comes up a little differently for me, but it's still an issue - we've asked the devs to fix it.

Donating to MIRI vs. FHI vs. CEA vs. CFAR
wdmacaskill · 12y

Argh! Original post didn't go through (probably my fault), so this will be shorter than it should be:

First point:

> I know very little about CEA, and a brief check of their website leaves me a little unclear on why Luke recommends them, aside from the fact that they apparently work closely with FHI.

CEA = Giving What We Can, 80,000 Hours, and a bit of other stuff

Reason: donations to CEA predictably increase the size and strength of the EA community, a good proportion of whom take long-run considerations very seriously and will donate to or work for FHI/MIRI, or otherwise pursue careers with the aim of extinction risk mitigation. It's plausible that $1 to CEA generates significantly more than $1's worth of x-risk value. [Note: I'm a trustee and founder of CEA.]

Second point:

Don't forget CSER. My view is that they are even higher-impact than MIRI or FHI (though I'd defer to Sean_o_h if he disagreed). Reason: marginal donations will be used to fund program management and grantwriting, which would turn ~$70k into a significant chance of ~$1-$10mn, and launch what I think might become one of the most important research institutions in the world. They have all the background in place (high-profile people on the board; an already-written previous grant proposal that very narrowly missed out on being funded). High leverage!
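(To put illustrative numbers on that leverage - these are my own made-up figures, not anything from CSER: if the ~$70k push had, say, a one-in-three chance of unlocking ~$3mn of funding, it would be worth ~$1mn in expectation, i.e. a more than 10x multiplier on the donation, and the upper end of the range above would imply considerably more.)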

Donating to MIRI vs. FHI vs. CEA vs. CFAR
wdmacaskill · 12y

> CEA and CFAR don't do anything, to my knowledge, that would increase these odds, except in exceedingly indirect ways.

People from CEA, in collaboration with FHI, have been meeting with people in the UK government, and are producing policy briefs on unprecedented risks from new technologies, including AI (the first brief will go on the FHI website in the near future). These meetings arose as a result of GWWC media attention. CEA's most recent hire, Owen Cotton-Barratt, will be helping with this work.

'Effective Altruism' as utilitarian equivocation.
wdmacaskill · 12y

> your account of effective altruism seems rather different from Will's: "Maybe you want to do other things effectively, but then it's not effective altruism". This sort of mixed messaging is exactly what I was objecting to.

I think you've revised the post since you initially wrote it? If so, you might want to highlight that in the italics at the start, as otherwise it makes some of the comments look weirdly off-base. In particular, I took the initial post to aim at the conclusion:

  1. EA is utilitarianism in disguise,

which I think is demonstrably false.

But now the post reads more like the main conclusion is:

  1. EA is vague on a crucial issue, namely whether the effective pursuit of non-welfarist goods counts as effective altruism,

which is a much more reasonable thing to say.
'Effective Altruism' as utilitarian equivocation.
wdmacaskill · 12y

I think the simple answer is that "effective altruism" is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muehlhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don't think this is the right place for that.

On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn't make a claim about side-constraints; it doesn't make a claim about whether doing good is supererogatory or obligatory; it doesn't make a claim about the nature of welfare. EA is a broad tent, and deliberately so: very many different ethical perspectives will agree, for example, that it's important to find out which charities do the most to improve the welfare of those living in extreme poverty (as measured by QALYs etc.), and then to encourage people to give to those charities. If so, then we've got an important activity that people of very many different ethical backgrounds can get behind - which is great!

'Effective Altruism' as utilitarian equivocation.
wdmacaskill · 12y

Hi,

Thanks for this post. The relationship between EA and well-known moral theories is something I've wanted to blog about in the past.

So here are a few points:

1. EA does not equal utilitarianism.

Utilitarianism makes many claims that EA does not make:

EA makes no claim about whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.

EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do even if doing so would be for the greater good. Utilitarianism claims that it's always obligatory to act for the greater good.

EA does not claim that there are no other things besides welfare that are of value; utilitarianism does claim this.

EA does not make a precise claim about what promoting welfare consists in (for example, whether it's more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.

Also, note that some eminent EAs are not even consequentialist leaning, let alone utilitarian: e.g. Thomas Pogge (political philosopher) and Andreas Mogensen (Assistant Director of Giving What We Can) explicitly endorse a rights-based theory of morality; Alex Foster (epic London EtG-er) and Catriona MacKay (head of the GWWC London chapter) are both Christian (and presumably not consequentialist, though I haven't asked).

2. Rather, EA is something that almost every plausible moral theory is in favour of.

Almost every plausible moral theory holds that promoting the welfare of others in an effective way is a good thing to do. Some moral theories hold that promoting the welfare of others is merely supererogatory, and others think that there are other values at stake. But EA is explicitly pro promoting welfare; it's not against other things, and it doesn't claim that we're obligated to be altruistic, merely that it's a good thing to do.

3. Is EA explicitly welfarist?

The term 'altruism' suggests that it is. And I think that's fine. Helping others is what EAs do. Maybe you want to do other things effectively, but then it's not effective altruism - it's "effective justice", "effective environmental preservation", or something. Note, though, that you may well think that there are non-welfarist values - indeed, I would think that you would be mistaken not to act as if there were, on moral uncertainty grounds alone - but still be part of the effective altruism movement because you think that, in practice, welfare improvement is the most important thing to focus on.

So, to answer your dilemma:

EA is not trying to be the whole of morality.

It might be the whole of morality, if being EA is the only thing that is required of one. But it's not part of the EA package that EA is the whole of morality. Rather, it represents one aspect of morality - an aspect that is very important for those of us living in affluent countries, who have tremendous power to help others. The idea that we in rich countries should be trying to work out how to help others as effectively as possible, and then actually going ahead and doing it, is an important part of almost every plausible moral theory.

Posts (sorted by new)

19 · How to make the future better (other than by reducing extinction risk) · 1mo · 1 comment
41 · The trajectory of the future could soon get set in stone · 1mo · 2 comments
22 · Will morally motivated actors steer us towards a near-best future? · 1mo · 0 comments
22 · How hard to achieve is eutopia? · 1mo · 0 comments
17 · How hard to achieve is eutopia? · 1mo · 2 comments
60 · Should we aim for flourishing over mere survival? The Better Futures series. · 1mo · 8 comments
40 · Three Types of Intelligence Explosion · 6mo · 8 comments
45 · Intelsat as a Model for International AGI Governance · 6mo · 0 comments
20 · Forethought: a new AI macrostrategy group · 6mo · 0 comments
78 · Preparing for the Intelligence Explosion · 6mo · 17 comments