I managed to turn an essay assignment into an opportunity to write about the Singularity, and I thought I'd turn to LW for feedback on the paper. The paper is about Thomas Pogge, a German philosopher who works on institutional efforts to end poverty and is a pledger for Giving What We Can.

I offer a basic argument that he and other poverty activists should work on creating a positive Singularity, sampling liberally from well-known Less Wrong arguments. It's more academic than I would prefer, and it includes some loose talk of 'duties' (which bothers me), but given its goals, these things shouldn't be a huge problem. But maybe they are - I want to know that too.

I've already turned the assignment in, but when I make a better version, I'll send the paper to Pogge himself. I'd like to see if I can successfully introduce him to these ideas. My one conversation with him suggests that he would be open to actually changing his mind. He's clearly thought deeply about how to do good, and may simply not have been exposed to the idea of the Singularity yet.

I want feedback on all aspects of the paper - style, argumentation, clarity. Be as constructively cruel as only you can be.

If anyone's up for it, feel free to add feedback using Track Changes and email me a copy - mjcurzi[at]wustl.edu. I obviously welcome comments on the thread as well.

You can read the paper here in various formats.

Upvotes for all. Thank you!


If I'm reading correctly, the argument you appear to present in your paper is:

  1. We (Thomas Pogge) want to end poverty.
  2. An AI could end poverty.
  3. Therefore, we should build an AI.

This isn't a strong argument. Probably Pogge thinks that ending poverty is perfectly feasible without building AI, so if you want to change his mind, you need to show that an AI solution can likely be implemented faster than a non-AI one in addition to being sufficiently safe.

It seems like your paper just sets out to establish that there might be some strong arguments for Singularity activism as a response to global poverty somewhere in the vicinity without trying very hard to spell them out.

[-][anonymous] · 12y

Thanks for the feedback - I appreciate it.

I was actually trying for a stronger claim - that AI (as a permanent solution that takes some time to develop) is better than institutional work or humanitarian aid (which has a lot of downsides) for ending poverty. More generally, I want to show that AI dominates other strategies of moral action because of its tremendous scope, despite (a) its uncertainty, (b) its focus on future people, and (c) its risks of bad consequences.

Your charge of vagueness is worth considering as well, though perhaps I'll just need to apply it to future writing. I'll get back to work. Thanks again.

I guess I'm just not currently seeing the arguments for those things (though I may just be confused somehow). It seems more like you're trying to lob the burden-of-proof tennis ball into Pogge's court: AI "might" turn out to be as good as the scenario he assents to (a 50% chance of permanently ending world poverty if we're uncharitable for 30 years), so it's Pogge's job to show that AI is probably not like that scenario.

[-][anonymous] · 12y

Right, I hear you. I deliberately avoid dealing with arguments about the likelihood of the Singularity in the paper - instead, I pass the reader off to treatments created specifically for that purpose, like Chalmers' paper and lukeprog's site.

If I can do one thing with the paper, I'd just like for Pogge to feel that he needs to address the possibility of the Singularity somehow, even if it's just by browsing singinst.org.

Thanks.

I was actually trying for a stronger claim - that AI (as a permanent solution that takes some time to develop) is better than institutional work or humanitarian aid

Have you considered diminishing returns? We have more resources available to us than are currently useful for pursuing AGI. Would you argue that we should let those resources go fallow, rather than use them to mitigate ongoing problems during the period before our AGI efforts succeed, merely because that's not as worthy a goal as AGI?

An AI could end poverty.

"Would" seems to be the word that is necessary there!

On the whole, it looks good to me. The approach seems a bit more directly proselytizing than most recent Singularity writing, particularly the specific mention of SIAI in the last paragraph, but I'm not sure whether that's a good or a bad thing.

Some nitpicks:

Would donors and activists support the causes they currently do, if they had full information, better minds, and deep reflection to guide their judgment? Thomas Pogge is one thinker who believes that we wouldn’t

"that they wouldn't"?

attending to the possibility of a negative the Singularity

remove "the"

Muelhauser

"Muehlhauser"

ETA: also, you might want to replace "better minds" in the first sentence with something more specific, like "sharper minds".

[-][anonymous] · 12y

Great, thanks for the feedback. I'll make those changes. Elusive 'h', there.

Moving directly to addressing objections, without fleshing out a strong prima facie case for thinking that AI support > poverty reduction, weakens your argument, particularly for a reader who isn't already familiar with the concepts. In other words, the reader probably won't have the objections you're refuting, because you haven't really made an argument that one could object to.

I look forward to seeing his response. When do you plan on sending the essay to him?

[-][anonymous] · 12y

I'll commit to sending it to him by December. If his response is interesting, I'll make a Discussion post about it. I can also let you (and whoever else) know directly when I do so.