This is a linkpost to a recent blogpost from Michael Nielsen, who has previously written on EA among many other topics. The blogpost is adapted from a talk Nielsen gave to an audience working on AI before a screening of Oppenheimer. I think the full post is worth a read, but I've pulled out some quotes I find especially interesting (bolding my own).

I was at a party recently, and happened to meet a senior person at a well-known AI startup in the Bay Area. They volunteered that they thought "humanity had about a 50% chance of extinction" caused by artificial intelligence. I asked why they were working at an AI startup if they believed that to be true. They told me that while they thought it was true, "in the meantime I get to have a nice house and car".

[...] I often meet people who claim to sincerely believe (or at least seriously worry) that AI may cause significant damage to humanity. And yet they are also working on it, justifying it in ways that sometimes seem sincerely thought out, but which all-too-often seem self-serving or self-deceiving.


Part of what makes the Manhattan Project interesting is that we can chart the arcs of moral thinking of multiple participants [...] Here are four caricatures:

  • Klaus Fuchs and Ted Hall were two Manhattan Project physicists who took it upon themselves to commit espionage, communicating the secret of the bomb to the Soviet Union. It's difficult to know for sure, but both seem to have been deeply morally engaged and trying to do the right thing, willing to risk their lives; they also made, I strongly believe, a terrible error of judgment. I take it as a warning that caring and courage and imagination are not enough; they can, in fact, lead to very bad outcomes.
  • Robert Wilson, the physicist who recruited Richard Feynman to the project. Wilson had thought deeply about Nazi Germany, and the capabilities of German physics and industry, and made a principled commitment to the project on that basis. He half-heartedly considered leaving when Germany surrendered, but opted to continue until the bombings in Japan. He later regretted that choice; immediately after the Trinity Test he was disconsolate, telling an exuberant Feynman: "It's a terrible thing that we made".
  • Oppenheimer, who I believe was motivated in part by a genuine fear of the Nazis, but also in part by personal ambition and a desire for "success". It's interesting to ponder his statements after the War: while he seems to have genuinely felt a strong need to work on the bomb in the face of the Nazi threat, his comments about continuing to work up to the bombing of Hiroshima and Nagasaki contain many strained self-exculpatory statements about how you have to work on it as a scientist, that the technical problem is too sweet. It smells, to me, of someone looking for self-justification.
  • Joseph Rotblat, the one physicist who actually left the project after it became clear the Nazis were not going to make an atomic bomb. He was threatened by the head of Los Alamos security, and falsely accused of having met with Soviet agents. In leaving he was turning his back on his most important professional peers at a crucial time in his career. Doing so must have required tremendous courage and moral imagination. Part of what makes the choice intriguing is that he himself didn't think it would make any difference to the success of the project. I know I personally find it tempting to think about such choices in abstract systems terms: "I, individually, can't change systems outcomes by refusing to participate ['it's inevitable!'], therefore it's okay to participate". And yet while that view seems reasonable, Rotblat's example shows it is incorrect. His private moral thinking, which seemed of small import initially, set a chain of thought in motion that eventually led to Rotblat founding the Pugwash Conferences, a major forum for nuclear arms control, one that both Robert McNamara and Mikhail Gorbachev identified as helping reduce the threat of nuclear weapons. Rotblat ultimately received the Nobel Peace Prize. Moral choices sometimes matter not only for their immediate impact, but because they are seeds for downstream changes in behavior that cannot initially be anticipated.
7 comments

I was at a party recently, and happened to meet a senior person at a well-known AI startup in the Bay Area. They volunteered that they thought "humanity had about a 50% chance of extinction" caused by artificial intelligence. I asked why they were working at an AI startup if they believed that to be true. They told me that while they thought it was true, "in the meantime I get to have a nice house and car".

This strikes me as the sort of thing one would say without quite meaning it. Like, I'm sure this person could get other jobs that also support a nice house and car. And if they thought about it, they could probably also figure this out. I'm tempted to chalk the true decision up to conformity / lack of confidence in one's ability to originate and execute consequentialist plans, but that's just a guess and I'm not particularly well-informed about this person.

gwern:

To paraphrase Von Neumann, sometimes we confess to a selfish motive that we may not be suspected of an unselfish one, or to one sin to avoid being accused of another.

[Of] the splendid technical work of the [atomic] bomb there can be no question. I can see no evidence of a similar high quality of work in policy-making which...accompanied this...Behind all this I sensed the desires of the gadgeteer to see the wheels go round.

("as any number of conversations in the [OpenAI] office café will confirm, the “build AGI” bit of the mission seems to offer up more raw excitement to its researchers than the “make it safe” bit.")

Or perhaps they thought it was an entertaining response and don't actually believe in the fear narrative. 

If we grant that moral choices with very long time horizons can be made with the utmost well-meaning intentions, and show evidence of admirable character traits, yet still have difficult-to-foresee consequences with variable outcomes, then I think that considerably limits how much we can retrospectively judge specific individuals.

dr_s:

I agree with that principle, but how is that relevant here? The Manhattan Project's effects weren't on long timelines.

The Manhattan Project brought us nuclear weapons, whose existence affects the world to this day, 79 years after the project's founding; I would call that a long timeline. And we might not have seen all the relevant effects!

But yeah, I think we have enough info to make tentative judgements of at least Klaus Fuchs' espionage, and maybe Joseph Rotblat's quitting.

dr_s:

Well, by that token, every scientific discovery also has plenty of very long-term implications, simply out of sheer snowballing. I guess my point was more about which concerns dominated their choices: it wasn't some 5D-chess long-term play, but the obvious, pressing moral issues of the time. Should we use it on the Nazis, should we use it on Japan, should we share it with the USSR or let the USA establish dominance, should we just try to delay its creation as much as possible, should we stop before fusion bombs... all of those questions ended up mattering on rather short time horizons. Less than 20 years after the end of the project, the Cuban Missile Crisis had already happened and the Cold War was in full swing. And those consequences weren't particularly hard to guess, though of course there are always all sorts of chaotic events that can affect them. So my point is that the usual problems with long-term thinking - discount rates essentially prompted by uncertainty - don't apply here. People could make decent guesses; in fact, most of the people in the project seem to have done just that. They merely rationalised them away with "ehhhh, but who can possibly know for sure" if they wanted to keep doing the thing regardless, for their own reasons.