I was at a party recently, and happened to meet a senior person at a well-known AI startup in the Bay Area. They volunteered that they thought "humanity had about a 50% chance of extinction" caused by artificial intelligence. I asked why they were working at an AI startup if they believed that to be true. They told me that while they thought it was true, "in the meantime I get to have a nice house and car".
This strikes me as the sort of thing one would say without quite meaning it. Like, I'm sure this person could get other jobs that would also support a nice house and car, and if they thought about it, they could probably figure that out themselves. I'm tempted to chalk the real decision up to conformity / lack of confidence in one's ability to originate and execute consequentialist plans, but that's just a guess and I'm not particularly well-informed about this person.
To paraphrase von Neumann: sometimes we confess to a selfish motive so that we may not be suspected of an unselfish one, or to one sin to avoid being accused of another.
[Of] the splendid technical work of the [atomic] bomb there can be no question. I can see no evidence of a similar high quality of work in policy-making which...accompanied this...Behind all this I sensed the desires of the gadgeteer to see the wheels go round.
Or perhaps they thought it was an entertaining response and don't actually believe in the fear narrative.
If we accept that moral choices with very long time horizons can be made with the utmost well-meaning intentions and show evidence of admirable character traits, but nevertheless have difficult-to-foresee consequences with variable outcomes, then I think that limits considerably how much we can retrospectively judge specific individuals.
I agree with that principle, but how is that relevant here? The Manhattan Project's effects weren't on long timelines.
The Manhattan Project brought us nuclear weapons, whose existence affects the world to this day, 79 years after its founding - I would call that a long timeline. And we might not have seen all the relevant effects!
But yeah, I think we have enough info to make tentative judgements of at least Klaus Fuchs' espionage, and maybe Joseph Rotblat's quitting.
Well, by that token, every scientific discovery and such also has plenty of very long-term implications, simply from sheer snowballing. I guess my point was more about which concerns dominated their choices: it wasn't some 5D-chess long-term play, but obvious pressing moral issues of the time. Should we use it on the Nazis, should we use it on Japan, should we share it with the USSR or let the USA establish dominance, should we just try to delay its creation as much as possible, should we stop before fusion bombs... all of those questions ended up mattering on rather short time horizons. Not even 20 years after the end of the project, the Cuban missile crisis had already happened and the Cold War was in full swing. And those consequences weren't particularly hard to guess, though of course there are always all sorts of chaotic events that can affect them. So my point is that the usual problems with long-term thinking - discount rates essentially prompted by uncertainty - don't apply here. People could make decent guesses; in fact, most of the people in the project seem to have done just that. They merely rationalised those guesses away with "ehhhh, but who can possibly know for sure" if they wanted to keep doing the thing regardless, for their own reasons.
This is a linkpost to a recent blogpost from Michael Nielsen, who has previously written on EA among many other topics. The blogpost is adapted from a talk Nielsen gave to an audience working on AI before a screening of Oppenheimer. I think the full post is worth a read, but I've pulled out some quotes I find especially interesting (bolding my own).