Jonah Wilberg

Very useful post, thanks. While the 'talking past each other' is frustrating, the 'not necessarily disagreeing' suggests the possibility of establishing surprising areas of consensus. And it might be interesting to explore further what exactly that consensus is. For example:

> Yann suggested that there was no existential risk because we will solve it

I'm sure the air of paradox here (because you can't solve a problem that doesn't exist) is intentional, but if we drill down, should we conclude that Yann actually agrees that there is an existential risk (just that the probabilities are lower than others' estimates, and less worth worrying about)? Yann sometimes compares the situation to the risk of designing a car without brakes, but if the car is big enough that crashing it would destroy civilization, that still kinda sounds like an existential risk.

I'm also not sure the fire department analogy helps here - as you note later in the paper Yann thinks he knows in outline how to solve the problem and 'put out the fires', so it's not an exogenous view. It seems like the difference between the fire chief who thinks their job is easy vs the one who thinks it's hard, though everyone agrees fires spreading would be a big problem.

I'd be interested to know more about the make-up of the audience, e.g. whether they were AI researchers or interested general public. Having followed recent mainstream coverage of the existential risk from AI, my sense is that the pro-X-risk arguments have been spelled out more clearly and in more detail (within the constraints of mainstream media) than the anti-X-risk ones (which makes sense for an audience who may not have previously been aware of detailed pro- arguments, and also makes sense as doomscroll clickbait). I've seen a lot of mainstream articles where the anti-X-risk case is basically 'this distracts from near-term risks from AI' and that's it.

So if the audience is mainly people who have been following mainstream coverage, it might make sense for them to update very slightly towards the anti- case on hearing many more detailed anti- arguments (not saying they are good arguments, just that they would be new to such an audience), even if the pro-X-risk arguments presented by Tegmark and Bengio were strong.

> It maybe better first to force person to undergo a course with psychologist and psychiatrist and only after that allow a suicide..

Something along these lines seems essential. It may be better to talk of the right to informed suicide. Arguably being informed is what makes it truly voluntary.

Forcing X to live will be morally better than forcing Y to die in circumstances where X's desire for suicide is ill-informed (let's assume Y's desire to live is not). X's life could in fact be worth living - perhaps because of a potential for future happiness, rather than current experiences - but X might be unable to recognise that fact.

It may be that a significant portion of humanity has been psychologically 'forced' to live by an instinct of self-preservation, since if they stopped to reflect they would not immediately find their lives to be worth living. That would be a very good thing if their lives were in fact worth living in ways that they could not intellectually recognise.