arisAlexis


Comments

"I suspect that AGI is decades away at minimum". But can you talk more about this? I mean if I say something against the general scientific consensus which is a bit blurry right now but certainly most of the signatories of the latest statements do not think it's that far away, I would need to think myself to be at least at the level of Bengio, Hinton or at least Andrew Ng. How can someone that is not remotely as accomplished as all the labs producing the AI we talk about can speculate contrary to their consensus? I am really curious. 

Another example: I like geopolitics, and I might think that the USA is making such-and-such a mistake in Ukraine. The truth is that there are many think tanks with insider knowledge and a lifetime of training that concluded this is the best course of action, so I would express my opinion only in very low-probability terms, and certainly not as something to act on, because the consequences can be very grave.

After Hinton's and Bengio's articles, which I consider a moment in history, I struggle to understand how most people in tech dismiss them. If Einstein had written an article about the dangers of nuclear weapons in 1939, you wouldn't have had people without a physics background saying "nah, I don't understand how such a powerful explosion could happen." Hacker News is supposed to be *the* place for developers, startups and such, and you can see comments there that make me despair. They range from "alarmism is boring" to "I have programmed MySQL databases and I know tech, and this can't happen." I wonder how much I should update my view on the intelligence and biases of humans right now.

I think the Stoics (Seneca's letters, Meditations) talk a lot about how to live in the moment while awaiting probable death. The classic psychology book The Denial of Death would also be relevant. I guess The Myth of Sisyphus would be relevant too, but I haven't read it yet. The Metamorphosis of Prime Intellect is also a very interesting book about mortality being preferable to immortality, and so on.

I think there is an important paragraph missing from this post about books related to Stoicism, existential philosophy, etc.

But sometimes something happens in the world and your "best man always fun forever" friends can't seem to understand reality. They think it happened because God wanted it this way, or because there is a worldwide conspiracy of Jews. Then you feel really alone.

The Metamorphosis of Prime Intellect is an excellent book.

What is the duration of P(doom)? 

What do people mean by that metric? Is it x-risk for the century? Forever? For the next 10 years? Until we figure out AGI, or after AGI on the road to superintelligence?

To me these are fundamentally different, because P(doom) forever must be much higher than P(doom) over the next 10-20 years. Or is it implied that surviving the next period means we have figured out alignment eternally, for all next-generation AIs? It's confusing.
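To make the point concrete, here is a minimal sketch (my own illustration, with made-up numbers, assuming a constant and independent per-decade risk) of why the horizon changes the number so much:

```python
def cumulative_doom(per_decade_risk: float, decades: int) -> float:
    """Probability of doom occurring at least once over `decades` decades,
    assuming a constant, independent per-decade risk (a toy assumption)."""
    return 1 - (1 - per_decade_risk) ** decades

# Hypothetical numbers, for illustration only:
print(cumulative_doom(0.10, 1))   # next 10 years:  0.10
print(cumulative_doom(0.10, 2))   # next 20 years:  ~0.19
print(cumulative_doom(0.10, 10))  # next century:   ~0.65
```

Under these toy assumptions, a 10% per-decade risk becomes roughly 65% over a century, so a "forever" P(doom) and a "next 10 years" P(doom) are very different claims.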

Thank you. This is the kind of post I wanted to write when I posted "the burden of knowing" a few days ago, but I was not thinking rationally at that moment.