The prediction that the sun and stars we perceive will go out is absurd only if you exclude the possibility that you are dreaming. In what we label as dreams, we frequently perceive things that quickly pop out of existence.
I'm confused: as a buyer, if I believed the seller could predict with probability .75, I would flip a fair coin to decide which box to take, meaning the seller couldn't actually predict with probability .75. If I can't randomize to pick a box, I'm not sure how to fit what you are doing into standard game theory (which I teach).
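To illustrate the point, here is a minimal simulation sketch (hypothetical, with made-up strategy functions, not anything from the original discussion): if the buyer chooses each box with a fair coin flip, then no prediction strategy the seller adopts can be right more than half the time in expectation.

```python
import random

def predictor_accuracy(predict, trials=100_000, seed=0):
    """Fraction of trials where the seller's prediction matches
    a buyer who picks each box with an independent fair coin flip."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        choice = rng.choice(["A", "B"])  # buyer's fair coin flip
        hits += (predict() == choice)
    return hits / trials

# Whatever the seller does -- always guess box A, or guess at random --
# accuracy converges to 0.5, not 0.75.
print(predictor_accuracy(lambda: "A"))
print(predictor_accuracy(lambda: random.choice(["A", "B"])))
```

Both printed values hover near 0.5, since the buyer's coin flip is independent of anything the seller can condition on.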
"Over the past few years, some people have updated toward pretty short AGI timelines. If your timelines are really short, then maybe you shouldn't sign up for cryonics, because the singularity – good or bad – is overwhelmingly likely to happen before you biologically die"
But such a scenario means there is less value in saving for retirement, which should make it financially easier for you to sign up for cryonics. Also, the sooner we get friendly AGI, the sooner people preserved via cryonics will be revived, meaning there is a lower risk that your cryonics provider will fail before you can be revived.
Strongly agree. I would be happy to help. Here are three academic AI alignment articles I have co-authored. https://arxiv.org/abs/2010.02911 https://arxiv.org/abs/1906.10536 https://arxiv.org/abs/2003.00812
While not captured by the outside view, I think the massive recent progress in machine learning should give us much hope of achieving LEV in 30 years.
Yes: the more people infected with the virus, and the longer the virus persists in people, the more time there is for a successful mutation to arise.
I did a series of podcasts on COVID with Greg Cochran, and Greg was right early on. He has said from the beginning that the risk of a harmful mutation is reasonably high: because the virus is new, there are likely many potential beneficial mutations (from the virus's viewpoint) that have not yet been found.
From an AI safety viewpoint, this might greatly increase AI funding and drive talent into the field, and so advance the date at which we get a general artificial superintelligence.
Yes: given a high concentration of observers, and if high-tech civilizations have strong incentives to grab galactic resources as quickly as they can (thus preventing the emergence of other high-tech civilizations), then most civilizations such as ours will exist in universes with some kind of late great filter that knocks down civilizations before they can become spacefaring.
Thanks, that's a very clear explanation.