If that's the reasoning, why pick Buddhism and meditation, when it's so much easier in the West to find religious communities as a Christian? Praying also has benefits for mental wellbeing, and Christians, like Buddhists, are measurably happier than nonreligious people. I think it's possible to be a secular Christian who doesn't believe in the supernatural, go through the motions of a Christian life without fully believing in it, and still reap at least some of its benefits.
I have probably spent over a thousand hours practicing mindfulness meditation, and was fairly successful at achieving what I wanted from it. I have also read a lot of Buddhist books.
However, I think the basis for Buddhism crumbles if you don't believe in rebirth, karma, samsara, narakas, the Buddha's omniscience, and the other metaphysical claims made by religious Buddhists. I've become a physicalist, so I no longer believe those claims, and I no longer meditate.
If after your death you just disappear, I don't see any point in attainin...
I'm wondering what Nick Bostrom's p(doom) currently is, given the subject of this book. Nine years ago, in his lecture on his book Superintelligence, he said there was a "less than 50% risk of doom". In an interview 4 months ago he said it's good that there has been more focus on risks recently, though there's still slightly less focus on them than is optimal; but he wants to focus on the upsides, because he fears we might "overshoot" and not build AGI at all, which would be tragic in his opinion. So it seems he thinks the risk is less than it used to be beca...
Replying to David Hornbein.
Thank you for this comment; this was basically my view as well. I think the employees of OpenAI are simply excited about AGI, have committed their lives, working long hours, to making it a reality, and believe AGI would be good for humanity and also good for them personally. My view is that they are very emotionally invested in building AGI, and stopping all that progress for reasons that feel speculative, theoretical, and not very tangible feels painful.
Not that I would agree with that, assuming this is correct.
Overall I agree with this. I give most of my money to global health organizations, but I also give some to AGI safety, because I think it makes sense under a variety of worldviews. I shared some of my thoughts on the subject in this comment on the Effective Altruism Forum. To summarize: if there is a continuation of consciousness after death, then AGI killing lots of people is not as bad as it would otherwise be, and there might be unknown aspects of the relationship between consciousness and the physical universe that affect the odds.
Do the concepts behind AGI safety only make sense if you have roughly the same worldview as the top AGI safety researchers - secular atheism and reductive materialism/physicalism and a computational theory of mind?
Do you have to have roughly the same kind of worldview as the top AI alignment researchers? Do you have to be a secular atheist and reductive naturalist/physicalist holding a computational theory of mind?
Does anyone know how close we are to tasks that require operating in the physical world but are very easy for human beings, like loading a dishwasher or making an omelette? It seems to me that we are quite far away.
I don't think those are serious obstacles, but I will delete this message if anyone complains.
How do doomy AGI safety researchers and enthusiasts find joy while always maintaining the framing that the world is probably doomed?
Does anyone know what exactly DeepMind's CEO Demis Hassabis thinks about AGI safety? How seriously does he take it, and how much time does he spend on AGI safety research compared to AI capabilities research? What does he think is the probability that we will succeed and build a flourishing future?
In this LessWrong post there are several excerpts from Demis Hassabis:
...Well to be honest with you I do think that is a very plausible end state–the optimistic one I painted you. And of course that's one reason I work on AI is because I hoped it
This is a meta-level question:
The world is very big and very complex, especially if you take the future into account. It has historically been hard to predict the future; I think most predictions about it have failed. Artificial intelligence as a field is very big and complex, at least as it appears to me personally. Eliezer Yudkowsky's brain is small compared to the size of the world; all the relevant facts about AGI x-risk probably don't fit into his mind, nor do I think he has the time to absorb them all. Given all this, how can you justify the level of certainty in Yudkowsky's statements, instead of being more agnostic?
I don't really want to argue about this, but "those seem much smaller than the ones you can get out of meditation" is a subjective statement that people with different temperaments would disagree with, as long as there are no objective facts about, say, what happens to your consciousness after death (like going to Heaven if you are a Christian, or ending the rebirth that would otherwise have continued if you've achieved enlightenment). Anyway, I believe there's nothing after death, so do what makes you happy, I suppose.