Cookiecarver

Comments

I don't really want to argue about this, but "those seem much smaller than the ones you can get out of meditation" is a subjective claim that people of different temperaments would dispute, at least in the absence of objective facts about what happens to your consciousness after death (such as going to Heaven if you are a Christian, or escaping the cycle of rebirth if you have achieved enlightenment). Anyway, I believe there's nothing after death, so do what makes you happy, I suppose.

If that's the reasoning, why pick Buddhism and meditation? It's much easier to find religious communities in the West as a Christian, praying also benefits mental wellbeing, and Christians, like Buddhists, are measurably happier than nonreligious people. I think it's possible to be a secular Christian who doesn't believe in the supernatural, to go through the motions of a Christian life without fully believing in it, and still reap at least some of its benefits.

I have probably spent over a thousand hours practicing mindfulness meditation, and I was fairly successful at achieving what I wanted from it. I have also read many Buddhist books.

However, I think the basis for Buddhism crumbles if you don't believe in rebirth, karma, samsara, narakas, the Buddha's omniscience, and the other metaphysical claims made by religious Buddhists. Since becoming a physicalist, I no longer believe those claims, so I no longer meditate.

If I simply disappear after death, I don't see the point in attaining meditative bliss, especially if it leads me to see the world in a less truthful way. Buddhism as I understand it is centered on the suffering of samsara, not just the occasional suffering of this one life.

Compare this to prayer, which also feels very good, but which likewise only makes sense in the context of theistic beliefs.

I'm wondering what Nick Bostrom's p(doom) currently is, given the subject of this book. Nine years ago, in his lecture on his book Superintelligence, he said there was "less than 50% risk of doom". In an interview four months ago, he said it's good that the risks have received more attention recently, though still slightly less than would be optimal, but that he wants to focus on the upsides because he fears we might "overshoot" and not build AGI at all, which would be tragic in his opinion. So it seems he thinks the risk is lower than it used to be because of this public awareness of the risks.

Replying to David Hornbein.

Thank you for this comment; this was basically my view as well. I think the employees of OpenAI are simply excited about AGI, have committed their lives to making it a reality, working long hours, and believe AGI would be good for humanity and also good for them personally. My view is that they are very emotionally invested in building AGI, and that stopping all that progress for reasons that feel speculative, theoretical, and intangible would be painful.

Not that I would agree with that, assuming this is correct.

Overall I agree with this. I give most of my money to global health organizations, but I also give some to AGI safety, because I think it makes sense under a variety of worldviews. I gave some of my thoughts on the subject in this comment on the Effective Altruism Forum. To summarize: if there is a continuation of consciousness after death, then AGI killing lots of people is not as bad as it would otherwise be, and there may be unknown aspects of the relationship between consciousness and the physical universe that could affect the odds.

Do the concepts behind AGI safety only make sense if you share roughly the same worldview as the top AGI safety researchers: secular atheism, reductive materialism/physicalism, and a computational theory of mind?

Do you have to hold roughly the same kind of worldview as the top AI alignment researchers? Do you have to be a secular atheist and a reductive naturalist/physicalist holding a computational theory of mind?

Does anyone know how close we are to AI that can handle tasks requiring operation in the physical world that are very easy for human beings, like loading a dishwasher or making an omelette? It seems to me that we are still quite far away.

I don't think those are serious obstacles, but I will delete this message if anyone complains.

How do doomy AGI safety researchers and enthusiasts find joy while maintaining the framing that the world is probably doomed?
