Meetup : Berkeley: Hypothetical Apostasy

by Nisan · 1 min read · 12th Jun 2013 · 2 comments


Personal Blog

Discussion article for the meetup : Berkeley: Hypothetical Apostasy

WHEN: 12 June 2013 07:30:00PM (-0700)

WHERE: Berkeley, CA

Dear all, there will be a meetup at Zendo tonight. I'd like to try an exercise called Hypothetical Apostasy that was devised by Nick Bostrom:

Imagine, if you will, that the world's destruction is at stake and the only way to save it is for you to write a one-pager that convinces a jury that your old cherished view is mistaken or at least seriously incomplete. The more inadequate the jury thinks your old cherished view is, the greater the chances that the world is saved. The catch is that the jury consists of earlier stages of yourself (such as yourself as you were one year ago). Moreover, the jury believes that you have been bribed to write your apostasy; so any assurances of the form "trust me, I am older and know better" will be ineffective. Your only hope of saving the world is by writing an apostasy that will make the jury recognize how flawed / partial / shallow / juvenile / crude / irresponsible / incomplete and generally inadequate your old cherished view is.

The meetup will begin on Wednesday at 7:30pm. For directions to Zendo, see the mailing list:

or call me at:



I like this exercise. It is useful in at least two ways.

  1. It helps me take a critical look at my current cherished views. Here's one: work hard now and save for retirement. It is still cherished, but I already know of several lines of attack that might work if I think them through.
  2. It helps me take time to figure out how I'd hack myself.

It might also be interesting to come up with a cherished group view and try to take that apart (e.g., cryonics after death is a good idea; perhaps start with the possibility that the future is likely to be hostile to you, such as under an unfriendly AI).

That's really interesting to me.