Rationalist Reading Group (Online)
Budapest Meetup on Margit Sziget
Ann Arbor SSC Online Meetup
Saturday, July 25th 2020
Become a person who Actually Does Things
Construct a portfolio to profit from AI progress.
Constraints from naturalized ethics.
A Scalable Urban Design and the Single Building City
Taking the first step
Using books to prime behavior
What it means to optimise
A Natural Explanation of Nash Equilibria
How to learn from conversations
Photos Before Drywall
Sporting vs. Herding: two styles of discourse
Access to AI: a human right?
Toby Ord just released a collection of quotations on existential risk and the future of humanity [https://theprecipice.com/quotations] — everyone from Kepler to Winston Churchill (in fact, a surprisingly large number are from Churchill) to Seneca to Mill to Nick Bostrom. It's one of the most inspirational things I have ever read, and taken together the quotations make it clear that there have always been people who cared about longtermism or humanity as a whole. One of my favourites: I'm imagining Kepler reaching out across four hundred years, to a world he could barely imagine, and to those 'brave sky-travellers' that he helped prepare the way for. [https://youtu.be/9LzDdfcNsXo]
Philosophical zombies are creatures that are exactly like us, down to the atomic level, except they aren't conscious. Complete philosophical zombies go further. They too are exactly like us, down to the atomic level, and aren't conscious. But they are also purple spheres (except we see them as if they weren't), they want to maximize paperclips (although they act and think as if they didn't), and they are very intelligent (except they act and think as if they weren't). I'm just saying this because I find it funny ^^. I think consciousness is harder (for us) to reduce than shapes, preferences, and intelligence.
Cold Hands Fallacy / Fake Momentum / Null-Affective Death Stall

Although Hot Hands [https://en.wikipedia.org/wiki/Hot_hand] has been the subject of enough [https://www.scientificamerican.com/article/momentum-isnt-magic-vindicating-the-hot-hand-with-the-mathematics-of-streaks/] controversy [https://statmodeling.stat.columbia.edu/2015/07/09/hey-guess-what-there-really-is-a-hot-hand/] to perhaps no longer be termed a fallacy, there is a sense in which I've fooled myself before with fake momentum. I mean changing your strategy based on a faulty bottom line [https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line]: incorrectly updating on your current dynamic.

As a somewhat extreme but real example from my own life: when filling out answer sheets for multiple-choice questions (with negative marks for incorrect responses) as a kid, I'd sometimes get excited about having marked almost all of the questions near the end, and then completely, obviously, irrationally decide to mark them all. This came from some completion urge, and from the positive affect of having filled in most of them. It involved a fair bit of self-deception to carry out, since I was aware at some level that I had left those questions unanswered because I was in fact unsure, and to mark them I had to feel sure.

Now, you could certainly make the case that there are times when you're thinking more clearly, or when you know the subject, where you can correctly infer this about yourself and then rationally ramp up your confidence (even if only slightly). But this wasn't one of those cases [https://www.lesswrong.com/posts/H59YqogX94z5jb8xx/inductive-bias]; it was the simple fact that I felt great about myself.

Anyway, the real point of this post is that there's a flipside (or straightforward generalization) of this: we can talk about fake inertia for subjects at rest as well as in motion. What I mean is that there's a similar tendency to not feel like doing som
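As an aside, whether marking an unsure answer is rational under negative marking is just an expected-value question. Here's a minimal sketch; the `guess_ev` helper and the +1 / −0.25 scoring scheme are illustrative assumptions (a common format for 5-option tests), not details from the original post:

```python
def guess_ev(p_correct: float, reward: float = 1.0, penalty: float = 0.25) -> float:
    """Expected score from marking an answer you believe is correct
    with probability p_correct, under negative marking.
    Leaving the question blank scores 0, so marking only pays when this is > 0."""
    return p_correct * reward - (1 - p_correct) * penalty

# Break-even confidence is penalty / (reward + penalty) = 0.2,
# i.e. exactly chance level on a 5-option question.
print(guess_ev(0.2))  # 0.0 at break-even
print(guess_ev(0.5))  # 0.375: worth marking if you're genuinely 50% sure
```

The point is that the decision should turn on your actual credence in the answer, not on the affect of having filled in most of the sheet.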
Moral uncertainty is a thing that people think about. Do people also think about decision-theoretic uncertainty? E.g., how do you decide when you're uncertain about which decision theory is correct?