If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Does anyone have good or bad impressions of Calico Labs, Human Longevity, or other high-tech anti-aging companies? Are they good places to work? Are they making progress?
I just came up with a funny argument for thirdism in the Sleeping Beauty problem.
Let's say I'm Sleeping Beauty, right? The experimenter flips a coin, wakes me up once in case of heads or twice in case of tails, then tells me the truth about the coin, and I go home.
What do I do when I get home? In case of tails, nothing. But in case of heads, I put on some scented candles, record a short message to myself on the answering machine, inject myself with an amnesia drug from my own supply, and go to sleep.
...The next morning, I wake up not knowing whether I'm sti...
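The thirder count behind this argument can be sketched with a quick simulation (a toy model, assuming a fair coin and counting each awakening as one observation; the function name and trial count are mine):

```python
import random

def heads_fraction_of_awakenings(trials=100_000, seed=0):
    """Simulate the standard Sleeping Beauty setup:
    heads -> one awakening, tails -> two awakenings.
    Return the fraction of all awakenings that follow heads."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: Beauty is woken once
            heads_awakenings += 1
            total_awakenings += 1
        else:                    # tails: Beauty is woken twice
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(heads_fraction_of_awakenings())  # close to 1/3
```

From the inside, each awakening looks the same, and only about a third of them follow heads, which is the thirder's point.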
For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?
Up LW's alley: A Pari-mutuel like Mechanism for Information Aggregation: A Field Test Inside Intel
Abstract: ...
Update on the Instrumental Rationality sequence: about 40% done with a Habits 101 post. It turns out habits are a denser topic than planning, with more intricacies. Plus, the techniques for creating and breaking habits are less well-defined and not as strong, so I'm still trying to "technique-ify" some of the more conceptual pieces.
From Gwern's newsletter: did you know that algorithms can already obtain legal personhood?
Not scary at all.
Eliezer Yudkowsky wrote an article about the two things rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that a sufficiently large ordinal is well-ordered. Are there any ways yet to justify belief in either of these that do not require faith?
How do I contact a mod or site administrator on LessWrong?
Can anyone point me to any good arguments for, or at least redeeming qualities of, Integrated Information Theory?
Is there something that lets you search all the rationality/EA blogs at once? I could have sworn I've seen something - maybe a web app made by chaining a bunch of terms together in Google - but I can't remember where or how to find it.
never mind this was stupid