I'd like to complain that the original post popularizing really bright lights was mine, in 2013: "My simple hack for increased alertness and improved cognitive functioning: very bright light" on LessWrong. This was immediately adopted at MIRI and (I think obviously) led to the Lumenator described by Eliezer three years later.
I suspect it is the creation of memories. You don't experience time when you're not creating memories. Memories are some kind of very subtle object that persists from one moment to (at least) the next, so they leave a very subtle trace in causality. And the input that goes into them is correlated in time, because it is (some small selection from) the perceptions and representations you had simultaneously when you formed the memory.
I even believe you experience the present moment particularly intensely when you're creating a long-term memory. I use this to consciously choose to create long-term memories, and it subjectively seems to work.
That's exactly right. It would be much better to know a simple method for distinguishing overconfidence from actually being right, without a lot of work. In the absence of one, maybe tables like this can help people opt for more epistemic humility.
Well, of course there are no true non-relatives; even sabertooths and antelopes are distant cousins. The question is how much you're willing to give up for how distant a cousin. Here I think the mechanism I describe changes the calculus.
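To spell out the standard version of that calculus (this formalization is mine, not something the comment above stated): by Hamilton's rule, helping a relative is favored by selection when

\[ r B > C, \]

where \(r\) is the coefficient of relatedness (\(1/2\) for full siblings, \(1/8\) for first cousins, effectively zero for the antelope), \(B\) is the fitness benefit to the recipient, and \(C\) is the fitness cost to the helper. As \(r\) shrinks with genealogical distance, the benefit required to justify a given sacrifice grows accordingly.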
I don't think we know enough about the lifestyles of cultures/tribes in the ancestral environment, except that we can be pretty sure they were extremely diverse. And all cultures we've ever found have some kind of incest taboo that promotes mating between members of different groups.
I am utterly in awe. This kind of content is why I keep coming back to LessWrong. Going to spend a couple of days or weeks digesting this...
Welcome. You're making good points. I intend to make versions of this geared to various audiences but haven't gotten around to it.
A big bounty creates perverse incentives: one guy builds a dangerous AI in a jurisdiction where that isn't yet a crime, and his friend reports him so they can split the bounty.