Looking carefully at how other people speak and write, there are certain cognitive concepts that simply don't make sense to me, the way it "makes sense" to me that red is "hot" and blue is "cold". I wonder if the reason I can't understand them is that my consciousness is somehow "simpler", or less "layered", although I don't really know what I mean by that; it's just a vibe I get. Here are some examples:
I feel a bit confused. I generally sort of just feel (and I don't know what exactly I mean by this) that there is less "structure" separating the "me" part of the Program The Computer-That-Is-My-Brain Is Running from the "not-me" part.
Does any of this make sense?
Frustrated is usually when you keep trying to do something, it's not working, you're annoyed about that, and you want to give up.
Annoyed is more general, like "this thing peeves me, I want this to stop gah"
- I don't understand the concept of "internal monologue".
I have a hypothesis about this. Most people, most of the time, are automatically preparing to describe, just in case someone asks. You ask them what they're imagining, doing, or sensing, and they can just tell you. The description was ready to go before you asked the question. Sometimes, these prepared descriptions get rehearsed; people imagine saying things out loud. That's internal monologue.
There are some people who do not automatically prepare to describe, and hence have less internal monologue, or none. Those people end up having difficulty describing things. They might even get annoyed (frustrated?) if you ask them too many questions, because answering can be hard.
(I wonder how one might test whether or not a person automatically prepares to describe. The ability to describe things quickly is probably measurable, and one could compare that to self-reports about internal monologue. If there were no correlation, that'd be evidence against this hypothesis.)
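For concreteness, here is a minimal sketch of what that comparison might look like. Everything in it is made up for illustration: the "description latency" measure, the hypothetical 1-5 self-report scale, and the data are assumptions, not anything from the comment above.

```python
# A minimal sketch of the proposed test, with made-up data: for each person,
# a measured "description latency" (seconds to start describing what they were
# just doing) and a self-reported inner-monologue frequency on a hypothetical
# 1-5 scale. A negative correlation (faster describers report more monologue)
# would support the hypothesis; a near-zero one would count against it.
from scipy.stats import spearmanr

description_latency_s = [0.8, 1.2, 3.5, 0.9, 2.8, 4.1, 1.0, 2.2]  # hypothetical
monologue_self_report = [5, 4, 2, 5, 2, 1, 4, 3]                  # hypothetical

rho, p_value = spearmanr(description_latency_s, monologue_self_report)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```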
This tracks for me. I was explicitly taught, as a small child, to be ready to explain what I was doing at all times. Failure to have a ready and satisfactory answer to "what are you doing?" was treated as strong evidence that I was idle (or up to no good!) and should be redirected to do something explicable instead.
(And today, if a friend asks me "how are you?" as a sincere question rather than a casual politeness, it sometimes locks up my cognition for a few seconds as I scramble to introspect enough to come up with a good answer...)
Have you read Generalizing From One Example and Typical Mind Fallacy stuff? (That won't directly answer all your questions, but the short answer is that people just vary a lot in what their internal cognitive processes are like.)
"choose to believe" is, in my experience, about situations where what is true depends on what you believe: your beliefs affect your motivation to act, your actions affect what is true, and what is true in turn affects what you can believe. "I choose to believe that humanity is good" is a common one, which, if a lot of people choose it, will in fact be more true. "I choose to believe I can handle this problem" is another, where again the truth in question is (partially) downstream of your beliefs. see also https://www.lesswrong.com/posts/8dbimB7EJXuYxmteW/fixdt
Even in situations where my beliefs affect my actions, those beliefs are not choices. If I notice that holding a certain belief would make me act in a way that gives me more utility, then that observation instead becomes my motivation to act as if I have that belief.
"act as if you hold a belief" and "hold a belief for justified reasons" aren't the same thing, the latter seems to me to produce higher quality actions if the belief is true. eg:
in the first one, your innate trust-falls on they-care-about-you will be less reliable, your caring for them will give off little hints wherever you aren't a good enough actor, etc. if neither of you picks up on it, the first can emulate the second, and thereby produce a world where the second becomes a past-justified true belief. but if you're able to instead reliably make the second a *future*-justified true belief, then you can avoid the first collapsing into the third. (I'm eliding some details about what the uncertainty between these beliefs looks like, and what it looks like if you're both uncertain about which of the three statements is true, which makes it a lot more complicated if you're particularly uncertain.)
if you have multiple conflicting beliefs you can hold for justified reasons (because what is true comes after the picking between the beliefs), then there can be situations where it's objectively the case that one can choose the beliefs first, and thereby choose actions. maybe your thoughts aren't organized this way! but this is what it seems to me to mean when someone who is being careful to only believe things in proportion to how likely they are (aka "rational") still gets to say the phrase "I choose to believe". I also think that people who say "I choose to believe" in situations where it's objectively irrational (because their beliefs can't affect what is true) are doing so based on incorrectly expecting their beliefs to affect reality, eg "I choose to believe in a god who created the universe" cannot affect whether that's true but is often taken to. ("I choose to believe in a god that emerges from our shared participation in a religion" is entirely rational, that's just an egregore/memeplex/character fandom).
Because increasing the chance of living in a posthuman eutopia until at least the end of the stelliferous era still has extremely high expected value, would be my best guess.
> until at least the end of the stelliferous era
How could this possibly be relevant? Stars are very likely extremely wasteful anyway and worth disassembling, and by 100T years all reachable galaxy clusters will be colonized, with enough time to disassemble all stars in the reachable universe much earlier than that.
I was being conservative (hence the "at least" :-), and agree that we'd want to disassemble stars. But maybe reachable technology can't become advanced enough to allow that kind of stellar engineering, so we're stuck living in space habitats orbiting suns. I think the median scenario for an advanced civilization extends far longer than the stelliferous era.
(My point is more that there will probably be no natural "end of the stelliferous era" at all, because the last stars that are not part of some nature reserve will be disassembled much earlier than that, and the nature-reserve stars could well be maintained for much longer.)
From a straight expected-value standpoint, it can still be an easy call: cryonics has a nonzero chance of working, p(doom) is less than 100%, and cryonics doesn't cost much (for his class of worker).
A low-cost, low-chance wager with an insanely high payoff (more life) is an easy lottery ticket to justify.
It probably does indicate that he puts an extremely low weight on Basilisk-like futures, where he gets resurrected just to be tortured forever. I agree that this is orders of magnitude less probable than other cryonics-works scenarios.
It's sort of a riff on Pascal's wager. If he's right about p(doom), not freezing himself means he loses nothing, and freezing himself also means he loses nothing. If he's wrong, not freezing himself means he loses everything, while freezing himself means he gains indefinite life extension. The only real cost he faces is the money he could have given to someone else upon death. Seems like a personal value decision.
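Sketching that wager numerically may make the "easy lottery ticket" point concrete. All of the probabilities, payoffs, and costs below are illustrative assumptions, not figures from the thread:

```python
# A rough sketch of the expected-value argument in the comments above, with
# made-up numbers: p_doom, p_cryonics_works, the value placed on revival, and
# the cost are all illustrative assumptions.
p_doom = 0.95              # chance everyone dies and cryonics is moot
p_cryonics_works = 0.05    # chance preservation + revival ever works, given no doom
value_of_revival = 1e6     # payoff of indefinite life extension, arbitrary units
cost_of_cryonics = 1.0     # small relative to the payoff, same arbitrary units

ev_freeze = (1 - p_doom) * p_cryonics_works * value_of_revival - cost_of_cryonics
ev_no_freeze = 0.0         # the baseline: no cost, no chance of revival

print(f"EV(freeze)    = {ev_freeze:.1f}")   # 2499.0 with these numbers
print(f"EV(no freeze) = {ev_no_freeze:.1f}")
# Even with a very high p_doom, the nonzero chance of an enormous payoff
# dominates the small cost, which is the "easy lottery ticket" point above.
```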
Quick thought: If you have an aligned AI in a multipolar scenario, other AIs might threaten to cause S-risk in order to get said FAI to do stuff, or as blackmail. Therefore, we should make the FAI think of X-risk and S-risk as equally bad (even though S-risk is in reality terrifyingly worse), because that way other powerful AIs will simply use oblivion as a threat instead of astronomical suffering (as oblivion is much easier to bring about).
It is possible that an FAI would be able to do some sort of weird crazy acausal decision-theory trick to make itself act as if it doesn't care about anything done in efforts to blackmail it, or something like that. But the proposal above is just to make sure.
This has the obvious problem that the AI will then be indifferent between astronomical suffering and oblivion. In ANY situation where it needs to choose between those two, not just blackmail situations, it will not care on the merits which one occurs.
You don't want your AI to prefer a 99.999% chance of astronomical suffering to a 99.9999% chance of oblivion. Astronomical suffering is much worse.
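A toy calculation of why the "equally bad" patch produces exactly that preference. The single utility value here is a made-up assumption used only to illustrate the comparison:

```python
# Toy illustration of the objection: if the AI's utility function scores
# astronomical suffering and oblivion as equally bad, it simply picks the
# option with the lower probability of a bad outcome -- here, near-certain
# suffering over a marginally-more-likely oblivion. The numbers and the
# utility value are illustrative assumptions.
U_BAD = -1.0  # proposed design: U(astronomical suffering) == U(oblivion)

ev_suffering_option = 0.99999  * U_BAD   # 99.999% chance of astronomical suffering
ev_oblivion_option  = 0.999999 * U_BAD   # 99.9999% chance of oblivion

print(ev_suffering_option > ev_oblivion_option)  # True: it prefers the suffering gamble
```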
Is there a non-mysterious explanation somewhere of the idea that the universe "conserves information"?