
KvmanThinking's Shortform

by KvmanThinking
6th Mar 2025

This is a special post for quick takes by KvmanThinking. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
22 comments, sorted by top scoring
[-]KvmanThinking2mo70

Looking carefully at how other people speak and write, I notice certain cognitive concepts that simply don't make sense to me, the way it "makes sense" to me that red is "hot" and blue is "cold". I wonder if the reason I can't understand them is that my consciousness is somehow "simpler", or less "layered", although I don't really know what I mean by that; it's just a vibe I get. Here are some examples:

  • I don't understand the concept of "internal monologue".
  • I don't understand the difference between "imagine", "picture", "think of", "think about", "consider", etc.
  • I don't understand the concept of "trust your gut/heart.", in a decision. If I am presented with 2 decisions, I do not percieve one of them as labeled as "gut/heart says 'do this' ". Sometimes I percieve one of the options as "jumps to mind instinctively as the Thing People Do In This Situation" or "the kind of thing a book protagonist would do" instead, but I don't know if either of those things are what people mean by trusting your gut/heart. Maybe they just mean to think about it.
  • I don't understand the concept of "choose to believe X". I believe stuff because I have a certain sensory experience. I cannot consciously change the probabilities my brain assigns to things. Maybe people just mean "act as if you believe X" but just have belief-in-belief type confusions.
  • I have never received a satisfactory explanation of the difference between "annoyed" and "frustrated", even though people do tend to insist there is a difference and they're not just synonyms.
  • I experience unusually strong placebo and nocebo effects (at least compared to others around me).

I feel a bit confused. I generally sort of just feel (and I don't know what exactly I mean by this) that there is less "structure" separating the "me" part of the Program The Computer-That-Is-My-Brain Is Running from the "not-me" part.

Does any of this make sense?

Reply
[-]Cole Wyeth2mo54

Frustrated is usually when you keep trying to do something, it's not working, and you are annoyed about that, and you want to give up.

Annoyed is more general, like "this thing peeves me, I want this to stop gah"

Reply
[-]KvmanThinking2mo10

So "frusturated" is what we call "annoyed" when it comes from the source of "repeatedly failing to do something"?

Reply
[-]Cole Wyeth2mo20

Yeah pretty much. Frustration is maybe also a stronger valence.

Reply
[-]S. Alex Bradt2mo30
  • I don't understand the concept of "internal monologue".

I have a hypothesis about this. Most people, most of the time, are automatically preparing to describe, just in case someone asks. You ask them what they're imagining, doing, or sensing, and they can just tell you. The description was ready to go before you asked the question. Sometimes, these prepared descriptions get rehearsed; people imagine saying things out loud. That's internal monologue.

There are some people who do not automatically prepare to describe, and hence have less internal monologue, or none. Those people end up having difficulty describing things. They might even get annoyed (frustrated?) if you ask them too many questions, because answering can be hard.

(I wonder how one might test whether or not a person automatically prepares to describe. The ability to describe things quickly is probably measurable, and one could compare that to self-reports about internal monologue. If there were no correlation, that'd be evidence against this hypothesis.)

Reply
[-]Karl Krueger2mo20

This tracks for me. I was explicitly taught, as a small child, to be ready to explain what I was doing at all times. Failure to have a ready and satisfactory answer to "what are you doing?" was treated as strong evidence that I was idle (or up to no good!) and should be redirected to do something explicable instead.

(And today, if a friend asks me "how are you?" as a sincere question rather than a casual politeness, it sometimes locks up my cognition for a few seconds as I scramble to introspect enough to come up with a good answer...)

Reply
[-]Raemon2mo30

Have you read Generalizing From One Example and Typical Mind Fallacy stuff? (that won't directly answer all your questions but the short answer is people just vary a lot in what their internal cognitive processes are like)

Reply
[-]the gears to ascension2mo2-4

"Choose to believe" is, in my experience, about situations where what is true depends on what you believe: your actions affect what is true, so your beliefs affect your motivation to act, which in turn affects what you can believe. "I choose to believe that humanity is good" is a common one, which, if a lot of people choose it, will in fact be more true. "I choose to believe I can handle this problem" is another, where again the truth in question is (partially) downstream of your beliefs. see also https://www.lesswrong.com/posts/8dbimB7EJXuYxmteW/fixdt
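
A toy sketch of that kind of self-fulfilling belief (invented numbers, and not the formalism from the linked fixdt post):

```python
# Toy model: whether "they care about me" ends up true depends on whether
# I act as though I believe it. (Numbers are invented for illustration.)

def p_they_care(i_believe_they_care: bool) -> float:
    # Believing it makes me act warmly, which makes it likely to become true;
    # not believing it makes me act coldly, which makes it likely to stay false.
    return 0.9 if i_believe_they_care else 0.1

# Both beliefs end up roughly self-consistent "fixed points":
for belief in (True, False):
    print(belief, p_they_care(belief))

# Since either belief would end up roughly justified, you get to pick the
# fixed point with the better outcome, which is one thing "I choose to
# believe" can mean.
```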

Reply
[-]KvmanThinking2mo10

Even in situations where my beliefs affect my actions, those beliefs are not choices. If I notice that holding a certain belief would make me act in a way that gives me more utility, then that observation becomes my motivation to act as if I have that belief.

Reply
[-]the gears to ascension2mo40

"act as if you hold a belief" and "hold a belief for justified reasons" aren't the same thing, the latter seems to me to produce higher quality actions if the belief is true. eg:

  • believing [someone cares about you if-and-only-if you care about them, AND you care about them if-and-only-if they care about you, AND they don't care about you now, AND you don't care about them, AND (you will act as if they care about you now => you will act as if you care about them) ]
  • vs believing [someone cares about you if-and-only-if you care about them, AND you care about them if-and-only-if they care about you, AND (you care about them => they care about you)]
  • vs believing [someone cares about you if-and-only-if you care about them, AND you care about them if-and-only-if they care about you, AND you don't care about each other ever]

in the first one your innate trust in they-care-about-you will be less reliable, your caring for them will give off little hints anywhere you aren't a good enough actor, etc. if neither of you pick up on it, the first can emulate the second, and thereby produce a world where the second becomes a past-justified true belief. but if you're able to instead reliably make the second a future-justified true belief, then you can avoid the first collapsing into the third. (I'm eliding some details about what the uncertainty looks like between these beliefs and what it looks like if you're both uncertain about which of the three statements is true, which makes it a lot more complicated if you're particularly uncertain.)

if you have multiple conflicting beliefs you can hold for justified reasons (because what is true comes after the picking between the beliefs), then there can be situations where it's objectively the case that one can choose the beliefs first, and thereby choose actions. maybe your thoughts aren't organized this way! but this is what it seems to me to mean when someone who is being careful to only believe things in proportion to how likely they are (aka "rational") still gets to say the phrase "I choose to believe". I also think that people who say "I choose to believe" in situations where it's objectively irrational (because their beliefs can't affect what is true) are doing so based on incorrectly expecting their beliefs to affect reality, eg "I choose to believe in a god who created the universe" cannot affect whether that's true but is often taken to. ("I choose to believe in a god that emerges from our shared participation in a religion" is entirely rational, that's just an egregore/memeplex/character fandom).

Reply
[-]KvmanThinking1mo30

If Eliezer's p(doom) is so high, why is he signed up for cryonics?

Reply
[-]niplav1mo53

Because the expected value of increasing the chance of living in a posthuman eutopia until at least the end of the stelliferous era is still extremely high, would be my best guess.

Reply
[-]Vladimir_Nesov1mo42

until at least the end of the stelliferous era

How could this possibly be relevant? Stars are very likely extremely wasteful anyway and worth disassembling, and at 100T years all reachable galaxy clusters will be colonized, with enough time to disassemble all stars in the reachable universe much earlier than that.

Reply
[-]niplav1mo20

I was being conservative (hence the "at least" :-), and agree that we'd want to disassemble stars. But maybe reachable technology can't become advanced enough to allow that kind of stellar engineering (3%), so we're stuck with living in space habitats orbiting suns. I think the median scenario for an advanced civilization extends far longer than the stelliferous era.

Reply
[-]Vladimir_Nesov1mo40

(My point is more that there will probably be no natural "end of stelliferous era" at all, because the last stars that are not part of some nature reserve will be disassembled much earlier than that, and the nature reserve stars could well be maintained for much longer.)

Reply
[-]Viliam1mo20

Was his p(doom) so high back then when he signed up?

Reply
[-]Dagon1mo*22

From a straight expected value case, it can still be an easy call - cryonics has a nonzero chance of working, and p(doom) is less than 100%, and cryonics doesn't cost much (for his class of worker).  

A low-cost, low-chance wager with an insanely high payoff (more life) is an easy lottery ticket to justify.

It probably does indicate that he puts an extremely low weight on Basilisk-like futures, where he gets resurrected just to be tortured forever.  I agree that this is orders of magnitude less probable than other cryonics-works scenarios.
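
A rough back-of-the-envelope version of that expected value argument (all numbers are placeholders, not anyone's actual estimates):

```python
# Cryonics as a cheap lottery ticket: even with pessimistic inputs,
# nonzero p(no doom) * nonzero p(cryonics works) * huge payoff > small cost.
# All values below are made up for illustration.

p_no_doom = 0.05        # stand-in for a very high p(doom) view, still nonzero
p_cryo_works = 0.05     # chance preservation and revival actually succeed
value_of_revival = 1e6  # value of the extra life, in arbitrary units
cost = 1e2              # lifetime cost of signup/insurance, same units

expected_value = p_no_doom * p_cryo_works * value_of_revival - cost
print(expected_value)   # 2400.0 > 0, so the wager can pay off in expectation
```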

 

Reply
[-]Pasta1mo10

It's sort of a riff on Pascal's wager. If he's right about p(doom), not freezing himself means he loses nothing, and freezing himself also means he loses nothing. If he's wrong, not freezing himself means he loses everything, while freezing himself means he gains indefinite life extension. The only real cost he faces is the money he could have given to someone else upon death. Seems like a personal value decision.

Reply
[-]KvmanThinking4mo2-2

Quick thought: If you have an aligned AI in a multipolar scenario, other AIs might threaten to cause S-risk in order to get said FAI to do stuff, or as blackmail. Therefore, we should make the FAI think of X-risk and S-risk as equally bad (even though S-risk is in reality terrifyingly worse), because that way other powerful AIs will simply use oblivion as a threat instead of astronomical suffering (as oblivion is much easier to bring about).

It is possible that an FAI would be able to do some sort of weird crazy acausal decision-theory trick to make itself act as if it doesn't care about anything done in efforts to blackmail it or something like that. But this is just to make sure.

Reply
[-]Buck4mo60

This kind of idea has been discussed under the names "surrogate goals" and "safe Pareto improvements", see here.

Reply
[-]Eli Tyre4mo*42

This has the obvious problem that the AI will then be indifferent between astronomical suffering and oblivion. In ANY situation where it needs to choose between those two, not just blackmail situations, it will not care on the merits which one occurs.

You don't want your AI to prefer a 99.999% chance of astronomical suffering to a 99.9999% chance of oblivion. Astronomical suffering is much worse.
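
To make the arithmetic concrete, a toy expected-utility comparison (utility numbers are made up):

```python
# Utility scale: 0 = fine, -1 = oblivion. An AI told that astronomical
# suffering is exactly as bad as oblivion also scores it at -1.
# (All numbers are illustrative.)

def expected_utility(p_bad: float, u_bad: float) -> float:
    return p_bad * u_bad  # the remaining probability mass has utility 0

p_suffering = 0.99999    # option A: near-certain astronomical suffering
p_oblivion = 0.999999    # option B: slightly-more-certain oblivion

# AI that treats S-risk and X-risk as equally bad:
print(expected_utility(p_suffering, -1) > expected_utility(p_oblivion, -1))
# True: it prefers option A, the near-certain suffering

# AI that treats suffering as (say) a million times worse:
print(expected_utility(p_suffering, -1e6) > expected_utility(p_oblivion, -1))
# False: it correctly prefers option B here
```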

Reply
[-]KvmanThinking6mo10

Is there a non-mysterious explanation somewhere of the idea that the universe "conserves information"?

Reply