LESSWRONG
jimmy

Sequences
Beneath Psychology: Truth-Seeking as the Engine of Change

Comments (sorted by newest)
Did you know you can just buy blackbelts?
jimmy1d20

I see the point you're getting at, and I agree that there's a real failure mode here; I've been annoyed in similar ways. Heck, I kinda think it's silly for people to show up to promotions to receive the black belt they earned, but that's a separate topic.

At the same time, there's another side of this which is important.

At my jiu jitsu gym there's a new instructor who likes doing constraint-led games. One of these games had the explicit goal of "get your opponent's hands to the mat", with the implicit purpose of learning to off-balance the top player. I decided to be a little munchkin and start grabbing people's hands and pulling them to the mat even when they had a good base.

I actually did get social acclaim for this. The instructor thought that was awesome, and used it as an example of how he wanted people to play the games. In his view, as in mine, the point of the game is to explore how you can maneuver to win at the game as specified, without being restrained by artificial limitations which really ought to be accounted for in the game design.

If the new instructor had tried to lecture us about playing to some underspecified "spirit" of the rules instead of the rules as he described them -- and about how we're not earning social points with him for gaming the system -- and had been visibly annoyed about this... he would have been missing the point that he's not earning social points with me, and likely not with the others either. And I wouldn't much care for winning points with him, if that's how he were to respond. It's a filter. A feature, not a bug.

Breaking the game is to be encouraged, and if playing the game earnestly doesn't suit the intended purpose, "don't hate the player, hate the game". In his case, the game wasn't broken badly enough to ruin it, and it turned out to be more fun and probably more useful than I had anticipated. Maybe it wasn't quite optimal, but it was playable for sure. In your case, the broken game is the sign that calibration isn't what we care about -- because that annoying shit was calibrated, and you weren't happy about it. What we need is a better scoring rule that weights calibration appropriately. Such rules exist!
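To make "such rules exist" concrete: the standard examples are strictly proper scoring rules, such as the logarithmic score (my choice of illustration, not necessarily the specific rule anyone here has in mind). Under a strictly proper rule, munchkinry is harmless by construction, because the score-maximizing strategy is to report your honest, calibrated probabilities. A minimal sketch in Python, with hypothetical function names:

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Logarithmic scoring rule: score is the log of the probability
    you assigned to whatever actually happened."""
    return math.log(p if outcome else 1.0 - p)

def expected_score(reported_p: float, true_q: float) -> float:
    """Expected log score for reporting reported_p when your actual
    credence is true_q."""
    return (true_q * log_score(reported_p, True)
            + (1.0 - true_q) * log_score(reported_p, False))

# Honest reporting beats any strategic misreport:
print(expected_score(0.7, 0.7))  # ~ -0.611 (report your true 70% credence)
print(expected_score(0.9, 0.7))  # ~ -0.765 (overclaiming loses points)
print(expected_score(0.5, 0.7))  # ~ -0.693 (hedging loses points too)
```

The point of "strictly proper" is exactly the jiu jitsu lesson: a well-designed game is one where playing to win and playing to the designer's purpose coincide.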

Any time we find ourselves annoyed, there is a learning opportunity. Annoyance is our cue that reality is violating our expectations. It's a call to update. 
 

Reply
I ate bear fat with honey and salt flakes, to prove a point
jimmy2d20

Larger effects are easier to measure, and therefore quicker to update on. I didn't take concerns of "too much sweets" very seriously, so I had no restraint whatsoever.

The clearest updates came after wildly overconsuming while also cutting weight. I basically felt like shit, which is probably a much-exaggerated "sweet tired", and never ate Swedish Fish again. And Snickers bars before that.

Since then the updates have been more subtle and below the level of what's easy to notice and keep good tabs on, but yes, "sweet tired". Just generally not feeling satisfied and fulfilled, and developing more of that visceral distaste for frosting that you have as well, until sweets in general have a very limited place in my desires.

It's not a process like "Oh, I felt bad, so therefore I shall resist my cravings for sugar"; it's "Ugh, frosting is gross", because it tastes like feeling tired and bad.

Reply
I ate bear fat with honey and salt flakes, to prove a point
jimmy2d20

That's the right first question to consider, and it's something I was thinking about while writing that comment.

I don't think it's quite the right question to answer, though. What I'm doing to generate these explanations is very different from "go back to the EEA, and predict forward based on first principles", and my point is more about why that's not the thing to be doing in the first place than about the specific explanation for the popularity of ice cream over bear fat.

It can sound nitpicky, but I think it's important to make hypotheticals concrete because a lot of the time the concrete details you notice upon implementation change which abstractions it makes sense to use. Or, to continue the metaphor, picking little nits when found is generally how you avoid major lice infestations.

In order to "predict" ice cream I have to pretend I don't already know things I already know. Which? Why? How are we making these choices? It will get much harder if you take away my knowledge of domestication, but are we to believe these aliens haven't figured that out? That even if they don't have domestication on their home planet, they traveled all this way and watched us with bears without noticing what we did to wolves? "Domestication" is hindsight in that it would take me much longer than five minutes as a cave man to figure out, but it's a thing we did figure out as cave men before we had any reason to think about ice cream. And it's it's sight that I do have and that the aliens likely would too.

Similarly, I didn't come up with the emulsification/digestion hypothesis until after learning from experience what happens when you consume a lot of pure oils by themselves. I'm sure a digestion expert could have predicted the result in advance, but I didn't have to learn a new field of expertise, because I could just run the experiment -- and then the obvious answer becomes obvious. A lot of the time, explanations are much easier to verify once they've been identified than they are to generate in the first place, and the fact that the right explanations come to mind vastly more easily when you run the experiment is not a minor detail to gloss over. I mean, it's possible that Zorgax is just musing idly and comes up with a dumb answer like "bear fat", but if he came all this way to get the prediction right, you bet your ass he's abducting a few of us and running some experiments on how we handle eating pure fat.

As a general rule, in real life, fast feedback loops and half-decent control laws dominate a priori reasoning. If I'm driving in the fog and can't see more than 10 feet ahead, I'm really uninterested in the question "What kind of rocks are at the bottom of the cliff 100 feet beyond the fog barrier?" and much more interested in making sure I notice the road swerving in time to keep on a track that points up the mountain. Or, in other words, I don't care to predict, from the EEA, which exact flavor of superstimulus I might be on track to overconsume. I care to notice before I get there, which is well in advance given how long ago we figured out domestication. I only need to keep my tastes tethered to reality so that when I get there, ice cream and opioids don't ruin my life -- and I get to use all my current tools to do it.
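A toy illustration of that claim (my own construction, not anything from the post): an open-loop plan committed to in advance accumulates error as the road diverges from the a priori guess, while even a crude proportional control law that reacts only to what's currently visible keeps tracking.

```python
def road_center(t: int) -> float:
    """A road that swerves in ways the driver couldn't have derived
    from first principles before setting out."""
    return 2.0 * (t / 20.0) ** 2 + (0.5 if t > 10 else 0.0)

x_open, x_closed = 0.0, 0.0  # lateral positions of the two drivers
for t in range(20):
    target = road_center(t)
    # Open loop: the a priori plan says "the road is straight", so never steer.
    # Closed loop: steer halfway toward whatever is visible just ahead.
    x_closed += 0.5 * (target - x_closed)
    if t % 5 == 4:  # print |error| of each driver every few steps
        print(t, round(abs(x_open - target), 2), round(abs(x_closed - target), 2))
```

The closed-loop driver is no better than the open-loop one at predicting where the road will be in 100 feet -- and it doesn't matter, because it never needs to know.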

I think this is the right focus for AI alignment too.

The way I see it, Eliezer has been making a critically important argument that if you keep driving in a straight line without checking the results, you inevitably end up driving off a cliff. And people really are this stupid, a lot of times. I'm very much on board with the whole "Holy fuck, guys, we can't be driving with a stopping distance longer than our perceptual distance!" thing. The general lack of respect and terror is itself terrifying, because plenty of people have tried to fly too close to the sun and lost their wings because they were too stupid to notice the wax melting and descend.

And maybe he's not actually saying this, but the connotation I associate with his framing -- and more importantly, the interpretation that seems widespread in the community -- is that "we can't proceed forward until we can predict vanilla ice cream specifically, from before observing domestication". And that's like saying "I can't see the road all the way to the top of the mountain because of fog, so I will wisely stay here at the bottom" -- and then feeling terror build from the pressure of people wanting to push forward. Quite reasonably, given that there actually aren't any cliffs in view, and you can take at least the next step safely. And then reorient from there, with one more step down the road in view.

I don't think this strategy is going to work, because I don't think you can see that far ahead, no matter how hard you try. And I don't think you can persuade people to stop completely, because I think they're actually right not to.

I don't think you have to see the whole road in advance, because there are a lot of years between livestock and widespread ice cream. Lots of chances to empirically notice the difference between cream and rendered fats. There's still time to see it coming millennia in advance.

What's important is making sure that's enough.

It's not a coincidence that I didn't get to these explanations by doing EEA thinking at all. Ice cream is more popular than bear fat because it is cheaper to produce now. It's easier to digest now. Aggliu was concerned with parasites this week. These aren't things we need to refer to the EEA to understand, because they apply today. The only reason I could come up with these explanations, and trivially, is that I'm not throwing away most of what I know, declining to run cheap experiments, and then noticing how hard it is to reason a million years in advance when I don't have to.

The thread I followed to get there isn't "What would people who knew less want, if they suddenly found themselves blasted with a firehose of new possibilities, and no ability to learn?". The thread I followed is "What do I want, and why?". What have I learned, and what have we all learned -- or can we all learn -- and what does this suggest going forward? This framing of people as agents fumbling through figuring out what's good for them pays rent a lot more easily than the framing of "our desires are set by the EEA". No. Our priors are set by the EEA. But new evidence can overwhelm that pretty quickly -- if you let it.

So for example, EEA thinking says "Well, I guess it makes sense that I eat too much sugar, because it's energy, which was probably scarce in the EEA". Hard to do the experiment, and not much you can do with that information if it proves true. On the other hand, if you let yourself engage with the question "Is a bunch of sugar actually good?", you can run the experiment and learn "Ew, actually no. That's gross" -- and then watch your desires align with reality. This pays rent in fewer cavities, less diabetes, and all sorts of good stuff.

Similarly, "NaCl was hard to get in the EEA, so therefore everyone is programmed to want lots of NaCl!". I mean, maybe. But good luck testing that, and I actually don't care. What I care about is knowing which salts I need in this environment, which will stop these damn cramps. And I can run that test by setting out a few glasses of water with different salts mixed in, and seeing what happens. The result of that experiment was that I already knew which I needed by taste, and it wasn't NaCl that I found my self chugging the moment it touched my lips.

Or with opioids. I took opioids once at a dose that was prescribed to me, and by watching the effects learned from that one dose "Ooh, this feels amazing" and "I don't have any desire to do that again". It took a month or so for it to sink in, but one dose. I talked to a man the other day who had learned the same thing much deeper into that attractor -- yet still in time to make all the difference.

Yes, "In EEA those are endogenous signaling chemicals" or whatever, but we can also learn what they are now. Warning against the dangers of superstimuli is important, but "Woooah man! Don't EVER try drugs, because you're hard coded by the EEA to destroy your life if you do that!" is untrue and counter productive. You can try opioids if you want, just pay real close attention, because the road may be slicker than you think and there are definitely cliffs ahead. Go on, try it. Are you sure you want to? A lot less tempting when framed like that, you know? How careful are you going to be if you do try it, compared to the guy responding "You're not the boss of me Dad!" to the type of dad who evokes it?

So yes, lots of predictions and lots of rent paid. Just not those predictions.

Predictions about how I'll feel if I eat a bowl full of bear fat the way one might with ice cream, despite never having eaten pure bear fat. Predictions about people's abilities to align their desires to reality, and rent paid in actually aligning them. And in developing the skill of alignment so that I'm more capable of detecting and correcting alignment failures in the future, as they may arise.

I predict, too, that this will be crucial for aligning the behaviors of AI as well. Eliezer used to talk about how a mind that can hold religion must be too fundamentally broken to see reality clearly. So too, I predict, a mind that can hold a desire for overconsumption of sugar must necessarily lack the understanding needed to align even more sophisticated minds.

Though that's one I'd prefer to heed in advance of experimental confirmation.
 

Reply
I ate bear fat with honey and salt flakes, to prove a point
jimmy4d5921

First, props for doing the experiment. And yeah, that sounds delicious. 

 

The fact still stands that ice cream is what we mass produce and send to grocery stores.  Even if our hypothetical aliens could reasonably predict that we’d enjoy any extra fatty, salty, and sweet food should we happen to come across it, that’s not sufficient information to determine what foods we mass produce in practice.

Is it really that hard to predict ice cream over bear fat with honey and salt? I'm skeptical.

To start with, it's a good bet that we're going to mass produce foods that are easily mass produced. Bears? Lol, no. Domesticated herbivores, obviously. Cream, not tallow. Plant sugar, not honey. Cavemen figured out how to solve the "mass produce food without much technology" problem, which is how we stopped being cavemen. If the aliens are willing to spend five minutes actually trying, you'd think they'd figure out that bear fat is out for this reason alone.

More centrally, I roll to doubt the implicit "But I should want to eat lots of pure fat, because I'm evolved to like calories!". Stop being a baby about "Ew, it's gross", and try eating 1000 calories of pure rendered fat by itself. I dare you to actually run the experiment and see what happens. Find out where that "Ew, it's gross" comes from, and whether it's legit or not. It's not hard to figure out.

Tallow is delicious when potatoes are fried in it, but try to have a meal of pure tallow and you'll feel sick to your stomach because your stomach is going to have a hard time digesting that. Butter is emulsified with water, and is easier to digest in large globs. Cream is emulsified fat-in-water so it actually disperses when consumed with more water, and is therefore way easier to digest in large amounts when not mixed in with other foods. Maybe part of the reason that we fry potatoes in tallow, put globs of butter on bread, and eat bowls of solidified cream -- and not the other way around -- is that the other way around doesn't work?

On top of that though,

I don’t know any bear hunters and don’t want to get parasites,

Emphasis mine.

This is important too, and affects people's taste in a very visceral way -- and pathogen risk is exactly why I was disgusted by bear meat the one chance I had to eat it. Imagine taking a bite of raw chicken, or pork. Or even beef. Disgusting, right?

Except raw meats are delicious when we trust them. Sushi is the obvious example, despite the fact that you'd be disgusted by the idea of taking a bite out of a raw fish you caught in the river. But it's true with other meats too. In Germany they sell raw pork sandwiches, and call it "Mett". It's delicious.

If you want to understand why people aren't always immediately super on board with "try this weird food that no one you know eats and survives eating", maybe this is partly why. When I was visiting Sweden, people there were having cheese for dessert. How easy do you think it'd be to sell people on the idea of stinky cheese, if not for cultural learning that it's actually safe?


Is this really that surprising?

That we'd viscerally want to avoid food that brings risk of parasites and disease?

That we'd mass produce food that is easily mass producible?

And want to eat large quantities of food only when we can digest it in large quantities?

There are more details that aren't so immediately obvious. Like why iced cream? Sure, maybe to make it solid, but why does that matter? Or, why do we not salt ice cream? Okay, I guess it'd melt. So maybe it is immediately obvious, since I literally figured that out as I was typing this.

Regardless, there's work to be done in predicting which "superstimuli" people are going to tend towards, and it's not always trivial. "Plant sugar and cream" may be trivial, but predicting "ice cream" in particular is a bit harder.

Back on the first hand though, we don't just eat ice cream. We also drink milk shakes, for one. So the answer to "Why solid?" is "Not just solid!". And ice cream sounds gross to me right now, but a fatty bear steak drizzled with a touch of honey and sprinkled with salt actually sounds delicious. Or cow steak, whatever. Ice cream is but one food we consume, and not some fixed pinnacle of yumminess.

Our tastes and desires actually change, as we learn about things like "How safe is it to eat raw pork in Germany?", and "How much sugar is good for my body right now?". That's why you can't tempt me with ice cream right now.

Run the experiment of eating all the sugar you want -- way more than you should. Experience what it feels like to eat too much sugar, and allow yourself to update on that feeling of sickness. The result is learning that sugar isn't all that great. I still enjoy little bits at the appropriate times, sure, but that actually aligns with my current best estimates of what's best for me -- and gone are the days of gorging on sweets. Try to restrain yourself, and treat your tastes as "unpredictable, unchangeable, unconscious stuff", and you may never give yourself the chance to learn otherwise.


I agree that most people don't put in more thought than "Uh, bear fat and honey and salt flakes?", and therefore make terrible predictions. Maybe this is how the book presented it.

But I don't think the right conclusion is "Unpredictable!" so much as "So put in the work if you care to predict it?".  

This is directly applicable to the alignment of AI, because it turns out we're cultivating AIs more than hard coding them. If we don't learn to cultivate alignment of our own desires -- to make sense of our preferences for ice cream over bear meat, and to allow them to shift back to bear meat over ice cream when appropriate -- then what chance do we have of aligning an AI?

You don't want the AI craving something analogous to sweets and trying to restrain itself -- look how well that works out for humans.

Nor do you want to plead with AI -- or people working on AI -- to resist the temptation of the forbidden fruit. Look at how well that one has worked out for humans.
 

Reply
Lack of Social Grace is a Lack of Skill
jimmy5d249

Beautifully written. And visibly practicing what you preach.
 

I was not, however, socially adroit, so what I said was “why do you care about something boring like horses?”

[...]

This is a pure tactical mistake.

I didn’t get more information this way. I wasn’t more honest by being more graceful. This is not a linear scale.

I don't think you could have conveyed this without taking away from the clarity with which you demonstrate your thesis, but I also think you undersell the point here.

It's easy to read this and think "Oh, so social skills and grace are kinda orthogonal to epistemic virtue, at least in cases like this", and that alone is sufficient to justify "Maybe notice the possibility of practicing more grace so that you can do it when it is socially helpful and not epistemically harmful".

It's much deeper than that, because what she was pissed off about is epistemics. Back in your less skilled days, you were being a jackass by making your epistemic vices a social rationality problem. She was forced to either accept falsehoods into the social epistemics, or push back and engage in social conflict.

I'll explain.


So, "horses are boring" asserts that horses are boring. It implies that if someone thinks horses are interesting, they're wrong -- like her. She's wrong. This assertion presupposes that her interest in horses is not meaningful evidence about their interestingness that could change your mind -- but is this presupposition justified? If it was, why the heck would he be asking her in the first place? If he really knew something she didn't, why not just explain it to her so she can realize that horses aren't as interesting as she thought?

Her fascination with horses is evidence that horses are, or at least can be, fascinating. Your desire to ask her about her interest, presuming that you're being genuine, is evidence that her perspective is meaningful evidence to you. Noticing this, we can take a step towards improved epistemics by watching what it does to our confidence in the idea that horses really are boring after all. Because now it's no longer "horses are boring". It's "Huh, I always thought horses were boring, but she obviously finds something about them to be really interesting. What might she see that I do not?".

And what comes out once you have that realization?

How about "What do you find so interesting about horses?"

Or, if you're going to reference your initial perspective at all, it's going to come out like "Huh, I always thought they weren't interesting" or "I never was able to find anything interesting about horses" -- not a presupposition that they are boring, as if anything she could possibly say would be wrong.

The "grace" here, is specifically in not pushing forth one's ignorance as fact in direct contradiction to the evidence you're responding to. It's epistemic humility - an epistemic virtue, not merely a virtue of social harmony.

It's a great example because it's both relatable and not abusing an edge case to make a point. I think it's central. It's an easy case that we can all look at and say "okay, failing socially isn't epistemically virtuous", and there are harder cases where it's harder to square social grace with epistemic virtue. But those are just that -- harder.

Still a skill issue.

At least, more often than not.
 

Reply
Do you completely trust that you are completely in the shit? - despair and information -
jimmy12d31

The urgency comes from noticing that the beliefs you're navigating by are likely insufficient, in light of new evidence. E.g. "There are no tigers around, so I can walk outside without getting eaten" is called into question when you hear a rustling in the bushes, and figuring out whether you can actually walk around outside without getting eaten can be pretty urgent. If you already know there are tigers around, you just won't go outside, so the urgency isn't going to be there unless your beliefs are challenged in a time-sensitive manner.

As applied to your situation, I don't know what chance you have of getting the same or similar salary or prestige. "No chance" seems pretty hard to justify given the immense possibility space and inherent uncertainty of the future, but I don't know your situation. It doesn't sound like the end of the world either way to me. I'm not saying it's not important, and if you've been navigating by beliefs that said you'd definitely keep that or more, then it totally makes sense that you'd be shaken when evidence comes in saying this might not be true.

At the same time, not everyone has to have the highest paying, most prestigious job. Take my parents' old mailman, for example. He's got to be the happiest and most genuinely friendly person I've ever met. Not because he got the most prestigious job or hasn't had struggles outside his work life, but because of the way he chooses to relate to the world with openness to what it might bring. I admire that, and want to be more like that. Making lots of money is definitely nice, and prestige is a good sign you're doing things right and feels good for a reason. But I think a lot of what fuels these drives for salary and prestige is really an underlying drive for respect, and for knowing that we're making the most of what we can. And I think he has that, more than a lot of people in much more prestigious and higher paying careers. He definitely has more of my respect than most others in those categories, and I suspect this is also true of the people closer to him -- who tend to matter more than the broader society anyway.

If something happened and I found myself needing to deliver mail for a living, it would be devastating to me. I've put a lot of work, thought, and expectation into being able to do other things that are higher paying and all that, so it wouldn't just be a giant loss; I would also be largely lost. I wouldn't know what to do, where to go, and I certainly wouldn't want to give up on what I once had. If that's something like the potential reality you're navigating right now, I can't say "I get it", in that I haven't actually been there, let alone in your shoes. But I get why it'd be tough, and overwhelming. I hope to never get there. If I do, I know who I'm looking to as a role model. Proof by example that there's still something difficult to strive towards, which is very worth striving towards.

None of this makes any of it easy, of course. Life is a lot to figure out, regardless. Hopefully this makes it a little clearer what fuzzy light to aim towards, should your fear turn out to be a likely reality. And hopefully having a sketch of a line of retreat makes it easier to explore and figure out if it actually is.

Best of luck to you, Joao. I'm looking forward to seeing where you go next, and how things turn out for you.

Reply
Do you completely trust that you are completely in the shit? - despair and information -
jimmy12d31

Belief is about how we think the world is. Fear is about what we think the world might be, or might become, if we don't act to preempt the outcome.

Both can change, because the world itself can change and we can get new information that changes what is most likely. The difference is that changing beliefs usually requires additional information. For example, if you believe that you don't own a bike, learning that your friend bought you one for your birthday will change that belief.

In contrast, when you hear a rustling in the bushes and run screaming "There's a tiger in that bush! It's gonna eat me!", does that mean that once you safely get out of that situation you will recollect and determine "Yes, there was actually a tiger in that bush"? Will you experience surprise when you don't get eaten? Or will you just think "I don't know if it was actually a tiger or not, but I wasn't gonna stay and find out!"? Because if it's the latter, then you never actually believed that you would get eaten, or even that there was a tiger in the bush -- just that the possibility of "Tiger!" was too high to ignore, and that you might have to run to keep from getting eaten.

That alarm shouting "Tiger!" raises some hypotheses which urgently call for attention, but you don't wait around until you believe "there is a tiger in that bush, and it is going to eat me". You're trying to get out of there before there is enough evidence to justify these as facts about reality.

If you find yourself "not in deep shit" and recollecting, will you look back and think "Wait, how'd that happen? There was no way out and now I'm out??? This doesn't make sense"? Or will it feel more like "Whew! That was a close one!" or "I'm glad that didn't turn out to be true!"?

As you look forward, do you find yourself still looking for ways out? Writing LessWrong posts in hopes of finding ways out? Because that behavior wouldn't make a whole lot of sense if you didn't think there was anything there to be found. It makes a lot of sense if you're not sure what's there, and you sense a danger of losing your way out if you don't act.
 

Reply
Do you completely trust that you are completely in the shit? - despair and information -
jimmy13d31

"I'm in deep shit! There's no way out."

In other words, I believe I'm in the worst there is and that there's no way out; that's information.

 

Beliefs describe the world as you think it is. Fears describe the world as it might be, or might become, if you don't act so as to rectify things. This looks more like a fear than a belief to me, both because of the way it's phrased and the way you're responding to it.

This is important because it changes the way we relate to the information.

If it's a belief, then it's just true, so far as we can tell. We can try to take in more information in hopes that we've misestimated, or we can try to figure out what to do about it, but it's kinda just the world [we believe] we're living in. And if part of the belief is "There's no way out", then that's pretty limiting.

If it's a fear, then that's not true. It's something that might be true, or somewhat more true on the margin than we've been giving credit for in our world models, but there's also a gap between what we do believe and this thing which we fear. This gap is likely to generate significant curiosity, once you notice that it exists. Questions like "Am I in deep shit?", "Is there no way out?", "How do/would I know?", "What would be the appropriate action to take if it were true, and how do I know that?", "What can I do to distinguish?", and "Is there something I need to devote more attention to, if I'm going to make sure not to be/stay in deep shit?". These questions can all be investigated relative to what we already believe, from information we already have. And if "There's no way out!" is just being raised as a hypothesis, then it might be getting raised early and preemptively -- and we're not bound to take it seriously at face value.

The important difference between beliefs and fears is that fears are not bound by requiring solid evidence before making strong claims and sweeping generalizations. "One person was a jerk to me" isn't sufficient to justify "Everyone hates me!" as the way reality is, but it might be enough to raise the hypothesis -- if you don't already have a secure foundation for rejecting such hypotheses.

Such fears are worth examining, because they are sometimes true, or partly true. But also, just because you thought it doesn't mean it's true. Or that you even believe it.

Noticing that makes it significantly easier to explore, in part because it's only a "might" and an "if we don't react in time", and that gives us room to move and to think. And also because we get to redirect our focus to finding out what's true about the world and let our beliefs update to match, instead of struggling to micromanage what we believe to be our own mistaken beliefs, ending up trapped in distinctions we don't see.
 

Reply
The Mom Test for AI Extinction Scenarios
jimmy20d20

No, definitely not dark arts. The exact opposite, actually -- though the latter probably won't come across in this comment.

Again, I'm going to have to point at some distinctions which might feel like nits but which actually change the story completely. In this case, it's the difference between focusing on "coming off as sane" -- which I would not advocate -- and "coming off as obviously sane". Or, perhaps more clearly worded, "being visibly sane".

If you focus on coming across as sane, then you are Goodharting on appearing sane even if you aren't. "Reality doesn't matter, just [other] people's perceptions" does indeed lead to dark arts, and it has a ceiling. This is politician shit, and comes off as politician shit to anyone who is more perceptive than you take them for.

At the same time, the wise alternative is not "other people's perceptions don't matter, just reality". Because our perception can never be reality, what this means in practice is "other people's perceptions don't matter, just [my own perception of] reality", while losing track of the conflation hiding in the presupposition. This conflation leads not only to shutting out error signals of less-than-perfect sanity, but also to blinding ourselves to the extent to which we've become blind. We aspiring rationalists tend to be much more prone to this failure mode, partly for reasons that are flattering to us, and partly for reasons that are less so. People often pick up on signs that we're doing this subtle flinching, and it's perfectly rational for them to discount our arguments in such cases even if the arguments appear to be solid -- because how are they to know they're competent to judge? It's not like people can't be tricked with sophistry.

What I'm talking about is critically different from either. When it's just obvious that you're sane, it's not "seduced into a perception that could be believable". It's that the alternative visibly doesn't fit. Like, it's not true, and clearly so.

"Being visibly sane" requires both that you're actually sane, and that it's visible to others. The focus is still on actually being sane, while taking care to notice that if you can't get others to see you as sane this is evidence against your sanity. Not "proof", not "the only thing that matters", but evidence -- and something that will therefore soften your perceived certainty, if you allow your beliefs to update with the evidence.

It's true that if you don't provide receipts, this opens a window to deceive. It's also true that there's no rule saying that you have to abuse the trust people place in you. Do you trust yourself not to abuse it?

It's a hell of a question, actually. The moment people start trusting you too much and putting their wellbeing at risk because they didn't demand the receipts you expected them to demand, you tend to get a reality check about how sure you are of your own words and arguments. It's a very sobering experience, and one that is worth working towards with appropriate caution.

It's also an uncomfortable one. And if we're not extremely careful we're likely to flinch and fail to notice.

Reply
The Mom Test for AI Extinction Scenarios
jimmy23d114


It seems to me you suggest the following:

I should.

Actually, no. I wouldn't suggest you should do any of that. What I'm saying is purely descriptive.

This may sound like a nit, but I promise it is central to my point.

I suspect if you'd been on the line when I was actually talking on the phone to my mom about AI extinction risk, you'd have approved.


I'd be surprised.

Not that I'd expect to disapprove, I just don't really think it's my place to do either. I tend to approach such things from a perspective of "Are you getting the results you want? If so, great. If not, let's examine why".

The fact that you're making this post suggests "not". I could reassure you that I don't think you did terribly, and I don't, but at the end of the day what's my hypothetical approval worth when it won't change the results?

I think if I'd skipped talking about bioweapons, I would have triggered less skepticism in the first place. In fact, I think there's probably some way I could have talked about the AI extinction argument that she didn't think sounded crazy at all. If so, then the amount of exploring her perspective and so on I'd need to do would be dramatically reduced.

Rather than start with something that sounds crazy, then assure people it's not and convince them one by one, if we can actually make it not sound crazy in the first place, that sounds valuable.

I get that this might sound crazy from where you stand, but I don't actually see skepticism as a problem. I wouldn't try to route around it, nor would I try to assure anyone of anything.

I don't have to explore my mom's perspective or assure her of anything when I say crazy-sounding stuff, because "he gets how this sounds, and has good reasons for his beliefs" is baked in. The reason I said I'd be curious to explore your mom's perspective is the "sounds crazy" objection, and the sense that "I know, right?" won't cut it. If I already understood her perspective well enough to navigate it without hiccup, then I wouldn't need to explore it any more. I'm not going to plow forward if I anticipate that I'm going to be dismissed, so when that happens I know I've erred and need to reorient to the unexpected data. That's where the curiosity comes from.

The question of "How am I not coming off as obviously sane?" is much more important to me than avoiding stretching people's worldviews. Because when I come off as obviously sane, I can get away with a hell of a lot of stretching, and almost trivially. And when I don't, trying to route around that and convince people by "strategically withholding the beliefs I have which I don't see as believable" strikes me as fighting the current. Or, to switch metaphors, it's like fretting over excess weight of your toothbrush because lighter cargo is always easier, before fully updating on the fact that there are pickup trucks available so nothing needs to be backpacked in.

Projection onto "shoulds" is always a lossy process and I hesitate to do it at all, but if I were to do a little to make things a little more concretely actionable at the risk of incurring projection errors, it'd come out something like...

  • Notice how incredibly far and easily one can stretch the worldviews of others, once the others are motivated to follow rather than object. Just notice, and let it sink in.
  • Notice how this scales. No one believes the earth is round because they understand the arguments. Few people doubt it, because the visibly sane people are all on one side.
  • Notice the "spurious" connection between epistemic rationality and effectiveness. Even when you're sure you're right, "Make sure I come off as unquestionably sane, or else wonder what I'm missing" forces epistemic hygiene and proper humility. Just in case. Which is always more likely than we like to think.
  • Notice whether or not you anticipate being able to have the effectiveness you yearn for by adopting this mode of operation. If not, turn first to understand exactly where it goes wrong, focusing on "How can I fix this?", and noticing if your attention shifts toward justifying failure and dismissal -- because the latter type of "answering why it's not working" serves a very different purpose.

Things like "Acknowledge that I sound crazy when I sound crazy" and "Explore my moms perspective when I realize I don't understand her perspective well enough" don't need to be micromanaged, as they come naturally when we attend to the legitimacy of objections and insufficiency of our own understanding -- and I have no doubt that you do them already in the situations that you recognize as calling for them. That's why I wouldn't "should" at that level.
 

Reply
Posts (karma · title · age · comments)

8 · On failure, and keeping doors open; closing thoughts · 2mo · 0
10 · On physiological limits of sense making · 2mo · 0
22 · Putting It All Together: A Concrete Guide to Navigating Disagreements, and Reconnecting With Reality · 2mo · 0
24 · Solving irrational fear as deciding: A worked example · 2mo · 4
21 · How to actually decide · 3mo · 0
5 · The Frustrations and Perils of Navigating Blind to Rocks · 3mo · 0
4 · Navigating Security: Fighting flammability with fire (when safe) · 3mo · 4
18 · The necessity of security for play, and play for seeing reality · 3mo · 0
14 · Navigating Respect: How to bid boldly, and when to humble yourself preemptively · 4mo · 2
18 · The Role of Respect: Why we inevitably appeal to authority · 4mo · 2