All of pwno's Comments + Replies

Why is frame control central to this post? While it explains frame control well, the focus seems to be about people consciously/unconsciously harmfully manipulating one another. How to avoid being manipulated, gaslit, deceived, etc. is an important topic to discuss and a valuable skill to have. And this post offers good advice on it (whether or not it intended to). But it could've done so without bringing up the concept of frame control.

LW 2.0 is a good example of trying to fix something that isn't broken and ending up breaking it further.

LW is broken. It's broken socially (there's a serious shortage of high-quality new material here, and quite a shortage of any material at all) and it's broken technically (the best the moderators were able to do about our resident neoreactionary downvote-abuser was to disable all downvoting). (Whether Lesser Wrong is a good solution to those problems is a separate question.)

Thanks for trying it out. Hermes is still a work in progress and one of our top priorities now is improving responsiveness.

Looking forward to helping you out!

I recently launched a new service called Hermes. It connects users with dating experts for live texting advice. It runs on a unique platform designed to greatly simplify sharing and discussing text conversations. Since modern dating is changing so rapidly, especially with the rise of online dating apps and a growing population of young people glued to their phones, helping people improve their texting can greatly improve their dating life. I've been a software developer and dating coach for over 10 years so this is sort of my passion project.

I'd be happy to get some trial users. General feedback is greatly appreciated too.

Just tried out the Hermes trial! I found the coaches aren't too responsive (~1 hr delay between my first message and their response). I'll see if they have some thoughts, and I'll give feedback later on the actual advice. The layout is pretty cool, though!

In the case of voting for Trump and writing the note in the Wailing Wall, I think there's little to no risk of having it change your prior beliefs or weaken your self-deception defense mechanisms. They both require you to be dishonest about something that clashes with so many other strong beliefs that it's highly unlikely to contaminate your belief system. The more dangerous lies are the ones that don't clash as much with your other beliefs.

In short, if your rational deliberation leads you not to vote, it is approximately correct that other voters who reason the same way will also not vote. This works like a classical cooperation game: TDT prescribes committing to a small personal cost for a big community gain, much as it prescribes one-boxing in Newcomb's problem.
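The trade-off described above can be sketched with toy numbers (all hypothetical) contrasting the causal-decision-theory view with the TDT view:

```python
# Toy contrast (hypothetical utilities): why TDT can recommend voting
# even though a single vote is individually negligible.
cost = 1     # utilons of effort for one voter to vote
benefit = 5  # utilons to each voter if the whole like-minded bloc votes

# CDT view: treats everyone else's choice as fixed, so your lone vote
# almost surely changes nothing and you just pay the cost.
cdt_vote, cdt_abstain = -cost, 0

# TDT view: your decision and the bloc's are outputs of the same
# computation, so choosing "vote" means the bloc votes.
tdt_vote, tdt_abstain = benefit - cost, 0

print(cdt_vote < cdt_abstain, tdt_vote > tdt_abstain)  # -> True True
```

Under CDT abstaining dominates; under TDT the correlated bloc's gain outweighs the small personal cost, mirroring one-boxing in Newcomb's problem.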

Inter alia, yes. But the step from "rationality is supposed to reduce X" to "I will act as if X has been reduced to negligibility" is not a valid one.

Well, isn't that a good technique to reduce X? Obviously not in all cases, but I think it's a valid technique in the cases we're talking about.

Certainly, as you say, not in all cases. I don't see any very good reason to think it would be effective in this case. Apparently you do; what's that reason?

If you value your belief that there are no ghosts, then it's irrational to be scared by ghosts.

Are you talking about "real" ghosts? You shouldn't be afraid of real ghosts because they don't exist, not because you value your belief that there are no ghosts. Why should beliefs have any value for you beyond their accuracy?

Funny you mention that anecdote, because I actually wrote it.

Human brains aren't very good at detaching themselves from their actions

Isn't that what rationality is supposed to reduce?

No, rationality is about winning. Having certain values isn't irrational. If you value your belief that there are no ghosts, then it's irrational to be scared by ghosts. The relationship of most of us to democracy is different. We generally do value it and think the rituals of democracy are valuable for our society.
Oh, very good! I wonder why I thought it was Eliezer. I see that he endorsed the idea, anyway. But I think my objection to it still stands (and is closely related to the one I expressed two comments upthread here). Inter alia, yes. But the step from "rationality is supposed to reduce X" to "I will act as if X has been reduced to negligibility" is not a valid one.

The government picks arbitrary ages for when an individual has the mental capacity to make certain decisions, like drinking alcohol or having sex. But not everyone mentally matures at the same rate. It'd be nice to have an institution that allows minors with good backgrounds and who pass certain intelligence/rationality tests to be exempt from these laws.

Who would you trust to design the tests and the criteria for good backgrounds?
Not arbitrary! Those are not small effect sizes. Determining that someone's neurological development puts them at a lower risk for alcohol dependence than the general population would require a test that doesn't exist. Moreover, there is no need for such a test to exist, because simple rules work better than complex rules. The drinking age law is simple and effective.
I think drinking is also about the idea that it might cause problems to people who aren't fully grown. I don't know if that's true, but I don't think that matters politically.
This is called emancipation of minors.

observe the features common to the intuitions in different domains, and abstract the common features out.

Have you explicitly factored these out? If so, what are some examples?

I think it's because system 1 and system 2 update differently. System 1 often needs experiential evidence in order to update, while system 2 can update using logical deduction alone. Doing a bunch of research is effective in updating system 2, but less so system 1. I'd guess that if you continue being positive and don't experience any downside to it, then eventually your system 1 will update.

I think interviewers rely more on their intuition to evaluate candidates for managerial positions. For purely engineering positions, a longer, more systematic evaluation is needed.

Yes, the way I wrote the scenario makes it seem like he deliberately got himself into an awkward situation for little benefit in return. And I see how this weakens the scenario as an illustration of the problem. So let me try improving the scenario:

Imagine he determined that refraining from disclosing the information to his mother was ethical. A week later, he finds himself in a similar situation. He wants to drink a couple of beers, but knows that by the time he finishes, he'll need to drive his mother. This time he has no qualms about drinking, deeming the beer-drinking pleasure worth the consequences.

Then his foot is set upon the road to ruin. Is that the implication you intended?

He might then profitably spend those two hours examining the underlying problem: why he chose to have those beers.

Why would this be a problem?

BTW, his mother already knows he's been drinking.

I didn't make it clear, but in the scenario she doesn't know.

He deliberately got himself into an awkward situation, for nothing more than the pleasure of drinking a couple of beers. No-brainers don't get much simpler, and for him to get this wrong suggests there's something more going on. Another BTW: I didn't make that up arbitrarily, just reasonable conjecture from the ways of the world, and of mothers. You can add as many hypotheses as you like (as could I: "what if she asks point-blank?"), but as I said in my reply to shminux, it doesn't help. This scenario does not work as an illustration of the ethical problem. To scale the example up, it's like asking if a murderer should confess, when what he should have done is not do the murder.
The scenario doesn't make sense. If you ever think that you find yourself in this scenario, please book a time with your doctor and explain to them that you just missed a flight because you couldn't resist drinking in the morning before you knew that you had to drive a car.

So the question is, when your goals conflict with another's, when is it right to use force or subterfuge to get your way?

In the scenarios with the 5-year-old and the mother, the protagonist's goal conflicts with what he deems to be an irrational goal. From his perspective, if they were more rational, their goals wouldn't be conflicting in the first place. So two questions arise: 1) can he make that judgement call on their rationality, and 2) can he remove their ability to act as agents because of his assessment?

The child does indeed have limited rationality, and is in the care of the protagonist: the protagonist is right to exercise that duty of care by limiting the child's access to chocolate. The mother only has limited rationality by the protagonist's self-serving account. He thinks he can drive safely after a couple of beers; she thinks it too great a risk, did she know of it.

His internal monologue --- under the influence of those same two beers --- triumphantly proves her irrationality by the fact that her assessment differs from his. Pah! she has even let herself be irrationally influenced by one of the family dying in a drunken crash! How irrational she is! She has non-transitive preferences, hahaha! Poor old dear, she's not really a PC, not like us, eh? Of course I can drive her safely, are you calling me a drunk? Yes, officer, this is my car, and we've got a plane to catch, so if you don't mind, no I HAVEN'T been drinking--- And so on.

That is the general picture I have in my mind of the person you put in that scenario who thinks he's contemplating "the ethicality of denying her agency". Or dressed up in jargon, it's my posterior on seeing the evidence of the story, given my prior knowledge of the ways of the world.

ETA: A real answer to what the of course not at all drunk driver could do would be to handle the immediate situation by paying a taxi driver whatever it takes for a two-hour journey. He might then profitably spend those two hours examining the underlying problem: why he chose to have those beers. BTW, his mother already knows he's been drinking.

you're treating them as an agent, but an adversarial one.

But if you thought of them as having agency, you'd want to respect their desires and therefore disclose the information, possibly hoping you'd come to some sort of compromise.

I think "agent" or "agency" is being used in two different senses here — a descriptive/game-theoretical sense and a normative/political sense.

In the game-theory sense of "agent", noticing the presence of an "agent" does not imply "you'd want to respect their desires". For instance, Clippy is an agent, but an adversarial one. We don't want Clippy to get what it wants with our light-cone, thank you very much.

The normative/political sense of "agency" implies a whole slew of values and norms having to...

Reliable/predictable isn't high status.

The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents.

I've noticed that people are angrier at behaviors they can't explain. The anger subsides when they learn about the motives and circumstances that led to the behavior. If non-agents are supposed to be less predictable, I'd guess we're more inclined to judge/blame them.

After reading the article, it seems like their conclusion is still debated. I'm also not convinced, although I have updated towards the general-purpose mechanism hypothesis being less likely correct. There needs to be an experiment with a context that is non-social but occurs frequently in people's lives. For instance, "if you arrived at the airport less than 30 minutes before your departure, you are not able to check in." Then compare results with those from people who have never been on a plane before.

Edit: I realized my example can also be explaine...

Ah, I didn't know about holistic/analytical reasoning before. With the intuition/logical thinking styles I had in mind, I wouldn't have predicted that intuition thinkers would favor situational over personality information. This may be more of a cultural difference.

Right, it's probably cultural - I wouldn't assume it to be as prominent in Western holistic thinkers, either. Mostly I just brought it up to highlight the fact that the intuitive/holistic distinction may not map perfectly to the System 1/System 2 distinction.

Yes, those are synonymous. I should clarify that.

Hmm, are you sure that they're synonymous? I initially assumed that your post was talking more about holistic vs. analytical reasoning (see e.g. pages 23-27 of The Weirdest People in the World), which seems to have some similarities with System 1/System 2 reasoning, but also differences which don't map so clearly to it. (E.g., the holistic/analytic difference wouldn't seem to be something that you'd expect to arise from just System 1/System 2 processing.)

Just curious, did you have any explicit beliefs that made you ignore your intuition?

I may have had an explicit belief that my own intuitions were wrong most of the time. I don't think I had a belief that following intuitions period was bad; I always admired people who seemed to be able to do so and get good results.

Good observations.

As an intuition-dominant thinker, how did you improve your logical side?

Rather than improve my logical side, I've mostly come up with mechanisms that let me avoid System 2 thinking. For example, to improve my overall communication skills I practiced writing on forums with an upvote mechanism. This let me get feedback pretty rapidly, and over time my communication skills improved significantly, even when it came to explaining how I had developed System 1 skills.

For strategy, I mostly rely on conversations with my friends. I can get myself to strategize if I sit down and concentrate, but it's very tiring, whereas when I talk to smart people they are usually able to quickly see holes in my long-term plans and point them out to me.

As for forcing myself to actually use System 2 around 5% of the time... I know I have shifted from thinking consciously <1% of the time to being able to use System 2 on command (though it is still very tiring). But I'm not really clear what enabled the change. One possible explanation: throughout the last 5 years I have been part of several board game meetups where I would have the opportunity to play a game just once or a few times. So I was forced to think consciously if I wanted to have any chance of winning.

I first discovered these recurring tendencies in myself and in others. Then I used inferences from what's scientifically known about intuition to explain how the nature of intuition might cause these tendencies in intuition thinkers.

I recall seeing research showing that intuitive thinkers performed better at math / logic problems if they were word problems involving social settings, eg amount of soda to buy for a party or people sitting next to each other.

I would explain this study's result using the following inferential steps:

1) People (some more th...

Said Achmiz (9y):
One such study is the famous Wason selection task, and there, evolutionary psychology gives a fundamentally very different sort of answer than what you've given: that we have evolved, innate cognitive modules that solve certain types of problems... but are not used at all when the same abstract form of problem is put in a different context. The explanation on wikipedia is well worth a read.
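The logic people get wrong in the abstract version of the task can be made explicit in a short sketch (using the classic hypothetical card set, not one from the study text above). The rule under test is "if a card shows a vowel on one side, it has an even number on the other"; a card needs flipping only if its hidden face could falsify the rule:

```python
# Wason selection task sketch. Each card has a letter on one face and a
# number on the other; only one face is visible.
def must_flip(visible_face: str) -> bool:
    """A card must be flipped iff its hidden face could falsify the rule:
    it shows a vowel (hidden number might be odd) or it shows an odd
    number (hidden letter might be a vowel). Even numbers and consonants
    can't falsify the rule, whatever is on the other side."""
    if visible_face.isalpha():
        return visible_face.lower() in "aeiou"  # P: vowel showing
    return int(visible_face) % 2 == 1           # not-Q: odd number showing

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # -> ['E', '7']
```

Most subjects correctly pick "E" but also pick "4" (which can't falsify anything) and miss "7", yet solve the identical structure effortlessly when framed as a social rule like checking drinkers' ages.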

In essence, yes; but the intended effect is more psychological.

A thing I have noticed about myself, is that once the intuitive "aha!" circuit activates, I simply cannot continue paying attention to details. My brain wants to gloss over any remaining information, saying "yeah yeah I GET it already!"

Jumping straight into the action satisfies my intuition's need for novelty and immediate feedback.

Moreso, when it turns out my intuition was wrong, I feel genuine surprise - which snaps me back into a state where I'm ready to pay attention to ...

I think you're right. I was using prior knowledge to interpret the argument correctly. The ambiguity in the language definitely makes my example weaker. I tried empathizing with the commenter as an intuition thinker to figure out what mistake most likely caused the confusion. I still think the commenter most likely didn't pay attention to those words, but it's also quite likely he understood the technically correct alternative interpretation.

In his situation, I'd probably read 'any' in the second sense simply because as a non-mathematician I can imagine the second sense being a practical test: (I give you a number, you show me that the difference between A and B is smaller, we reach a conclusion) whereas the first seems esoteric (you test every conceivable small number...) On the other hand, the first reading is so blatantly wrong, the commenter really should have stepped back and thought 'could this sentence be saying something that wasn't obviously incorrect?' Principle of charity and all that.

...picking up of Russian Norms task

Intuition thinkers probably wouldn't have the foresight to learn Russian norms. However, they wouldn't make a strict rule like "always smile." Even if they did normally smile, in Russia their intuition would be thrown off, and they'd probably execute a more optimal strategy. Without a strict rule, they'd also be more attuned to the immediate environment and intuit that smiling isn't customary.

Agreed and added a link with a resource I found with a few minutes of googling.

I think the answer may change depending on age. Older intuition thinkers probably have deeper ingrained habits and less motivation.

I am not convinced that it's easy, or even really possible, to change from one thinking style to the other. Everything else I've read suggests this sort of cognitive leaning is largely innate.

I too think it's uncommon to completely change thinking styles, but I do believe it's possible to improve the weaker one. I also suspect that one type of thinker struggles more than the other to develop the weaker style, but I don't know which one.

Do you have anything other than your own experience to suggest otherwise?

Being around many people who are into self-development...

Said Achmiz (9y):
I agree with your suspicion, and hypothesize that it's intuitive thinkers who have more trouble developing logic-based thinking styles.

I will argue that some biases are the consequence of structural properties of the brain, which 'cannot' be affected by evolution

The biases are indirectly affected by evolution. The brain evolved "faulty thinking" because natural constraints put a premium on speed and efficiency over accuracy - especially when sometimes-accurate beliefs are sufficient.

Glad you liked the post.

This is one of the very few places where I'm not sure we agree. I agree, someone who is really different from others will have a harder time getting the empathy ball rolling. But I still think self-understanding is utterly critical. It's the only way you can control for projection.

I agree, I should've emphasized that finding a proxy is supplementary to self-understanding, not an alternative.

There's also the fact that some people identify with being unusual or different, but such people usually exaggerate their differences more...

I think that category of people are considered low status on average, and thus, not met with much sympathy. Maybe they have a small circle of people enabling their bad habits, but I suspect the strongest force is rationalization.

Sure, it may have had a small (overstatement) effect, but it was worth it.

Right, but more specifically, the annoying parts are their denial of the problem and reluctance to improve. We'd all be a lot more sympathetic otherwise.

On average people in that category get more than enough sympathy (mind you it probably varies a lot in degree and sincerity). More sympathy would tend to be a toxic influence from the perspective of trying to meet their unmet goals. Far better to empathize but show no sympathy whatsoever.

Running around the block is a good start :)

I might write a follow-up post with the kind of advice you're looking for.

There are "empathy challenges" all around you. Whenever you observe or interact with someone, really try to understand why they behaved the way they did - feel it on a gut level. Feeling confident about your conclusions is key. A checklist similar to the one in the post is helpful to keep in mind when confronted with these challenges.

However, without actually interacting with people, entering relationships or reading about social dynamics, your models of people won't be entangled with reality. My advice is more about how to be an active learner given you are doing these things.

There are also physical challenges all around you, but going to the gym is still a better idea. I find it easier to get better at something if I can practice every little sub-skill repeatedly in a short period of time with immediate feedback. I realize your advice doesn't fit that mold, but I'd still like to find some advice that does :-)

You can do that with a lot of topics on LW...

Explaining her flaws in such a scientific, matter-of-fact way shows how emotionally distant he was. She probably felt like the guy she loved just dropped off an eviction notice.

And this too.
Good point.

It's a good exercise in finding your true objections.

Unless the rejection is accompanied by occasional successes, this may be a good way to lower your self-esteem. The trick is learning to accept rejection - using each opportunity to succeed and learning from each failed attempt.

I find that intelligence is positively correlated with the amount of time one spends thinking about intelligence.

Does this mean I can become more intelligent by spending more time thinking about intelligence?

You'll feel more uneasy when someone's flirting.

Dating is for people who have trouble hooking up without making their intentions explicit.

YMMV. "You're hot, but I'm really quite keen on knowing if I can bear to be around you for a few hours" can be a good thing to establish.

Some thoughts regarding the difference between level 2 and 3:

Seems like a level 3 understanding necessitates an insight-producing ability (i.e. ability to improve existing models) -- otherwise your models wouldn't regenerate if destroyed. The question is why your insights with a level 2 understanding aren't evidence of a level 3 understanding. Or whether it's even possible to have insights with a level 2 understanding.

If we're able to regenerate a model, we obviously have model-making abilities. But isn't the same happening when you draw connections between...

In short, if you are simply informed about the connections between the fields, you are at level 2, but if you could discover the links yourself with no hints, you are at level 3. For example, if you know how the parameter "speed of light", c, has implications for both general relativity and quantum phenomena, you have a level 2 understanding (to the extent that these fields are involved), but if you couldn't discover the need for a "speed of light" parameter, how to find it, and how it affects the disparate fields, you haven't reached level 3.