Critical rationalist interested in where Bayesian rationality intersects, where it doesn't, and how both could be improved by each other.
As a first approximation --
Maladaptive Hufflepuff: Hyper-attentive tracking
Maladaptive Ravenclaw: Avoiding
I suspect there are different forms of maladaptive Slytherin, which could result in either coping strategy.
One could imagine a con man either having a jumpy trying-to-fix-the-social-dynamic personality (attentive), or having sociopathic disregard for people (avoidant).
Likewise, either of them could come out of maladaptive Gryffindor:
(Examples of this character type: young people who work for charities in a non-EA manner, wear their hearts on their sleeves, and judge others for not being charitable; or religious people who have a compulsive desire to proselytise or keep an eye on their community, like an older lady who gossips and takes it upon herself to tell people off. In general, maladaptive Gryffindor trackers can be summed up as "social-minded but insecure Paladin".)
Maladaptive Gryffindors of this type believe that optimising for being liked can slow down or impede justice-related goals. Since there's apparently an inherent conflict there (what is right is partly a function of what people want, and the line between a group and an individual is blurry), maladaptive Gryffindors deal with this by shutting down 'being liked'-related thoughts, including sacrificing interaction with people on a more personal level (avoidant).
(This is all pretty conjectural. Comments, criticisms and your own houses-analysis are warmly encouraged.)
At a very fundamental (but perhaps not useful) level, it's to communicate with people how to treat you and what to expect of you.
This might happen in some ways inside one mind (for example, if you have a self-image of being "a reliable person", that might help you think about how to be on time). But I suspect this usually only happens when something is going wrong, because usually you can just think about the ideas themselves without needing to go meta.
What are the rival ideas about why we adopt identities? (I assume there's some background here I'm unfamiliar with.)
Great description of the Summon Sapience Spell.
Taking inspiration from this post after failing to set one up during the workshop, I've now attached a Sapience Spell to a freckle on my hand (which I always used to think was unnecessary visual clutter), with the imagery/sense of expanding my peripheral vision and seeing everything (think: clear sight, sensing everything at once, 'whole-universe comprehension' kinda feels), and the incantation simply: "Notice."
... a strong sense based off both research and personal experience that physical proximity matters, and that you can't build the correct kind of strength and flexibility and trust into your relationships without actually spending significant amounts of time with one another in meatspace on a regular basis, regardless of whether that makes tactical sense given your object-level projects and goals. But I'm going to hold off on going into those in detail until people insist on hearing about them or ask questions/pose hesitations that could be answered by them.
Why does physical proximity matter?
It seems intuitively true to me, but what really is it about physical proximity that makes such a big difference?
My guesses are around 'information bandwidth': seeing how someone interacts in the physical world is a large channel of information, which you don't get from online interaction. (At least, we won't until VR becomes more sophisticated, and can do things like track and render our eye movements.)
This may not answer your question directly, but:
Why would one have a different way of dealing with information inside one's mind vs the same kind of information outside it? What's the mechanism that prevents leak between these two cases?
One potential crux is that it IS the same kind of information, and that different parts of you have very similar logic to different individual humans.
What are the alternatives -- how would someone deal with information inside their mind differently from information outside it?
(There's a cliché saying, "you have to love yourself before you can love others". I think this is drawing on the same idea.)
An argument is:
One needs ways of dealing with the world, and ways of dealing with oneself. Both of these are pretty difficult tasks. If the logic of how you deal with one can apply to the other, that would save a lot of time and energy -- it'd mean you don't have to start from scratch. So there's an incentive to re-use ideas.
Good point. In hindsight, I somewhat wish I had described a more broad version of the hyper-attentive strategy (rather than saying what people do is directly try to model the minds of other people).
Now that you mention it, I think hyper-attentive people usually use the model-free reinforcement learning version of it. Or the model they use is some kind of 'average person' or 'what the culture says the model should be'.
And if they did stop to model the real individual (rather than an average or cultural version of them), they'd deal much better. (I've noticed this in myself: I'll be modelling the social version of a person, but if I stop to think what that individual is like, it's much less scary and easier to think about.)
If people enjoy demon threads, it may not be strictly true that the 'Someone is wrong on the internet' feeling (noticeably) feels bad.
When reading the OP, I thought, "I recognise that feeling, but my main (noticed) 'someone is wrong on the internet'-response is a positive, inspired motivational one."
Perhaps these feelings do get jumbled, and distinguishing how much is 'inspired' vs 'this is wrong' is part of the skill of avoiding demon threads.
I still sense that there are two different feelings here:
Type 1. Clearly negative – "This can't stand" or "That person needs to be corrected" or "If other people see that person's post, they will become wrong too – I need to save them."
Type 2. Positive(?) – "There are some interesting ideas to be corrected here" or "Wow, this person thinks really differently from me, how did that happen?"
The second type might have shock and incredulity, but the core feels like surprised curiosity.
The first type feels more uncomfortable, as if tribal honour has been breached.
Yeah, it happens from time to time. Sometimes these mistakes mean that errors get taught as part of the basics. I think schools especially are quite slow on the uptake of new ideas -- they have syllabuses and courses and textbooks, and it's quite a lot of trouble to change them, so they usually only get updated every couple of years at best.
For example, schools often teach that light is both a particle and a wave, or that it's sometimes a particle and sometimes a wave, or that it has 'particle-wave duality'. They do this because, in order to explain that light is just a particle and not a wave at all, one has to take the many-worlds (multiple universe) theory of physics seriously.
In philosophy, an example of an important discovery not penetrating the field would be Karl Popper's epistemology. In that case, it's probably because, although it's fairly straightforward, what Popper actually says is very counter-intuitive. Most people don't understand what he means when he talks about us not actually needing any justification for our theories, because it's built into our culture and our language that we need to justify our theories in order to believe them.
No doubt in any field, you will find people ahead of the pack whose ideas haven't been adopted yet. Often they will be good ideas, but not presented well enough to make people understand, remember or pass on the ideas (especially if they're new and counter-intuitive). So, it can take time for them to get off the ground.
Reference class forecasting might be an OK way to criticise an idea (that is, in situations where you've done something a bunch of times, and you're doing the exact same thing and expect a different outcome despite not having any explanations that say there should be a different outcome), but the idea of using it in all situations is problematic, and it's easy to misapply:
It's basically saying 'the future will be like the past'. Which isn't always true. In cases like cryonics -- cases that depend on new knowledge being created (which is inherently unpredictable, because if we could predict it, we'd have that knowledge now) -- you can't say the future will be like the past.
To say the future will be like the past, you need an explanation for why. You can't just say, look, this situation is like that situation and therefore they'll have the same outcome.
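To make the 'doing the exact same thing' case concrete: naive reference class forecasting is just a base rate over past outcomes. A minimal sketch (the reference class and the numbers are invented for illustration):

```python
def reference_class_forecast(past_outcomes):
    """Predict by base rate: the fraction of past cases that succeeded."""
    return sum(past_outcomes) / len(past_outcomes)

# Hypothetical reference class: ten previous attempts at the same routine
# task, seven of which succeeded (1 = success, 0 = failure).
past_outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]
print(reference_class_forecast(past_outcomes))  # 0.7
```

The number only means anything if you have an explanation for why the new case belongs in the same class as the old ones. For something that depends on new knowledge being created, there is no past class of 'the exact same thing' to average over, which is exactly where the misapplication comes in.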
The reason I think cryonics is likely to work is that a) death is a soluble problem and medical advancements are being made all the time, and b) even if it isn't solved a couple of hundred years from now, it would be pretty shocking if it were never solved at all (including thousands or millions of years from now). There would need to be some great catastrophe that prevents humans from making progress. Why wouldn't it be solved at some point?
This idea of applying reference class forecasting to cryonics and saying it has a 0% success rate is saying that we're not going to solve the death problem because we haven't solved it before. But that can be applied to anything where progress beyond what was done in the past is involved. As Roko said, try the reference class of shocking things science hasn't done before.
None of this reasoning depends on the future being like the past. It depends on explanations for why we think our predictions about the future are good, and the validity of those explanations doesn't depend on the outcomes of predictions about other things (though, again, an explanation can be criticised by saying: 'the outcome of this other prediction, which rests on the same explanation you're using, turned out to be false, so you need an explanation of why that criticism doesn't apply here').
In short, I'm just saying: you can't draw comparisons between stuff without having an explanation about why it applies, and it's the explanation that's important rather than the comparison.