My experience has been that rationalist-style approaches have been really helpful socially too. I think the system 1/system 2 thing is more about trained responses vs reflection, not about things that can be changed or not. You can still make plans / do practice for social things, and eventually they will become automatic.
Some rationalist things that have helped me socially:
That's an encouraging data point! There is still the broader question of why the rationalist community isn't bigger and AI alignment isn't more popular/supported, but if more rationalists are having your experience it may suggest that relationships and large group influence should be considered separate domains with separate principal components.
I do think there's a plausible hypothesis that my lack of relationship winning boils down to: a) lack of time invested in acquaintanceships and b) that trying to make a single relationship work is as hard as trying to make a single person like you.
I'm interested in the articles you linked. It appears that the sex advice article is behind a paywall. Do you know if there is a non-paywalled version or satisfactory summary available?
I guess I also disagree about the rationalist community not having influence. Senators talk about P(doom), the VP read AI 2027 (which was also written up in the NYT), and for better or for worse the big AI labs were partially inspired or are run by rationalists.
I have mixed feelings about whether we could be more effective at building local communities; my read is that very few people are focused on this, but those who are seem to be doing well.
For specific advice to you, I'd recommend making more time if you want to succeed at relationships. If that's impossible, see if you can add relationship building to other activities (e.g. get a local job or cowork, specifically with people in your age range). Learn how to be good at relationships, but also learn when to give up and try forming a relationship with someone you're more compatible with.
I'm not really sure if anyone else has written advice as good as Aella's. I think it's possible on Substack to get a single article as a free trial somehow.
I didn't know any of those points about rationalist influence, so I'll update my view about how much influence rationalists have. Still, personally, I have to take a model hit here. Maybe other rationalists came in with better models that predicted less astounding success, but I believed that rationalists could do something on the same order of magnitude as 5 rationalists solving quantum gravity in 1 month. Even if I give myself the benefit of the doubt for fuzzy memory and not having made public predictions, and say that I predicted 1 or 2 orders of magnitude lower than that, we are definitely not at the level of 5 rationalist students solving quantum gravity in 10 or 100 months. The fact that I didn't immediately dismiss that story as laughable fiction, or imagine Brennan as possibly having a different number of toes (given that he must obviously live in a world where humans evolved differently), means that my model needs to take a hit, unless rationalists are basically running the deep state by now and can shut down public AI development whenever they wish (or, I guess, have at least solved quantum gravity by now).
While I'm learning new things, do you happen to know if there's any directory of community building projects here or any way to determine if in person meetups in my area exist (other than the obvious tools of Google, Facebook search, and meetup search)?
Thank you for the personal advice. I'm working on learning how to be good at relationships, so I'll check those out, but I suspect that my bigger weakness is knowing when to quit.
I don't see a way to do the free trial, but I'll keep looking.
> While I'm learning new things, do you happen to know if there's any directory of community building projects here or any way to determine if in person meetups in my area exist (other than the obvious tools of Google, Facebook search, and meetup search)?
For rationalist-specific meetups, LessWrong has a map, although I think anyone taking meetups seriously also advertises on Meetup.com. For things that aren't listed, showing up at a local meetup and asking people what else is going on is probably the best option. The ACX meetups tend to be much larger but happen less often; I think they are too infrequent on their own for making friends, but they can be a good source of information about more frequent meetups.
Other ways to find communities are:
Whatever you pick, my advice is to find something that you can do at least once a week. With one-off events, it's really difficult to meet people you'll actually keep hanging out with.
Are rationalists winning... compared to what?
What would you be like without rationality? Or rather, with the typical amount of rationality, compared to the population around you, maybe controlling for social class and education.
I imagine that alt-Viliam is still smart and educated, good with computers, probably has a good salary. Probably wastes a lot of time debating politics, studies some pseudo-science, maybe is an active Mensa member. Most likely interested in science, but using a slightly different definition of "science". A libertarian, a big fan of Taleb, maybe a Christian (guessing by what is popular in my former social bubbles).
I don't know, honestly. The alternative me does not have the rationalist friends, but maybe has other friends. Does not have bitcoins (I have read about them on LW, and my friends at a LW meetup advised me on how to buy them). I definitely wouldn't want to be at his place, but he is probably happy... I guess.
Rationality means not only doing things, but also avoiding things. Not buying homeopathy, for example. There is a certain chance (not a certainty, just a chance) that alt-Viliam joined a cult, or lost all his money in some pyramid scheme. That chance doesn't seem too high (many non-rational people avoided that fate, too), maybe 10%?
So, I seem to have a lot of winning, compared to that. But it also seems that much more is possible...
As I look around myself, it actually seems to me that most rationalists in my proximity were quite successful in their personal lives. My hands were somewhat tied by the fact that my first child was born soon after I discovered Less Wrong, so I couldn't join their adventures. Unfortunately for me, the success of people around me often involved their moving away (a foreign job, a foreign university, changing their residence for tax purposes), so my rationalist bubble is shrinking.
I have a good marriage, two smart children, a low-stress job with decent salary. I have recently lost 15 kg of body weight by volunteering for medical research on GLP-1, which sounds like the kind of life hack that rationalists are supposed to do. Sounds like winning -- when I look around me, I see many people who either can't find a partner, or are already divorced, who have trouble with money, can't bring up their children properly, etc.
But when I look at the list of dreams that I had, which didn't come true and probably never will, I feel quite bad. I am an aspie, and I think that I have ADHD -- not sure if rationality can do anything about it. I need a full-time job to pay my bills, and afterwards I am usually too exhausted to do anything meaningful. So I have achieved some of my goals, but failed to achieve some others, and I don't see a realistic quick way forwards.
Seems to me that the problem is that the day only has 24 hours, there are many things you need to get right, and getting some of them right takes a lot of time. In order to achieve everything, you would need to do everything really fast and never get tired, or outsource many things (but that would require a lot of money). Notice how the fictional beisutsukai have no families and no jobs, so they can devote their entire days to the study and practice of rationality. They have the teachers and the classmates; they don't have to figure out everything on their own and become lonely weirdos when they do.
Sadly for me, back then when I had enough time, I didn't see the path, and no one in my social bubble was interested in similar things; and now that I see the path and know a few people, I don't have enough free time.
Humans are social creatures. A rationality community should be much stronger than the sum of its parts. I can't say much about that, because here where I live, the density of potential rationalists is very low. :(
There is the topic of AI Safety, which consumes the time and attention of many people who otherwise would probably spend their effort developing the rationalist community. Depending on how it all ends up, it may be the right move, but... well, it probably seems like a waste of time trying to "raise the sanity waterline" if you believe that in a year or two the AGI will kill us all.
I like the Alt-Viliam thought experiment. For myself, I have trouble projecting where I'd be other than: less money, more friends. I was very Christian and had a lot of friends through the Church community, so I likely would have done that instead of getting into prediction markets (which works out since presumably I'd be less good at prediction markets). I think your point about rationality preventing bad outcomes is a good one. There aren't a lot of things in my life I can point to and say "I wouldn't have this if I weren't a rationalist", but there are a lot of possible ways I could have gone wrong into some unhappy state - each one unlikely, but taken together maybe not.
I also like your points about the time limitations we face and the power of a community. That said, even adjusting for the amount of time we can spend, it's not like 5 of us solve quantum gravity in 10 or even 100 months. As for the community - that may be really important. It's possible that communal effects are orders of magnitude above individual ones. But if the message was that we could only accomplish great things together, that was certainly not clear (and also raises the question of why our community building has been less than stellar).
Based on the responses I've gotten here, perhaps a better question is: "why did I expect more out of rationality?"
There's a phenomenon I've observed where I tend to believe things more than most people, and it's hard to put my finger on exactly what is going on there. It's not that I believe things to be true more often (in fact, it's probably less often), but rather that I take things more seriously or literally - but neither of those quite fits either.
I experienced it in church. People would preach about the power of prayer and how much more it could accomplish than our efforts. I believed them, and decided to go to church instead of studying for my test and to pray that I'd do well. I was surprised when I didn't, and when I talked to them they'd say "that wasn't meant for you - that was what God said to those people thousands of years ago - you can't just assume it applies to you". Ok, yeah, obvious in hindsight. But then I swear they'd go back up and preach like the Bible did apply to me. And when I tried to confirm that they didn't mean this, they said "of course it applies to you. It's the word of God and is timeless and applies to everyone". Right, my mistake. I'd repeat this with various explanations of where I had failed. Sometimes I didn't have enough faith. Sometimes I was putting words in God's mouth. Sometimes I was ignoring the other verses on the topic. However, every time, I was doing something that everyone else tacitly understood not to do - taking the spoken words as literal truth and updating my expectations and actions based on them. It took me far longer to realize this than it should have because, perversely, when I asked them about this exact hypothesis, they wholeheartedly denied it and assured me they believed every word as literal truth.
It's easy to write that off as a religious phenomenon, and I mostly did. But I feel like I've brushed up against it in secular motivational or self-help environments too. I can't recall a specific instance, but it feels like I reason: this speaker is either correct, lying, or mistaken, while other people don't treat it as any of the above - or rather, they choose "correct" until I start applying it to real life, and then there's always something wrong about how I apply it. Sometimes I get some explanation of what I'm doing wrong, but almost always there's this confusion about why I'm doing this.
I don't know if that's what is happening here, but if so, then that is surprising to me, because I had assumed that it was my rationalism, or some other mental characteristic I'd expect to find here, that was the cause of this disconnect. I read Class Project, and while it is obviously fiction, it is such boring fiction, sitting between two posts telling us that we should do better than science, that it seemed clear to me it was meant as an illustration of the types or magnitudes of things we could accomplish. I don't think I'm being overly literal here - I'm specifically considering context, intent, and style. Almost the whole story is just a lecture, and nothing interesting happens - it is clearly not aimed at entertainment. It sits in the middle of a series about truth, specifically next to non-fiction posts echoing the same sentiment. It's just really difficult for me to believe it was intended only as whimsy and could just as easily have been a whimsical story about a cat talking to a dandelion. Combine that with non-fiction posts telling us to shut up and do the impossible or that we should be sitting on a giant heap of utility, and the message seems clear.
However, the responses I've gotten to this post feel very much like the same confusion I've experienced in the past. I get this "what did you expect?" vibe, and I'm sure I'm not the only one who read the referenced posts. So did others read them and think "Yes, Eliezer says to do the impossible and specifically designates the AI box experiment as the least impossible thing, but clearly he doesn't mean we could do something like convince Sam Altman or Elon Musk not to doom humanity (or, in personal life, something like have a romantic relationship with no arguments and no dissatisfaction)"?
I think your feelings of disappointment are 100% valid.
It's just that I am already over the "anger" phase, and more in the "bargaining / depression" phases.
I keep writing and deleting the following paragraphs, because it feels like there is something important in my mind that I want to say, but the words that come out keep missing the point...
First, it seems obvious to me that doing much better is possible. Not literally like in the stories, but those stories generally point in the right direction. It wouldn't take literally five days to reinvent quantum theory. It could take maybe five years to do something amazing, even if not world-changing. Still worth it.
But sometimes it is important to get all the details right. If you build 90% of a car, you cannot take a ride.
I know I can't do it alone. I am not even really good at thinking, unless I have someone to talk to. I find it difficult to collect the mental energy to start doing something... and even more difficult to continue doing it when I get interrupted. (And I do get interrupted every day for many hours; it's called having a job.) The best way for me to regain the focus is to talk to someone else who cares about the same project.
And it's difficult for me to find such people. The intersection of "interested in truth" and "interested in self-improvement" and "wants to do great things together" is almost zero. (Not sure if it's just my bubble, but everyone interested in self-improvement is also deeply interested in pseudoscience and magic.) When I organize a rationality meetup, fewer than 10 people come. Half of those who come only want to chat.
For a moment I had a group that actually was a cooperative rationalist self-improvement project, but various things happened (including covid) and most of those people moved to other countries. It is important to meet in person. Talking over internet doesn't have the same bandwidth, and I don't get that visceral feeling of being a member of a tribe.
I keep wondering what happens on the other side of the planet, in the Bay Area. I don't know the details, but I suspect that most people aren't "winning" there either. Probably you also need to find the right subset of people, and ignore most of the drama outside.
Very well said. I also think more is possible - not nearly as much more as I originally thought, but there is always room for improvement, and I do think there's a real possibility that community effects can be huge. I mean, individual humans are smarter than individual animals, but the real advantages have accrued through society, specialization, teamwork, passing on knowledge, and sharing technology - all communal activities.
And yeah, probably the main barrier boils down to the things you mentioned. People who are interested in self-improvement and truth are a small subset of the population[1]. Across the country/world there are lots of them, but humans have some psychological thing about meeting face to face, and the local density in most places is below critical mass. And having people move to be closer together would be a big ask even if they were already great friends, and the physical distance makes becoming great friends difficult in the first place. As far as I can see, the possible options are:
1. Move into proximity (very costly)
2. Start a community with the very few nearby rationalists (difficult to keep any momentum)
3. Start a community with nearby non-rationalists (could be socially rewarding, but likely to dampen any rationality advantage)
4. Teach people nearby to be rational (ideal, but very difficult)
5. Build an online community (LW is doing this. Could try video meetings, but I predict it would still feel less connected than in person and make momentum difficult)
5b. Try to change your psychology so that online feels like in person. (Also, difficult)
6. Do it without a community (The default, but limited effectiveness)
So, I don't know - maybe when AR gets really good we could all hang out in the "metaverse" and it will feel like hanging out in person. Maybe even then it won't - maybe it's just literally having so many other options that makes the internet feel impersonal. If so, weird idea - have LW assign splinter groups and that's the only group you get (maybe you can move groups, but there's some waiting period so you can't 'hop'). And of course, maybe there just isn't a better option than what we're already doing.
Personally - I'm trying to start regularly calling my 2 brothers. They don't formally study rationality but they care about it and are pretty smart. The family connection kinda makes up for the long distance and small group size, but it's still not easy to get it going. I'd like to try to get a close-knit group of friends where I live, though they probably won't be rationalists. But I'll probably need to stop doing prediction markets to have the time to invest for that.
Oh, and what you said about the 5 stages makes a lot of sense - my timing is probably just not lined up with others, and maybe in a few years someone else will ask this and I'll feel like "well I'm not surprised by what rationalists are accomplishing - I updated my model years ago".
I read Scott Alexander say that peddling 'woo' might just be the side effect of a group taking self-improvement seriously and lacking the ability to fund actual studies, and I think that hypothesis makes sense.
My hypothesis for the relation between self-improvement and woo is that people suck at holding two perspectives that seem to point in opposite directions long enough to figure out a synthesis.
Let me give you an example: historically, people dreamed about flying. There are two simple ways to respond to this desire:
- Accept gravity as an inescapable law, and give up on flying as impossible.
- Hold on to the dream of flying, and ignore or deny gravity (wishful thinking, magic).
The correct solution, as we know now, is to accept gravity as a fact, and then explore the other laws of nature until we find a force that can overcome gravity. There are even multiple solutions -- balloons, gliding, jet engines -- but all of them require doing something complicated.
The difficulty is not that gravity is fundamentally incompatible with flying, but that both require contradictory emotions. You can feel the inescapable pressure of the universal law of gravity... or you can feel lightness and imagine flying... but it is difficult to feel both at the same time. Human thinking is just a thin layer on top of a fundamentally emotional machine, people usually get addicted to one emotion or the other, and then they become unable to consider the other part of the picture.
Similar pattern: effective altruism. People feel sad about bad things happening in the world and about our inability to address them efficiently. The simple solutions:
- Follow the desire to help, and give to whatever feels most emotionally compelling, without looking at the data.
- Look coldly at the data, conclude the problems are too big to fix, and stop caring.
A correct solution: collect data and calculate, promote the actions with the greatest impact.
The emotional problem: "observing the reality and calculating the hard data" and "desire to change reality" are emotionally incompatible. People choose one emotion or the other, and get stuck with it.
And self-improvement seems to follow a similar dichotomy:
- Believe the inspiring promises uncritically, and slide into pseudoscience and magic.
- Notice that most of it is scams and placebo, and give up on improving.
Again, competing emotions of "noticing that the world sucks" and "the feeling that more is possible". Can you keep trying, when you know that the motivational literature is a scam, the success stories are mostly lies, many scientific findings don't replicate, and your own results are probably just placebo?
About your list of options:
If you want to move somewhere, it would be nice (if you have enough time and money) to check the rationalist communities outside the Bay Area, because that place just seems doomed -- the social pressure to take drugs and meditate will be too strong, and even if you personally resist it, the vortex will keep pulling away parts of your social circle.
Maybe Scott Alexander knows about this more: a few years ago, he traveled across the world, visiting various ACX meetups. He may at least narrow down the list of interesting places.
Recruiting new rationalists seemed to me like the best option, a few years ago. I mean, if I was impressed with the Sequences, surely there must be other people who will feel the same way, if only they get the text. Maybe I should go to a local math college and give away a few free copies of HP:MoR! These days, I don't believe it anymore. The internet is a small place, and the rationalist community has a decent online presence. Most nerds have already heard about us. If they don't join, it's probably because they are not interested. There may be an exception or two, but probably not enough to start a local meetup.
If you start a community with non-rationalists, the chance to change it to a community of rationalists seems zero to me. (One possible exception: You could start a community that teaches self-help, or something like that, and then gradually find potential rationalists among the students.)
"Raising the sanity waterline" was the old plan, and some CFAR materials are freely available online. But you probably shouldn't do it alone, as it is a lot of work.
I think you could achieve a better "tribe" feeling online with a smaller dedicated group, meeting regularly, having video calls, and a private channel for asynchronous communication. (Or maybe try The Guild of the Rose? I don't know much about them, though.)
Regularly calling two or three people still sounds preferable to doing it alone. Maybe you could try to meet more people, and hope to find someone rationality-compatible.
There is no taboo against it, but if the new comment initiates a big debate, I would prefer that to happen in a new thread. It will be easier to read.
Of course, it can be difficult to predict what happens.
Good to know! In your experience, do necro comments tend to get interaction on LW, or is it more like "socially acceptable, but don't be surprised if no one reads or responds"?
"Why Aren't Rationalists Winning" is quite a broad question. I prefer to ask myself "Why Aren't Rationalists Winning at Chess". I believe it has to do with insufficient education in openings and endgames.
Good point that it's broad; maybe the reasons are domain-specific. You might be right about chess in specific, but I lean against concluding that insufficient education is the reason rationalists aren't winning in most domains. Rationalists, on average, are significantly more educated than the general population, and I'd imagine that gap grows when you take into account self-directed education.
This is super interesting. Let's focus on your case. Why aren't you winning? How can I explain this data? You don't think you are winning. What are your criteria for winning?
Regarding the hypothesis that rationality improves system 2 and not system 1 - how does system 1 improve? I think mostly through training with some simple and accurate feedback loops. I think one can establish them with system 2. An example would be setting a goal to exercise: you try various exercises until you find some that you like. Why didn't you find anything like that for relationships?
Thank you! I'm not sure if the first-paragraph questions are intended for you to answer, for me to answer, or as purely rhetorical. The only one I feel like I have an answer for is what my criteria for winning are: broadly, I mean achieving my goals. Specifically for relationships, my goals are: to be comfortable with and good at meeting new people and forming relationships; to have a close group of friends that I spend a significant amount of time with and share a significant portion of my thoughts and feelings with; to feel generally connected to other people; to have a romantic relationship that is good enough that I either don't wonder if I could do better with someone else or that question doesn't seem important; and to have positive relationships with my family. The goals are all necessarily subjective, so it's always a possibility that I just have too high expectations, but that would also seem to me to be a form of not winning. Of the goals I listed, I have succeeded at "good at meeting new people" (but not comfortable, which matters since I don't do it often as a result) and "have positive relationships with family".
I agree with you about improving system 1. I get better at pickleball by hitting a pickleball a lot with a goal of where the ball should go and noticing immediately if it did or didn't go there. I believe there are 3 reasons I haven't found that for relationships:
1. Moral restrictions - People don't like to be experimented on and the bounds of monogamy prevent me from practicing with other people. It feels wrong to "play games" with my significant other and do/say things just to see how they will react. Perhaps there's a way around this with some meta-permission and boundaries.
2. Complex goals - in pickleball I know exactly what the goal is and can instantly tell, with a lot of objectivity, whether it was or was not achieved. In relationships, there are multiple competing goals, many poorly defined, and almost none are objective.
3. Slow feedback. Certain behaviors get immediate feedback - I give a gift and I see gratitude; I give criticism and I see defensiveness. Others don't come for a while or have complex patterns. I give a true answer instead of the desired answer - I see unhappiness. Later, after repeatedly doing this, I may get increased happiness when giving the desired answer, relative to the other world where I always gave the desired answer regardless of truth, but this is difficult to detect. I display anger or disappointment - the offensive behavior immediately stops. However, in the long term, there may be some invisible threshold where, if I display anger or disappointment too frequently, they lose their effectiveness, and a separate threshold where the other person's desire to be around me decreases. Add to this that context matters and the exact same actions on my part may get very different results at a different time, and it becomes quite difficult to construct good system 1 practice.
According to the 2024 survey results, 55% of LW users are married or in a relationship. So I guess "winning."
That's new info to me. I wouldn't consider that winning, though. First, because that's lower than the general population. And second, because I am in a long-term relationship but don't consider myself to be winning (I have areas where I want improvement and don't see meaningful growth from my efforts).
I'm not sure what you mean by "winning" broadly; I thought it was just getting a girlfriend or something. Successfully improving in some target area? Literally, I was expecting this post to be about an AI arms race or something; apparently it's just calling all rationalists losers at an undefined contest.
Ok, that's fair - I didn't define my terms here and am guilty of "Expecting Short Inferential Distances". (I've now edited the post to add some background). By winning I was referencing this post where "winning" is defined as gaining utility, aka achieving your goals, whatever those goals may be.
As I concluded in my case, the issue here isn't that rationalists aren't winning at all, but that to my (limited) knowledge, they aren't achieving their goals as much as I would have predicted. Anyone who only predicted single-digit percentage improvement from learning about rationality probably doesn't have anything to explain. But those of us who expected rationalism to produce large and obviously significant gains, or expected rationalists to become known for their success across domains, do have something to explain.
OK, the thesis makes sense. Like, you should be able to compare "people generally following rationalist improvement methods" and "people doing some other thing" and find an effect.
It might have a really small effect size across rationalism as a whole. And rationalism might have just converged to other self-improvement systems. (Honestly, if your self-improvement system is just "results that have shown up in 3 unrelated belief systems", you would do okay.)
It might also be hard to improve, or accelerate, winningness in all of life by type 2 thinking. Then what are we doing when we're type 2 thinking and believe we're improving? I don't know. Good questions, I guess.
Because Rationalists overemphasize rationality while underemphasizing other important aspects of reality like strategy, persuasion, and meta-goal-directed-rationality. (What I mean by meta-goal-directed-rationality in this context is the degree to which an individual's actions are actually rational relative to a stated or internal goal, as opposed to an externalized system that is actually minimally goal-aligned in form or function.)
Yes, this topic has been discussed multiple times. But the last discussions were long enough ago that my choices for joining the conversation are [Necro] or [New post], so here we are. (You can jump to the second-to-last paragraph if you want to just see my conclusion.)
Background: Why am I asking this question?
After reading some comments I realize I should give some background on what I mean and why I'm asking this question.
Winning here refers to gaining utility, aka achieving your goals, whatever those goals may be, and is in reference to this post.
The reason for asking the question at all is that I personally expected a large improvement across domains from learning rationality. Plenty of articles here hint at how strong a power rationality might be: from a fictional story about 5 rationalist students solving quantum gravity in 1 month to claims that rationalists should be faster than Einstein, who was faster than science, rationality is presented as a superpower. To me, this didn't seem farfetched: given the astronomical advantage the human brain gave its wielders over other animals, it seemed plausible that learning the correct usage of the human brain could produce outsized gains (even gains several orders of magnitude smaller than the human advantage over the smartest animals would rightly be considered a superpower).
So, anyone who would never have predicted gains large enough to set rationalists in a league of their own might have nothing to explain and find the question uninteresting. But those of us who expected that the hallmark of a rationalist would be winning (aka achieving goals) do have something to explain. So perhaps a better phrasing is "Why aren't rationalists winning more?"
Let me start by immediately copping out of answering my own question:
I have no idea why rationalists aren't winning. Heck, I don't even know that they aren't winning, though I assume that one of their goals would be to spread the rationalist community, so I do have to downgrade the probability from my prior. And since I'm pretty confident that rationalists have the goal of eliminating AI X-risk and that that hasn't been achieved, I have to rate the probability of rationalists winning in all areas as <50%. But to answer the question of why tons of people I've never met or even interacted with aren't fulfilling utility curves I don't know, I would need to be a superintelligence or just plain epistemically irresponsible.
So, I'll substitute an easier question of "Why am I, an aspiring rationalist, not winning?" and then hope that either the reasons will happen to be similar or it can spark others to answer the same question and help assemble the mosaic of an answer to the big question.
Why am I not winning?
This is still a bad question. It has the form of "Why have I not stopped beating my wife" and also presumes there is something to explain and that "me not winning" isn't the default.
Am I winning? Is that unexpected in my model?
Ok, better - now we can start. Am I winning - well, yes and no. Life has many aspects. I win at pickleball, but we probably don't care. So let's oversimplify life into financial/technical and social/relational to make it easier to talk about.
| Area | Winning? |
| --- | --- |
| Financial/Technical | Yes |
| Social/Relational | No |
Ok, so now that we've condensed my life into 2 bits of info, we can ask the next question: does my model predict that?
I don't need to tell you about myself to tell you my model results, but you might want to try your own model out, so I'll give you info that might be inputs in your model. Feel free to skip if you don't care to try your own model on me.
Personal Info
ASL: 37/M/USA
Race: Caucasian
Disabilities: None
Education: Masters + Professional Certification
Education type: public
Parental Education: PhD (Father) and Masters (Mother)
Parental Income Level: Upper middle class
IQ: Unknown, but scored 98-99th percentile on standardized tests in school
EQ: Unknown and is this even actually a thing? I think I read that it disappeared when g was accounted for.
Religion: Agnostic (Previously Christian)
History as rationalist: Became aware of cognitive bias in 2012. Led to doubt of human testimony as a source of knowledge and of the human ability to know truth on controversial issues, which led to losing my faith (which was the biggest or 2nd biggest part of my identity up to that time). Agnosticism and lack of epistemic closure drove repeated investigations into bias and whether I could see or overcome my own biases.
First heard of LW a few years later when a friend shared the double-crux method with me. Have occasionally read LW articles since then. Became convinced by an independent blog post that AI X-risk was a real concern around 2017. Became interested in how other people think and in psychological wellbeing for myself around 2020, and began periodic mindfulness meditation. Read Thinking Fast and Slow in 2023. Read Rationality: AI to Zombies this year (2025).
Personality: Generally, spend a lot of time thinking/analyzing. Meta strategy of engine-building, self-improvement, recursing back to the problem that precedes my current problem. Analytical and not decisive. Middle popularity in middle/high school. Good sense of humor and decent sense of self kept me from getting picked on, but wasn't part of the 'in' crowd. Had a friend clique, was mostly in the band/drama geek crowd (and also the church crowd) but freely associated with other crowds. Decent athlete until dual-enrollment got in the way Junior year. Would be described by others as: intelligent, intense, book-smart, funny, a good listener, debatey, honest, stable, principled, a bit proud, good at social analysis, bad at social perception.
Neurodivergence: Probably something like undiagnosed ADD. Use caffeine to help focus and have difficulty focusing on mundane tasks but hyperfocus on complex ones. Have time-blindness. Have some traits in common with ASD (low social perceptivity, literalness, strong desire to talk about interests) but others that don't fit (enjoy new places, enjoy meeting new people, enjoy new foods/textures/experiences, good ability to understand emotions when I am paying attention).
Since I've been working on rationality in some way or another since 2012, my model splits predictions.
My model predictions:
| Area | Winning pre-2012? | Winning now? |
| --- | --- | --- |
| Financial/Technical | Yes | Yes |
| Social/Relational | No | Yes |
And the real results:
| Area | Winning pre-2012? | Winning now? |
| --- | --- | --- |
| Financial/Technical | Yes | Yes |
| Social/Relational | No | No |
Ok, so 3/4 correct. Sounds not bad until I realize that I lose to the null hypothesis of "No change" (which scores 4/4), and do only 1 prediction better than the simplest possible hypothesis of "No" across the board (which scores 2/4). My model is pretty simple: predict that initial winning will follow the personality description, but that later winning is a product of intelligence + commitment to rationality + commitment to self-improvement.
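To make that tally concrete, here is a minimal sketch of the scoring (the hypothesis labels and the dictionary layout are just my own shorthand for the tables above, nothing standard):

```python
# Score each predictor against the real results from the tables above.
real = {
    ("Financial/Technical", "pre-2012"): "Yes",
    ("Financial/Technical", "now"): "Yes",
    ("Social/Relational", "pre-2012"): "No",
    ("Social/Relational", "now"): "No",
}

predictions = {
    # My model: initial winning follows personality; later winning follows
    # intelligence + commitment to rationality and self-improvement.
    "my model": {
        ("Financial/Technical", "pre-2012"): "Yes",
        ("Financial/Technical", "now"): "Yes",
        ("Social/Relational", "pre-2012"): "No",
        ("Social/Relational", "now"): "Yes",
    },
    # Null hypothesis: nothing changes, so "now" just copies "pre-2012".
    "no change": {
        ("Financial/Technical", "pre-2012"): "Yes",
        ("Financial/Technical", "now"): "Yes",
        ("Social/Relational", "pre-2012"): "No",
        ("Social/Relational", "now"): "No",
    },
    # Simplest possible hypothesis: predict "No" everywhere.
    "always no": {cell: "No" for cell in real},
}

for name, pred in predictions.items():
    score = sum(pred[cell] == real[cell] for cell in real)
    print(f"{name}: {score}/4 correct")
# my model: 3/4, no change: 4/4, always no: 2/4
```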
So why did my model fail to predict the lack of social/relational winning now?
Possibilities:
Ok, so if I got the effect size/timing wrong - why?
Here's my current best hypothesis.
System 1 = Fast intuitive gut thinking: everything that doesn't require intentional attention.
System 2 = Slow, methodical logical thinking: everything that does require intentional attention.
Rationality and self-improvement techniques are learned with system 2. Financial/Technical tasks are handled by system 2. Better system 2 -> better performance on financial/technical tasks (at least if those are areas you learned techniques for).
Relationships and social interaction are handled by system 1. System 2 is not fast enough, so trying to do relationships on system 2 is hard mode and has a low ceiling for how much success you can get. Better system 2 != better performance at system 1 tasks.
System 2 has two ways to help: 1) Train system 1 to be better. But brains are complex and hard to intentionally change, so this can be a very slow process. 2) Take over when things start going badly to limit the damage - this works, but ironically it can decrease average relationship quality by stopping the natural pruning process where relationships with bad dynamics go badly enough to end. Best for close friends/family where the relationship will continue anyway.
This hypothesis has the advantage of correctly predicting that my close friends/family relationships would improve more than others. I did know the result before making the model, so it's not technically a good test, but I at least wasn't consciously thinking about it and didn't realize that this prediction would match until after creating the model.
So that's what I have to contribute to the discussion: My datapoint and the hypothesis that the reason for not winning is that the knowledge resides in a different part of the brain. Does that hypothesis fit your data?