This feels like an extremely important point. A huge number of arguments devolve into exactly this dynamic because each side only feels one of (the Rock|the Hard Place) as a viscerally real threat, while agreeing that the other is intellectually possible.
Figuring out that many, if not most, life decisions are "damned if you do, damned if you don't" was an extremely important tool for me to let go of big, arbitrary psychological attachments which I initially developed out of fear of one nasty outcome.
I agree, but I was more asking about how you think your insight about the "distance to safety" can help with that.
Well, after a bounded number of initially difficult "far-out explorations" that cover the research landscape efficiently, the hope is that almost everything is reasonably close to safety henceforth.
Interesting. My own approach is usually to collaborate/ask someone who knows the subject you want to learn. But that does require being okay with asking stupid questions.
Yes, I think your approach is ideal for the efficiency of learning if anxiety were not a factor. Unfortunately, the people who best know the subjects I want to learn are either people I care about impressing or people so well-versed in the subject that they have difficulty bridging the inferential abyss between us. At least for me, it is hard to treat them as "psychologically nearby" companions who have my back.
Even after getting much better at asking stupid questions, the maximum number I feel okay asking in a meeting with someone who already knows a subject is ~3, not the ~40 I actually want to ask.
Very nice post! I would add that it is a useful and nontrivial skill to notice what you're paying attention to. It may not be helpful to try getting curious unless you know concretely what this means about how you move your eyes and attention.
To give a video game example, players new to a genre have no idea where to put their eyes on the screen. When I told a friend playing Hades to put their eyes on their own character, instead of on the enemies, they instantly started taking half as much damage. I got a lot better at Dark Souls, on the other hand, by staring at the enemies to catch their telegraphed movements and not at myself. Similarly, I had a friend who could not get into Path of Exile because they wanted to dive into playing the game mechanically and were frustrated by my claim that to properly enjoy the game you spend most of your energy staring at skill trees and item builds and wikis. I found that my natural state playing PoE was leaving my eyes half unfocused on the game, spamming my skill rotation while thinking about my next item or skill upgrade on my second monitor.
To listen properly and be curious, I think the main places one should focus one's attention (in addition to the words they're saying) are: (a) on the other person's face and body, (b) on the other person's tone of voice, and (c) on your own bodily sensations. In other words, everywhere but your own thoughts.
To be clear, the papers would almost certainly have gone through anyway, the helpful thing was being very comfortable with Bayes rule and immediately noticing, for example, that conditioning on an event with probability 1-o(1) doesn't influence anything by very much.
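To spell out the step being gestured at: if $\Pr[E] = 1 - o(1)$, then for any event $A$,

$$\Pr[A \mid E] = \frac{\Pr[A \cap E]}{\Pr[E]} \le \frac{\Pr[A]}{1 - o(1)} = (1 + o(1))\,\Pr[A],$$

and in the other direction, since $\Pr[A \cap E] \ge \Pr[A] - \Pr[E^c]$,

$$\Pr[A \mid E] \ge \Pr[A] - \Pr[E^c] = \Pr[A] - o(1).$$

So conditioning on a probability-$(1-o(1))$ event moves every probability by at most an additive $o(1)$ and a multiplicative $1 + o(1)$ factor, which is usually negligible.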
Another trick I derived from this comfort is to almost never actually condition on small-probability events. Instead, the better thing to do is to modify the random variables you care about to fail catastrophically in the small probability scenario.
For example, in graph theory I might care about controlling a random variable X which is the number of times a certain substructure appears in the random graph G(n,p), but to do so I need to condition away some tail event E like the appearance of a vertex of extremely high degree. Instead of working with conditional probability for the rest of the argument (which might go on to condition away 3 or 4 other tail events), the nicer thing to do is to modify X into a variable X' which is defined to be 0 when E occurs, and reason about X' instead. This is better for multiple reasons; the most important one being that the edge appearances in G(n,p) are no longer independent once you condition on the complement of E.
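A minimal simulation sketch of this trick (the parameters and the degree cutoff are illustrative choices, not from the original): take X to be the triangle count of G(n,p), let the tail event E be "some vertex has degree above a cutoff", and define X' to agree with X off E and be 0 on E. When E is rare, the Monte Carlo estimates of E[X] and E[X'] are nearly identical, which is exactly why reasoning about X' loses almost nothing.

```python
import random

def triangles_and_maxdeg(n, p, rng):
    """Sample G(n,p); return (triangle count, maximum degree)."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    # Sum of common neighbors over edges counts each triangle 3 times,
    # once per edge of the triangle.
    triple = sum(len(adj[i] & adj[j])
                 for i in range(n) for j in adj[i] if j > i)
    return triple // 3, max((len(a) for a in adj), default=0)

def estimate(n=30, p=0.2, cutoff=15, trials=200, seed=0):
    """Monte Carlo estimates of E[X] and E[X'], where X' zeroes out
    the tail event {max degree > cutoff}."""
    rng = random.Random(seed)
    xs, xps = [], []
    for _ in range(trials):
        x, maxdeg = triangles_and_maxdeg(n, p, rng)
        xs.append(x)
        xps.append(0 if maxdeg > cutoff else x)  # X' = X * 1[not E]
    return sum(xs) / trials, sum(xps) / trials

mean_x, mean_xp = estimate()
# X' <= X pointwise, and with these parameters the tail event is so
# rare that the two means should agree to high precision.
print(mean_x, mean_xp)
```

Note that X' ≤ X always holds, so the modification only ever costs you an error term of E[X · 1_E], which is tiny whenever E is a genuine tail event.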
I think mostly what I got out of the Sequences was removing an air of mystery around Bayes rule. Here by mystery I mean "System 1 mystery," i.e. that before I read the Sequences, to figure out a conditional probability I would have to sit down and carefully multiply and divide. This post also helped.
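For concreteness, the kind of multiply-and-divide being referred to (numbers purely illustrative): a test with a 90% true-positive rate and a 10% false-positive rate, applied to a condition with 1% base rate, gives

$$\Pr[\text{sick} \mid +] = \frac{\Pr[+ \mid \text{sick}]\,\Pr[\text{sick}]}{\Pr[+]} = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} \approx 0.083,$$

and the "System 1" version of this is just noticing that false positives outnumber true positives roughly 11 to 1.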
How do you think this applies to intellectual pursuits? I have in mind research advising: in my experience, some people who I think could be great researchers are terrified of exploring a part of knowledge where there is no answer yet. And even established researchers can easily be afraid of learning a new subject or a new technique that would help them tremendously. Maybe the comfort flags should be links to stuff that the graduate student/researcher knows well? Anecdotally, people seem more open to learning what you want to say if you link it to their own field.
I don't pretend to be an established researcher, but here is what I had in mind. Most researchers at one point or other spend some amount of time white-knuckle learning things that are outside their comfort zones, but usually these things are just barely outside. My suggestion would be that all other things equal, some of that time should be spent learning things really far out instead.
Also I think learning in pairs is a very helpful tool. The active ingredient is to have someone you trust enough to freely share your ignorance and ask basic questions, and the easiest way to get this trust is to find someone who is also obviously ignorant of the same thing.
I wonder if the following are also examples of motive ambiguity:
Let me share some more gears/evidence. I believe something a little more interesting happens than what you're saying (which is definitely one piece of the puzzle).
(1) It's fun to look at how the audience organizes itself during math talks. The faculty almost always sit in the front row, point out mistakes more directly ("You mean this" instead of "Is this correct?"), ask questions more often (and with less hand-raising), and sometimes even feel comfortable answering questions in the speaker's stead. I suspect this is a social role that everyone learns through attending enough seminars.
(2) Faculty have access to a lot more privileged information about other mathematicians than everyone else. They are on editorial boards, hiring committees, admissions committees, conference organization, awards panels, etc. I got a confidence boost after peer reviewing my first couple of papers; the transition to faculty is this times ten, in terms of data to train on and notice you're being underconfident.
(3) Professors spend a lot of time with their research groups/PhD students/undergrads compared to time in the company of other faculty, so they aren't doing as much comparing of themselves with other faculty as you would think. At least in mathematics, it's generally preferred for faculty at the same university to have research interests as far apart as possible (to cover a breadth of fields), so each professor interacts a great deal on the day-to-day with their group of undergrads/grad students/postdocs. Meetings with other faculty are mostly logistical, with the possible exception of a handful of close collaborators. This is probably even more true in fields where a professor is literally the head of their own lab and the PI for all research that happens in the lab. I think status feelings tend to work on the level of "people you interact with most on a daily basis" instead of "people you intellectually compare yourself to."
Fascinating! Definitely plan to check this out, thanks for the recommendations and detailed introduction.
Thank you for writing this, it led me to reconsider this phenomenon from a different perspective and revisit Lsusr's post, as well as Competent Elites, which seemed to really string things together for me.
Lsusr is primarily talking about success "outside of the usual system", which generally frees someone up even more from the usual system. Start-ups are the primary example of this. Alkjash is primarily talking about success within the existing system. The stereotypical successful career is an example of this.
This definitely feels like part of the thing, but I would (as with many things) phrase it in the language of status. I claim that much of the "freedom" that Lsusr talks about and the "intelligence and aliveness" Eliezer talks about are consequences of feeling high status. In academia, the standard solution to all of the ennui and anxious underconfidence a grad student or postdoc feels is ... wait for it ... tenure. Your inhibitions magically disappear when you become faculty, and mathematicians often become confident to explore, gregarious, and willing to state beliefs even in dimensions orthogonal to their expertise (e.g. Terry Tao on Trump). This is explained by direct changes in the brain, as well as external changes in how the intelligent social web coopts your cognition, when a person gains status.
My guess is that the difference between what you call Lsusr's "outside of the usual system" and my "within the existing system" is the difference between systems with shorter and looser status hierarchies and those with longer and tighter ones. In the former it is easier for an exceptional individual to quickly gain competence and reputation and reap the benefits of status. This difference is in turn mostly explained by systems having different levels of play. Thus, one would find success more agency-limiting for a longer period of time in professional Go than in professional Starcraft, in mathematics than in AI/ML, in Google than in a startup, etc.
My interpretation of Lsusr's philosophy is that there is a magic sauce that rhymes with arrogance which allows one to turn on powerful high-status feelings and behaviors (confidence, agency, vision) regardless of circumstance. Unfortunately there are harsh cultural defenses against this kind of thing that one has to prepare for.
Very interesting! This thread is the first time I've heard of NLP (might have seen the acronym before but I thought it was ML people referring to Natural Language Processing), I will definitely check it out. I guess I just rounded off my observations to the nearest things I recognized. I'm not surprised that Robbins stuff is embedded in a larger technique but am kind of surprised that I've been ignorant of it for so long.
Is there a book or resource that you would most recommend to learn NLP?