There's a notion going around in the community that the best way to progress on one's rationality skills is to do lots and lots of theoretical study by yourself, or to go off and "level up" a bunch on your own before getting involved in object-level projects or efforts.
I think that's not only false, but almost the opposite of what needs to be done. In point of fact, much of the time I've seen people go off to meditate in the darkness for a year and "level up" a bunch, it has not only failed to help much but in some cases has seemed to actively harm: people tend to go in strange and unsound directions when doing this sort of thing on their own, without feedback.
Instead, I think that if you want to make progress on your rationality skills, the best way is to get involved with object-level projects and use them as testing grounds for your practice. Not only will this let you test skills in a more realistic and practical setting, it will also provide demonstrations you can later refer to, showing how things worked (or didn't), and it may well help you build a secret identity as well.
So, yeah. If you want to be an advanced rationalist, don't just theorize - go out there and do stuff.
This works for versions of "do something" that mainly interact with objective reality, but there's a pretty awful value-misalignment problem if the way you figure out what works is through feedback from social reality.
So, for instance, learning to go camping, cook, move your body better, or paint a mural on your wall might count, but starting a socially legible project may be actively harmful if you don't have a specific need it's meeting that you're explicitly tracking. And unfortunately, too much of what people imagine when they hear "go do something" ends up pointing at trying to collect credit for doing things.
Sitting somewhere doing nothing (which is basically what much meditation is) is at least unlikely to be harmful, and while of limited use in some circumstances, it is often an important intermediate stage between trying to look like you're doing things and authentically acting in the world.
I'm not sure I'm thinking about the same thing you are, so let me know what you think of these examples:
"Become a well known writer/blogger"
"Start a popular meetup for Y topic"
"Get respected in a community"
"Make a viral video"
Me phrasing what I think is your point:
Some of the most readily imaginable "things to do" are identified by their effects on social reality (make something popular, be respected). Learning to shape social reality is a skill in itself, but if you mistakenly believe you are learning how to shape objective reality, you will hit problems when you are confronted with a problem that actually requires shaping it.
Thanks for the specific examples. I'm more worried about subtler cases, that aren't overtly about social reality, but where feedback is mediated through it.
For instance, people like Taleb often name entrepreneurship as an especially "real" thing you can do, but founding a startup can look more like passing a series of tests where you're supposed to look like VCs' consensus idea of a business than like figuring out how to make a product you can sell profitably. And success in the corporate world is often even sillier (see just about any story from Moral Mazes for details, or Dilbert for the fictional version), even in firms that make useful physical products. If you're not careful about what kinds of feedback you respond to or incentive gradients you follow, you may learn to conflate the symbolic representation of the thing (optimized to get approval) with the thing itself.
Acting on social reality is an important skill for many projects, but not all ways of interacting with social reality are the same. In particular, coordinating to manage appearances and stories is very different from coordinating to do something in objective reality. (The engineer and the diplomat, Actors and scribes, words and deeds, On Drama, and Blame games all touch on this.)
I don't think that all your feedback needs to come from predominantly social sources; that said, I do think that maintaining at least *some* degree of alignment with social reality is pretty important - one failure mode that I've seen is people who go out there, develop very strange views, don't reconcile them with others, and basically end up in schism from the community, unable to bridge the inferential distance that their time away has created.
I'm not saying that their views are always wrong, and I am certainly not saying that social consensus is always right - I have very substantial disagreements with many views that are locally popular here! But what I do know is that, if you move too far out of contact with social reality, even if you find great insights they may become insights that you are unable to articulate or bring to others.
Yes, feedback from social reality shouldn't be your only tool -- but it's still important!
It's been said before for sure, but worth saying periodically.
Something I'd add, which seems like the failure mode I see particularly in EA spheres (less in rationalist spheres, but they blur together):
Try to do something other than solve coordination problems.
Or, try to do something that provides immediate value to whoever uses it, regardless of whether other people are also using it.
A failure mode I see (and have often fallen into) is looking around and thinking "hmm, I don't know how to do something technical, and/or I don't have the specialist skills necessary to do something specialist. But I can clearly see problems that stem from people being uncoordinated. I think I roughly know how people work, and I think I can understand this problem, so I will work on that."
This is a fairly different framing than Benquo's (and Eliezer's) advice, although I think it amounts to something similar.
At some point I'll get around to writing a proper post on this topic, but a few brief bullet points:
I think the main take-away is not "try to do something other than solve coordination problems", but rather "coordination problems are really difficult in general, like beating-the-stock-market level of difficult". They're a big-game-hunting kind of problem, with potentially huge rewards, but you need to go into it with the same mindset as beating the market: you need to either find a highly specialized niche or be the very best in the world at some relevant skill, and either way you also need to be fully competent at all the other relevant skills. If it looks like there's some easy low-hanging fruit to pick, you're probably missing something, unless there's a really good reason why nobody else in the world could have noticed that particular fruit.
There are lots of ways for people to improve their own life and those of friends without this being massively profitable, though. Like, it seems like you're conflating the coordination required to, say, start a discussion group with the coordination required to run a tech empire. (I have talked to someone in the rationalist community recently who believes that starting a club is hard because of the social dynamics involved, including expected social discouragement for excluding people.)
You can't justifiably reason from "doing this at world-class competence is hard" to "you can't get large gains by being moderately good at this instead of not trying at all".
[EDIT: note that I'm including things like "having more illuminating intellectual discussions", "being less afraid to communicate", and "doing less bullshit work" in "improving one's own life", so these feed into other goals, not just personal ones; put on your own oxygen mask first, and all that]
Totally agree. In particular, I do think that solving small-scale coordination problems is one of the main ways that individuals can have high positive impact on their company/community, relative to effort expended. (I like to use an example from an online car dealership where I used to work: the salespeople had no idea what cars were listed or at what price, which caused a lot of friction when someone called in about a car. Our product manager eventually solved this with five minutes of effort: he asked our marketing guy to forward his daily car-ad spreadsheet to the sales team.)
That said, generalized efficient markets principle doesn't go completely out the window the moment we zoom in from the whole-world-level. The bigger and more obvious the gain from some coordination problem, the more people have probably tried to solve it already, and the harder it's likely to be. All the usual considerations of generalized efficiency still apply.
This still leaves the question of why coordination problems have unusually high returns (at the world scale). Are there few people who are actually good at it? Is it a matter of value capture rather than value creation? Are people just bad at realizing coordination problems need to be solved? Different theories about the large scale potentially make different predictions about the difficulty and reward of small-scale coordination problems.
Value capture. There are lots of valuable coordination things and valuable non-coordination things, but coordination things lead to network effects and natural monopolies that allow more efficient value capture. If you can become a coordination bottleneck you can often capture more than all of the value.
Also because those who can coordinate use that and other political skills to capture more of the value from people doing other more rationality-compatible useful things.
That was exactly what the little Zvi voice in the back of my head said. I'm not yet convinced. The "network effects -> natural monopoly" argument is a strong one, but it still seems like coordination problems are the main economic bottleneck even when there isn't value capture involved, especially in smaller-scale situations.
In all of these cases, there's no clear natural monopoly and no obviously outsized value capture relative to value created. Rather, the "potential energy" is created by language barriers, intra-organization political coalitions, information silos, and communities with limited cross-talk.
That's not to say value capture isn't relevant to e.g. Google or Facebook. Obviously it is. But Google (and more debatably Facebook) still creates huge amounts of real value, regardless of how much it captures, and it does so with little "effort" - Google's employee base is tiny relative to value created, and most of those employees don't even work on search.
There is an argument to be made that I'm really talking about two qualitatively different cases here: coordination problems which involve breaking down cross-silo barriers, and coordination problems which involve building new markets. Maybe both of these are interesting on their own, but generalizing to all coordination problems goes too far? On the other hand, there are outside-view reasons to expect that coordination problems in general should get worse as the world modernizes - see From Personal to Prison Gangs.
I would also say that when coordination problems exist, it is often easy to see that they exist, so they look like a bottleneck. Whereas if other types of advances could improve things, it is often much harder to notice that a piece of technology is missing.
Specifically, they are not the sort of thing you should be practicing on if you haven't yet accomplished much (to the point that "go out and DO something" is the most useful advice to be following)
Agreed, though with the caveat that losing some money in the stock market is an important early step in gaining experience - presumably it's the same with coordination problems. But that sort of practice should be undertaken with the understanding that it's likely to fail on an object-level, and you want that learning experience to be cheap - e.g. don't make it harder for the next person to solve the coordination problem.
In particular, I wouldn't want to discourage people from building coordination skills by having a minimum level of status required to even try. Rather, we ideally want ways to experiment that aren't too damaging if they fail. (And, of course, we want to have realistic expectations about chance of success - ideally people go into a learning experience fully aware that it's a learning experience, and don't bet their house on day-trading.)
Additional very important reason to avoid working on coordination problems is Benquo's reason. If you are attempting a coordination game, even if you have an important technical innovation, you're going to spend the bulk of your time playing politics and social games, and getting feedback from political/social world. So by default you won't be training your rationality, instead you'll be training something that opposes rationality.
It's something we need. And at some point we need people with both skill sets. But you need to become stronger first.
This is an excellent point.
To the list of “but”s, I would add:
It is often (usually?) much more difficult to correctly identify coordination problems (due to the lurking danger of unknown unknowns, un-perceived strategic/game-theoretic considerations, insufficient domain knowledge, etc.) than it is to correctly identify simpler (or “object-level” or “technical” or “immediate” etc.) problems.
When attempting to solve such “non-coordination-problems”, it is often easy to get immediate, clear feedback on your attempted solution; whereas, when attempting to solve coordination problems, clear feedback on your attempted solution is hard to come by, may be obscured by a variety of factors, and may come in with a great delay (which itself is an obscuring factor).
(These two problems, of course, lead to the sort of situation described by this Russian saying: “It is very difficult to find a black cat in a dark room—especially if the cat is not there.” In the most pernicious such cases, you may end up contributing to the very problem you were trying to solve—while all the while thinking that your efforts are absolutely critical to preventing things from getting far worse!)
This seems like a reverse-all-advice thing. The failure mode you describe certainly exists, but so does "grab whatever is nearby in a desperate attempt to look busy (to yourself or others)."
I agree, but at least in the rationalist community the "balance of the error" seems to tilt towards inaction.
Trying to do things in the most expensive/competitive places to live is often needlessly punishing. Even if you have slack, you'll be trying to coordinate with people who don't. Plus, mimesis.
This seems reasonable.
See also Yudkowsky in Inadequate Equilibria for a similar sentiment:
Try to spend most of your time thinking about the object level. If you’re spending more of your time thinking about your own reasoning ability and competence than you spend thinking about Japan’s interest rates and NGDP, or competing omega-6 vs. omega-3 metabolic pathways, you’re taking your eye off the ball.
Along the same lines as Benquo’s and Elizabeth’s comments: Don’t Act. Just Think. (Slavoj Zizek)
Looking back, to the extent that I developed rationality skills, I learned almost all of them by Go Do Something combined with deliberate practice.
That doesn't mean you shouldn't do the reading, or have the discussions, but they need the context of Went And Started Doing Something to work.
What form of deliberate practice did you apply? This is an area that I'm really interested in, both personally and professionally.
I think pretty textbook. Looking at actions and details, not results. Constantly going over everything I'm doing, during and after doing it, picking apart every decision, action, and mistake against stupidly high standards, including writing all of it up where secrecy permitted.
I'm guessing that misses the details that would most be useful to you, so likely you should say more about what details you're curious about.
Soares also did a good job of impressing this in Dive In:
In my experience, the way you end up doing good in the world has very little to do with how good your initial plan was. Most of your outcome will depend on luck, timing, and your ability to actually get out of your own way and start somewhere. The way to end up with a good plan is not to start with a good plan, it's to start with some plan, and then slam that plan against reality until reality hands you a better plan.
The idea doesn't have to be good, and it doesn't have to be feasible, it just needs to be the best incredibly concrete plan that you can come up with at the moment. Don't worry, it will change rapidly when you start slamming it into reality. The important thing is to come up with a concrete plan, and then start executing it as hard as you can — while retaining a reflective state of mind updating in the face of evidence.
Malcolm Ocean and Duncan Sabien also made a good go of it in Just Do a Thing.
Malcolm: At the CFAR alumni reunion this August, my friend Alton remarked: “You’re really self-directed and goal-oriented. How do we make more people like you?” It didn’t take me long to come up with an answer: “I think we need to get people to go and do things that nobody’s expecting them to do.”
Duncan: When I was maybe nine years old, I had a pretty respectable LEGO collection dropped into my lap all at once. I remember that there was one small spaceship (about 75 or 80 pieces) that I brought along to summer camp, with predictable results.
I found myself trying to piece the thing back together again, and succeeded after a long and frustrating hour. Then, to be absolutely sure, I took it completely apart and reassembled it from scratch. I did this maybe forty or fifty times over the next few weeks, for reasons which I can’t quite put my finger on, and got to where I could practically put the thing together in the dark.
These days, I have an enormous LEGO collection, made up entirely of my own designs. My advice to pretty much everyone: “First become the kind of person who does things. Then worry about which things you’re doing.”