Why I'm betting on doing both of these things at once: from my POV they're basically one thing. And that one thing is: having a go at the world with everything we've got.
Yeah, to be clear, this is totally what I'm doing as well, and I strongly empathize with this sentence.
Rather than challenging doing them at once, I should have challenged learning them at once. Let me try to be concrete here. Some people have a level of stoicism or repression that means that they've almost never cried. For those people, getting to a level of emotional release that allows crying is a pretty important step towards emotional health. After that point, it's pretty powerful for them to mix that skill with a simultaneous "orienting towards the world" move (as we both observed at LARC). But when I look through the classes listed in the CFAR handbook, almost all of them seem pretty structurally antithetical to "learning to cry", because they are trying to get participants to absorb a primarily-epistemic frame.
(Though to be clear, I'm not saying that CFAR should be optimizing for making people cry. Running workshops where a lot of people cry is a risky endeavor. Again, it's more about the meta-level question of how CFAR orients to all the crazy powerful stuff lying around.)
When it comes to accomplishing cool stuff [the other half of being human, besides figuring out what's true]
Not super important but wanted to flag that this feels like a very impoverished description of being human. There's also, you know, having fun, loving others, relaxing, etc.
I think your claim about AoA being orthogonal relies on rejecting my claim above that "emotional blocks are the main reason why people make bad decisions".
For example, AoA doesn't teach stuff like Murphyjitsu or pre-mortems, but on my model both of those are primarily useful as ways of sidestepping how painful and aversive the prospect of failure usually is. So if AoA tries to address the underlying pain of (and aversion to) failure (which it sounds like it does), I'd count it as trying to do the same thing, but more robustly.
Re Jhourney, I know they advertise themselves as being about accessing bliss states, but when I attended a month or two ago they actually seemed to place just as much (or more) emphasis on emotional processing more generally.
I appreciate this comment, both in its substance and how careful you've been to phrase it non-adversarially. The simple answer is that I am very unfamiliar with the lineage that influenced Land, and continental philosophy more generally. As far as my philosophy tutors at Oxford were concerned, that whole field might as well have not existed; and I never found my way into it independently.
I have been intending to look more into several of the people you listed (and other related thinkers). Until I've done so, I may well say silly things due to ignorance of them, and am very happy to be corrected when I do so.
Having said that, your comment reads like the sort of response I'd expect if I'd made a claim like "Land is one of the most original thinkers ever" or "Land's thinking is unprecedented". By "Land has been pushing the frontier" I meant something much weaker, which is totally consistent with him being greatly influenced by the people you named (who also were pushing the frontier in their own eras).
To be more specific and object-level: in a healthy discipline, many people (e.g. most PhD students) will sync up on what the field as a whole considers to be open questions, and then try to push the frontier forward on some of them. However, I think of modern political philosophy as very unhealthy, because the people who do the structuralist theorizing (to borrow your phrase from your other comment) are unwilling to engage with various obvious and important yet tabooed facts/positions. Whereas the people who are willing to engage with such facts are largely uninterested in structuralist theorizing. And so, while there's no "consensus" on what the open questions are, or what progress on them would look like, merely being willing to combine these two approaches gets you a long way towards (and sometimes over) the frontier in my book.
I agree with you, but also think you're not going far enough. In a world where things are changing radically, the space of possibilities opens up dramatically. And so it's less a question of "does advocating for policy X become viable?", and more a question of "how can we design the kinds of policies that our past selves wouldn't even have been able to conceive of?"
In other words, in a world that's changing a lot, you want to avoid privileging your hypotheses in advance, which is what it feels like the "pro AI pause vs anti AI pause" debate is doing.
(And yes, in some sense those radical future policies might fall into a broad category like "AI pause". But that doesn't mean that our current conception of "AI pause" is a very useful guide for how to make those future policies come about.)
Re politics, people like Yarvin and Land have been pushing the frontier pretty illegibly. Scott Alexander too. And Vassar is a central example I was thinking about. For others who are a bit less well-known see my curriculum—I like N. S. Lyons, Nathan Cofnas, Ben Landau-Taylor, etc.
Re emotional processing, I don’t have a great sense of the history here. But I feel like there’s been gradual development of stuff like internal family systems, ideal parent figure therapy, circling, jhana meditations, and various kinds of body work over the last few decades. Stuff like The Body Keeps the Score (though I haven’t read that specifically) and Existential Kink have been popularized more recently. Unlocking the Emotional Brain was 2012. I don’t want to hang my hat on any of these in particular, since it’s hard for me to concisely convey my models for when and how each of them is useful. But hopefully that conveys a rough sense of the kinds of things that seem plausibly like progress.
Yeah, good question. Now that I reflect on it I don’t have a great reason. I think I had mentally cached their work on psychology as attempting to be general enough to apply to AIs too, but I don’t know if that’s accurate.
(Geoff’s orientation to philosophy of science feels related to some lines of thinking that led me to focus on agent foundations, but again that’s a pretty speculative connection.)
Yepp, sounds like a thing we should chat about.
I’m using “top-down” and “bottom-up” pretty loosely, but some intuitions to help triangulate:
I like this comment a lot.
It reminds me of a tweet I saw recently, which read in part: "being very online makes you crazy but if you're not very online then the crazy just sneaks up on you".
That is, I think you're plausibly right that most people you knew in the bay area were half-way to crazy. But also, it's a hard unsolved problem how to adapt to a world that's changing rapidly without being half-way to crazy.
Even noticing this problem seems like a valuable step, though.
I was recently looking at the Astra Fellowship program, and found myself noticing skepticism that much value would come out of their "Strategy" category of research. In order to channel that skepticism productively, I want to talk a bit about how I think people might do valuable strategy research.
The most important thing is something like: trying to orient to the possibility of big changes in our ontologies going forward. What I mean by "strategy" is in significant part "ideas and plans that are robust to paradigm shifts".
Having said that, there's no way to do strategic thinking without relying on some part of your existing ontology. So the thing I'm advocating for is more like "keep track of the fields where paradigm shifts seem most plausible, and actually try to picture what it would be like for that part of your ontology to radically improve". Of course answering questions like these is extremely difficult because ontology improvements are anti-inductive: if you could picture them then you would have already improved your ontology.
One strategy for getting around that is to use analogies—e.g. "imagine a future where we understand social sciences as well as we currently understand natural sciences". And if you flesh out these analogies enough, I expect that in doing so you'll actually help push the frontier towards that paradigm shift. (E.g. if you thoroughly flesh out a world where we've solved alignment as well as we've solved problem X or problem Y, I expect that you'll discover some promising alignment research directions.)
To be more concrete, here are the three frontiers that I'm currently tracking as places where there's a lot of room for radically better ontologies that transform how we think and act:
Note that the distinction between these three frontiers is a bit artificial, because I think of these three areas as deeply entangled (e.g. understanding ethics cuts across all of them). And in fact, my current bet on my own career is something like "by trying to stay near these three frontiers at once, I'll be able to transfer insights between them to make progress, even if I don't have the skills required to push any one of these frontiers individually".
I don't know of anyone else who's trying to do this right now (though my sense is that Leverage was doing all three for most of the 2010s). The closest I'm aware of are:
When I encounter people who aren't near any of these frontiers, one common reason seems to be that they over-index on legibility. The most common case is people who care too much about academic prestige, and aren't tracking the ways in which academia is broken. Another common category is EAs who try to backchain from having impact, thereby ruling out the most promising strategies (which are often too nascent or weird or controversial to have clear paths to impact). But I also encounter this with more rationalist-flavored people who are "stuck" in existing paradigms (like bayesianism) because they over-index on the legible advantages of those paradigms (e.g. coherence arguments).
Conversely, when I encounter people who seem to me to have meaningfully pushed one of these frontiers, they are often significantly more sympathetic to illegible thinking than I am. To me, illegible thinking is a useful waypoint towards coming up with theories or ideas that are legible to a wide range of people. Whereas they often seem to see an illegible framework or theory as a success in itself, and are confused or frustrated that I am asking for more legibility (in the form of verifiable predictions, concrete case studies, precise equations, or even just written statements of the framework's key points). The easiest example to point to is this exchange (in section 5.3) with Yudkowsky, where I asked him which successful advance predictions his model of utilities had made. Not only did he not name any, he was also critical of me for even wanting him to do so. (I get similar vibes from Wei Dai's uncertainty about why I asked him for an in-depth writeup of cases where metaphilosophy has been useful.)
This leaves me confused as to whether I'm too focused on legibility to meaningfully push the frontiers I mentioned above. As one concrete example, should I even be writing this post? It feels like making my thinking more legible to more people is an easy win, but maybe there's a kind of "anchoring" effect that makes it harder for me to think crazy novel thoughts if I know that I'm going to need to justify them later. (But also, maybe this kind of anchoring is good for staying sane.) I'm confused about this overall, but for now will continue trusting my instincts.
Somehow this is bouncing off me. We should probably talk more directly about it. I'll quickly give two pointers to where it feels like it's bouncing off:
This is enough of a stretch in my ontology that I think we're probably talking past each other.
Yeah, I very much got the latter vibe from LARC. But the former vibe does seem pretty hard to avoid in contexts where there are designated teachers, and they're teaching students, and it's a time-limited container with no expectation of future interaction, and so on.
I also notice that things like meditation traditions are super into a kind of "everyone's a wise self-authored creature" vibe while also being super into a "here is the tradition of knowledge I'm in, which y'all have come to learn from and be helped by" vibe, and they don't seem to treat the two as being in conflict.
Hence there's something here that isn't quite clicking for me. But you shouldn't feel pressure to try to explain it in this format.