Nominating this for 2024 review. It seems like an accurate (in many cases, at least) model of a phenomenon I care about (and encounter fairly frequently, in myself and in people I end up trying to help with things) that I didn't previously have an accurate model of.
A further wrinkle / another example: a question like "what should I think about (in particular, what should I gather information about / update about)?", during the design process, wants these predictions.
Yes; this (or something similar) is why I suspect that "'believing in' atoms" may involve the same cognitive structure as "'believing in' this bakery I am helping to create" or "'believing in' honesty" (and a different cognitive structure, at least for ideal minds, from predictions about outside events). The question of whether to "believe in" atoms can be a question of whether to invest in building out and maintaining/tuning an ontology that includes atoms.
Prediction and planning remain incredibly distinct as structures of cognitive work,
I disagree. (Partially.) For a unitary agent who is working with a small number of possible hypotheses (e.g., 3), and a small number of possible actions, I agree with your quoted sentence.
But let’s say you’re dealing with a space of possible actions that’s much too large to let you consider each exhaustively, e.g. what blog post to write (considered concretely, as a long string of characters).
It’d be nice to have some way to consider recombinable pieces, e.g. “my blog post could include idea X”, “my blog post could open with joke J”, “my blog post could be aimed at a reader similar to Alice”.
Now consider the situation as seen by the line of thinking that is determining: “should my blog post be aimed mostly at readers similar to Alice, or at readers similar to Bob?”. For this line of thinking to make a good estimate of ExpectedUtility(post is aimed at Alice), it needs predictions about whether the post will contain idea X. However, the line of thinking that is determining whether to include idea X (or the unified agent, at those moments when it is actively considering this) will of course need good plans (not predictions) about whether to include X, and how exactly to include X.
I don’t fully know what a good structure is for navigating this sort of recombinable plan space, but it might involve a lot of toggling between “this is a planning question, from the inside: shall I include X?” and “this is a prediction question, from the outside: is it likely that I’m going to end up including X, such that I should plan other things around that assumption?”.
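(If it helps make the toggling concrete, here is a minimal sketch of the kind of loop I have in mind. Everything in it, including the toy utility function, the piece names, and the update numbers, is hypothetical illustration of the structure, not a claim about how minds actually implement this:)

```python
import random

# A toy illustration (hypothetical names and numbers throughout) of toggling
# each recombinable plan-piece between "plan it from the inside" and
# "predict it from the outside".
pieces = {"include_idea_X": None, "open_with_joke_J": None, "aim_at_alice": None}
predictions = {name: 0.5 for name in pieces}  # outside-view guesses; start uncertain

def expected_utility(choices):
    """Made-up interactions between pieces, purely for illustration."""
    u = 0.0
    if choices["aim_at_alice"]:
        u += 1.0 if choices["include_idea_X"] else -0.5  # Alice-readers want idea X
    else:
        u += 0.8 if choices["open_with_joke_J"] else 0.0  # Bob-readers like joke J
    return u

def plan_piece(name, samples=200):
    """From the inside: choose this piece, treating the *other* pieces as
    prediction questions (sampled from the current outside-view guesses)."""
    def eu_of(choice):
        total = 0.0
        for _ in range(samples):
            choices = {n: random.random() < predictions[n] for n in pieces}
            choices[name] = choice
            total += expected_utility(choices)
        return total / samples

    pieces[name] = eu_of(True) > eu_of(False)
    # From the outside: nudge the prediction toward the plan, without
    # collapsing it entirely -- plans get revised.
    predictions[name] = 0.9 if pieces[name] else 0.1

# Sweep the pieces a few times, toggling each between the two stances,
# until plans and predictions roughly stop moving.
for _ in range(5):
    for name in pieces:
        plan_piece(name)

print(pieces)
print(predictions)
```

The point of the sketch is just the alternation: each piece is sometimes the thing being planned (an inside-view choice) and sometimes background to be predicted (a probability the other pieces condition on).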
My own cognition seems to me to toggle many combinatorial pieces back and forth between planning-from-the-inside and predicting-from-the-outside, like this. I agree with your point that human brains and bodies have all kinds of silly entanglements. But this part seems to me like a plausible way for other intelligences to evolve/grow too, not a purely one-off human idiosyncrasy like giving birth through the hips.
Also, we understand basic arithmetic around here, which goes a long way sometimes.
It's a good point, re: some of the gap being that it's hard to concretely visualize the world in which AGI isn't built. And also about the "we" being part of the lack of concreteness.
I suspect there're lots of kinds of ethical heuristics that're supposed to interweave, and that some are supposed to be more like "checksums" (indicators everyone can use in an embodied way to see whether there's a problem, even though they don't say how to address it if there is a problem), and others are supposed to be more concrete.
For some more traditional examples:
It would be too hard to try to equip humans and human groups for changing circumstances via only a "here's what you do in situation X". It's somewhat easier to do it (and traditional ethical heuristics did do it) by a combination of "you can probably do well by [various what-to-do heuristics]" and "you can tell if you're doing well by [various other checksum-type heuristics]". Ethics is there to help us design our way to better plans, not simply to hand us those plans ready-made.
Another place where I'll think and act somewhat differently as a result of this conversation:
Okay, but: it's also hard to find individuals who are willing to speak for heuristic C, in a way that I suspect differs from what it was like for leaded gasoline, and from what I remember hearing as a kid in the late '80s about the ozone layer.
It's a fair point that I shouldn't expect "consensus", and should've written and conceptualized that part differently, but I think heuristic C is also colliding with competing ethical heuristics in ways the ozone situation didn't.
I listed the cases I could easily list of full-blown manic/psychotic episodes in the extended Bay Area rationalist community (episodes strong enough that the person in most cases ended up hospitalized, and in all cases ended up having extremely false beliefs about their immediate surroundings for days or longer, e.g. “that’s the room of death, if I walk in there I’ll die”, or “this is my car” (said of the neighbor’s car)).
I counted 11 cases. (I expect I’m forgetting some, and that there are others I plain never knew about; count this as a convenience sample, not an exhaustive inventory.)
Of these, 5 are known to me to have involved a psychedelic or pot in the precipitating event.
3 are known to me to have *not* involved that.
In the other 3 cases I’m unsure.
In 1 of the cases where I’m unsure about whether there were drugs involved, the person had taken part in a several-weeks experiment in polyphasic sleep as part of a Leverage internship, which seemed to be part of the precipitating event from my POV.
So I’m counting [between 6 and 8] out of 11 for “precipitated by drugs or an imprudent extended sleep-deprivation experiment” and [between 3 and 5] out of 11 for “not precipitated by doing anything unusually physiologically risky.”
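(Spelling out the bounds arithmetic, in case it's useful; these are the same counts as above, nothing new:)

```python
# Tallying the bounds above; same counts as in the text.
drugs = 5     # known to involve a psychedelic or pot
no_drugs = 3  # known not to involve drugs
unsure = 3    # drug involvement unknown
sleep = 1     # the polyphasic-sleep case; one of the 3 "unsure" cases
total = drugs + no_drugs + unsure  # 11

risky_min = drugs + sleep   # 6: the drug cases, plus the sleep-experiment case
risky_max = drugs + unsure  # 8: if all 3 unsure cases in fact involved drugs
not_risky_min, not_risky_max = total - risky_max, total - risky_min  # 3 and 5

print(f"physiologically risky precipitant: {risky_min}-{risky_max} of {total}; "
      f"no such precipitant: {not_risky_min}-{not_risky_max} of {total}")
```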
(I’m not here counting other serious mental health events, but there were also many of those in the several-thousand-person community across the last ten years, including several suicides; I’m not trying here to be exhaustive.)
(Things can have multiple causes, and having an obvious precipitating physiological cause doesn’t mean there weren’t other changeable risk factors also at play.)
I appreciate the explicit, fairly clear discussion of a likely gap in what I'm reading about parenting and kids. I was aware of a gap near here, but the post added a bit of detail to my model, and I like having it in common knowledge; I also hope it may encourage other such posts. (Plus, it's short and easy to read.)