- Conversely, there is some (potentially high) threshold of societal epistemics + coordination + institutional steering beyond which we can largely eliminate anthropogenic x-risk, potentially in perpetuity
Note that this is not a logical converse of your first statement. I realize that the word "conversely" can be used non-strictly and might in fact be used this way by you here, but I'm stating this just in case.
My guess is that "there is some (potentially high) threshold of societal epistemics + coordination + institutional steering beyond which we can largely eliminate anthropogenic x-risk in perpetuity" is false — my guess is that improving [societal epistemics + coordination + institutional steering] is an infinite endeavor; I discuss this a bit here. That said, I think it is plausible that there is a possible position from which we could reasonably be fairly confident that things will be going pretty well for a really long time — I just think that this would involve one continuing to develop one's methods of [societal epistemics, coordination, institutional steering, etc.] as one proceeds.
Basically nobody actually wants the world to end, so if we do that to ourselves, it will be because somewhere along the way we weren’t good enough at navigating collective action problems, institutional steering, and general epistemics
... or because we didn't understand important stuff well enough in time (for example: if it is the case that by default the first AI that could prove the Riemann hypothesis would eat the Sun, we would want to firmly understand this ahead of time), or because we weren't good enough at thinking (for example, people could just be lacking in IQ, or have never developed an adequate sense of what it is even like to understand something, or be intellectually careless), or because we weren't fast enough at disseminating or [listening to] the best individual understanding in critical cases, or because we didn't value the right kinds of philosophical and scientific work enough, or because we largely-ethically-confusedly thought some action would not end the world despite grasping the key factual broad strokes of what would happen after, or because we didn't realize we should be more careful, or maybe because generally understanding what will happen when you set some process in motion is just extremely cursed.[1] I guess one could consider each of these to fall under failures in general epistemics... but I feel like just saying "general epistemics" is not giving understanding its proper due here.
Many of these are related and overlapping. ↩︎
the long run equilibrium of the earth-originating civilization
(This isn’t centrally engaging with your shortform, but:) it could be interesting to think about whether there will be some sort of equilibrium or whether development will meaningfully continue (until the heat death of the universe, or until whatever other bound of that kind holds up, or maybe just forever)[1]
Summarizing documents, and exploring topics I'm no expert in: Super good
I think you probably did this, but I figured it's worth checking: did you check this on documents you understand well (such as your own writing) and topics you are an expert on?
I think this approach doesn't make sense. Issues, briefly:
also, note that it generally doesn't make sense to speak of the logarithm of a matrix — a matrix can have (infinitely) many logarithms ( https://en.wikipedia.org/wiki/Logarithm_of_a_matrix#Example:_Logarithm_of_rotations_in_the_plane ) ↩︎
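A minimal illustration of the non-uniqueness, following the planar-rotation example on the linked page (the notation $R(\theta)$, $J$ just follows the convention there): the rotation matrix

$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

satisfies $\exp\big((\theta + 2\pi k)\,J\big) = R(\theta)$ for every integer $k$, where $J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, so each of the infinitely many matrices $(\theta + 2\pi k)\,J$ is a logarithm of $R(\theta)$.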
I'd rather your "that is" were a "for example". This is because:
I feel like the items on your current list account for only a small fraction of the responsibility for what I'd consider software updates in humans, and that the list sorta fails to address almost all the ordinary stuff that goes on when individual humans are learning stuff (from others or independently) or when "humanity is improving its thinking". But that makes me think that maybe I'm missing what you're going for with this list?[1] Continuing with the (possibly different) question I have in mind anyway, here's a list that imo points toward a decent chunk of what is missing from your list, with a focus on the case of independent and somewhat thoughtful learning/[thinking-improving] (as opposed to the case of copying from others, and as opposed to the case of fairly non-thoughtful thinking-improving)[2]:
I will note that when I say the items account for only a small fraction of the responsibility, this is wrt a measure that cares a lot about understanding how it is that one improves at doing difficult thinking (like, math/philosophy/science/tech research), and I could maybe see your list covering a larger fraction if one cared relatively more about software updates affecting one's emotional/social life or whatever, but I'd need to think more about that. ↩︎
it has such a focus in large part because such a list was easy for me to provide — the list is copied from here with light edits ↩︎
two sorta-examples: humanity starting to think probabilistically, humanity starting to think in terms of infinitesimals ↩︎
I agree it’s a pretty unfortunate/silly question. Searle’s analysis of it in Chapter 1 of Seeing Things as They Are is imo not too dissimilar to your analysis of it here, except he wouldn’t think that one can reasonably say “the world we see around us is an internal perceptual copy” (and I too have trouble compiling this into anything true), though he’d surely agree that various internal things are involved in seeing the world. I think a significant fraction of what’s going on with this “disagreement” is a bunch of “technical wordcels” being annoyed at what they consider to be careless speaking that they take to be somewhat associated with careless thinking.
See e.g. Chapter 1 of Searle's Seeing Things as They Are for an exposition of the view usually called direct realism (I'm pretty sure you guys (including the OP) have something pretty different in mind than that view, and I think it's plausible you all would actually just agree with that view)
https://www.lesswrong.com/posts/DMxe4XKXnjyMEAAGw/the-geometric-expectation