"it is impossible for there to be a language in which most sentences were lies"

Is it? Suppose that 40% of the time people truthfully described what colour a rock was, and 60% of the time they picked a random colour to falsely describe it as (perhaps some speakers benefit from obscuring the rock's true colour but derive no benefit from false belief in any particular colour). Then most sentences describing the rock would be lies, and yet listening to someone describe an unknown rock would still allow you to usefully update your priors. That ability to benefit from communication seems like all that should be necessary for a language to survive.
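The update can be worked through with Bayes' rule. A minimal sketch, assuming a palette of ten equally likely colours (the palette size is my assumption; the 40/60 split is from the scenario above, with lies drawn uniformly from the colours other than the true one):

```python
# Assumed setup: 10 equally likely rock colours; the speaker tells the truth
# 40% of the time and otherwise names a uniformly random *wrong* colour.
NUM_COLOURS = 10
P_TRUTH = 0.4
P_LIE = 0.6

prior = 1 / NUM_COLOURS  # uniform prior that the rock is, say, red

# Likelihoods of hearing "red":
p_say_given_true = P_TRUTH                   # rock really is red
p_say_given_false = P_LIE / (NUM_COLOURS - 1)  # rock is one of the 9 others

# Bayes' rule: P(red | speaker says "red")
evidence = p_say_given_true * prior + p_say_given_false * (1 - prior)
posterior = p_say_given_true * prior / evidence

print(f"prior:     {prior:.2f}")      # 0.10
print(f"posterior: {posterior:.2f}")  # 0.40
```

So even though most statements are lies, hearing "red" quadruples your credence that the rock is red (0.10 → 0.40 under these assumed numbers), which is exactly the "useful update" the comment describes.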
Without rejecting any of the premises in your question I can come up with:
Low tractability: you assign almost all of the probability mass to one or both of "alignment will be easily solved" and "alignment is basically impossible"
Currently low tractability: If your timeline is closer to 100 years than 10, it is possible that the best use of resources for AI risk is "sit on them until the field develops further", in the same sense that someone in the 1990s wanting good facial recognition might have been best served by waiting for modern ML.
Refusing to prioritize highly uncertain causes, in order to avoid the Winner's Curse outcome where your highest priority turns out to be something with low true value and high noise
Flavours of utilitarianism that don't value the unborn and would not see it as an enormous tragedy if we failed to create trillions of happy post-Singularity people (depending on the details, human extinction might not even be negative, so long as the deaths aren't painful)
I got all of the octopus questions right (six recalled facts; #6 intuitively plausible; #9 seems rare enough that humans should be unlikely to observe it; and #2 was uncertain until I completed the others, then metagamed that a 7/2 split would be "too unbalanced" for a handcrafted test), so the only surprising fact I have to update on is that the recognition thing is surprising to others.

My model was that many wild animals are capable of recognizing humans, octopuses are particularly smart as animals go, and no other factors weigh heavily. That octopuses evolved totally separated from humans didn't seem significant: although most wild animals have been exposed to humans, I see no obvious incentive for most of them to recognize individual humans, so the cases should be comparable on that axis. I also put little weight on octopuses not being social creatures, because while there may be dedicated social-recognition modules, A: many animals can recognize humans, and it seems intuitively unlikely that all of them are generalizing their social modules to our species, and B: at some level of intelligence it must be possible to distinguish individuals through sheer general pattern-recognition; for ten humans an octopus would only need about four bits of information, and animal intelligence in general seems good at distinguishing between a few totally arbitrary bits.
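As a sanity check on that back-of-envelope figure (the only input is the ten-human count from the comment):

```python
import math

# Distinguishing N equally likely individuals requires log2(N) bits of
# information; whole bits suffice once rounded up.
individuals = 10
bits_exact = math.log2(individuals)   # ~3.32 bits
bits_needed = math.ceil(bits_exact)   # 4 whole bits

print(f"{bits_exact:.2f} bits, rounded up to {bits_needed}")
```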
The evolutionary theory of aging is interesting, and it seems to predict that an animal's maximum age will be proportional to its time-to-accidental-death. Just thinking of animals and their ages at random, this seems plausible, but I'm hardly being rigorous; have there been proper analyses done of that?
Could it be that the average customer hasn't thought it through enough to realize they are incinerating $1.67 of time-value, and would thus prefer to pay $15 plus *mumble* time as opposed to $15.25 plus zero time?
If you're not saying to go into AI safety research, what non-business-as-usual course of action are you expecting? Is your premise that everyone taking this seriously should figure out their comparative advantage within an AI risk organization, since such organizations contain many non-researcher roles, or are you imagining some potential course of action outside of "Give your time/money to MIRI/HCAI/etc"?