It's not just an irony. The arguments for rational / successful agents "having a utility function" are stronger when applied to convergently instrumental behavior than to terminal preferences. Indeed, why can't I just want to go in a cycle from San Jose to SF to Berkeley back to San Jose? The only argument against is that it's wasteful (...if you just wanted to get to a specific place).
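To illustrate (a toy sketch; the fare and the lap's value are made-up numbers): a money pump only counts against the cycling agent if you score it on end states rather than on the trajectory it terminally wanted.

```python
# Toy money-pump sketch; all numbers are made up for illustration. The agent
# pays FARE per hop to go San Jose -> SF -> Berkeley -> San Jose.

CYCLE = ["San Jose", "SF", "Berkeley"]
FARE = 1.0  # assumed price the agent pays per preferred hop

def run_lap():
    """Traverse the cycle once; return (path taken, total amount paid)."""
    path = CYCLE + [CYCLE[0]]  # ends back at the start
    return path, FARE * len(CYCLE)

path, cost = run_lap()

# Scored on *states*, the lap is pure waste: same position, 3.0 poorer.
state_score = 0.0 - cost
# Scored on the *trajectory*, the agent bought the lap it terminally wanted.
LAP_VALUE = 5.0  # assumed terminal value of the round trip itself
trajectory_score = LAP_VALUE - cost
print(path, state_score, trajectory_score)  # [...], -3.0, 2.0
```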
FWIW, I think that in some sort of hypothetical involving a bunch more resources (something like $200 million to $1 billion, maybe), you could plausibly technically get to strong reprogenetic HIA within 5 or 10 years. This would go through IVG plus some combination of iterated CRISPR, iterated recombinant selection, and/or chromosome selection. (Then you'd have to wait for the kids to grow up, and as you say, uptake would be slow at first and would face regulatory obstacles.)
> HIA is very clearly going to be a lot slower than the development of ASI
(FWIW, I don't think that's right. I think there's quite substantial chance we have time for reprogenetics; see https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce and https://www.lesswrong.com/posts/5tqFT3bcTekvico4d/do-confident-short-timelines-make-sense . Also, in theory, some of the methods listed here (https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods) could be much faster, e.g. brain implants or signaling molecules. Pushing up HIA timelines still helps: https://tsvibt.blogspot.com/2022/08/the-benefit-of-intervening-sooner.html )
Thanks.
I am interested in at some point thinking about how to measure "WQ" (see some speculations on what wisdom is here: https://www.lesswrong.com/posts/fzKfzXWEBaENJXDGP/what-is-wisdom-1 ).
I do also think you could increase wisdom with better memes plus some intelligence, which upvotes making wise memes.
(I mentally checked again, and I still don't feel like posting it, IDK why. It was written under a lot of time pressure during Inkhaven. I intend to at some point post my fuller / more fully explained reasoning.)
> what does the "strategic competence landscape" look like after significant HIA has occurred?
It's a good question, thanks. I'm still thinking these things through, but my guess at where I'll end up is something like:
Strategic competence is in a class with several other things. The class is something like: a kind of competence which humanity needs and doesn't have, which you could hope some people would exert if they could, but which is apparently rare. For example, competence at solving deep philosophical problems; at deep / full-spectrum cognitive empathy for others; at organizing large groups (e.g. shepherding group epistemics); at Wisdom. For things in this class, you need some combination of high cognitive capacity and high levels of other, unknown traits. If you have the unknown traits, then more IQ still helps a lot with what you can achieve. Also, IQ can funge against those traits, but only with enough of the right memes and at a fairly poor rate. (For example, you can do cognitive empathy well just by working really hard at it and being smart, without innate talents / attunements / whatever, but you have to work really hard and be really smart.)
I think this will imply a high value on HIA. It also implies a high value on figuring out how to influence other traits (e.g. through good parenting, or through reprogenetics, though that has other fraughtness). It also implies a high value on generally contributing to a good memetic / group epistemic / group agentic environment in which HIA kids could grow up.
(I don't update very much on anecdotes from people about their supposed IQ and talents; I don't feel I know how to evaluate them. IQ measurements such as the SAT do have noise; are you not the smartest person in a room of 1000 actually-random people? How heavily selected was your crypto research environment?)
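For concreteness, a back-of-envelope version of that test (a sketch; the 0.9 reliability figure is an illustrative assumption, not a claim about any particular test):

```python
# Back-of-envelope for the "room of 1000" test. Being top-1-in-1000 on a trait
# scored with mean 100, SD 15 requires roughly +3.1 SD, i.e. ~146.
from scipy.stats import norm

room = 1000
z_needed = norm.ppf(1 - 1 / room)   # ~3.09 SD to be top-1-in-1000
iq_needed = 100 + 15 * z_needed     # ~146 on an IQ scale

# Classical test theory: E[true z | measured z] = reliability * measured z,
# so noisy scores regress toward the mean.
reliability = 0.9                   # assumed test reliability
measured_z = 3.5
expected_true_z = reliability * measured_z   # ~3.15

print(round(iq_needed), round(expected_true_z, 2))  # 146 3.15
```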
It's a concern. Several related issues are mentioned here: https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html (e.g. search for "personality" and "values"), and see:
> Antagonistic pleiotropy with unmeasured traits. Some crucial traits, such as what is called Wisdom and what is called Kindness, might not be feasibly measurable with a PGS and therefore can’t be used as a component in a weighted mixture of PGSes used for genomic engineering. If there is antagonistic pleiotropy between those traits and traits selected for by GE, they’ll be decreased.
A related issue is that intelligence itself could affect personality:
> Even if a trait is accurately measured by a PGS and successfully increased by GE, the trait may have unmapped consequences, and thus may be undesirable to the parents and/or to the child. For example, enhancing altruistic traits might set the child up to be exploited by unscrupulous people.
An example with intelligence is that very intelligent people might tend to be isolated, or might tend to be overconfident (because of not being corrected enough).
One practical consideration is that sometimes PGSes are constructed by taking related phenotypes and just using those because they correlate. The big one for IQ is Educational Attainment (EA), because EA is easier to measure than IQ (you just ask about years of schooling or whatever). If you do this in the most straightforward way, you're just selecting for EA, which would probably select for several personality traits, some maybe undesirable.
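Here's a toy simulation of that correlated-response effect (all correlations are made up for illustration, and it ignores the reduced within-family PGS variance among actual sibling embryos):

```python
# Toy simulation of selecting embryos on a proxy PGS. Picking the top-EA
# embryo drags along a trait nobody selected on directly.
import numpy as np

rng = np.random.default_rng(0)
n_embryos, n_trials = 10, 20_000

# Columns: EA-PGS, IQ, trait X (some personality facet). Assumed correlations:
# EA~IQ 0.6, EA~X 0.3, IQ~X 0 (so X rides along *only* via the EA proxy).
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.0],
                [0.3, 0.0, 1.0]])
L = np.linalg.cholesky(cov)

gains = np.zeros(3)
for _ in range(n_trials):
    embryos = rng.standard_normal((n_embryos, 3)) @ L.T
    gains += embryos[np.argmax(embryos[:, 0])] / n_trials  # pick top-EA embryo

print("mean z-gain (EA, IQ, X):", gains.round(2))
# ~ (1.54, 0.92, 0.46): X moves ~0.3 * 1.54 SD despite never being selected on.
```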
I think in practice these effects will probably be pretty small and not very concerning, though we couldn't know for sure without trying and seeing. A few lines of reasoning:
> From the subjective perspective of an unmodified human, these changes are likely to be "for the worse."
Glancing at the correlations given in the Wikipedia page ( https://en.wikipedia.org/wiki/Intelligence_and_personality ), I don't especially feel that way.
> If you pick your child's genes to maximize their IQ (or any other easily-measurable metric), you might end up with the human equivalent of a benchmaxxed LLM with amazing test scores but terrible vibes.
I'm not sure I follow. I mean I vaguely get it, but I don't non-vaguely get it.
> And in the case of superbabies, we'd have to wait decades to find out what they're like once they've grown up.
I don't think this is right. If we're talking about selection (rather than editing), the child has a genome that is entirely natural, except that it's selected according to your PGS to be exceptional on that PGS. This should be basically exactly the same as selecting someone who is exceptional on your PGS from the population of living people. So you could just look at the tails of your PGS in the population and see what they're like. (This does become hard with traits that are rare / hard / expensive to measure, and it's hard if you're interested in far tails, like >3 SDs say.) (In general, tail studies seem underattended; see https://www.lesswrong.com/posts/i4CZ57JyqqpPryoxg/some-reprogenetics-related-projects-you-could-help-with , though also see https://pmc.ncbi.nlm.nih.gov/articles/PMC12176956/ which might be some version of this (for other traits).)
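Here's a minimal sketch of such a tail check (the 0.5 PGS-phenotype correlation is an assumed value):

```python
# Sketch of checking "the tails of your PGS" in living people.
import numpy as np

rng = np.random.default_rng(1)
r, n = 0.5, 1_000_000                 # assumed PGS-phenotype correlation; sample size
pgs = rng.standard_normal(n)
pheno = r * pgs + np.sqrt(1 - r**2) * rng.standard_normal(n)  # corr(pgs, pheno) = r

tail = pheno[pgs > 3.0]               # everyone beyond +3 SD on the PGS
print(len(tail), tail.mean().round(2))
# ~1350 people per million, mean phenotype ~1.6 SD (= r * E[Z | Z > 3]).
# The rarity is exactly why far-tail studies are hard and underattended.
```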
" Coarser movement: H/L jump 6 columns, J/K jump 5 lines (shadowing their default bindings)
:map H 6h
:map J 5j
:map K 5k
:map L 6l
I suggest tracking a hypothesis like "a lot of people are fairly deeply intuitively tuned to something called power and power-seeking". I don't feel that I know what those things are well enough to test, judge, or communicate about them, but it seems like a salient hypothesis in this area. I mean something like taking a stance that presumes something like:
Whatever positive-sum / man vs. nature games are going on, those are other people's job. I will instead focus on positioning myself to get as much as I can in [the zero-sum negotiation/scuffle that will inevitably occur over [whatever surplus or remainders there may end up being from [the man vs. nature struggle that's going on in the area that I'm somehow important in]]].
In particular I'd suggest that we (someone) figure out how that works.
You could care about outcomes (states of stuff). You could care about trajectories. You could care about internal / mental activity. You could care about unseen instances of these (e.g. in other possible worlds). You could care about your actions for their own sake (e.g. aesthetics of musical output).
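A minimal typing sketch of some of these distinctions (the signatures are illustrative, not a proposed formalism); the point is just that "utility function" underdetermines what the function is over:

```python
# Minimal typing sketch; all names and signatures are illustrative.
from typing import Callable, Sequence

State = str       # a world-state, e.g. "being in Berkeley"
Action = str      # an act, e.g. "play this phrase"
Trajectory = Sequence[tuple[State, Action]]

OutcomeUtility = Callable[[State], float]          # cares about end states
TrajectoryUtility = Callable[[Trajectory], float]  # cares how you got there
ActionUtility = Callable[[Action], float]          # cares about acts themselves

def round_trip_value(traj: Trajectory) -> float:
    """A TrajectoryUtility that terminally values completing the SJ-SF-Berkeley loop."""
    states = [s for s, _ in traj]
    return 1.0 if states[-4:] == ["San Jose", "SF", "Berkeley", "San Jose"] else 0.0
```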