Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about puzzle videogames and other things at jacksonw.xyz
Thanks for all these clarifications; sorry if I came off as too harsh.
"Yes, so would I! Again, when it is a personal informed choice, the situation is entirely different." -- It seems to me like in the case of the child (who, having not been born yet, cannot decide either way), the best we can do is guess what their personal informed choice would be. To me it seems likely that the child might choose to trade off a bit of happiness in order to boost other stats (relative to my level of happiness and other stats, and depending of course on how much that lost happiness is buying). After all, that's what I'd choose, and the child will share half my genes!

To me, the fact that it's not a personal choice is unfortunate, and I take your point -- forcing /some random other person/ to donate to EA charities would seem unacceptably coercive. (Although I do support the idea of a government funded by taxes.) But since the child isn't yet born, the situation is intermediate between "informed personal choice" vs coercing a random guy. In this intermediate situation, I think choosing based on my best guess of the unborn child's future preferences is the best option.

Especially since it's unclear what the "default" choice should be -- selecting for IQ, selecting against IQ, or leaving IQ alone (and going with whatever level of IQ and happiness is implied by the genes of me and my partner), all seem like they have an equal claim to being the default. Unless I thought that my current genes were shaped by evolution to be at the optimal tradeoff point already, which (considering how much natural variation there is among people, and the fact that evolution's values are not my values) seems unlikely to me.
Agreed that it is possible that IQ --> less happiness, for most people / on average, even though that strikes me as unlikely. It would be great to see more research that tries to look at this more closely and in various ways.
And totally agreed that this would be a tough tradeoff to make either way; that selecting for emotional stability and happiness alongside IQ would be a high priority if I was doing this myself.
This is funny, although of course what this is really pointing to isn't a literal U-shaped graph, but rather that it's better to think about this in a much more multidimensional way than just trying to graph happiness vs intelligence. Of course there are all sorts of other traits (like conscientiousness, etc) that might influence happiness. But more important, IMO, is what you are pointing to -- there are all sorts of different "mindsets" that you can take towards your life, which have a huge impact on happiness... maybe high IQ slightly helps you grope your way towards a healthier mindset, but to a large extent these mindsets / life philosophies seem independent of intelligence. By "mindset", I am thinking of things like:
- "internal vs external locus of control"
- level of expectations, like you say, applied to the many different life areas where we have expectations
- stoic vs neurotic/catastrophizing attitude towards events
- how you relate to / take on the expectations and desires of your social environment (trying to keep up with the Joneses, vs deliberately rebelling, vs lots of other stances).
- being really hard on yourself vs having self-compassion vs etc
And so on; too many to mention.
"We have a confusing situation here." -- Indeed, I think this post is a little confused, mixing up a few very different questions:
Wait, it seems like those last two points would totally change the argument! Consider:
To be clear, I also dispute the idea that more intelligence --> less happiness. You could probably find some narrowly-defined type of happiness which is anticorrelated with intelligence. But a lot of the meaning and happiness in my life seem like they would get better with more intelligence. Like my ability to understand my place in the world and live an independent life, planning my career/relationships/etc with lots of personal agency. Or my ability to appreciate the texture/experience of being alive -- noticing sensations, taking time to "smell the roses", and making meditative/spiritual/introspective progress of understanding my own mind. My ability to overcome emotional difficulties/setbacks by "working through them" and communicating well with the person I might be angry at. My material quality of life, enabled by my high-income job, which I couldn't hold down if I wasn't reasonably smart. My ability to appreciate art on a deep level (see my lecture series about the videogame "The Witness", an intellectual pursuit which brings me great joy). And so forth.
First few steps towards solving for the equilibrium:
First time I've had the opportunity to comment "just tax land lol" -- if we're thinking about how to craft an ideal policy situation (which we are doing, by talking about UBI), it shouldn't be too much to posit that UBI would pair best with:
That said, on a larger, more philosophical level, if the economic fundamentals of society are naturally super unequal (huge number of powerless people hoping that elites take pity on them and implement an ideal UBI+georgism+etc policy regime, while a tiny portion of the population produces like 99% of all economic value), that is inherently gonna be a more precarious situation than one in which the economic fundamentals are naturally pretty egalitarian (maybe imagine a world where manual labor is in high demand, and pretty much anyone can do manual labor, so wages are naturally high across the society). The unequal society will have to rely on the stability of political institutions and human willingness to do the right thing; the naturally-equal society gets it for free.
Unfortunately, we don't really get much control over the economic fundamentals of our civilization (which depend on stuff like technology, supply and demand driven by random exogenous factors, etc), so I think crafting an ideal policy situation is the best we can aspire to.
On the downside, if matrix multiplication is the absolute simplest, most interpretable AI blueprint there is -- if it's surrounded on all sides by asynchronous non-digital biological-style designs, or other equally weird alien architectures -- that sounds like pretty bad news. Instead of hoping that we luck out and get a simpler, more interpretable architecture in future generations of more-powerful AI, we will be more likely to switch to something much more inscrutable, perhaps at just the wrong time. (ie, maybe matrix-based AIs help design more-powerful successor systems that are less interpretable and more alien to us.)
In fairness, "biosecurity" is perhaps the #2 longtermist cause area in effective-altruist circles. I'm not sure how much of the emphasis on this is secretly motivated by concerns about AI unleashing super-smallpox (or nanobots), versus motivated by the relatively normal worry that some malevolent group of ordinary humans might unleash super-smallpox. But regardless of motivation, I'd expect that almost all longtermist biosecurity work (which tends to be focused on worst-case GCBRs) is helpful for both human- and AI-induced scenarios.
It would be interesting to consider other potential "swiss cheese approach" attempts to patch humanity's most vulnerable attack surfaces:
I agree with @shminux that these hacky patches would be worth little in the face of a truly superintelligent AI. So, eventually, the more central problems of alignment and safe deployment will have to be solved. But along the way, some of these approaches might help buy crucial time on our way to solving the core problems -- or at least help us die with a little more dignity.
So, perhaps a better statistic might be:
For a total negative-social-externalities-per-Big-Mac figure of $1.21?
Of course, some of these estimates might swing wildly depending on key assumptions...
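The arithmetic here is just summing up per-unit external-cost estimates into a single Pigouvian tax figure. A minimal sketch of that calculation, where the component names and dollar amounts are purely hypothetical placeholders (chosen to sum to the $1.21 total mentioned above, not taken from the original post's estimates):

```python
# Hypothetical sketch: a Pigouvian tax is just the sum of estimated
# per-unit external costs. All names and figures below are illustrative.

def pigouvian_tax(externalities: dict) -> float:
    """Return the per-unit tax as the sum of estimated external costs, in dollars."""
    return round(sum(externalities.values()), 2)

# Placeholder per-Big-Mac external-cost estimates (dollars), NOT the post's numbers:
big_mac_externalities = {
    "carbon_emissions": 0.50,
    "health_system_costs": 0.40,
    "animal_welfare": 0.20,
    "land_and_water_use": 0.11,
}

print(f"Per-unit Pigouvian tax: ${pigouvian_tax(big_mac_externalities):.2f}")
# With these placeholder numbers, this prints $1.21
```

The point of writing it out is that the total is only as good as each component estimate, which is exactly where the "key assumptions" can swing things wildly.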
Personally, I love the idea of trying to tax/subsidize things to account for social externalities. But of course the trouble is finding some way to assess those externalities which is fair and not subject to endless distortion from political pressure, ideological fads, etc. (For more on the practical difficulties of theoretically-perfect Pigouvian taxation, see this post by economist Bryan Caplan.) So I'd be happy to see more discussion of this Big Mac question; I'd encourage you to make a cross-post to the EA Forum!
For those who prefer listening, note that there is a very nice recording of this speech, which you can watch or listen to here on YouTube!