Jackson Wagner

Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about puzzle videogames and other things at jacksonw.xyz

Comments

For those who prefer listening, note that there is a very nice recording of this speech, which you can watch or listen to here on YouTube!

Thanks for all these clarifications; sorry if I came off as too harsh.

"Yes, so would I! Again, when it is a personal informed choice, the situation is entirely different."  -- It seems to me like in the case of the child (who, having not been born yet, cannot decide either way), the best we can do is guess what their personal informed choice would be.  To me it seems likely that the child might choose to trade off a bit of happiness in order to boost other stats (relative to my level of happiness and other stats, and depending of course on how much that lost happiness is buying).  After all, that's what I'd choose, and the child will share half my genes!  To me, the fact that it's not a personal choice is unfortunate, and I take your point -- forcing /some random other person/ to donate to EA charities would seem unacceptably coercive.  (Although I do support the idea of a government funded by taxes.)  But since the child isn't yet born, the situation is intermediate between "informed personal choice" vs coercing a random guy.  In this intermediate situation, I think choosing based on my best guess of the unborn child's future preferences is the best option.  Especially since it's unclear what the "default" choice should be -- selecting for IQ, selecting against IQ, or leaving IQ alone (and going with whatever level of IQ and happiness is implied by the genes of me and my partner), all seem like they have an equal claim to being the default.  Unless I thought that my current genes were shaped by evolution to be at the optimal tradeoff point already, which (considering how much natural variation there is among people, and the fact that evolution's values are not my values) seems unlikely to me.

Agreed that it is possible that IQ --> less happiness, for most people / on average, even though that strikes me as unlikely.  It would be great to see more research that tries to look at this more closely and in various ways.

And totally agreed that this would be a tough tradeoff to make either way; that selecting for emotional stability and happiness alongside IQ would be a high priority if I was doing this myself.

This is funny, although of course what this is really pointing to isn't a literal U-shaped graph, but that it's really better to think about this in a much more multidimensional way, rather than just trying to graph happiness vs intelligence.  Of course there are all sorts of other traits (like conscientiousness, etc) that might influence happiness.  But more importantly IMO is what you are pointing to -- there are all sorts of different "mindsets" that you can take towards your life, which have a huge impact on happiness... maybe high-IQ slightly helps you grope your way towards a healthier mindset, but to a large extent these mindsets / life philosophies seem independent of intelligence.  By "mindset", I am thinking of things like:

-  "internal vs external locus of control"
- level of expectations like you say, applied to lots of different life areas where we have expectations
- stoic vs neurotic/catastrophizing attitude towards events
- how you relate to the expectations and desires of your social environment (trying to keep up with the Joneses, vs deliberately rebelling, vs lots of other stances)
- being really hard on yourself vs having self-compassion vs etc

And so on; too many to mention.

"We have a confusing situation here."  -- Indeed, I think this post is a little confused, mixing up a few very different questions:

  • Is it a good idea to literally punish & reward people based on their level of intelligence, in the hopes that they will spontaneously make themselves more intelligent?
    • Usually no, as your example of Frank illustrates.  Because your own intelligence level is a hard thing to change.  Punishing people for being born dumb is thus a bit like punishing people for being born short -- pointless to try and get people to change something that they can't change.
  • Is it a good idea to reward intellectual achievements and hard work on important problems, while punishing laziness / wasted time / underperformance?  And similarly, to reward open-minded thoughtfulness while punishing "lazy thinking" and knee-jerk responses.
  • Yes, because this is a way of motivating people about something they can change -- what they choose to work on, how hard they work, etc.  It's a good thing that we have Nobel Prizes to reward people who discover breakthrough cancer medicines, but no prizes for people who discover breakthrough strategies in esports videogames, or for that matter for people who just sit around watching TV.  For instance, it would be a good idea to praise Frank when he does a good job at work, or if he shows a bit of openness towards the idea of going to the doctor.
  • Is it a good idea to effectively "reward" & "punish" people on a societal level, by trying to have a meritocratic society where we find the smartest (and hardest-working, and most prosocial, and otherwise virtuous) people to run important institutions, while dumb people get less well-paying, less-impactful jobs?
    • Yes, because a society/corporation/government/etc run by effective, virtuous people will work more smoothly and create a better life for everyone.  For instance, I would rather have you be my financial advisor, than have your dog be my financial advisor!
  • Is intelligence good for happiness on an individual level, or is it better for your own sake to be dumb?
    • Opinions differ on this; personally I think that intelligence is very good for personal happiness and life-satisfaction and living a meaningful life.  Here I will quote from another comment I recently made: "You could probably find some narrowly-defined type of happiness which is anticorrelated with intelligence.  But a lot of the meaning and happiness in my life seem like they would get better with more intelligence.  Like my ability to understand my place in the world and live an independent life, planning my career/relationships/etc with lots of personal agency.  Or my ability to appreciate the texture/experience of being alive -- noticing sensations, taking time to "smell the roses", and making meditative/spiritual/introspective progress of understanding my own mind.  My ability to overcome emotional difficulties/setbacks by 'working through them' and communicating well with the person I might be angry at.  My material quality of life, enabled by my high-income job, which I couldn't hold down if I wasn't reasonably smart.  My ability to appreciate art on a deep level (see my lecture series about the videogame "The Witness", an intellectual pursuit which brings me great joy).  And so forth."

Wait, it seems like those last two points would totally change the argument!  Consider:

  • "It is unethical to donate to effective-altruist charities, since giving away money will mean that your life becomes less happy.  It may benefit society as a whole and lead to greater happiness overall.  But it does not change the argument: donations are unethical because the donation makes your own life worse."  This seems crazy to me??  If anything it seems like many would consider it unethical to keep the money for yourself.
  • Your logic would seem to go beyond "don't use embryo selection to boost IQ, have kids the regular way instead".  It seems to extend all the way to "you should use embryo selection to deliberately hamstring IQ, in the hopes of birthing a smiling idiot".  Am I thus obligated to try and damage my child's intelligence?  (Perhaps for instance by binge-drinking during pregnancy, if I can't afford IVF?)
  • It also seems like the child's preferences would matter to this situation.  For instance, personally, I am a reasonably happy guy; I wouldn't mind sacrificing some of my personal life happiness in order to become more intelligent.  (Actually, since I also consider myself a reasonably smart guy, what I would really like is to sacrifice some happiness in order to become more hardworking / conscientious / ambitious.  A little more of a "Type-A" high-achieving neurotic... not too much, of course, but just a little in that direction.  I think this would improve my material circumstances since I'd work harder, and it would also be better for the world since I'd be producing more societal value.  Having a slightly more harried and tumultuous inner life seems like an acceptable price to pay; I know lots of people who are more Type-A than I am, and they seem alright.)  I would hate for someone to paternalistically say to me: "Nope, you would be happier if you were even more of a lazy slacker, and had fewer IQ points.  So you're not allowed to trade away any happiness.  In fact, I'm gonna turn these intelligence and conscientiousness dials down a few notches, for your own good!"  
    • I guess this is just the classic conflict between preference utilitarianism vs hedonic utilitarianism.  But in this situation, preference utilitarianism seems (to me) to be viscerally in the right, while hedonic utilitarianism seems to be doing something extremely cruel and confining.

To be clear, I also dispute the idea that more intelligence --> less happiness.  You could probably find some narrowly-defined type of happiness which is anticorrelated with intelligence.  But a lot of the meaning and happiness in my life seem like they would get better with more intelligence.  Like my ability to understand my place in the world and live an independent life, planning my career/relationships/etc with lots of personal agency.  Or my ability to appreciate the texture/experience of being alive -- noticing sensations, taking time to "smell the roses", and making meditative/spiritual/introspective progress of understanding my own mind.  My ability to overcome emotional difficulties/setbacks by "working through them" and communicating well with the person I might be angry at.  My material quality of life, enabled by my high-income job, which I couldn't hold down if I wasn't reasonably smart.  My ability to appreciate art on a deep level (see my lecture series about the videogame "The Witness", an intellectual pursuit which brings me great joy).  And so forth.

First couple of steps towards solving for the equilibrium:

  • It does seem like there are certainly plenty of ways to use such bots to cause harm, either running scams for personal enrichment, or trying to achieve various ideological/political/social goals, or just to cause havoc and harm for its own sake.
    • Naturally, people will be most motivated to run scams, intermediately motivated to do stuff with political/ideological/social motivations, and least motivated (though plenty of people will still do it) to just cause chaos for its own sake.
    • Things that might cause "causing chaos/harm for its own sake" to become much more popular than in today's world: maybe AI makes it much easier/cheaper to do?  (seems plausible)  Maybe cheapness/easiness isn't the bottleneck, and it's actually about how likely you are to get caught?  Maybe AI helps with this too, though?
    • Anyways, regardless of whether people are causing chaos for its own sake, I expect an increase in scams and, perhaps just as destructively, an increase in spam across all online platforms which is increasingly difficult to differentiate from genuine human conversation / activity.  This will erode social trust somewhat, although it's hard for me to tell how impactful this might be.  See Astral Codex Ten's "Mostly Skeptical thoughts on the Chatbot Propaganda Apocalypse" for more detail on this.
  • In general it seems pretty hard to solve for the equilibrium here, since human social interaction online and human culture and the overall "agent landscape" of the economy and society, is very complicated!  It definitely seems like there will be some "pollution of the agentic commons", and then obviously we will try to fight back with some mix of cultural adaptation, developing defensive technologies that try to screen out bots, and enacting new laws penalizing new kinds of scams / exploits / etc.
  • If the "chatbot apocalypse" problems get REALLY bad, this could actually have some upside from the perspective of AI notkilleveryoneism -- one plausible sequence of events might go like this:
    • The language models provided by OpenAI, Google, etc, are carefully RLHF'ed and monitored to prevent people from ever using them to create a bot that says racist things, or scams people out of their crypto, or makes pornographic images, or does anything else that seems unsavory or "malbot"-y.
    • To get around these restrictions, people start using lower-quality open-source AI for those unpopular / taboo / unsavory / destructive purposes.  But people mostly still use the OpenAI / Google corporate APIs for most normal AI applications, since those AIs are of higher quality.
    • If the chatbotpocalypse gets bad, government starts restricting the use of open-source AI, perhaps via an escalating series of increasingly draconian measures:
      • Ban the use of certain categories of malbots -- this seems like a straightforwardly good law that we should have today.
      • Start taking down certain tools, websites, etc, used to coordinate and develop AI malbots.  Start arresting malbot developers.  Similar to how governments today go after crypto marketplaces like Silk Road.
      • Ban any use of any open-source AI, for any purpose.  This would annoy a lot of people and destroy a lot of useful value in the crossfire, but it might be deemed necessary if the chatbotpocalypse gets bad enough.  On the bright side, this might be great from an AI notkilleveryoneism perspective, since it would centralize AI capabilities in a few systems with controlled access and oversight.  And it would set a precedent for even stronger restrictions in the future.
      • Make people criminally liable even if there's an open-source AI program running on a computer that they own, which they didn't know about.  (Eg, if I rent a server from amazon and run open-source AI on it, then I could get arrested but also Amazon would be liable as well.  Or if I am just a perfectly average joe minding his own business but then my laptop gets hacked by an AI because I didn't download the latest windows security update, then I could get arrested.)  This would be ridiculously draconian by modern standards, but again it's something I could imagine happening if we were absolutely desperate to preserve the fabric of society against some kind of unstoppable malbot onslaught.
  • To be clear, I don't expect the chatbotpocalypse to be anywhere near bad enough to justify the last two draconian bullet points; I expect "ban certain categories of malbots" and "start arresting malbot developers" to be good enough that society muddles through.
    • Censorship-heavy countries like China might be more eager to ban open-source AI than the US, though.  Similar to how China is more hostile to cryptocurrency than the US.
  • This whole time, I have just been thinking about scams and other kinds of "malbots".  But I think there are probably lots and lots of other ways that less-evil bots could end up "polluting the agentic commons".
    • For instance if bots make it easier to file lawsuits, then maybe the court system gets jammed up with tons of new lawsuits.  And lots of other societal institutions where you are supposed to put in a lot of effort on some task as a costly signal that you are serious enough / smart enough / committed enough, might break when AI makes those tasks much easier to churn out.
    • As described in that Astral Codex Ten post, you could have a kind of soft, social botpocalypse where more and more text on the internet is GPT-generated, such that everyone becomes distrustful that maybe random tweets / articles / blog posts / etc were written by AI.  Maybe this has devastating impacts on some particular area of society, like online dating, even if the effect is mild in most places.
    • Maybe having AIs that can autonomously take economic actions (buying stuff online, trading crypto, running small/simple online businesses, playing online poker, whatever) will somehow have a devastating impact on society??  For instance by creating brutally efficient competitive markets for things that are not yet "financialized" and we don't even think of them as markets.
      • Like maybe I start using an AI that spends all day seeking out random freebies and coupons (like "get $300 by opening a checking account at our bank"), and it's so good at finding them that the returns from "scamming credit card sign-up bonuses and getting all my food from Blue Apron free trials" beat the stock market, so I devote most of my savings towards scamming corporate freebies instead of investing.
      • The only consequence of the above idea would be that eventually corporations would have to stop offering such freebies, which would be totally fine.  But it's an example of something that's currently not an efficient market being turned into one.  Maybe this sort of thing could have more devastating impacts elsewhere, eg, if everyone starts using similar tools to sign up for maximal government benefits while minimizing their taxes using weird tax-evasion tricks.
  • Overall, I am skeptical that any individual malbot idea will be too devastating (since it seems like we could neuter most of them using some basic laws + technology + cultural adaptation), but also the space of potential bots is so vast that it seems very hard to solve for the equilibrium and figure out what that process of cultural/legal/technological adaptation will look like.

First time I've had the opportunity to comment "just tax land lol" -- if we're thinking about how to craft an ideal policy situation (which we are doing, by talking about UBI), it shouldn't be too much to posit that UBI would pair best with:

  • Georgism, so that the rent on land is not monopolized by landowning elites, but rather flows mainly to the public purse (perhaps this land rent is the main thing that helps fund the UBI)!  More detail on georgism and how this would work can be found at this series of long but engaging blog posts: https://astralcodexten.substack.com/p/does-georgism-work-is-land-really
  • Unfortunately Georgism would not be a complete solution, because of course land is not the ONLY thing that parasitic elites could seek to monopolize and rent-seek with.  So you'd need an enthusiastic, competent state that could play a bit of consumer-protection whack-a-mole, trying to spot new rent-seeking monopolies and break them up.  Eg, enact YIMBY policies to prevent a monopoly on housing, stimulate competition and free trade in general to prevent monopolies in goods and services, etc.  It would be a dynamic situation, and there would always be a little bit of elite parasitism going on, but the more competent and human-thriving-aligned your government is, the better they'd be able to play whack-a-mole.

That said, on a larger, more philosophical level, if the economic fundamentals of society are naturally super unequal (huge number of powerless people hoping that elites take pity on them and implement an ideal UBI+georgism+etc policy regime, while a tiny portion of the population produces like 99% of all economic value), that is inherently gonna be a more precarious situation than one in which the economic fundamentals are naturally pretty egalitarian (maybe imagine a world where manual labor is in high demand, and pretty much anyone can do manual labor, so wages are naturally high across the society).  The unequal society will have to rely on the stability of political institutions and human willingness to do the right thing; the naturally-equal society gets it for free.

Unfortunately, we don't really get much control over the economic fundamentals of our civilization (which depends on stuff like technology, supply and demand driven by random exogenous factors, etc), so I think crafting an ideal policy situation is the best we can aspire to.

On the downside, if matrix multiplication is the absolute simplest, most interpretable AI blueprint there is -- if it's surrounded on all sides by asynchronous non-digital biological-style designs, or other equally weird alien architectures -- that sounds like pretty bad news. Instead of hoping that we luck out and get a simpler, more interpretable architecture in future generations of more-powerful AI, we will be more likely to switch to something much more inscrutable, perhaps at just the wrong time. (ie, maybe matrix-based AIs help design more-powerful successor systems that are less interpretable and more alien to us.)

In fairness, "biosecurity" is perhaps the #2 longtermist cause area in effective-altruist circles.  I'm not sure how much of the emphasis on this is secretly motivated by concerns about AI unleashing super-smallpox (or nanobots), versus motivated by the relatively normal worry that some malevolent group of ordinary humans might unleash super-smallpox.  But regardless of motivation, I'd expect that almost all longtermist biosecurity work (which tends to be focused on worst-case GCBRs) is helpful for both human- and AI-induced scenarios.

It would be interesting to consider other potential "swiss cheese approach" attempts to patch humanity's most vulnerable attack surfaces:

  • Trying to harden all countries' nuclear weapons control systems against hacking and other manipulation attempts.  (EA also does some work on nuclear risk, although here I think the kinds of work that EA focuses on, like ALLFED-style recovery after a war, might not be particularly helpful when it comes to AI-nuclear-risk in particular.)
  • Trying to "harvest the low-hanging fruit" by exhausting many of the easiest opportunities for an AI to make money online, so that most of the fruit is picked by the time a rouge AI comes along.  Although picking the low-hanging fruit might be very destructive if it mostly involves, eg, committing crimes or scamming people out of their money.  (For better or worse, I think we can expect private actors to be sufficiently motivated to do plenty of AI-assisted fruit-picking without needing encouragement from EA!  Although smarter and smarter AI could probably reach higher and higher fruit, so you'll never be able to truly get it all.)
  • Somehow trying to make the world resistant to super-persuasive ideological propaganda / bribery / scams / other forms of psychological manipulation?  I don't really see how we could defend against this possibility besides maybe taking the same "low-hanging fruit" approach.  But I'd worry that a low-hanging fruit approach would be even more destructive in the "marketplace of ideas" than in the financial markets, making the world even more chaotic and crazy at exactly the wrong time.
  • One simpler attack surface that we could mitigate would be the raw availability of compute on earth -- it would probably be pretty easy for the military of the USA, if they were so inclined, to draw up an attack plan for quickly destroying most of the world's GPU datacenters and semiconductor fabs, using cruise missiles and the like.  Obviously this would seriously ruffle diplomatic feathers and would create an instant worldwide economic crisis.  But I'm guessing you might be able to quickly reduce the world's total stock of compute by 1-2 orders of magnitude, which could be useful in a pinch.  (Idk exactly how concentrated the world's compute resources are.)
    • For a less violent, more long-term and incremental plan, it might be possible to work towards some kind of regulatory scheme whereby major governments maintained "kill switches" that could disable datacenters and fabs within their own borders, plus maybe had cyberattacks queued up to use on other countries' datacenters and fabs.  Analogous to how the NSA is able to monitor lots of the world's internet traffic today, and how many nations might have kill switches for disabling/crippling the nation's internet access in a pinch.
  • Other biosecurity work besides wet-lab restrictions, like creating "Super PPE" and other pathogen-agnostic countermeasures.  This wouldn't work against advanced nanotech, but it might be enough to foil cruder plans based on unleashing engineered pandemics.  
  • Trying to identify other assorted choke-points that might come in handy in a pinch, such as disabling the world's global positioning system satellites in order to instantly cripple lots of autonomous robotic vehicles/drones/etc.
  • Laying the groundwork for a "Vulnerable World Hypothesis"-style global surveillance state, although this is obviously a double-edged sword for many reasons.
  • Trying to promote even really insubstantial, token gestures of international cooperation on AI alignment, in the hopes that every little bit helps -- I would love to see leading world powers come out with even a totally unenforceable, non-binding statement along the lines of "severely misaligned superintelligent AI cannot be contained and must never be built".  Analogous to various probably-insincere but nevertheless-somewhat-reassuring international statements that "nuclear war cannot be won and must never be fought".

I agree with @shminux that these hacky patches would be worth little in the face of a truly superintelligent AI.  So, eventually, the more central problems of alignment and safe deployment will have to be solved.  But along the way, some of these approaches might buy crucial time on our way to solving the core problems -- or at least help us die with a little more dignity.

So, perhaps a better statistic might be:

  • $0.15 for cruelty (divide by 1%, multiply by 0.4%, to reflect the true fraction of beef consumption represented by big macs)
  • $0.27 for environmental damages (divide by 1%, multiply by 0.4%)
  • $0.28 for direct subsidies to the meat industry (divide by 1%, multiply by 0.4%)
  • $0.51 for health costs ($71B cost of red meat consumption per year, multiply by 0.4% fraction of red meat attributable to big macs, divide by 550 million big macs sold per year.)

For a total negative-social-externalities-per-big-mac of $1.21?
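For what it's worth, the arithmetic above can be sanity-checked in a few lines. This is just a sketch reusing the figures quoted in the thread ($71B/yr health cost, a 0.4% Big Mac share of red meat, 550 million Big Macs per year), none of which I've independently verified:

```python
# Sanity-check of the per-Big-Mac externality figures, using only
# numbers quoted in the discussion (not independently verified):
#   - $71B/yr US health cost of red meat consumption
#   - Big Macs assumed to be 0.4% of US red meat consumption
#   - 550 million Big Macs sold per year

cruelty = 0.15       # rescaled cruelty estimate, $/Big Mac
environment = 0.27   # rescaled environmental damages, $/Big Mac
subsidies = 0.28     # rescaled direct meat-industry subsidies, $/Big Mac

health = 71e9 * 0.004 / 550e6   # health cost attributable per Big Mac

total = cruelty + environment + subsidies + health
print(f"health ≈ ${health:.3f}/Big Mac, total ≈ ${total:.2f}/Big Mac")
# → health ≈ $0.516/Big Mac, total ≈ $1.22/Big Mac
```

(Rounding the health term down to $0.51 recovers the $1.21 total quoted above.)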

Of course, some of these estimates might swing wildly depending on key assumptions...

  • the "cruelty" number might go to zero for people who just subjectively say "I don't care about animal cruelty", or might go much higher for EAs who would bid much higher amounts than the average american in a hypothetical utility-auction to end cruel farming practices.
  • I'm a bit suspicious of the environmental damages number being potentially exaggerated.  For example, the "devaluation of real property" seems like it isn't a negative externality, but rather should be fully internalized by farmers managing their own land and setting the prices of their products.  (Unless they are talking about the devaluation of other people's land, eg by the smell of manure wafting over to a neighboring suburb?)
  • As Gerald mentions, maybe the healthcare costs are actually negative if red meat is causing people to die younger and more cheaply.  But it might be best to calculate a QALY metric, valuing lives at $50K per year or whatever is the standard EA number -- this might make the healthcare cost much larger than the $0.51 per big mac implied by direct healthcare spending.
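To illustrate how that QALY framing would go: the QALYs-lost figure below is a purely hypothetical placeholder (not from the thread); only the $50K-per-QALY valuation and the 0.4%/550M figures come from the discussion above.

```python
# Hypothetical QALY-based version of the health-cost calculation.
# QALYS_LOST_PER_YEAR is a made-up illustrative number; everything
# else reuses figures quoted in the discussion.

QALYS_LOST_PER_YEAR = 2e6   # HYPOTHETICAL: QALYs/yr lost to US red meat
DOLLARS_PER_QALY = 50_000   # the "standard EA number" mentioned above
BIG_MAC_SHARE = 0.004       # Big Macs as a fraction of red meat consumption
BIG_MACS_PER_YEAR = 550e6

qaly_cost = (QALYS_LOST_PER_YEAR * DOLLARS_PER_QALY
             * BIG_MAC_SHARE / BIG_MACS_PER_YEAR)
print(f"≈ ${qaly_cost:.2f} per Big Mac under these assumptions")
# → ≈ $0.73 per Big Mac under these assumptions
```

Under this made-up placeholder the QALY framing does come out larger than the $0.51 direct-cost figure, but a smaller placeholder would reverse that; the real work is in estimating the QALY losses.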

Personally, I love the idea of trying to tax/subsidize things to account for social externalities.  But of course the trouble is finding some way to assess those externalities which is fair and not subject to endless distortion from political pressure, ideological fads, etc.  (For more on the practical difficulties of theoretically-perfect Pigouvian taxation, see this post by economist Bryan Caplan.)  So I'd be happy to see more discussion of this Big Mac question; I'd encourage you to make a cross-post to the EA Forum!
