(Post published on my personal website. Cross-posted here because it’s quite relevant for this audience!)

Contents

  • Background & Motivation
  • Approach 1: Accelerating Safety Research
  • Approach 2: Augmenting Human Intelligence
  • Approach 3: Changing Regulation
  • Approach 4: Influencing AI Hardware Distribution and Access to Compute
  • Closing

Background & Motivation

We face a ticking time bomb with AGI. In addition to working directly on the alignment problem, we need to figure out how to slow down the clock. That's what this post is about: ways we can improve our overall position while continuing to look for that decisively winning move.

 

💡 This post is specifically about ways to change society in order to improve our position re: AGI development. Most of these approaches take large amounts of capital (and/or large amounts of other resources such as influence).

⚠️ Many of these changes have the potential for significant negative side-effects, some of which are discussed below. They are included in this post because they have the potential for a large positive outcome, even with significant risks.

Note about the author: My name is Rafael Cosman and I run a ~80 person VC-backed crypto company called TrustToken. I have worked in AI including briefly at MIRI and care very much about making sure our civilization navigates the AGI issue well. It will ruin my entire day if we don't.

 

Note on credit: many of the ideas presented here have been previously described by other authors and researchers on LessWrong and elsewhere. I do not make any claims to these ideas being originally mine. My presentation of them here may represent some improvements in the description or organization of these ideas, and is partly for my own thinking!

Approach 1: Accelerating Safety Research

Solving the alignment problem may require a once-in-a-generation, Einstein-level researcher to make a key E=mc² kind of breakthrough. If so, great! We just need to produce this researcher. The good news is that, historically, we have produced many game-changing researchers across many fields: Newton, Von Neumann, Tesla, Darwin, Feynman, and many more!

Part of how we accomplished this is by having thousands of researchers go into popular fields like physics. For every 100 good ones, there is one great one; for every 100 great ones, there is one truly game-changing one. If this model is correct, we just need thousands of high-potential researchers going into AI Alignment and working on the core problems. This won't be easy, but it's not a particularly difficult world to imagine either.
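The 1-in-100 pipeline above implies a concrete headcount target. A minimal sketch of that arithmetic, where the 100:1 ratio at each tier is the post's illustrative assumption rather than empirical data:

```python
# Sketch of the "1-in-100" talent pipeline model described above.
# The 100:1 ratio at each tier is the post's illustrative assumption.

def researchers_needed(game_changers: int, ratio: int = 100) -> int:
    """Good researchers required to expect `game_changers` at the top tier,
    assuming each tier draws from `ratio` people in the tier below."""
    great = game_changers * ratio  # great researchers needed
    good = great * ratio           # good researchers needed
    return good

# To expect one game-changing alignment researcher under this model:
print(researchers_needed(1))  # 10000
```

So "thousands of high-potential researchers" is roughly the scale at which this model predicts one game-changing one.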

 

What this approach might look like

  • An Endowed Chair at every major US university for a professor focusing on AI Alignment
  • Top-notch curricula for Undergrads and Masters students to get them to the edge of what's known in AI Alignment so far, and to get them excited about the open problems
  • Additional journals and conferences focusing on this area
  • Hundreds of new PhDs graduating every year with a focus on AI Alignment, or at least complementing capabilities work with a significant amount of real work on alignment
  • Significant financial and reputational incentives for substantial progress on core alignment issues (similar to the Millennium Prize Problems)

 

Difficulties with this approach

  • Funding today is biased heavily towards AI Capabilities rather than AI Alignment
  • Interest from students and Professors is biased today towards AI Capabilities rather than AI Alignment
  • It is difficult to get new researchers to work on fundamental alignment problems, rather than superficial or near-term problems that may be useful but won't solve the crux
  • Much of what these researchers produce could end up being noise rather than signal (although this is probably true in most fields)

 

💡 Conclusion: doing more alignment research seems like low-hanging fruit with few downsides.

Approach 2: Augmenting Human Intelligence

We could be facing a problem unsolvable by any normal human mind. If the problem is truly that hard, we're still not necessarily screwed; we just need to augment human intelligence, and make sure this happens faster than AI capabilities research (which is moving quite quickly). Companies like Neuralink are working towards this, but may not be going fast enough to matter. I suspect that research of this kind could proceed much more quickly outside of the US, though of course there are ethical considerations to performing this kind of research even with fully consenting participants.

 

What this approach might look like

  • Accelerate brain-augmentation research (e.g. jurisdictions outside of the US where this kind of research can be done much faster, of course taking into account the serious ethical considerations)
  • Explore other methods of creating super-intelligent humans, such as using CRISPR to create super-babies (may be too slow because the little tykes need to grow up)
  • Deploy large amounts of human and financial capital into human intelligence augmentation research, which has relatively limited work happening today compared with other major fields
  • Potentially use regulatory influence to unblock and accelerate research in this area, while slowing down AI Capabilities research (see Approach 3)
  • Augment our top people as much as possible

 

Difficulties with this approach

  • Work that involves messing with the human mind is fundamentally slower than e.g. writing software
  • There are ethical considerations for modifying/augmenting the human mind or our genetic code
  • We might be facing challenges that even massively augmented humans cannot solve. If so, at least we went for it!

 

💡 Conclusion: augmenting human intelligence is (today) slow and complex, and accelerating it is ethically fraught. I think it's worth accelerating, but with caution.

Approach 3: Changing Regulation

The right policies, adopted by the right governments, could massively improve our position. Now, changing policies is difficult and can easily backfire, and I should also note that I am personally closer to the Libertarian corner of the political spectrum, so regulation is usually the last tool I would turn to. But just as it is illegal for me to manufacture nuclear weapons or super-viruses, we might need some serious regulation around AI.

The kinds of regulations we could consider:

  • Caps on the amount of compute, dollars, or bytes of data that can be put into training an individual model
  • Minimum amount of funding or headcount an AI Research organization must put towards Safety & Alignment Research
  • Promotion of Alignment Research in our universities and other publicly funded institutions (as part of approach #1)
  • Unblocking & promotion of human augmentation research (as part of approach #2)

 

What this approach might look like

  • Developing a clear set of policies we would like specific governments to adopt
  • Finding lawmakers we could target (or recruiting new ones)
  • Deploying $ and lobbying efforts towards these targets
  • Developing strong marketing messages for the relevant voters

 

Difficulties with this approach

  • Laws, and advocacy for them, can backfire in various ways
  • Getting laws passed in the US (and many other countries) is very difficult
  • Overregulation, in general, sucks
  • Differential regulation (e.g. laws being passed in the US but not in China) could lead to a situation worse than no regulation
  • Differential Compliance (e.g. DeepMind obeys the law but SketchyAIOrg27 does not) could lead to a situation worse than no regulation

 

💡 Conclusion: some regulatory changes are likely needed, and are inevitable even without our intervention. I recommend we increase engagement, clarify our platform/recommendations, and act as leaders as AI becomes an ever larger social issue.

Approach 4: Influencing AI Hardware Distribution and Access to Compute

It's difficult for someone to make an AGI without a decent pile of GPUs or TPUs. So why not consider buying (or otherwise acquiring influence over) the top GPU manufacturers? Most GPUs globally today are produced by NVIDIA, AMD, or Intel[1], and buying or acquiring an influential stake in these three companies is far from impossible.

Key Companies for AI Hardware

Company | Market Cap (as of 06/11/22)
NVIDIA  | $424 Bn
AMD     | $154 Bn
Intel   | $160 Bn
Total   | $738 Bn

While $738 Bn is a steep price tag, it's not inconceivable that a motivated individual, company, or group could acquire a significant or even majority stake in all three companies. In fact, if one is willing to use leverage, a mere $200 Bn could go a long way here! Now, while I don't personally have $200 Bn to throw around (yet! growth mindset... 😊), this is a perfect example of a sub-pivotal act that significantly changes the game board. It's easy to imagine using a Weak AI to generate $200 Bn while operating in a safe domain; one could then use this capital to significantly change the global supply of GPUs. This wouldn't eliminate competitors attempting to create Strong AI, but it would change the odds significantly.
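A rough back-of-envelope sketch of that leverage arithmetic. The market caps come from the table above; the 25% control premium and 2x leverage are purely hypothetical assumptions for illustration, not claims about actual deal terms:

```python
# Back-of-envelope acquisition math for the three companies above.
# Market caps (in $Bn) are from the post, as of 06/11/22.
# ASSUMPTIONS: 25% control premium and 2x leverage are illustrative only.

market_caps_bn = {"NVIDIA": 424, "AMD": 154, "Intel": 160}
total = sum(market_caps_bn.values())  # combined market cap, $Bn

majority_stake = 0.5     # need just over half the shares for control
control_premium = 1.25   # assumed premium over market price
leverage = 2.0           # assumed ratio of position size to own capital

equity_needed_bn = total * majority_stake * control_premium / leverage
print(f"Combined market cap: ${total} Bn")
print(f"Own capital for majority stakes: ~${equity_needed_bn:.0f} Bn")
```

Under these assumed numbers, the required capital comes out in the low hundreds of billions, which is why the post's $200 Bn figure is in the right ballpark.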

I should note, though, that these companies are likely to become much more valuable as we get closer to AGI (making an investment today quite an interesting idea!), and also that antitrust regulators would likely oppose any rollup of these firms and might also oppose any significant bias in which firms they sell product to.

While none of this is easy, I think this is exactly the kind of challenging but realistic move we should be considering.[2]

 

What this approach might look like

  • Amassing $100Bn+ in capital using a variety of means (including safe and effective deployment of Weak AIs), or working with folks that already have capital on this scale
  • Selecting target companies key in AI Hardware
  • Deploying capital into target companies
  • Influencing the actions of these key companies

 

Difficulties with this approach

  • Antitrust laws and laws against anti-competitive behavior could limit these actions (something we could work on under Approach 3)
  • These companies could massively increase in value as we approach AGI (this is both a pro and a con)
  • Certain countries and AI research organizations are likely already making moves to protect their supply chain, and could accelerate those changes if they saw a move like this in the works
  • Altering global GPU distribution or access to compute could have many unintended consequences and repercussions

 

💡 Conclusion: this is a big bet that doesn't do much at small size, and the consequences at large size are complex. Needs further evaluation.

Closing

I think these and many other approaches are worth exploring in parallel! I would love to hear your thoughts, feedback, and comments, including any other approaches you can think of for improving our position on the game board.

My best, and let's figure out how to win! 🚀



Ooof. Rated down to -8 without any comments as to why! If you downvoted, totally legit, just would appreciate if you could share some of your thoughts so I can learn. 

Please comment! Excited to hear everyone’s thoughts and feedback on these ideas.

 

Guidelines: please try to keep it positive and constructive, even when providing critical feedback. But my door is open for anything!
