Some comments on the recruiting plan:

  1. I think a highly rational person would have high moral uncertainty at this point and would not necessarily be described as "altruistic". For example, I consider Eliezer's apparent high certainty in utilitarianism (assuming it's not just a front for PR purposes) to be evidence against his rationality. Given a choice between a more altruistic candidate and a more rational candidate, I think SI ought to choose the latter.
  2. Similarly for "deeply committed to AI risk reduction". I think a highly rational person would …

lukeprog: Thanks for this. I'm writing a follow-up to this post that incorporates the points you've raised here.

somervta: I think "trustworthy" here means something along the lines of "committed to the organization/project", in the sense that they're not going to take the ideas/code used in SI conversations and ventures to Google or some other project. In other words, they're not going to be bribed away.
steven0461:

> I think a highly rational person would have high moral uncertainty at this point and not necessarily be described as "altruistic".

Do you think the correct level of moral uncertainty would place so much probability on egoism-like hypotheses that the behavior it outputs (even after taking into account game-theoretic concerns about cooperation, and the surprisingly large apparent asymmetry between the size of the altruistic returns available and the size of the egoistic returns available) isn't substantially more altruistic than that of a typical human or a typical math genius? It seems implausible to me, but I'm not that confident, and as I've said earlier, the topic is weirdly neglected here for one with such high import.

> Given a choice between a more altruistic candidate and a more rational candidate, I think SI ought to choose the latter.

Surely it depends on how much more altruistic and how much more rational.
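One way to make steven0461's asymmetry point concrete is a toy expected-choiceworthiness calculation. Everything below is illustrative: the credence and utility numbers are invented, and the setup assumes the egoistic and altruistic utility scales can be placed on a common footing at all, which is itself a contested step in reasoning under moral uncertainty. With credence $p$ in egoism-like hypotheses, score an act $a$ by

$$\mathrm{EC}(a) = p \, U_{\text{ego}}(a) + (1 - p) \, U_{\text{alt}}(a).$$

Take $p = 0.5$, an altruistic act $A$ with $U_{\text{ego}}(A) = 0.2$ and $U_{\text{alt}}(A) = 100$, and an egoistic act $E$ with $U_{\text{ego}}(E) = 1$ and $U_{\text{alt}}(E) = 1$. Then

$$\mathrm{EC}(A) = 0.5 \cdot 0.2 + 0.5 \cdot 100 = 50.1, \qquad \mathrm{EC}(E) = 0.5 \cdot 1 + 0.5 \cdot 1 = 1,$$

so the altruistic act dominates unless $p$ is pushed very close to 1. That is the shape of the asymmetry argument: when the available altruistic returns dwarf the egoistic ones on the shared scale, even substantial credence in egoism leaves the output behavior largely altruistic.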

Building toward a Friendly AI team

by lukeprog · 6 Jun 2012 · 96 comments



Series: How to Purchase AI Risk Reduction

A key part of SI's strategy for AI risk reduction is to build toward hosting a Friendly AI development team at the Singularity Institute.

I don't take it to be obvious that an SI-hosted FAI team is the correct path toward the endgame of humanity "winning." That is a matter for much strategic research and debate.

Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do. Why is this so?

Building toward an SI-hosted FAI team means:

  1. Growing SI into a tighter, larger, and more effective organization in general.
  2. Attracting and creating people who are trustworthy, altruistic, hard-working, highly capable, extremely intelligent, and deeply concerned about AI risk. (We'll call these people "superhero mathematicians.")

Both (1) and (2) are useful for AI risk reduction even if an SI-hosted FAI team turns out not to be the best strategy.

This is because achieving (1) would make SI more effective at whatever it does to reduce AI risk, and achieving (2) would bring valuable human resources to the cause of AI risk reduction, useful for a wide range of purposes (FAI team or otherwise).

So, how do we accomplish both these things?

 

Growing SI into a better organization

Like many (most?) non-profits with less than $1m/yr in funding, SI has had difficulty attracting the top-level executive talent often required to build a highly efficient and effective organization. Luckily, we have made rapid progress on this front in the past 9 months. For example, we now have (1) a comprehensive donor database, (2) a strategic plan, (3) a team of remote contractors used to complete large and varied projects requiring many different skill sets more efficiently, (4) an increasingly "best practices" implementation of central management, (5) an office we actually use to work together on projects, and many other improvements.

What else can SI do to become a tighter, larger, and more effective organization?

  1. Hire a professional bookkeeper, implement additional bookkeeping and accounting best practices. (Currently underway.)
  2. Create a more navigable and up-to-date website. (Currently underway.)
  3. Improve our fundraising strategy, e.g. by creating a deck of slides for major donors which explains what we're doing and what we can do with more funding. (Currently underway.)
  4. Create standard policy documents that lower our risk of being distracted by an IRS audit. (Currently underway.)
  5. Shift the Singularity Summit toward being more directly useful for AI risk reduction, and also toward greater profitability—so that we have at least one funding source that is not donations. (Currently underway.)
  6. Spin off the Center for Applied Rationality so that SI is more solely focused on AI safety. (Currently underway.)
  7. Build a fundraising/investment-focused Board of Trustees (à la IAS or SU) in addition to our Board of Directors and Board of Advisors.
  8. Create an endowment to ensure ongoing funding for core researchers.
  9. Consult with the most relevant university department heads and experienced principal investigators (e.g. at IAS and Santa Fe) about how to start and run an effective team for advanced technical research.
  10. Do the things recommended by these experts (that are relevant to SI's mission).

The key point, of course, is that all these things cost money. They may be "boring," but they are incredibly important.

 

Attracting and creating superhero mathematicians

The kind of people we'd need for an FAI team are:

  1. Highly intelligent, and especially skilled in math, probably at the IMO medal-winning level. (FAI team members will need to create lots of new math during the course of the FAI research initiative.)
  2. Trustworthy. (Most FAI work is not "Friendliness theory" but instead AI architectures work that could be made more dangerous if released to a wider community that is less concerned with AI safety.)
  3. Altruistic. (Since the fate of humanity may be in their hands, they need to be robustly altruistic.)
  4. Hard-working, determined. (FAI is a very difficult research problem and will require lots of hard work and also an attitude of "shut up and do the impossible.")
  5. Deeply committed to AI risk reduction. (It would be risky to have people who could be pulled off the team—with all their potentially dangerous knowledge—by offers from hedge funds or Google.)
  6. Unusually rational. (To avoid philosophical confusions, to promote general effectiveness and group cohesion, and more.)

There are other criteria, too, but those are some of the biggest.

We can attract some of the people meeting these criteria by using the methods described in Reaching young math/compsci talent. The trouble is that the number of people on Earth who qualify may be very close to 0 (especially given the "committed to AI risk reduction" criterion).

Thus, we'll need to create some superhero mathematicians.

Math ability seems to be even more "fixed" than the other criteria, so a (very rough) strategy for creating superhero mathematicians might look like this (a toy funnel estimate follows the list):

  1. Find people with the required level of math ability.
  2. Train them on AI risk and rationality.
  3. Focus on the few who become deeply committed to AI risk reduction and rationality.
  4. Select from among those people the ones who are most altruistic, trustworthy, hard-working, and determined. (Some training may be possible for these features, too.)
  5. Try them out for 3 months and select the best few candidates for the FAI team.
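As a sanity check on why each stage matters, here is a toy back-of-the-envelope funnel in Python. Every pass-through rate below is an invented placeholder, not SI data; the only point is that multiplying several small fractions together leaves very few people at the end.

```python
# Toy funnel estimate for "creating superhero mathematicians".
# All rates are invented placeholders, not SI data; the point is
# only that a multi-stage filter shrinks the pool very quickly.

stages = [
    ("IMO-medal-level math ability", 1e-6),  # fraction of population
    ("engages with AI risk / rationality training", 0.05),
    ("becomes deeply committed to AI risk reduction", 0.10),
    ("altruistic, trustworthy, hard-working", 0.25),
    ("passes the 3-month trial", 0.30),
]

pool = 7e9  # rough 2012 world population
for label, rate in stages:
    pool *= rate
    print(f"after '{label}': ~{pool:,.0f} candidates")
```

With these made-up rates, seven billion people shrink to roughly three candidates, which is why each stage of the pipeline (outreach, training, vetting, trials) is worth funding rather than outreach alone.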

All these steps, too, cost money.

 
