Previously: Interview as a researcher, Q&A #1

This is my second Q&A as Executive Director of the Singularity Institute. I'll skip the video this time.

 

Singularity Institute Activities

Bugmaster asks:

...what does the SIAI actually do? You don't submit your work to rigorous scrutiny by your peers in the field... you either aren't doing any AGI research, or are keeping it so secret that no one knows about it... and you aren't developing any practical applications of AI, either... So, what is it that you are actually working on, other than growing the SIAI itself ?

It's a good question, and my own biggest concern right now. Donors would like to know: Where is the visible return on investment? How can I see that I'm buying existential risk reduction when I donate to the Singularity Institute?

SI has a problem here because so much of its recent work has been invisible. Our researchers have done a ton of work that hasn't been written up and published yet; Eliezer has been writing his rationality books, which aren't yet published; Anna and Eliezer have been developing a new rationality curriculum for the future "Rationality Org" that will be spun off from the Singularity Institute; Carl has been doing a lot of mostly invisible work in the optimal philanthropy community; and so on. I believe this is all valuable x-risk-reducing work, but of course not all of our supporters are willing to just take our word for it that we're doing valuable work. Our supporters want to see tangible results, and all they see is the Singularity Summit, a few papers a year, some web pages and Less Wrong posts, and a couple of rationality training camps. That's good, but not good enough!

I agree with this concern, which is why I'm focused on doing things that happen to be both x-risk-reducing and visible. 

First, we've been working on visible "meta" work that makes the Singularity Institute more transparent and effective in general: a strategic plan, a donor database ("visible" to donors in the form of thank-yous), a new website (forthcoming), and an annual report (forthcoming).

Second, we're pushing to publish more research results this year. We have three chapters forthcoming in The Singularity Hypothesis, one chapter forthcoming in The Cambridge Handbook of Artificial Intelligence, one forthcoming article on the difficulty of AI, and several other articles and working papers we're planning to publish in 2012. I've also begun writing the first comprehensive outline of open problems in Singularity research, so that interested researchers from around the world can participate in solving the world's most important problems.

Third, there is visible rationality work forthcoming. One of Eliezer's books is now being shopped to agents and publishers, and we're field-testing different versions of rationality curriculum material for use in Less Wrong meetups and classes.

Fourth, we're expanding the Singularity Summit brand, an important platform for spreading the memes of x-risk reduction and AI safety.

So my answer to the question is: "Yes, visible return on investment has been a problem lately due to our choice of projects. Even before I was made Executive Director, it was one of my top concerns to help correct that situation, and this is still the case today."

 

What if?

XiXiDu asks:

What would SI do if it became apparent that AGI is at most 10 years away?

This would be a serious problem because by default, AGI will be extremely destructive, and we don't yet know how to make AGI not be destructive.

What would we do if we thought AGI was at most 10 years away?

This depends on whether it's apparent to a wider public that AGI is at most 10 years away, or a conclusion based only on a nonpublic analysis.

If it becomes apparent to a wide variety of folks that AGI is close, then it should be much easier to get people and support for Friendly AI work, so a big intensification of effort would be a good move. If the analysis that AGI is 10 years away leads to hundreds of well-staffed and well-funded AGI research programs and a rich public literature, then trying to outrace the rest with a Friendly AI project becomes much harder. After an intensified Friendly AI effort, one could try to build up knowledge in Friendly AI theory and practice that could be applied (somewhat less effectively) to systems not designed from the ground up for Friendliness. This knowledge could then be distributed widely to increase the odds of a project pulling through, calling in real Friendliness experts, etc. But in general, a widespread belief that AGI is only 10 years away would be a much hairier situation than the one we're in now.

But if the basis for thinking AGI was 10 years away were nonpublic (but nonetheless persuasive to supporters with lots of resources), then it could be used to differentially attract support to a Friendly AI project, hopefully without provoking dozens of AGI teams to intensify their efforts. So if we had a convincing case that AGI was only 10 years away, we might not publicize it, but would instead make the case to individual supporters that we needed to immediately intensify our efforts toward a theory of Friendly AI in a way that only much greater funding would allow.

 

Budget

MileyCyrus asks:

What kind of budget would be required to solve the friendly AI problem?

Large research projects always come with large uncertainties concerning how difficult they will be, especially ones that require fundamental breakthroughs in mathematics and philosophy like Friendly AI does.

Even a small, 10-person team of top-level Friendly AI researchers taking academic-level salaries for a decade would require tens of millions of dollars. And even getting to the point where you can raise that kind of money requires a slow "ramping up" of researcher recruitment and output. We need enough money to attract the kinds of mathematicians who are also being recruited by hedge funds, Google, and the NSA, and to have a funded "chair" for each of them so that they can dedicate their careers to the problem. That part alone requires tens of millions of dollars for just a few researchers.
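
As a rough sanity check on that figure, here is a back-of-envelope sketch; the fully loaded cost per researcher-year is an assumption for illustration, not a number from the post:

```python
# Back-of-envelope check on the "tens of millions" figure. The fully loaded
# cost per researcher-year below is an assumed number for illustration, not
# a figure from the post; competitive offers from hedge funds, Google, and
# the NSA push it well above a bare academic salary.
researchers = 10
years = 10
cost_per_researcher_year = 300_000  # assumption: salary + benefits + overhead

total_cost = researchers * years * cost_per_researcher_year
print(f"${total_cost:,}")  # $30,000,000 -- i.e., "tens of millions of dollars"
```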

Other efforts like the Summit, Less Wrong, outreach work, and early publications cost money, and they work toward having the community and infrastructure required to start funding chairs for top-level mathematicians to be career Friendly AI researchers. This kind of work costs between $500,000 and $3 million per year, with more money per year of course producing more progress.

 

Predictions

Wix asks:

How much do members' predictions of when the singularity will happen differ within the Singularity Institute?

I asked some Singularity Institute staff members to answer a slightly different question, one pulled from the Future of Humanity Institute's 2011 machine intelligence survey:

Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer ‘never’ if you believe such a milestone will never be reached.

In short, the survey participants' median estimates (excepting 5 outliers) for 10%/50%/90% were:

2028 / 2050 / 2150

Here are five of the Singularity Institute's staff members' responses, names unattached, for the years by which they would assign a 10%/50%/90% chance of HLAI creation, conditioning on no global catastrophe halting scientific progress:

 

  • 2025 / 2073 / 2168
  • 2030 / 2060 / 2200
  • 2027 / 2055 / 2160
  • 2025 / 2045 / 2100 
  • 2040 / 2080 / 2200
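
For comparison with the FHI survey medians quoted above, here is a minimal sketch that computes the same 10%/50%/90% medians from the five staff responses listed; it does nothing beyond taking medians of the numbers in the list:

```python
from statistics import median

# The five staff responses listed above, as (10%, 50%, 90%) years.
responses = [
    (2025, 2073, 2168),
    (2030, 2060, 2200),
    (2027, 2055, 2160),
    (2025, 2045, 2100),
    (2040, 2080, 2200),
]

for label, years in zip(("10%", "50%", "90%"), zip(*responses)):
    print(label, median(years))
# 10% 2027
# 50% 2060
# 90% 2168  -- versus 2028 / 2050 / 2150 in the FHI survey
```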

 

Those are all the answers I had time to prepare in this round; I hope they are helpful!

Comments

Bugmaster said:

You followed that up by saying that SIAI is more interested in "technical problems in mathematics, computer science, and philosophy" than in experimental AI research...you either aren't doing any AGI research, or are keeping it so secret that no one knows about it (which makes it impossible to gauge your progress, if any), and you aren't developing any practical applications of AI, either

The only part of this Q&A that is relevant to Bugmaster's question is:

Our researchers have done a ton of work that hasn't been written up and published yet

Bugmaster asked specifically what SI is doing to solve open technical FAI/AGI problems, but in this Q&A you still haven't unpacked "research" and "work." People want to know what the hell you guys do all day. Yes, there are large inferential distances involved, and yes, most of the research must be kept secret, but you haven't even told us which subproblems you've made progress on. This is a major credibility issue--your mission clearly states that you will perform "AI Reflectivity & Friendly AI Research," yet you seem to be unable to provide any concrete examples.

Our researchers have done a ton of work that hasn't been written up and published yet

Perhaps a solution would be to publish at least the titles of the "in the works" papers? If it's really a ton, this should be an impressive list, and it should increase credibility.

Bugmaster asked "what does the SIAI actually do?" and "what is it that you are actually working on, other than growing the SIAI itself?"

Paragraphs 2, 4, 5, 6, and 7 are lists of things that SIAI has been doing.

As for progress on FAI subproblems, that's precisely the part we mostly haven't written up yet (except for material forthcoming in the publications I mentioned), which I see as a big problem and am working to solve.

Also, I don't think it's the case that "most" of the research must be kept secret.


I am satisfied with the level of detail you provided for SI's other projects. But you haven't given even the roughest outline of SI's progress on the thing that matters most, actual FAI research. Are these problems so complicated that you can't even summarize them in a few sentences or paragraphs? Frankly, I don't understand why you can't (or won't) say something like, "We've made progress on this, this, and this. Details in forthcoming publications." Even if you were only willing to say something as detailed as, "We fixed some of the problems with timeless decision theory" or "We worked on the AI reflection problem," that would be much more informative than what you've given us. Saying that you've done "a ton of work" isn't really communicating anything.

Fair enough. I'll share a few examples of progress, though these won't be surprising to people who are on every mailing list, read every LW post, or are in the Bay Area and have regular conversations with us.

  • much progress on the strategic landscape, e.g. differential technological development analyses, which you'll see in the forthcoming Anna/Luke chapter and in Nick's forthcoming monograph, and which you've already seen in several papers and talks over the past couple years (most of them involving Carl).
  • progress on decision theory, largely via the decision theory workshop mailing list, in particular on UDT
  • progress in outlining the sub-problems of singularity research, which I've started to write up here.
  • progress on the value-loading problem, explained here and in a forthcoming paper by Dewey.
  • progress on the reflectivity problem in the sense of identifying lots of potential solutions that probably won't work. :)
  • progress on the preference extraction problem via incorporating the latest from decision neuroscience

Still, I'd say more of our work has been focused on movement-building than on cutting-edge research, because we think the most immediate concern is not cutting-edge research but building a larger community of support, funding, and researchers to work on these problems. Three researchers can have more of an impact if they create a platform by which 20 researchers can work on the problem than if they merely do research by themselves.


Thank you, this is exactly the kind of answer I was hoping for.

Is the value-loading or value-learning problem the same thing as the problem of moral uncertainty? If no, what am I missing; if yes, why are the official solution candidates different?

Thanks, this is quite informative, especially your closing paragraph:

Still, I'd say more of our work has been focused on movement-building than on cutting-edge research, because we think the most immediate concern is not cutting-edge research but building a larger community of support, funding, and researchers to work on these problems.

This makes sense to me; have you considered incorporating this paragraph into your core mission statement? Also, what are your thresholds for deciding when to transition from (primarily) community-building to (primarily) doing research?

Also, you mentioned (in your main post) that the SIAI has quite a few papers in the works, awaiting publication, and apparently there are even a few books waiting for publishers. Would it not be more efficient to post the articles and books in question on Less Wrong, or upload them to Pirate Bay, or something to that effect -- at least while you wait for the meat-space publishers to get their act together? Sorry if this is a naive question; I know very little about the publishing world.

what are your thresholds for deciding when to transition from (primarily) community-building to (primarily) doing research?

We're not precisely sure. It's also a matter of funding. Researchers who can publish "platform research" for academic outreach, problem space clarification, and community building are less expensive than researchers who can solve decision theory, safe AI architectures, etc.

Would it not be more efficient to post the articles and books in question on Less Wrong, or upload them to Pirate Bay, or something to that effect -- at least while you wait for the meat-space publishers to get their act together?

Like many academics, we generally do publish early drafts of forthcoming articles long before the final version is written and published. Examples: 1, 2, 3, 4.

progress on the preference extraction problem via incorporating the latest from decision neuroscience

I'd love to hear more about what areas you're looking into within decision neuroscience.

For those who are also interested and somehow missed these:
Crash Course in Neuroscience of Motivation
and these two neuroeconomics book reviews.

An example: The subject matter of the second chapter of this book (the three competing systems of motivation) looks to have some implications for the value extraction problem. This is the kind of information about how our preferences work that I imagine we'll use to extrapolate our preferences — or that an AI would use to do the extrapolation for us.

But you haven't given even the roughest outline of SI's progress on the thing that matters most, actual FAI research.

From what I understand, they can't do that yet. They don't have enough people to make actual progress on the important problems, and they don't have enough money to hire more. So they are concentrating on raising awareness of the issue and persuading people either to work on it or to contribute money to SI.

The real problem I see is the lack of formalized problems. I perceive it to be very important to formalize some actual problems. Doing so will aid fundraising and allow others to work on the problems. To be more specific, I don't think that writing a book on rationality is worth the time it takes when the author is one of the few people who might be capable of formalizing some important problems, especially since there are already many books on rationality. Even if Eliezer Yudkowsky is able to put everything the world knows about rationality together in a concise manner, that won't impress the important academics enough to actually believe him on AI issues. He would have done better to write a book on decision theory, where he seems to have some genuine ideas.

The real problem I see is the lack of formalized problems.

There was a list of problems posted recently:

To be more specific, I don't think that writing a book on rationality is worth the time it takes when the author is one of the few people who might be capable of formalizing some important problems, especially since there are already many books on rationality. Even if Eliezer Yudkowsky is able to put everything the world knows about rationality together in a concise manner, that won't impress the important academics enough to actually believe him on AI issues.

Rationality is probably a moderately important factor in planetary collective intelligence. Pinker claims that rational thinking + game theory have also contributed to recent positive moral shifts. Though there are some existing books on the topic, it could well be an area where a relatively small effort could produce a big positive result.

However, I'm not entirely convinced that hpmor.com is the best way to go about it...

It turns out that HPMOR has been great for SI recruiting and networking. IMO (International Mathematical Olympiad) winners apparently read HPMOR. So do an absurd number of Googlers.

In short, the survey participants' median estimates (excepting 5 outliers) for 10%/50%/90% were: 2028 / 2050 / 2150

Why except outliers when stating the median?

On the first page of the paper about the survey, they say that 5 people believed that there is a less than 90% chance of human-level AI ever being achieved (even barring catastrophes). They did not need to be excluded; you can take the median of a set containing infinities.
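
A minimal illustration of that point (the numbers below are made up, not the survey data): encode "never" as infinity, and the median is still well defined as long as fewer than half of the answers are infinite.

```python
from statistics import median

# Hypothetical answers to "by what year is there a 90% chance of HLAI?",
# with "never" encoded as infinity. Made-up numbers for illustration only.
answers = [2100, 2150, 2150, 2200, 2300, float("inf"), float("inf")]

print(median(answers))  # 2200 -- the "never" answers push the median up
                        # but do not make it undefined
```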

$3 million a year is a pittance. Even $3 billion would be a drop in the bucket compared to the costs of solving global warming or world poverty.

[-][anonymous]12y70

Fourth, we're expanding the Singularity Summit brand

What does that mean?

Bigger budget, higher-level speakers, possibly more than one Summit per year (around the world).

one chapter forthcoming in The Cambridge Handbook of Artificial Intelligence,

This part is impressive.

'The Ethics of Artificial Intelligence' (2011) by Nick Bostrom, Eliezer Yudkowsky

Draft for Cambridge Handbook of Artificial Intelligence, eds. William Ramsey and Keith Frankish (Cambridge University Press, 2011): forthcoming

Will this be peer-reviewed? (I assume so, since it's a Cambridge Handbook.) That'd be great; I could finally drop that annoying bit from the Wikipedia article on Yudkowsky.

I think "Complex Value Systems are Required to Realize Valuable Futures" was peer-reviewed before it appeared in AGI-11, if that helps.

Oh, I hadn't noticed that one, thanks. I've added it (although on checking, the 'no peer-reviewed publications' clause seems to have been cut at some point).

we're field-testing different versions of rationality curriculum material for use in Less Wrong meetups and classes.

I'd be interested in being a guinea pig. I'm also the founder and president of my high school's philosophy club, so I have access to other guinea pigs. Where can I get a copy of this rationality curriculum?

I am also interested. I'm going to try to start a Less Wrong group at university next year, and a curriculum would help me toward my goal of making it as active as possible.

Those interested in field-testing the rationality curriculum should contact anna [at] singinst.org.

Thank you.

People who doubt that the highfalutin research will save the world will not be convinced you're doing effective work by teaching the basics (e.g., rationality books) or advertising for SIAI (e.g., Carl saying "We're important"). If there are impressive but hard-to-see results, the transparency work is valuable. But SIAI's publications are already listed, and they don't look all that awesome (for one thing, I can understand them).

We have three chapters forthcoming in The Singularity Hypothesis, one chapter forthcoming in The Cambridge Handbook of Artificial Intelligence, one forthcoming article on the difficulty of AI, and several other articles and working papers we're planning to publish in 2012.

I look forward to these! Can has summaries?

Your outline of open problems is pretty good.

You understand De Blanc's papers?

The Singularity Hypothesis abstracts are here.

Looks like I do. (Uh, can I say that Peter is much smarter than I and that I will pick lice from his and your backs or whatever I need to do for this not to be a status grab?)

Convergence of expected utility is a grown-up version of his post explaining the basic idea. Understanding details of the proof is work, but not hard.

Ontological Crises is about mappings between models of the world, used to define utility functions over new domains. It maps each state in the old ontology to a probability distribution over states in the new one, which I think is clever and cute. But most of the work of figuring out how the mapping should actually be done (besides the "if it ain't broke don't fix it" rule about isomorphism, and continuity) will be in looking at humans, as the conclusion says.
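
To make that description concrete, here is a toy sketch of one way such a mapping could be used (my own simplification for illustration, not de Blanc's actual construction): given utilities over old-ontology states and a map sending each old state to a probability distribution over new states, solve for new-state utilities whose expected values under the map reproduce the old utilities.

```python
import numpy as np

# Toy transfer of a utility function across an ontology change.
# phi[i, j] = probability that old state i maps to new state j
# (each row is a probability distribution over the new states).
phi = np.array([
    [0.9, 0.1, 0.0],   # old state 0 mostly maps to new state 0
    [0.0, 0.2, 0.8],   # old state 1 mostly maps to new state 2
])
u_old = np.array([1.0, 0.0])  # utilities over the two old states

# Least-squares choice of u_new so that the expected new-ontology utility
# of each old state (phi @ u_new) matches its old utility as closely as
# possible. Illustrative only; the paper's construction differs in detail.
u_new, *_ = np.linalg.lstsq(phi, u_old, rcond=None)
print(u_new.round(3))
```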

Did I miss any papers?

Thanks for the abstracts! They seem to center mostly on defending various schools of Singularity predictions, especially the fourth one, plus yours, which says "value is fragile."