If anyone is curious about what's going on at SI, it seems they've started publishing monthly progress reports on their blog. The latest, covering February, was published less than a week ago:

http://singinst.org/blog/

Thought it might be a good idea to take a minute to positively reinforce the people at SI for their work. Here on Less Wrong we seem to spend a fair amount of time criticizing what they're working on, which is of course valuable. But on the whole, I think there is a very good case that they're doing very important and beneficial work. So, go SI!

From the blog:

Unpublished research: This month our researchers did unpublished research on info-computationalism, connectomics, technological unemployment, mathematical psychology, formal philosophy, decision analysis, human motivation, algorithmic information theory, the neuroscience of concept representation, nuclear risk, the Riemann hypothesis, and rates of scientific progress.

That sounds quite fascinating; making significant progress on at least some of these topics would not only cover the researchers in fame and glory, but also (and more importantly) significantly improve the future of humanity as a whole. So... when will this research be published?
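For context, the Riemann hypothesis is still very much open; it asserts that every non-trivial zero of the Riemann zeta function lies on the critical line:

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} \quad (\Re(s) > 1), \qquad \text{RH: } \zeta(\rho) = 0,\ \rho \text{ non-trivial} \;\Rightarrow\; \Re(\rho) = \tfrac{1}{2}.$$

(The zeta function extends to the rest of the complex plane by analytic continuation; the trivial zeros are the negative even integers.)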

the Riemann hypothesis

Man, that thing was proved a long time ago.

You ought to make your sarcasm a bit more obvious...

I was hoping that the original claim of doing unpublished research on the Riemann hypothesis was a joke.

Most of this research takes the form of figuring out what is already known about these topics, so that this knowledge can be applied to the questions we aim to address (about AI risk, etc.). Of the topics in that paragraph, the one where we plausibly know something that science doesn't yet is nuclear risk.

figuring out what is already known about these topics

This is called a literature review, and is not really research on a topic, only a (necessary) prologue to it. You may want to be more careful in your terminology if you want to be taken seriously. "Research in the Riemann hypothesis" undertaken by a non-expert in the relevant sub-field of math raises a huge crackpot red flag. On the other hand, "Analysis of the applicability of the RH to the provability of AI friendliness" sounds somewhat more reasonable.

Hmmm. I was pretty sure "research" usually involved studying the existing literature and/or making original contributions. When someone makes original contributions to a field, this is specified by calling it "original research."

You might be technically right, but "research" does suggest going in the direction of original research, so it's better to clarify by saying "we reviewed the literature on X" or something like that instead of "we researched X". Also, agreed with shminux about triggering a loud crackpot alarm with "we researched the Riemann hypothesis"; it might even be a good idea to go back and edit this out of the report.

In particular, I'd say "did unpublished research on topic X" sounds like "did research that might be published (i.e., is valuable and original)", whereas "researched topic X" sounds more like "read about things."

Specifically, I'd point out that the monthly reports read like a sort of 'achievements' post, where one highlights the best stuff done that month; it's more impressive to do original research than ordinary reading or study, so in that context one expects the former rather than the latter.

In the sciences, "research" always means "original research". Only in fields like philosophy can a Ph.D. thesis consist mostly of a review of what everyone else has said on a subject.

Ah, so would it be fair to say that your research is much closer to engineering than it is to science? True, there's possibly less fame and glory in engineering than in science, but not that much less (and the financial benefits are way better).

Have you considered implementing some of this research, monetizing it, and then financing SIAI with the proceeds?

If our goal was to have current SI staff make lots of money, there are much better ways to do this than to monetize research on, say, nuclear risk. The reason SI staffers are at SI is that we need them to do SI work and not be among the much larger group of people who, if they share our beliefs and values, should be working in high-income careers and donating.

If our goal was to have current SI staff make lots of money, there are much better ways to do this than to monetize research on, say, nuclear risk.

This isn't an either-or proposition, though. Sure, nuclear risk and formal philosophy may not be huge money-makers, but what about info-computationalism, connectomics, mathematical psychology, decision analysis, human motivation, algorithmic information theory, the neuroscience of concept representation, not to mention the Riemann hypothesis? Algorithmic information theory alone should have massive practical applications, assuming I understand the term correctly (see the sketch after this comment for one well-known example).

Plus, there's still all that fame to consider. If you made significant progress on something like the Riemann hypothesis or mathematical psychology, you would, at the very least, make a lot of smart people look up and notice you in a very positive light. You could then attract their talents toward SIAI... at which point (now that I think about it) the ability to offer them a nice salary would come in pretty handy.
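For what it's worth, the most widely used practical handle on algorithmic information theory is the compression-based approximation of Kolmogorov complexity, e.g. the normalized compression distance of Cilibrasi and Vitányi, which has been applied to clustering, plagiarism detection, and the like. Here's a minimal sketch in Python, using an off-the-shelf compressor as a computable stand-in for the (uncomputable) true complexity; the function names are just illustrative:

```python
import zlib

def c(data: bytes) -> int:
    # Compressed length: a computable upper bound on the
    # (uncomputable) Kolmogorov complexity of `data`.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: small when x and y share
    # structure (they compress well jointly), near 1 otherwise.
    return (c(x + y) - min(c(x), c(y))) / max(c(x), c(y))

# Related data compresses well together, so its NCD is low.
print(ncd(b"the quick brown fox " * 20, b"the quick brown dog " * 20))  # small
print(ncd(b"the quick brown fox " * 20, bytes(range(256)) * 2))         # closer to 1
```

None of which is to say this is what SI's researchers were actually doing, of course; it's just one example of the kind of application the parent comment gestures at.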

As it happens, we recently have been seriously discussing doing one of the things you mention, but I shall not reveal which. And it wouldn't be about the money per se, but about improving our ability to recruit the FAI team members we need. That's our major bottleneck. The monetary cost of running an FAI team is trivial in terms of world GDP — something like $5-$20 million per year. But there are a fixed number of young John Conways in the world who also can be persuaded that Friendly AI is the most important thing they can do with their life, and that number looks to be frighteningly close to 0. Besides, if we solve the recruiting problem, I don't think we'll have trouble getting a few billionaires to fund 9 young John Conways doing Friendly AI research. We just need enough money to find those young John Conways and become the kind of organization that can productively use them.

...for the FAI team part of SI's plans, that is. Of course we also engage in movement-building, academic outreach, etc.

In theory it doesn't seem like you'd have to persuade them that FAI was the most important thing they could do with their life. Presumably there are a few young John Conways at Google, but I doubt any see Google as the most important thing they could do with their life. In other words, you might just need salary, visibility, and prestige comparable to current young John Conway employers.

For instance, what if there were an FAI research team affiliated with some prestigious university that was getting a moderate amount of positive press coverage?

As it happens, we recently have been seriously discussing doing one of the things you mention, but I shall not reveal which.

Why not?

And it wouldn't be about the money per se, but about improving our ability to recruit the FAI team members we need.

As John_Maxwell_IV points out below, this is a problem you can solve with money.

More specifically, young John Conways would consider donating their talents to an organization for two primary reasons:

1) It's really really awesome, or
2) It's really really lucrative.

By "awesome", I don't mean something like "you get to shoot nerf guns at work !", but rather something like, "you get to solve interesting problems at the forefront of human knowledge", or "you get to improve the world in a significant way".

Approach #1 won't work for you, because so far the SIAI has not accomplished anything truly world-changing (or even discipline-changing); nor are you planning on accomplishing anything like that in the near future (at least, not publicly), preferring to focus instead on academic outreach, etc. Sure, you have plans to work on such things eventually, but you need to attract that John Conway now. Ideally, he might want to join you simply because he believes in the cause, but, as you said, the number of such people in the world may be 0.

So, you're left with option #2: money. Look at it this way: you're doing all that applied research already, so why let it go to waste when you can use it to bootstrap your entire pipeline in record time?

When #2 happens at SI, it doesn't look like SI making money. It looks more like Michael Vassar stepping down as President at Singularity Institute and hiring lots of rationalists to start a potentially very disruptive company.

If Personalized Medicine succeeds and becomes a multi-billion dollar company, it would almost definitely fund an FAI team. Which is great and not at all impossible or even unlikely, but it's not going to happen in record time.

Eh, there's too much for me to explain here. (Which is not to say you're wrong.)

I welcome you to pick up this thread again when I get around to discussing building an FAI team in this series.

Hmm - I see that http://singinst.org/aboutus/team still says (incorrectly) that Michael Vassar is president.

Who's the new president?

I went ahead and removed Michael Vassar from the team page, for now. The new "president" (in the sense of providing overall leadership) is Luke Muehlhauser (lukeprog), though his title is "Executive Director". In the non-profit world these terms carry a number of fuzzy connotations; it's somewhat complicated.