HD Video link.

MP3 version.

Transcript below.

 

Intro

Hi everyone. I’m Luke Muehlhauser, the new Executive Director of Singularity Institute.

Literally hours after being appointed Executive Director, I posted a call for questions about the organization on the LessWrong.com community website, saying I would answer many of them on video — and this is that video.

I’m doing this because I think transparency and communication are important.

In fact, when I began as an intern with Singularity Institute, one of my first projects was to spend over a hundred hours working with everyone in the organization to write its first strategic plan, which the board ratified and you can now read on our website.

When I was hired as a researcher, I gave a long text-only interview with Michael Anissimov, where I answered 30 questions about my personal background, the mission of Singularity Institute, our technical research program, the unsolved problems we work on, and the value of rationality training.

After becoming Executive Director, I immediately posted that call for questions — a few of which I will now answer.

 

Staff Changes

First question. Less Wrong user ‘wedrifid’ asks:

The staff and leadership at [Singularity Institute] seem to be undergoing a lot of changes recently. Is instability in the organisation something to be concerned about?

First, let me address the specific staff changes that wedrifid is talking about. At the end of summer 2011, Jasen Murray — who was running the visiting fellows program — resigned to pursue a business opportunity related to his passion for improving people’s effectiveness. At the same time, I was hired as a researcher after working as an intern for a few months, and Louie Helm was hired as Director of Development after having done significant volunteer work for Singularity Institute for even longer than that. Carl Shulman was also hired as a researcher at this time, having likewise done lots of volunteer work before that, including publishing papers like “Arms Control and Intelligence Explosions,” “Implications of a Software-Limited Singularity,” and “Basic AI Drives and Catastrophic Risks,” and maybe some others.

Another change is that our President, Michael Vassar, is launching a personalized medicine company. It has a lot of promise, and we’re all pretty excited to see him do that. He’ll still retain the title of President because he will, really, continue to do quite a lot of good work for us — networking and spreading our mission wherever he goes. But he will no longer take a salary from Singularity Institute, and that was his idea, several months ago.

But we needed somebody to run the organization, and I was the favorite choice for the job. 

So, should you be worried about instability? Well... I'm excited about the way the organization is taking shape, but I will say that we need more people. In particular, our research team took a hit when I moved from Researcher to Executive Director. So if you care about our mission and you can work with us to write working papers and other documents, you should contact me! My email is luke@intelligence.org.

And I’ll say one other thing. Do not fall prey to the sin of underconfidence. When I was living in Los Angeles I assumed I wasn’t special enough to apply even as an unpaid visiting fellow, and Louie Helm had to call me on Skype and talk me into it. So I thought “What the hell, it can’t hurt to contact Singularity Institute,” and within 9 months of that first contact I went from intern to researcher to Executive Director. So don't underestimate your potential — contact us, and let us be the ones who say "No."

And I suppose now would be a good time to answer another question, this one asked by ‘JoshuaZ’, who asks:

Are you concerned about potential negative signaling/status issues that will occur if [Singularity Institute] has as an executive director someone who was previously just an intern?

Not really. And the problem isn’t that I used to be an unpaid Visiting Fellow, it’s just that I went from Visiting Fellow to Executive Director so quickly. But that's... one of the beauties of Singularity Institute. Singularity Institute is not a place where you need to “pay your dues,” or something. If you’re hard-working and competent and you get along with people and you’re clearly committed to rationality and to reducing existential risk, then the leadership of the organization will put you where you can do the most good and be the most effective, regardless of irrelevant factors like duration of employment.

 

Rigorous Research

Next question. Less Wrong user ‘quartz’ asks:

How are you going to address the perceived and actual lack of rigor associated with [Singularity Institute]?

Now, what I initially thought quartz was talking about was Singularity Institute’s relative lack of publications in academic journals like Risk Analysis or Minds and Machines, so let me respond to that interpretation of the question first.

Luckily, I am probably the perfect person to answer this question, because when I first became involved with Singularity Institute, this was precisely my own largest concern. But I changed my mind when I learned the reasons why Singularity Institute does not push harder than it does to publish in academic journals.

So. Here’s the story. In March 2011, before I was even an intern, I wrote a discussion post on Less Wrong called ‘How [Singularity Institute] could publish in mainstream cognitive science journals.’ I explained in detail not only what the right style is for mainstream journals, but also why Singularity Institute should publish in mainstream journals. My four reasons were:

 

  1. Some donors will take Singularity Institute more seriously if it publishes in mainstream journals.
  2. Singularity Institute would look a lot more credible in general.
  3. Singularity Institute would spend less time answering the same questions again and again if it publishes short, well-referenced responses to such questions.
  4. Writing about these problems in the common style... will help other smart researchers to understand the relevant problems and perhaps contribute to solving them.

 

Then, in April 2011, I moved to the Bay Area and began to realize why exerting a lot of effort to publish in mainstream journals probably isn’t the right way to go for Singularity Institute, and I wrote a discussion post called ‘Reasons for [Singularity Institute] to not publish in mainstream journals.’

What are those reasons?

The first one is that more people read, for example, Yudkowsky’s thoughtful blog posts or Nick Bostrom’s pre-prints from his website... than the actual journals.

The second reason is that in many cases, most of a writer’s time is invested after the article is accepted by a journal. That means most of the work comes after you’ve done the most important part and written up all the core ideas. Most of the work is tweaking. Those are dozens and dozens of hours not spent on finding new safety strategies, writing new working papers, etc.

A third reason is that publishing in mainstream journals requires you to jump through lots of hoops and contend with obstacles like reviewer bias and the usual aversion to ideas that sound weird.

A fourth reason is that publishing in mainstream journals involves a pretty long delay, somewhere between 4 months and 2 years.

So: If you’re a mainstream academic seeking tenure, publishing in mainstream journals is what you need to do, because that’s how the system is set up. If you’re trying to solve hard problems very quickly, publishing in mainstream journals can sometimes be something of a lost purpose.

If you’re trying to solve hard problems in mathematics and philosophy, why would you spend most of your limited resources tweaking sentences rather than getting the important ideas out there for yourself or others to improve and build on? Why would you accept delays of 4 months to 2 years?

At Singularity Institute, we’re not trying to get tenure. We don’t need you to have a Ph.D. We don’t care if you work at Princeton or at Brown Community College. We need you to help us solve the most important problems in mathematics, computer science, and philosophy, and we need to do that quickly.

That said, it will sometimes be worth it to develop a working paper into something that can be published in a mainstream journal, if the effort required and the time delay are not too great.

But just to drive my point home, let me read from the opening chapter of the new book Reinventing Discovery, by Michael Nielsen, the co-author of the leading textbook on quantum computation. It's a really great passage:

Tim Gowers is not your typical blogger. A mathematician at Cambridge University, Gowers is a recipient of the highest honor in mathematics, the Fields Medal, often called the Nobel Prize of mathematics. His blog radiates mathematical ideas and insight.

In January 2009, Gowers decided to use his blog to run a very unusual social experiment. He picked out an important and difficult unsolved mathematical problem, a problem he said he’d “love to solve.” But instead of attacking the problem on his own, or with a few close colleagues, he decided to attack the problem completely in the open, using his blog to post ideas and partial progress. What’s more, he issued an open invitation asking other people to help out. Anyone could follow along and, if they had an idea, explain it in the comments section of the blog. Gowers hoped that many minds would be more powerful than one, that they would stimulate each other with different expertise and perspectives, and collectively make easy work of his hard mathematical problem. He dubbed the experiment the Polymath Project.

The Polymath Project got off to a slow start. Seven hours after Gowers opened up his blog for mathematical discussion, not a single person had commented. Then a mathematician named Jozsef Solymosi from the University of British Columbia posted a comment suggesting a variation on Gowers’s problem, a variation which was easier, but which Solymosi thought might throw light on the original problem. Fifteen minutes later, an Arizona high-school teacher named Jason Dyer chimed in with a thought of his own. And just three minutes after that, UCLA mathematician Terence Tao—like Gowers, a Fields medalist—added a comment. The comments erupted: over the next 37 days, 27 people wrote 800 mathematical comments, containing more than 170,000 words. Reading through the comments you see ideas proposed, refined, and discarded, all with incredible speed. You see top mathematicians making mistakes, going down wrong paths, getting their hands dirty following up the most mundane of details, relentlessly pursuing a solution. And through all the false starts and wrong turns, you see a gradual dawning of insight. Gowers described the Polymath process as being “to normal research as driving is to pushing a car.” Just 37 days after the project began Gowers announced that he was confident the polymaths had solved not just his original problem, but a harder problem that included the original as a special case. He described it as “one of the most exciting six weeks of my mathematical life.” Months’ more cleanup work remained to be done, but the core mathematical problem had been solved.

That is what working for rapid progress on problems rather than for tenure looks like.

And here’s the kicker. We’ve already done this at Singularity Institute! This is what happened, though not quite as fast, when Eliezer Yudkowsky made a few blog posts about open problems in decision theory, and the community rose to the challenge, proposed solutions, and iterated and iterated. That work continued with a decision theory workshop and a mailing list that is still active, where original progress in decision theory is being made quite rapidly, and with none of it going through the hoops and delays of publishing in mainstream journals.

Now, I do think that Singularity Institute needs to publish more research, both in and out of mainstream journals. But most of what we publish should be blog posts and working papers, because our goal is to solve problems quickly, not to wait 4 months to 2 years to go through a mainstream publisher and garner tenure and prestige and so on.

That said, I’m quite happy when people do publish on these subjects in mainstream journals, because prestige is useful for bringing attention to overlooked topics, and because hopefully these instances of publishing in mainstream journals are occurring when it isn’t a huge waste of time and effort to do so. For example, I love the work being done by our frequent collaborators at the Future of Humanity Institute at Oxford, and I always look forward to what they're doing next.

Now, back to quartz's original question about rigorous research. I asked for clarification on what quartz meant, and here's what he said:

In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation and MacKay's Information Theory, Inference, and Learning Algorithms. This is not going to happen if research of sufficient quality doesn't start soon.

Now, that sounds wonderful, and I agree that the community of researchers working to reduce existential risks, including Singularity Institute, will need to ramp up their research efforts to achieve that kind of goal.

I will offer just one qualification that I don't think will be very controversial. I think most people would agree that if a scientist happened to create a synthetic virus that was airborne and could kill hundreds of millions of people if released into the wild, we wouldn't want the instructions for creating that synthetic virus to be published in the open for terrorist groups or hawkish governments to use. And for the same reasons, we wouldn't want a Friendly AI textbook to explain how to build highly dangerous AI systems. But excepting that, I would love to see a rigorously technical textbook on friendliness theory, and I agree that friendliness research will need to increase for us to see that textbook be written in 15 years. Luckily, the Future of Humanity Institute is putting a special emphasis on AI risks for the next little while, and Singularity Institute is ramping up its own research efforts.

But the most important thing I want to say is this. If you can take ideas and arguments that already exist in blog posts, emails, and human brains (for example at Singularity Institute) and turn them into working papers or maybe even journal articles, and you care about navigating the Singularity successfully, please contact me. My email address is luke@intelligence.org. If you're that kind of person who can do that kind of work, I really want to talk to you.

I’d estimate we have something like 30-40 papers just waiting to be written. The conceptual work has been done; we just need more researchers who can write this stuff up. So if you can do that, you should contact me: luke@intelligence.org.

 

Friendly AI Sub-Problems

Next question. Less Wrong user ‘XiXiDu’ asks:

If someone as capable as Terence Tao approached [Singularity Institute], asking if they could work full-time and for free on friendly AI, what would you tell them to do? In other words, are there any known FAI sub-problems that demand some sort of expertise that [Singularity Institute] is currently lacking?

Terence Tao is a mathematician at UCLA who was a child prodigy and is considered by some people to be one of the smartest people on the planet. He is exactly the kind of person we need to successfully navigate the Singularity, and in particular to solve open problems in Friendly AI theory.

I explained in my text-only interview with Michael Anissimov in September 2011 that the problem of Friendly AI breaks down into a large number of smaller and better-defined technical sub-problems. Some of the open problems I listed in that interview are the ones I’d love somebody like Terence Tao to work on. For example:

How can an agent make optimal decisions when it is capable of directly editing its own source code, including the source code of the decision mechanism? How can we get an AI to maintain a consistent utility function throughout updates to its ontology? How do we make an AI with preferences about the external world instead of about a reward signal? How can we generalize the theory of machine induction (called Solomonoff induction) so that it can use higher-order logics and reason correctly about observation selection effects? How can we approximate such ideal processes such that they are computable?

(That was a quote from the text-only interview.)

But even before that, we’d really like to write up explanations of these problems in all their technical detail, but again that takes researchers and funding and we’re short on both. For now, I’ll point you to Eliezer’s talk at Singularity Summit 2011, which you can Google for.

But yeah, we have a lot of technical problems that we'd like to clarify the nature of so that we can have researchers working on them. So we do need potential researchers to contact us.

I loved watching Batman and Superman cartoons when I was a kid, but as it turns out, the heroes who can save the world are not those who have incredible strength or the power of flight. They are mathematicians and computer scientists. 

Singularity Institute needs heroes. If you are a brilliant mathematician or computer scientist and you want a shot at saving the world, contact me: luke@intelligence.org.

I know it sounds corny, but I mean it. The world needs heroes.

 

Improved Funding

Next, Less Wrong user ‘XiXiDu’ asks:

What would [Singularity Institute] do given various amounts of money? Would it make a difference if you had 10 or 100 million dollars at your disposal...?

Yes it would. Absolutely. If Bill Gates decided tomorrow that he wanted to save not just a billion people but the entire human race, and he gave us 100 million dollars, we would hire more researchers and figure out the best way to spend that money. That's a pretty big project in itself.

But right now, my bet on how we’d end up spending that money is that we would personally argue for our mission to each of the world’s top mathematicians, AI researchers, physicists, and formal philosophers. The Terence Taos and Judea Pearls of the world. And for any of them who could be convinced, we’d be able to offer them enough money to work for us. We’d also hire several successful Oppenheimer-type research administrators who could help us bring these brilliant minds together to work on these problems.

As nice as it is to have people from all over the world solving problems in mathematics, decision theory, agent architectures, and other fields collaboratively over the internet, there are a lot of things you can make move faster when you bring the smartest people in the world into one building and allow them to do nothing else but solve the world's most important problems.

 

Rationality

Next. Less Wrong user ‘JoshuaZ’ asks:

A lot of Eliezer's work has been not at all related strongly to FAI but has been to popularizing rational thinking. In your view, should [Singularity Institute] focus exclusively on AI issues or should it also care about rational issues? In that context, how does Eliezer's ongoing work relate to [Singularity Institute]?

Yes, it’s a great question. Let me begin with the rationality work.

I was already very interested in rationality before I found Less Wrong and Singularity Institute, but when I first encountered the arguments about intelligence explosion, one of my first thoughts was, “Uh-oh. Rationality is much more important than I had originally thought.”

Why? Intelligence explosion is a mind-warping, emotionally dangerous, intellectually difficult, and very uncertain field in which we don’t get to do a dozen experiments so that reality can beat us over the head with the correct answer. Instead, when it comes to intelligence explosion scenarios, in order to get this right we have to transcend the normal human biases, emotions, and confusions of the human mind, and make the right predictions before we can run any experiments. We can't try an intelligence explosion and see how it turns out.

Moreover, to even understand what the problem is, you’ve got to get past a lot of the usual biases and false but common beliefs. So we need a saner world in order to solve these problems, and we need a saner world in order to have a larger community of support for addressing these issues.

And, Eliezer’s choice to work on rationality has paid off. The Sequences, and the Less Wrong community that grew out of them, have been successful. We now have a large and active community of people growing in rationality and spreading it to others, and a subset of that community contributes to progress on problems related to AI. Even Eliezer’s choice to write a rationality fan fiction, Harry Potter and the Methods of Rationality, has — contrary to my expectations — had quite an impact. It is now the most popular Harry Potter fan fiction, I think, and it was responsible for perhaps ¼ or ⅕ of the money raised during the 2011 summer matching challenge, and has brought several valuable new people into our community. Eliezer’s forthcoming rationality books might have a similar type of effect.

But we understand that many people don’t see the connection between rationality and navigating the Singularity successfully the way that we do, so in our strategic plan we explained that we’re working to spin off most of the rationality work to a separate organization. It doesn’t have a name yet, but internally we just call it ‘Rationality Org.’ That way, Singularity Institute can focus on Singularity issues, and the Rationality Org (whatever it comes to be called) can focus on rationality, and people can support them independently. That’s something else Eliezer has been working on, along with a couple of others.

Of course, Eliezer does spend some of his time on AI issues, and he plans to return full-time to AI once Rationality Org is launched. But we need more talented researchers, and other contributions, in order to succeed on AI. Rationality has been helpful in attracting and enhancing a community that helps with those things.

 

Changing Course

Next. Less Wrong user ‘JoshuaZ’ asks:

...are there specific sets of events (other than the advent of a Singularity) which you think will make [Singularity Institute] need to essentially reevaluate its goals and purpose at a fundamental level?

Yes, and I can give a few examples that I wrote down.

Right now we’re focused on what happens when smarter-than-human intelligence arrives, because the evidence available suggests to us that AI will be more important than other crucial considerations. But suppose we made a series of discoveries that made it unlikely that AI would arrive anytime soon, but very likely that catastrophic biological terrorism was only a decade or two away, for example. In that situation, Singularity Institute would shift its efforts quite considerably.

Another example: If other organizations were doing our work, including Friendly AI, and with better efficiency and scale, then it would make sense to fold Singularity Institute and transfer resources, donors, and staff to these other, more efficient and effective organizations.

If it could be shown that some other process was much better at mobilizing efforts to address the core issues, then focusing there for a while could make sense. For example, suppose Giving What We Can (an organization focused on optimal philanthropy) continues doubling each year and spinning off large numbers of skilled people to work on existential risk reduction (as one of the targets of optimal philanthropy). In that case it might make sense to strip away outreach functions from [Singularity Institute], perhaps leaving a core FAI team, and leave outreach to the optimal philanthropy community, or something like that.

So, those are just three ways that things could change or we could make some discoveries, and that would radically shift the strategy that we have at Singularity Institute.

 

Experimental Research

Next. User ‘XiXiDu’ asks:

Is [Singularity Institute] willing to pursue experimental AI research or does it solely focus on hypothetical aspects?

Experimental research would, at this point, be a diversion from work on the most important problems related to our mission, which are technical problems in mathematics, computer science, and philosophy. If experimental research ever became more important than those problems, and if we had the funding available to do experiments, we would do experimental research at that time, or fund somebody else to do it. But experiments aren't the most important or most urgent work we need to do right now.

 

Winning Without Friendly AI

Next. Less Wrong user ‘Wei_Dai’ asks:

Much of [Singularity Institute’s] research [is] focused not directly on [Friendly AI] but more generally on better understanding the dynamics of various scenarios that could lead to a Singularity. Such research could help us realize a positive Singularity through means other than directly building a [Friendly AI].

Does [Singularity Institute] have any plans to expand such research activities, either in house, or by academia or independent researchers?

The answer to that question is 'Yes'.

Singularity Institute does not put all its eggs in the ‘Friendly AI’ basket. Intelligence explosion scenarios are complicated, the future is uncertain, and the feasibility of many possible strategies is unknown and uncertain. Both Singularity Institute and our friends at Future of Humanity Institute at Oxford have done quite a lot of work on these kinds of strategic considerations, things like differential technological development. It’s important work, so we plan to do more of it.

Most of this work, however, hasn’t been published. So if you want to see it published, put us in contact with people who are good at rapidly taking ideas and arguments out of different people's heads and putting them on paper. Or maybe you are that person! Right now we just don’t have enough researchers to write these things up as much as we'd like. So contact me: luke@intelligence.org.

 

Conclusion

Well, that’s it! I'm sorry I can’t answer all the questions. Doing this takes a lot more work than you might think, but if it is appreciated, and especially if it grows and encourages the community of people who are trying to make the world a better place and reduce existential risk, then I may try to do something like this — maybe without the video, maybe with the video — with some regularity.

Keep in mind that I do have a personal feedback form at tinyurl.com/luke-feedback, where you can send me feedback on myself and Singularity Institute. You can also check the Less Wrong page that will be dedicated to this Q&A and leave some comments there.

Thanks for listening and watching. This is Luke Muehlhauser, signing off.

Comments

How are you going to address the perceived and actual lack of rigor associated with [Singularity Institute]?

I upvoted this question originally, and while I appreciate your response, I don't feel you addressed what, for me, is the crux of the matter. If the SIAI is so focussed on "solving the most important problems in mathematics, computer science, and philosophy", then where is the progress?

The worry is that the SIAI is seen as somewhere where people pontificate endlessly about the problem, without actually doing useful work towards the solution. It is important to raise awareness of the dangers of an UFAI situation, but you're claiming that you also want the SIAI to be more than that.

But it's hard to take that seriously when there is so little evidence of problems actually getting solved, particularly the hard ones in mathematics and computer science. Eliezer's TDT draft is a step in the right direction, as it's at least evidence that some work is getting done, but it's the sort of thing I'd like to see much, much more of. In addition, it could do with tightening up, and I think the rigour of submitting it to an actual academic journal would be extremely helpful. Eve... (read more)

As long as the SIAI continues to not publish, or otherwise make available, credible documents indicating rigorous progress it is going to be perceived as lacking in rigour. And those of us who aren't privy to what is actually going on in there may worry that this indicates an actual lack of rigour.

I couldn't agree more.

This is why I talk almost non-stop within Singularity Institute about how we need to be publishing the research that we're doing. It's why I've been trying to squeeze in hours (around helping with the Summit and now being Executive Director) that allow me to author and co-author papers that summarize the current state of research, like 'The Singularity and Machine Ethics' and many others that are in progress: 'Intelligence Explosion: Evidence and Import', 'How to Do Research That Contributes Toward a Positive Singularity', and Open Problems in Friendly Artificial Intelligence. Granted, only the last one could constitute significant research progress, but one reason it's hard to make research progress is that not even the basics have been summarized with good form and clarity anywhere, so I'm first working on these kinds of "platform" documents as enabler... (read more)

bryjnar (6 points, 12y ago):
So it sounds like your answer is: "Publishing research would help, and we're working on it." That's great! It's just good that you've got a plan. After all, the question was "How are you going to address the perceived lack of rigour".
lukeprog (4 points, 12y ago):
Correct!
XiXiDu (6 points, 12y ago):
Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to be released. Therefore I predict that you won't see much more of it ever.

Indeed! Think about it this way: if Less Wrong had been around for 3000 years and the field of academic philosophy had been founded a few years ago, then most of it would probably be better than Less Wrong.
bryjnar (5 points, 12y ago):
I'm not sure how true this is, but suppose it is. Then it seems to me that the SIAI has got a problem. They need people to take them seriously, in order to attract funding and researchers, but they can't release any evidence that might make people take them seriously, as it's regarded as "too dangerous". Dilemma.

Secrecy and a perceived lack of rigour seem likely to go hand in hand. And for those of us outside the SIAI, who are trying to decide whether to take it seriously, said secrecy also makes it seem likely that there is an actual lack of rigour. Perhaps this just demonstrates that any organization seriously aiming to make FAI has to be secretive, and hence have a bad public image. Which would be interesting. But in that case, the answer to the original question may just be: "We can't really, because it would be too dangerous", which would at least be something.

And perhaps, just perhaps, LW might have something to learn from that older sibling... I appreciate the desire to declare all past philosophy diseased and start again from nothing, but I think it's misguided. Even if you don't like much of contemporary philosophy, modern-day philosophers are often well-trained critical thinkers, and so a bit of attention from them might help shape things up a bit.
lukeprog (3 points, 12y ago):
I'm not sure that "most of it" is too dangerous to be released. There is quite a lot of research that can be done in the open. If there wasn't, we wouldn't be trying to write a document like Open Problems in Friendly AI for the public.
4SilasBarta12y
You've managed to come up with excuses for not posting something as rudimentary as statistics that would substantiate your claims of success for rationality bootcamps. "That would take too much time!" -> So a volunteer can do it for you. -> "But it's private so we can't release it." -> So anonymize it. -> "That takes too much work too." -> Um? -> "Hey, our alums dress nicely now, that should be enough proof." Frankly, that doesn't bode well.
3dlthomas12y
It seems that signaling rigor in hidden domains through a policy of rigor in open domains would be appropriate, and possibly sufficient. It may be expensive, but hopefully the domains addressed would still be of some benefit.
3Manfred12y
That seems unlikely - well, the being too dangerous, not sure about the regarding. The philosophy of digitizing human preferences seems particularly releasable to me, but depending on how you break the causes of unFAI into malice/stupidity, it can be a good idea to release pretty much anything that's easier to apply to FAI than to unFAI.
3wedrifid12y
I'd be surprised. There is plenty left that I would expect Eliezer to consider releaseable.
2XiXiDu12y
Carl Shulman wrote that Eliezer is reluctant to release work that he thinks is relevant to building AGI. Think about his risk estimations of certain game and decision theoretic thought experiments. What could possibly be less risky than those thought experiments while still retaining enough rigor that one would be able to judge if actual progress has been made?

Carl Shulman wrote that he is reluctant to release work that he thinks is relevant to building AGI.

(Suggest substituting "Eliezer" for "he" in the above sentence.)

There is plenty of work that could be done and released that is not directly about AGI construction or the other few secrecy requiring areas.

0XiXiDu12y
Right, the friendly AI problem is incredibly broad. I wish there was a list of known problems that need to be solved. But I am pretty sure there is a large category of problems that Eliezer would be reluctant to even talk about.

Ask and you shall receive.

Here is one of several emails I've now received in response to my repeated request that potential research collaborators contact me (quoted with permission):

My name is [name]. I am a first year student at [a university] majoring in pure math... I am rather intelligent; I estimate my score on the recent Putnam contest to be thirty, and the consensus is that the questions were of above average difficulty this year. I really care about the Singularity Institute's mission; I have been a utilitarian since age 11, before I knew that the idea had a name and I have cared about existential risk since at least age twelve, when I wrote a short piece on why prevention of the heat death was the greatest moral imperative for humankind (I had come up with the idea of what was essentially a Brownian ratchet years before I read the proof of the H-theorem showing the irreversible increase in entropy).

I want to help with the theory of friendly AI. I currently think that I could work directly on the problem but if my comparative advantage is elsewhere I would like to know that... I would be interested in participating in a rationality camp, the Visiting fellows program or anything else that could help the Singularity Institute.

Keep 'em coming, people!

For the love of the flying spaghetti monster, can you please, please stop saying "at Singularity Institute", "within Singularity Institute", et cetera?

As has been explained before, this is annoying, grating, and just plain goofy. It makes you sound like a fly-by-night commercial outfit run by people who don't quite speak English. In my estimation it's about 2:1 evidence that SI* is a scam.

Now, as you know, my prior on the latter hypothesis is pretty low. But this is nevertheless a serious issue. We're talking about how serious your organization sounds, at the 5-second level. And at this point it's also a meta-issue, having to do with whether you (all) listen to criticism. Because, in light of the discussion linked above, you would at the very least need a damn good reason to continue this practice in the face of some rather compelling criticism. As in, "we did a focus group study last year which showed that omitting the definite article would likely result in a 5% increase in donations". As far as I know, you have no such good reason. Indeed, the only reasoning anyone at SI* has offered for this at all is contained in a comment by Louie whose score is ... (read more)

8wedrifid12y
I have to confirm that this in particular is a significant issue. Until he redeemed himself, Luke's reply had me updating towards writing him off as another person with too much status/ego to hear correctly.
4gwern12y
I don't think I have ever been so dismayed to see a comment at +15 and no less than 11 children comments. WTF, people. BS. (Here, let me indulge in some anecdotage - 800 Verbal on the SAT etc, also what I would consider my greatest skill - and it doesn't bother me in the least. That cancel out your 'Bayesian information'? Good grief.) Your entire comment is sheer pedantry of the worst kind, that I'd expect on Reddit and not LessWrong.

For what it is worth, komponisto's basic point without the egotism is essentially correct. The dropping of the definite article sounds incredibly awkward and does signal either a scam or general incompetence. I don't understand what they are thinking. The self-congratulatory puffery that is the second half of the comment doesn't reduce the validity of the central point.

8komponisto12y
Said "puffery" has now been removed. My own mental context for those remarks was evidently quite different from that in which they were seen by others. (Though no one actually complained until gwern, quite a while after the comment was posted.)

Said "puffery" has now been removed. My own mental context for those remarks was evidently quite different from that in which they were seen by others. (Though no one actually complained until gwern, quite a while after the comment was posted.)

It is amazing how much difference one antagonistic reader can make to how a statement is interpreted by others. Apart from the priming it makes you a legitimate target.

8komponisto12y
Quite so. This "bandwagon" behavior is disturbing, and has the unfortunate consequence of incentivizing one to reply to hostile comments immediately (instead of taking time to reflect), to fend off the otherwise inevitable karma onslaught.
1gwern12y
Yes, I found Asch's Conformity Experiment pretty amazing too.
9wedrifid12y
I support the grandparent. Your condemnation here barely makes any sense and is unjustifiably rude. I am rather shocked that kompo needed to make the comment. The subject had come up recently and more than enough explanation had been given to SIAI public figures of how to not sound ridiculous and ignorant while using the acronym.
-1gwern12y
Logically ruder than claiming one's dislike is 'Bayesian evidence'? Since when do we dress up our linguistic idiosyncrasies in capitalized statistical drag? Is there any evidence at all that this is a meaningful change, that it really makes one sound 'ridiculous and ignorant'?
7Vladimir_Nesov12y
Own dislike is clearly some evidence of others' dislike, the relevant question is how much evidence. Votes add more evidence.
5wedrifid12y
1. I said unjustifiably rude, not logically rude (although now you are being the latter as well).
2. There was nothing logically rude about kompo claiming his own expertise as evidence. It does come across as somewhat arrogant and leaves kompo vulnerable to status attack by anyone who considers him presumptuous, but even if his testimony is rejected "logical rudeness" still wouldn't come into it at all.

Don't try to "dress up" corrections about basic misuse of English as personal idiosyncrasies of komponisto. He may care about using language correctly more than most, but the usage he is advocating is the standard usage.
-8gwern12y
-3XiXiDu12y
* The SIAI is located in the U.S. under the jurisdiction of the FBI.
* SIAI is located in U.S. under the jurisdiction of FBI.
7komponisto12y
Neither. What you want is:
* SIAI is located in the U.S., under the jurisdiction of the FBI.
0wedrifid12y
When the entire point of quoting a statement is to question whether or not "the" should be used you can't go around truncating like that! (Are you being disingenuous or is that just a mistake?) Notice the difference in how an added 'the' would sound now? Incidentally: Think "MIT" or "NASA" instead of "FBI".
3XiXiDu12y
I have now removed the quote completely. I was planning on writing something else first that was more relevant to the quote. Sorry.

There might be some sort of rules that govern when it is correct to use "the" and when it is wrong. But aren't those rules fundamentally malleable by the perception of people and their adoption of those rules? An interesting example is the German word 'Pizza' (which happens to mean the same as the English word, i.e. the Neapolitan dish). People were endlessly arguing about how the correct plural form of 'Pizza' is 'Pizzen'. Yet many people continued to write 'Pizzas' instead. What happened a few years ago is that the Duden (the prescriptive source for the spelling of German) included 'Pizzas' as a secondary but correct plural form of the word 'Pizza'. So why did people ever bother to argue in the first place?

German, or English for that matter, would never have evolved in the first place if thousands of years ago people had demanded that all language be frozen at that point in time and only the most popular spelling be regarded as correct. Not that I have a problem with designing an artificial language or improving an existing language. Just some thoughts.
4komponisto12y
The rules may not necessarily be simple, however. In the worst-case scenario, they may simply consist of lists of cases where it is one way and cases where it is the other. (As you no doubt realize, the same issue also comes up in German: why is it "Deutschland, Österreich, und die Schweiz" instead of "Deutschland, Österreich, und Schweiz" or "das Deutschland, das Österreich, und die Schweiz"?)

Yes, and the exact same thing could be said about any human signaling pattern, not just those that concern language. But don't make the mistake of thinking that this is a Fully General Counterargument against any claim about the meaning of a particular signaling pattern in a particular context at a particular time. It isn't as if everything eventually becomes accepted. Language changes, but it doesn't descend into entropy: in the future, there will still be patterns that are "right" and others that are "wrong", even if these lists are different from what they are now. Not only will some things that are "wrong" now become "right" in the future, but the reverse will also happen: expressions that are "right" now will become "wrong" later.

From what I understand, linguists actually consider "-s" the regular manner of plural formation in modern German, despite the fact that only a minority of words use it, because it is the default used for new words. (So the dispute you mention is perhaps really about how "new" the word "Pizza" is felt to be.)
5komponisto12y
I'll return the favor and express my own dismay that the parent has been voted up to +3, while wedrifid's comments haven't been voted up to +10 where they deserve to be. Your comment is sanctimony of the worst kind. Attempting to seize the "moral high ground" at the expense of someone who makes an honest expression of feeling is an all-too-familiar status strategy, and not one that earns any respect from me. Ironically, the point about the typical mind fallacy, as expressed in Yvain's original post on it, applies with full force to the parent, insofar as you have apparently failed to grasp that others could be seriously bothered by something that doesn't bother you. (I find it regrettable that I am in a hostile exchange with you, since I have found many of your writings here and on your own site interesting and valuable.)
-3gwern12y
I am being sanctimonious about your 'honest expression of feeling'? Let me quote from you again: You have gone way beyond an 'honest expression of feeling'. You have successively claimed arrogantly high linguistic abilities, badly abused important terminology, worse than any post like 'Rational toy buying'; you have directly condescended to Luke (who is a better writer than you, IMO, even if not fluent in X languages); you have claimed this tiny verbal distinction brings disrepute upon the SIAI and anything connected, called it evidence for a scam, and finish by insulting everyone involved who does not think as you do.

And I will note that despite a direct request to wedrifid for any random grammarian or language maven reference, none has been provided, despite the fact that you can find recommendations for and against any damn grammatical point (because there is no fact of the matter). So not only are you engaged in ridiculous accusations on something that is manifestly not worth arguing about, you may not even be right.
5komponisto12y
I don't understand why you are seeking to escalate a conflict that I specifically tried to de-escalate above (see last sentence of grandparent). I disagree with the above in the strongest possible terms, resent the insults and hostile tone, and take severe exception to the fallacious appeals to emotion, strawman arguments, and question-begging. Point by point:

No I have not. My original comment reflects my feelings entirely accurately. There is no posturing or exaggeration involved (for what purpose I can't even imagine). I said exactly what I thought, no more, no less. This statement of yours about my "going way beyond" is completely false on its face and must be interpreted as some kind of rhetorical way of saying that you are offended by how strongly I feel. If that was what you meant, that is what you should have said.

I do not consider the level of linguistic ability I claimed to be "arrogantly high". Just high enough for me to be worth listening to, rather than ignored like I was the last time this issue came up. That was the context of this remark about linguistic ability (of which I had omitted all mention on the previous occasion). Note that "worth listening to" is not the same as "worthy of being unconditionally obeyed". Perhaps if I had claimed the latter, that would have been "arrogant". Note also that several specifically non-arrogant disclaimers were inserted: "It's a bit embarrassing to admit this..."; "My ability in this area isn't perfect..."; "it's overrideable". Apparently you didn't notice these, despite having quoted one of them yourself.

Nonsense. You are free to disagree with my claims about whether X is Bayesian evidence of Y (I assume that is what you are referring to here), but the mere fact that you disagree with such a claim does not make the claim an abuse of terminology. An abuse of terminology would be if I used the term despite not actually meaning "X is more likely if Y is true than if Y is false"; but that is exactly what I mean...
5drethelin12y
If you don't see what's wrong with claiming that your opinion on a linguistic matter is a basis for a significant Bayesian update, especially in the style that you did, then that significantly lowers any update I would make based on your communication skills. I strongly think that "the Singularity Institute" sounds better, but you're making me sad to agree with you.
0[anonymous]12y
0komponisto12y
This is a cheap shot. (1) I have not made any claim to superior "communication skills". Those are highly complex and involve many smaller abilities. The most I did was make a claim to (a certain kind of) superior language skills in order to draw attention to an explicit argument I had given that had been ignored. (2) Compare the following: For what class of adjective do you regard this as a general template for a sound argument?
4drethelin12y
I'll let you in on a secret. IN THE STYLE YOU DID was a part of what I said, and it was an important part. Claiming to be wise enough that what you think should make other people significantly change their point of view is OBVIOUSLY arrogant. What is so hard to understand about that? Adding lines like "I'll let you in on a secret" makes you come off significantly worse. Your style of communication is dismissive of any contrary opinion, insulting, and ridiculously pompous. If you can't see this, my opinion of your language skills HAS to go down based on them being a subset of being able to understand communication. Your dislike of singularity institute is clearly based on what you think that phrasing communicates, and yet you can't seem to understand why people might dislike your own communications. The class of adjective is irrelevant. What's wrong with that claim is not whether or not it is true or useful, but how well it persuades. And a flat statement saying you should update on my beliefs, when we are specifically talking about whether to update beliefs based on how something is said, is unconvincing and annoying.

I have now edited the comment, removing what I understand to have been the most offensive passage.

9komponisto12y
Thank you for the feedback. Let me now try to reply to some of your points, in order to help you and anyone else reading better understand where I am coming from. (I don't intend these replies as rejections of the information you've offered about your own perspective.)

I was only claiming to be "wise enough" to have my point of view taken into account. Not all Bayesian updates are large updates! Now, of course, in this particular case, I did think a large update was warranted; but I didn't expect that large update to be made on the basis of my authority, I expected it to be made on the basis of my arguments.

That seems bizarre, unless you interpreted it as sarcasm. But it wasn't sarcasm: I spelled out in the next sentence that I was actually embarrassed to be making the admission! Another strange thing about the reaction to this is that I didn't actually claim my "single greatest skill" was actually all that great. I just said it was the greatest skill I had. It could perhaps be quite bad, with all the other skills simply being even worse. The only comparison was with my own other skills, not the skills of other people.

What I was saying was "if you ever listen to me on anything, listen to me on this!". This feels to me like I'm being interpreted uncharitably. My statement was highly specific and limited in scope. It was not in any sense a "flat" statement; it was fairly narrowly circumscribed.
2prase12y
A data point: doesn't seem bizarre to me. Whether I interpret it as (a specific type of) sarcasm I'm not sure. Sarcasm needn't hinge only on the contradiction between the literal and factual meaning of "secret", but also on the contradiction between a relatively familiar / seemingly friendly phrase and the general expression of disagreement.
4komponisto12y
The phrase was intended to be friendly, precisely in order to mitigate the general expression of disagreement!
1Emile12y
Data point: It didn't come off that way to me either, I found it sounded condescending. I agree that "at Singularity Institute" sounds weird, but I also know that judgement on what sounds weird or what connotations come up - including things like "I'll let you in on a secret" - vary a lot from person to person, even among people from the same language and country and background.
-2gwern12y
Irrelevant to me. A bad comment is a bad comment. Our past and future interactions do not matter to me. To the extent I comprehend our interaction, it is me commenting on and discussing your Knox materials and you silently reading whatever you read of me; even if I were selfishly concerned about future interactions, I doubt I would value it at very much - you will continue to discuss Knox or not regardless of whether you are angry with me. If you really do form an ugh-field just over this discussion, you should work on that. Bad habit to have.

You stand by everything you said, the personal attacks and absurd inferences, and feel this is perfectly honest? That this violates no LW norms of communication? That all this is perfectly acceptable? You feel that there is no problem with saying all that, because hey, you actually thought it? WE ARE NOT OPERATING ON CROCKER'S RULES. I will repeat this; we operate on a number of norms where we do not accuse, in an inflammatory way, someone of making the SIAI look like incompetent crooks simply because we 'feel honestly' this way. WE ARE NOT OPERATING ON CROCKER'S RULES. Some of us do, but not lukeprog or anyone I've noticed in these threads.

Interesting that you were ignored, you say. I wonder why you weren't ignored this time? Gee, maybe it has something to do with how you expressed it this time? But no, you were merely honestly expressing your feelings! (I guess you were being dishonest last time, since I don't see any other way to differentiate the two posts.)

'I could be mistaken, and my ability in this area isn't perfect (embarrassingly), but I think your mother is a whore.' You're offended? But I just included 3 disclaimers that you blessed as effective! Lamentably, disclaimers no longer work in English due to abuse. I believe Robin Hanson has written some interesting things on disclaimers. If you are not speaking in a logical or mathematical mode, don't expect disclaimers to be magic pixy dust which will insta...
3wedrifid12y
You are aggressively and publicly trolling a prominent member when he is not being hostile. You should not anticipate the negative consequences of that to be limited to his own perception. You seem to be willfully sabotaging your own reputation. I don't understand why. He didn't do anything of the sort. Which seems to be applicable to you, and not kompo at all. Saying that a particular behavior gives a terrible signal is not a personal attack. The following, what kompo actually said, is not a norm violation:
2[anonymous]12y
I don't know. I gained more respect for gwern after reading his comment.
5wedrifid12y
Pardon me: "... with the obvious exception of the other person who has also been heavily downvoted for abusing komponisto in the same context"
2[anonymous]12y
What's abusive about it? It seemed to me like a straightforward error, but the presentation was admittedly bad. I was tired and possibly inebriated; so it goes. Nobody lost many hedons over it. On the gripping hand, gwern doesn't even talk about komponisto's tacit conflation of karma with correctness, or that of total karma with total number of people approving. I don't even agree with gwern on the issue at hand, as I said before. I gained respect for him because it takes a great deal of nerve to write such a thing, and I think that's admirable. Or maybe my model of gwern is more accurate than yours? I don't know. EDIT: Rereading that thread, I notice drethelin did succeed in convincing komponisto of a related point. As I expected, it took more writing than I was interested in doing at the time. Props to it, as well.
0gwern12y
For the same reason people in other articles rail against the 'Rational Xing' meme - because komponisto's sort of comment is the sort of thing I do not want to see spread at all. I do not want to see people browbeating lukeprog or anyone with wild claims about their unproven opinion being 'Bayesian evidence', or all the other pathologies and dark arts in that comment which I have pointed out.

If I fail to convince people as measured by karma points, well, whatever. You win some and you lose some - for example, I was expecting my last comment attacking the Many Worlds cultism here to be downvoted, but no, it was highly upvoted. As they say about real karma, it balances out. If my reputation is damaged by this, well, whatever. Whatever can be destroyed by the truth should be, no? I think I am right here and if I do not give an 'honest expression of my feelings', I am manipulating my reputation. And if it is so flimsy a thing that a small flamewar over one of the obscurest grammatical points I have seen can damage it, then it wasn't much of a reputation at all and I shouldn't engage in sunk cost fallacy about it.

Ah, an excellent reply. To many many questions - 'no'. I see. Tu quoque! Yeah, whatever. I already dealt with this BS with the disclaimers and other stuff. By the way, komponisto has not produced the slightest shred of evidence for that ratio. Is 'making stuff up' not a norm violation on LW these days? And by the way, you haven't provided any citations for the linguistic point in contention, despite my direct unambiguous challenge several days ago. How many times will I have to ask you and komponisto about this before you finally dig up something - an Internet grammarian or anything saying you are right about how to refer to the SIAI and its myriad connexions? I think this makes 4, which alone earns you two my downvotes.
0wedrifid12y
I most certainly haven't. The "challenge" in question was a logically rude - and blatantly disingenuous - attempt to spin the context such that I am somehow obliged to provide citations or else your accusation that komponisto is "dressing up [his] linguistic idiosyncrasies in capitalized statistical drag" is somehow valid - rather than totally out of line. I am actually somewhat proud that after I wrote a response to that comment at the time you made it I discarded it rather than replying - there wasn't anything to be gained and so ignoring it was the wiser course of action. I was also pleasantly surprised that the community saw through your gambit and downvoted you to -4. In most environments that would have worked for you - people usually reward clever use of spin and power moves like that yet here it backfired.
3gwern12y
If his preference is only his preference, why do we care? We should do nothing to cater to one person's linguistic whims. If we care because his preference may be shared by the LW community, 10 or 15 upvotes are not enough to indicate a community-wide preference, and likewise nothing should be done. If we care because his preference is descriptively correct and common across many English-speaking communities beyond LW, then a failure to provide citations is a failure to provide proof, and likewise nothing should be done. This is another kind of comment I dislike. Karma should be discussed as little as possible. Goodhart's law, people! The more you discuss karma and even give it weight, the more you destroy any information it was conveying previously. Please don't do that; I like being able to sort by karma and get a quick ranking of what comments are good.
7[anonymous]12y
They aren't? I perceive that as a fairly large score and practically the second-highest range a comment ever gets, short of the >40 karma of a particularly clever pun or Yvain comment. (That doesn't justify catering to the whim, but I'd take it seriously at least.)
-5gwern12y
0[anonymous]12y
They don't? I perceive that as a fairly large score and practically the second-highest range a comment ever gets, short of the >40 karma of a particularly clever pun or Yvain comment. (That doesn't justify catering to the whim, but I'd take it seriously at least.)
-14[anonymous]12y
2Jonathan_Graehl11y
I also mildly agree with using a determiner for names of organizations that end in "Company", "Institute", "Organization", etc., and also don't mind treating the acronym version as you would without knowing the expansion. I don't think it's a full bit of scam-signal, though. Some weak (top prescriptivist result on Google) evidence: http://writing.umn.edu/sws/assets/pdf/quicktips/articles_proper.pdf (although it contradicts komponisto and me in advising that determiner choice should be the same as for the expanded version, while simultaneously advising "an SDMI" and not "a SDMI", presumably because read aloud, "an S" is "an ess").
1jimrandomh12y
Actually, I think this is a linguistic corner case in whether you ought to use the word "the", and some speakers/dialects will fall on either side. Consider:

She works at the institute.
* She works at institute.
She works at SingInst.
* She works at the SingInst.
? She works at the Singularity Institute.
? She works at Singularity Institute.

(* denotes a sentence that is incorrect to all speakers and ? denotes a sentence that is incorrect to some speakers but not all.)

If Singularity Institute parses as a modified noun, then it should have an article. If it parses as a name, then it shouldn't. You can force it to be a name by either compressing it into something that isn't a regular word (SingInst), or by adding something that's incompatible with regular words. Compare:

He will attend the Singularity Summit.
? He will attend Singularity Summit.
He will attend Singularity Summit 2012.
* He will attend the Singularity Summit 2012.

And that's the entire fact of the matter. From a linguistics perspective, whether a sentence is grammatically correct or incorrect depends solely on the intuition of native speakers; and if native speakers disagree, then it must be a dialect difference. Arguing what is "correct" in a speaker-independent sense is meaningless and unproductive.
-4shokwave12y
The recent attention on this discussion compels me to point out that the one absolutely does not follow from the other at all. Like, "time to question whether you are intimately familiar with Bayes' Theorem". I assumed you were, because you spoke of evidence likelihoods and Bayesian evidence in favour of propositions: but now I fear those are just locally high-status words you were using, because when you take a low prior and update on 2:1 evidence you are left with a low prior. And if you have a low prior for it being a scam, you don't embellish on it being a scam! I am reminded of the double illusion of transparency. I assumed when people talked about Bayesian evidence they had done calculations.
6wedrifid12y
You seem to be confused. It isn't supposed to follow - it is meant as a contrast! Kompo estimated 1 bit of evidence of crackpotness is embedded in prominent misuse of language. He then reaffirms that despite this he isn't saying that SingInst is a crackpot institute... that is what declaring a one bit update on a very low prior means, and there is no evidence suggesting that kompo intended anything else. He is making a general gesture of respect to the institute so it is clear that he isn't using this issue as an excuse to insult the institute itself. He knows this, has used Bayesian reasoning correctly in the past in his posts, and has not made a mistake here.
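The arithmetic being debated here can be made concrete with a minimal sketch of Bayes' rule in odds form. The prior probability below (0.001) is an illustrative assumption, not a number anyone in the thread stated; the 2:1 likelihood ratio (one bit of evidence) is komponisto's estimate:

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    """Convert odds back to a probability."""
    return odds / (1 + odds)

prior_prob = 0.001  # assumed low prior that the hypothesis is true (illustrative)
prior_odds = prior_prob / (1 - prior_prob)

# Update on 2:1 evidence, i.e. one bit in favor of the hypothesis.
posterior_odds = update_odds(prior_odds, 2.0)
posterior_prob = odds_to_prob(posterior_odds)

# The posterior roughly doubles but remains small (about 0.002),
# which is wedrifid's point: a one-bit update on a very low prior
# still leaves a very low probability.
```

This illustrates why declaring "2:1 evidence" is compatible with still assigning the hypothesis a very low probability overall.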
-22lukeprog12y

Luke: I appreciate your transparency and clear communication regarding SingInst.

The main reason that I remain reluctant to donate to SingInst is that I find your answer (and the answers of other SingInst affiliates who I've talked with) to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are way too vague for it to be possible for even the best researchers to make progress on them.

My general impression is that the SingInst staff have insufficient exposure to technical research to understand how hard it is to answer questions posed at such a level of generality. I'm largely in agreement with Vladimir M's comments on this thread.

Now, it may well be possible to further subdivide and sharpen the subproblems at hand to the point where they're well defined enough to answer, but the fact that you seem unaware of how crucial this is is enough to make me seriously doubt SingInst's ability to make progress on these problems.

I'm glad to see that you place high priority on talking to good researchers, but I think that the main benefit that will derive from doing so (aside from increasing awareness of AI risk) will be to shift SingInst staff member's beliefs in the direction of the Friendly AI problem being intractable.

I find your answer... to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are way too vague for it to be possible for even the best researchers to make progress on them.

No doubt, a one-paragraph list of sub-problems written in English is "unsatisfactory." That's why we would "really like to write up explanations of these problems in all their technical detail."

But it's not true that the problems are too vague to make progress on them. For example, with regard to the sub-problem of designing an agent architecture capable of having preferences over the external world, recent papers by (SI research associate) Daniel Dewey, Orseau & Ring, and Hibbard each constitute progress.

My general impression is that the SingInst staff have insufficient exposure to technical research to understand how hard it is to answer questions posed at such a level of generality.

I doubt this is a problem. We are quite familiar with technical research, and we know how hard it is for, in my usual example of what needs to be done to solve many of the FAI sub-problems, "Claude Shannon to ju... (read more)

Vladimir_Nesov (3 points, 12y)
Believing a problem intractable isn't a step towards solving the problem. It might be correct to downgrade your confidence in the problem being solvable, but that isn't in itself a useful thing if the goal remains motivated. It mostly serves as an indication of epistemic rationality, if indeed the problem is less tractable than believed, or perhaps it could be a useful strategic consideration. Noticing that the current approach is worse than an alternative (i.e. open problems are harder to communicate than expected, but what's the better alternative that makes it possible to use this piece of better understanding?), or noticing a particular error in present beliefs, is much more useful.
multifoliaterose (2 points, 12y)
I agree, but it may be appropriate to be more modest in aim (e.g. by pushing for neuromorphic AI with some built-in safety precautions even if achieving this outcome is much less valuable than creating a Friendly AI would be).
Vladimir_Nesov (4 points, 12y)
I believe it won't be "less valuable", but instead would directly cause existential catastrophe, if successful. Feasibility of solving FAI doesn't enter into this judgment.
multifoliaterose (0 points, 12y)
I meant in expected value. As Anna mentioned in one of her Google AGI talks, there's the possibility of an AGI being willing to trade with humans to avoid a small probability of being destroyed by humans (though I concede that it's not at all clear how one would create an enforceable agreement). Also, a neuromorphic AI could be not so far from a WBE. Do you think that whole brain emulation would directly cause existential catastrophe?
Vladimir_Nesov (9 points, 12y)
Huh? I didn't mean opportunity cost, but simply that successful neuromorphic AI destroys the world. Staging a global catastrophe does have lower expected value than protecting from global catastrophe (with whatever probabilities), but also lower expected value than watching TV. Indirectly, but with influence that compresses expected time-to-catastrophe after the tech starts working from decades-centuries to years (decades if WBE tech comes early and only slow or few uploads can be supported initially). It's not all lost at that point, since WBEs could do some FAI research, and would be in a better position to actually implement a FAI and think longer about it, but ease of producing an UFAI would go way up (directly, by physically faster research of AGI, or by experimenting with variations on human brains or optimization processes built out of WBEs). The main thing that distinguishes WBEs is that they are still initially human, still have same values. All other tech breaks values, and giving it power makes humane values lose the world.
multifoliaterose (2 points, 12y)
I was saying that it could be that with more information we would find that 0 < EU(Friendly AI research) < EU(Pushing for relatively safe neuromorphic AI) < EU(Successful construction of a Friendly AI), even if there's a high chance that relatively safe neuromorphic AI would cause global catastrophe and carry no positive benefits. This could be the case if Friendly AI research is sufficiently hard. Given the current uncertainty about the difficulty of Friendly AI research, I think one would have to be extremely confident that relatively safe neuromorphic AI would cause global catastrophe to rule this possibility out. Agree with this. I think that I'd rather have an uploaded crow brain have its computational power and memory substantially increased and then go FOOM than have an arbitrary powerful optimization process; just because a neuromorphic AI wouldn't have values that are precisely human doesn't mean it would be totally devoid of value from our point of view.
Vladimir_Nesov (4 points, 12y)
I expect it would; even a human whose brain was meddled with to make it more intelligent is probably a very bad idea, unless this modified human builds a modified-human-Friendly-AI (in which case some value drift would probably be worth protection from existential risk) or, even better, a useful FAI theory elicited Oracle AI-style. The crucial question here is the character of FOOMing, how much of initial value is retained.

Another change is that our President, Michael Vassar, is launching a personalized medicine company that we’re all pretty excited about.

I only read about that now. The president of the Singularity Institute believes that he should rather spend his time on personalized medicine?

I don't think it likely that Vassar strictly prefers medicine to the singularity. Much more likely he can do almost all of the work he does for SingInst when he's with the other company, the work he can't do can be done by someone else just as well (or better, or that work isn't so important), and the extra benefits he can bring outweigh the negatives of reducing committed time.

If he does genuinely think medicine is more important, that's a failing of Michael Vassar, not of SingInst.

(And a success on the part of SingInst in letting him do that, instead of demanding commitment.)

So, I disagree with your connotations.

The company could generate profit to help fund SingInst and give evidence that the rationality techniques that Vassar, etc. use work in a context with real world feedback. This in turn could give evidence of them being useful in the context of x-risk reduction where empirical feedback is not available.

curiousepic (3 points, 12y)
Does anyone know if this is the intention?
Vladimir_Nesov (6 points, 12y)
(I believe it's the org that announced the prize recently discussed on LW.)
timtyler (0 points, 12y)
It actually looks like 4 SingInst folk are involved. Networking.

A notable (omitted?) reason to publish is peer review. External peer review might be too costly for most items, as Luke mentioned, but perhaps creating an internal peer review network between SIAI and FHI and some other people might be a useful compromise.

lukeprog (1 point, 12y)
Yes, we do this. This is one benefit of the research associates program, for example.
Dr_Manhattan (6 points, 12y)
Make this explicit - the aim is not only to produce high quality output but also to signal that this is high quality output. Mark papers as "reviewed by X" or something. Curious if you guys found anonymous reviews useful.

I think most people would agree that if a scientist happened to create a synthetic virus that was airborne and could kill hundreds of millions of people if released into the wild, we wouldn't want the instructions for creating that synthetic virus to be published in the open for terrorist groups or hawkish governments to use.

Some say this has already happened. (I am somewhat cheered that the general reaction was "WHAT THE HELL, HERO?")

I dig the 3 day mustache. +1

Here's a discussion of journal publishing versus preprints on John Baez's Google+. (Started with dodgy publishers, but read the comments.)

He is (and I am) surprised that more scientists don't use arXiv or something arXiv-like, whereas it's pretty much the standard way to quickly stake out credit in physics.

I wonder if there's a place for particularly rigorous SI papers on arXiv or somewhere similar.

I see some skeptics of the singularity, and I analyse their arguments, but there is something I cannot deny: lukeprog (and others) are really trying to solve FAI. Even if in the near future we begin to encounter evidence in favor of another risk, the comprehension of fragility leads us to modify our priorities.

And, Eliezer’s choice to work on rationality has paid off. The Sequences, and the Less Wrong community that grew out of them, have been successful.

While 38.5% of all people that know about Less Wrong have read at least 75% of the Sequences only 16.5% think that unfriendly AI is the most worrisome existential risk. How do you know that those 16.5% wouldn't believe you anyway, even without the work on rationality, e.g. by writing science fiction?

multifoliaterose (7 points, 12y)
One doesn't need to know that hundreds of people have been influenced to know that Eliezer's writings have had x-risk reduction value; if he's succeeded in getting a handful of people seriously interested in x-risk reduction relative to the counterfactual his work is of high value. Based on my conversations with those who have been so influenced, this last point seems plausible to me. But I agree that the importance of the sequences for x-risk reduction has been overplayed.
JStewart (6 points, 12y)
As one of the 83.5%, I wish to point out that you're misinterpreting the results of the poll. The question was: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" This is not the same as "unfriendly AI is the most worrisome existential risk". I think that unfriendly AI is the most likely existential risk to wipe out humanity. But I think that an AI singularity is likely farther off than 2100. I voted for an engineered pandemic, because that and nuclear war were the only two risks I thought decently likely to occur before 2100, though a >90% wipeout of humanity is still quite unlikely. edit: I should note that I have read the sequences and it is because of Eliezer's writing that I think unfriendly AI is the most likely way for humanity to end.

HD Video link. (I can't get embedding on Less Wrong to work.)

Use the old embed code instead.

Vladimir_Nesov (3 points, 12y)
Fixed.
lukeprog (3 points, 12y)
Thanks!
[anonymous] (0 points, 12y)
Which code?

One of the reasons given against peer review is that it takes a long time for articles to be published after acceptance. Is it not possible to make them available on your own website before they appear in the journal? (I really have barely any idea how these things work; but I know that in some fields you can do this.)

I think most people would agree that if a scientist happened to create a synthetic virus that was airborne and could kill hundreds of millions of people if released into the wild, we wouldn't want the instructions for creating that synthetic virus to be published in the open for terrorist groups or hawkish governments to use. And for the same reasons, we wouldn't want a Friendly AI textbook to explain how to build highly dangerous AI systems. But excepting that, I would love to see a rigorously technical textbook on friendliness theory, and I agree that f

... (read more)
lukeprog (7 points, 12y)
Friendly AI theory isn't just about the problem of friendliness content, but also about the kind of AI architecture that is capable of using friendliness content. But many kinds of progress on that kind of AI architecture will be progress toward AGI that can take arbitrary goals, almost all of which would be bad for humanity.

But right now, my bet on how we’d end up spending that money is that we would personally argue for our mission to each of the world’s top mathematicians, AI researchers, physicists, and formal philosophers.

Is it known why they currently aren't working on FAI?

First thoughts:

1) Do they judge that they are having a bigger impact on the world doing what they are currently doing?

1a) Because they think it's more important?

1b) Because they think they have a comparative advantage in their current field, and that this outweighs the fact that FAI is more importan... (read more)

Well, my prior for someone on the internet who's asking for money being a scam is no less than 99% (and I avoid Pascal's mugging by not taking strings from such sources as proper hypotheses), and I think that is a very common prior, so there had better be good evidence that it isn't a scam: a panel of accomplished scientists and engineers, working to save the world, etc. Think something on the scale of the IPCC, rather than some weak evidence that it is a scam, and something even less convincing than e.g. Steorn's perpetual motion device.

Scamming works best by sel... (read more)
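The "99% prior" claim above can be made concrete with Bayes' rule in odds form: a 99% prior is 99:1 odds, so it takes evidence with a likelihood ratio of 99:1 just to reach even odds. A minimal sketch; the likelihood ratios are illustrative assumptions, not estimates about any real organization:

```python
# Odds-form Bayes update for the "prior of scam" argument above.
# likelihood_ratio = P(evidence | not scam) / P(evidence | scam).
def posterior_scam(prior_scam, likelihood_ratio):
    """Posterior P(scam) after seeing evidence that favors 'not scam'."""
    prior_odds = prior_scam / (1 - prior_scam)      # 0.99 -> 99:1 odds of scam
    posterior_odds = prior_odds / likelihood_ratio  # evidence shrinks the odds
    return posterior_odds / (1 + posterior_odds)

print(posterior_scam(0.99, 1))     # no evidence: posterior stays at 0.99
print(posterior_scam(0.99, 99))    # 99:1 evidence: posterior drops to 0.5
print(posterior_scam(0.99, 9900))  # very strong evidence: posterior ~0.01
```

This is why the commenter demands strong evidence: anything weaker than roughly 99:1 leaves the "scam" hypothesis more likely than not.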

How can we generalize the theory of machine induction - called Solomonoff induction - so that it can use higher-order logics and reason correctly about observation selection effects?

I don't really understand. What's with the higher-order logic? Solomonoff induction already uses a Turing-complete reference machine. There's nothing "higher" than that.

I don't think observation-selection effects need particularly special treatment with a dedicated reference machine. The conventional approach would be to simply let the agent see the world. That... (read more)
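The Solomonoff-induction exchange above can be illustrated with a toy, computable sketch (real Solomonoff induction is uncomputable): weight each hypothesis by 2^-(description length), keep those consistent with the observed bits, and predict by weighted vote. The hypothesis set and description lengths here are illustrative assumptions:

```python
# Toy Solomonoff-style induction over a tiny, hand-picked hypothesis class.
# Each hypothesis is (name, description length in bits, predictor i -> bit).
hypotheses = [
    ("all zeros",    2, lambda i: 0),
    ("all ones",     2, lambda i: 1),
    ("alternating",  3, lambda i: i % 2),
    ("ones after 2", 5, lambda i: 0 if i < 2 else 1),
]

def predict_next(observed):
    """Posterior-weighted probability that the next bit is 1."""
    n = len(observed)
    weight_one = total = 0.0
    for _, length, f in hypotheses:
        if all(f(i) == b for i, b in enumerate(observed)):  # consistent so far?
            w = 2.0 ** -length        # shorter programs get more prior weight
            total += w
            weight_one += w * f(n)
    return weight_one / total if total else 0.5  # no survivor: fall back to 0.5

print(predict_next([0, 1, 0, 1]))  # prints 0.0: only "alternating" fits, and it predicts 0
```

The "reference machine" point in the comment above corresponds to the choice of hypothesis language and description lengths; in the full theory the hypotheses are all programs for a universal Turing machine.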

endoself (3 points, 12y)
Have you read this thread?
timtyler (1 point, 12y)
So: I am not too worried about the universe being uncomputable. On the race to superintelligence, there are more pressing things to worry about than such possibilities - and those interested in winning that race should prioritise their efforts - with things like this being at the bottom of the heap - otherwise they are more likely to fail. I don't think that Solomonoff induction has a problem in this area - but it is a plausible explanation of what the reference to "higher-order logic" referred to.

I feel like this should have been a top level post. Unless you specifically avoid using that for SingInst business.

[anonymous] (1 point, 12y)
I think that's the reason. Remember there was a period where SIAI and the whole topic of (u)FAI were temporarily tabooed for the sake of the health of the rationalist community.
hankx7787 (8 points, 12y)
evidently less wrong lacks a sense of humor :P
CallMeSIR (-12 points, 12y)