Nine months ago, LessWrong Netherlands sat down to brainstorm about Actually Trying to make a difference in AI Safety.

Knowing AIS is talent-constrained, we felt that academia wasn’t fit to give the field the attention it deserves.

So we decided to take matters into our own hands, and make the road to AI safety excellence as easy as possible.

Fast forward 9 months, and project RAISE has finished its first pilot lesson.

We are of course grateful for all the volunteers that extended a helping hand. But to produce lesson material of the highest quality, we must professionalize.

That is why RAISE is seeking funds to establish itself as the next AIS charity.

Our vision

As quoted from here:

Within the LW community there are plenty of talented people that bear a sense of urgency about AI. They are willing to switch careers to doing research, but they are unable to get there. This is understandable: the path up to research-level understanding is lonely, arduous, long, and uncertain. It is like a pilgrimage.

One has to study concepts from the papers in which they first appeared. This is not easy. Such papers are undistilled. Unless one is lucky, there is no one to provide guidance and answer questions. Then should one come out on top, there is no guarantee that the quality of their work will be sufficient for a paycheck or a useful contribution.

The field of AI safety is in an innovator phase. Innovators are highly risk-tolerant and have a large amount of agency, which allows them to survive an environment with little guidance or supporting infrastructure. Let community organisers not fall for the typical mind fallacy, expecting risk-averse people to move into AI safety all by themselves.

Unless one is particularly risk-tolerant or has a perfect safety net, they will not be able to fully take the plunge.

Plenty of measures can be taken to make getting into AI safety more like an “It’s a small world”-ride:

- Let there be a tested path with signposts along the way to make progress clear and measurable.

- Let there be social reinforcement so that we are not hindered but helped by our instinct for conformity.

- Let there be high-quality explanations of the material to speed up and ease the learning process, so that it is cheap.

What we have done so far

The study group

The bulk of our work has been to take a subfield of AIS (corrigibility), gather all the papers we know of, and turn them into a set of scripts. We have devised an elaborate process for this involving summaries and mind maps. Another strategy would have been to simply copy the structure of existing papers, as in this early iteration, but we consider it a feature that ideas are individually recompiled and explained. Crafting a good explanation is a creative process: it adds "shortcut" inferences. That is the approach we took.

For the most part, it’s been a success. Since its inception the group has created 9 summaries, 4 mind maps, 12 lecture scripts and 4 paper presentation recordings. It's already a rich store of material to draw from. The scripts are now being converted to lectures.

A software platform

I met a local EA who runs a platform for teaching statistics, and it's a close match to our needs. We may use it for free, and the devs are responsive to our feature requests. It will include a white-label option, and the domain name will be changed to something more neutral.

Filming

We enlisted Robert Miles (whom you might know from his YouTube channel) to shoot our lectures, and I visited him in London to help build a light board. The light board was a welcome solution to the setup problem, into which we had put considerable thought.

Prototype lesson

These developments culminated in our most tangible output: a prototype lesson. It shows a first glimpse of what the course will eventually look like.

What funding will change

Reduce turnover

While it has proved possible to run entirely on volunteers, it has also been a hurdle. Without the strong accountability mechanism of a contract, we have been suffering from high turnover. This has created some problems:

- The continuous organisational effort required to onboard new volunteers, which distracts the team from other work

- An inability to plan far ahead, since we did not know what study group attendance would be over a given period

- Quality control being somewhat intractable because it takes time to assess the general quality of someone's work

One of the main allocations of the capital we hope to receive will be hiring a content developer. They will oversee the course content development process with a team of volunteers who have shown (or promise) high dedication. Given the increased time spent and reduced overhead, we expect this setup to gain a lot more traction. See the pamphlet here.

(Strongly) increase course quality

With the extra attention that comes from hiring someone, and with turnover reduced by building a core of loyal volunteers, we can do proper quality control. We will also benefit more from learning effects for the same reasons: a core team that spends a lot of focused time crafting good explanations might actually get uniquely good at it (that is, better than anyone who hasn't done dedicated practice).

(Strongly) increase course creation speed

Right now, the amount of work that goes into creating content is about 4 hours per volunteer per week. As we have learned, this is enough to compile a prototype lesson over the course of roughly 3 months. It is reasonable to assume that this time will go down with further iterations (there is less trailblazing left to do), and the figure is somewhat misleading because roughly 60% of the work for the next 6 or 7 lessons has already been done. Still, the speed is not of the order of magnitude that we would prefer. At this rate, we will be done with corrigibility in about 6 months, and with the whole of AI safety in 5+ years. This doesn't seem acceptable. The speed we would prefer, provided it doesn't hurt quality, is about one unit (like corrigibility) per (at most) 3 months, and the whole course in (at most) 2 years.
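As a rough sanity check on these timelines, here is a minimal back-of-the-envelope sketch. The assumed speed-up factor and the assumed number of units in the full course are illustrative guesses, not settled figures.

```python
# Rough projection of course creation speed at the current volunteer pace.
# Values marked "assumed" are illustrative guesses, not official RAISE figures.

months_per_lesson = 3          # one prototype lesson took roughly 3 months
remaining_lessons = 7          # "about 6 or 7 more lessons" left for corrigibility
work_already_done = 0.6        # roughly 60% of that work is already done
speedup_from_experience = 1.5  # assumed: less trailblazing on later lessons

months_to_finish_corrigibility = (
    remaining_lessons * (1 - work_already_done) * months_per_lesson
    / speedup_from_experience
)
print(f"Corrigibility finished in ~{months_to_finish_corrigibility:.0f} more months")  # ~6

# Assumed: the whole of AI safety spans roughly 10 units of corrigibility's size,
# each taking about 6 months at the current pace.
units_in_course = 10
years_for_whole_course = units_in_course * 6 / 12
print(f"Whole course at the current pace: ~{years_for_whole_course:.0f}+ years")  # ~5+
```

With a paid content developer and a stable core team, the hope is to push the per-unit figure down from roughly 6 months to at most 3.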

Allow us to broaden our set of strategies

The ultimate vision of RAISE isn't a course, it's a campus. Our goal is to facilitate the training of new AIS researchers by whatever means necessary.

But we can only do as much as our organisational bandwidth allows, and right now it's purely taken up by the creation of a course.

Examples of such strategies are: a central online hub for study groups, creating a licensing center/talent agency that specializes in measuring AIS research talent, and partnering with the EA hotel to provide a free living space for high-performing students.

Projected spending

Of course, all of this is subject to change.

Our first target is $30,000 to cover expenses for the coming year. For that amount, we expect to:

  • Hire a content developer (2/5 fte)
  • Hire a software developer (1/5 fte)
  • Compensate Robert for his lectures
  • Compensate one of the management team
  • Allocate $2400 for misc spending

Our second target is another $30,000, from which we expect to:

  • Compensate the management team (adding up to 3/5 fte)
  • Hire an animator/editor (1/5 fte)
  • Extend the contract of our content developer (1/5 fte)
  • Extend the contract of our software developer (1/5 fte)
  • Allocate another $1200 for misc spending

We aren’t too sure what amount of funding to expect. Should our estimates be too low, returns will not start diminishing until well beyond $200,000.
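To make the first target more concrete, here is a minimal sketch of how $30,000 could break down across the listed items. The assumed yearly rate for one full-time equivalent and the two lump sums are illustrative guesses; only the $2400 misc figure is taken from the list above.

```python
# Hypothetical breakdown of the first $30,000 funding target.
# The FTE yearly rate and the lump sums are illustrative assumptions,
# not figures published by RAISE; only the misc amount is from the post.

FTE_YEARLY_RATE = 36_000  # assumed cost of 1.0 FTE for one year, in USD

first_target = {
    "content developer (2/5 fte)":  0.4 * FTE_YEARLY_RATE,  # 14,400
    "software developer (1/5 fte)": 0.2 * FTE_YEARLY_RATE,  #  7,200
    "Robert's lectures":            3_000,                  # assumed lump sum
    "management compensation":      3_000,                  # assumed lump sum
    "misc spending":                2_400,                  # stated above
}

total = sum(first_target.values())
print(f"Total: ${total:,.0f}")  # Total: $30,000
```

The point is not the exact split but that fractional-FTE hires at modest rates fit comfortably within the target.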

Call to action

If you believe in us:

I would like to end with a pep talk:

What we are doing here isn’t hard. Courses at universities are often created on the fly by one person in a matter of weeks. They get away with it.

There is little risk. There is a lot of opportunity. If we do this well, we might just grow the number of AIS researchers by a significant fraction.

If that’s not impact, I don’t know what is.

Comments

After looking into the prototype course, I updated upwards on this project, as I think it is a decent introduction to Dylan's Off-Switch Game paper. Could I ask what other stuff RAISE wants to cover in the course? What other work on corrigibility are you planning to cover? (For example Dylan's other work, MIRI's work on this subject and Smitha Milli's paper?)

Could you also write more about who your course is targeting? Why does RAISE believe that the best way to fix the talent gap in AI safety is to help EAs change careers via introductory AI Safety material, instead of, say, making it easier for CS PhD students to do research on AI Safety-relevant topics? Why do we need to build a campus, instead of co-opting the existing education mechanisms of academia?

Finally, could you link some of the mind maps and summaries RAISE has created?

After looking into the prototype course, I updated upwards on this project, as I think it is a decent introduction to Dylan's Off-Switch Game paper. Could I ask what other stuff RAISE wants to cover in the course? What other work on corrigibility are you planning to cover? (For example Dylan's other work, MIRI's work on this subject and Smitha Milli's paper?)

Thank you!

Expecting to know better after getting our hands dirty, we decided to take it one subfield at a time. We haven't decided which subfield to cover beyond Corrigibility, though a natural choice seems to be Value Learning.

We have identified 9 papers within/adjacent to Corrigibility:

  • 1) Russell & LaVictoire - "Corrigibility in AI systems" (2015)
    https://intelligence.org/files/CorrigibilityAISystems.pdf
  • 2) Omohundro - "The basic AI drives" (2008)
    https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
  • 3) Soares et al. - "Corrigibility" (2015)
    https://intelligence.org/files/Corrigibility.pdf
  • 4) Hadfield-Menell et al. - "The Off-Switch Game" (2016)
    https://arxiv.org/pdf/1611.08219
  • 5) Orseau & Armstrong - "Safely Interruptible Agents" (2016)
    https://intelligence.org/files/Interruptibility.pdf
  • 6) Milli et al. - "Should robots be obedient?" (2017)
    https://arxiv.org/pdf/1705.09990
  • 7) Carey - "Incorrigibility in the CIRL framework" (2017)
    https://arxiv.org/pdf/1709.06275
  • 8) El Mhamdi et al. - "Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning" (2017)
    https://arxiv.org/pdf/1704.02882
  • 9) Armstrong - "‘Indifference’ methods for managing agent rewards" (2018)
    https://arxiv.org/pdf/1712.06365

Could you also write more about who your course is targeting? Why does RAISE believe that the best way to fix the talent gap in AI safety is to help EAs change careers via introductory AI Safety material, instead of, say, making it easier for CS PhD students to do research on AI Safety-relevant topics? Why do we need to build a campus, instead of co-opting the existing education mechanisms of academia?

To do our views justice requires a writeup of its own, but I can give a stub. This doesn't necessarily represent the official view of RAISE, because that view doesn't exist yet, but let me just try to grasp at my intuition here:

First of all, I think both approaches are valid. There are people entrenched in academia who should be given the means to do good work. But there are also people outside of academia that could be given the means to do even better work.

Here's just a first stab at ways in which academia is inadequate:

  • Academia lacks (various kinds of) customization. For example, each bachelor's degree takes 3 years. It's self-evident that not every field has an equal amount of knowledge to teach. Students don't get a lot of personal attention. They're exposed to all kinds of petty punishment schemes and I have found that this demeaning treatment can be strongly off-putting for some people.
  • Student culture allows for strongly negligent work habits. I only have data from the Netherlands, but I would guess that students put in about 15 hours per week on average. That's great if you want people to develop and self-actualize, but we would rather they study AIS instead. In my opinion, a standard of 60+ hours per week would be more fitting given the stakes we're dealing with.
  • There is barely any room for good distillation. I'm sure academics would like to take the time to craft better explanations, but they have to sacrifice it to keep up with the rat race of flashy publications. Most of them will just stick with the standard textbooks, and copy whatever slides they have lying around.
  • Partly because of all this, it takes at least 5 years from high school graduation to finally start doing useful work. My intuitive guess is that it shouldn't take much more than 2 years if people Actually Tried. Especially for those that are highly talented.

But hey, these are problems anyone could be having, right? Now the real problem isn't any of these specific bugs. The real problem is that academia is an old bureaucratic institution with all kinds of entrenched interests, and patching it is hard. Even if you jump through all the hoops and do the politics and convince some people, you will hardly gain any traction. Baseline isn't so bad, but we could do so much better.

The real problem that I have with academia isn't necessarily its current form. It's the amount of optimization power you need to upgrade it.

Finally, could you link some of the mind maps and summaries RAISE has created?

Sure! Here's the work we've done for Corrigibility. I haven't read all of it, so I do not necessarily endorse the quality of every piece. If you'd like to look at the script we used for the first lesson, go to "script drafts" and have a look at script F.

Really excited about this! Donation on the way

This is excellent, and I am happy that you are working on it.

We do have a rule of not having organizational announcements on the frontpage (for example, we moved both the MIRI and CFAR fundraising posts to the personal section) so I moved this to your personal blog.

Is this rule still in place?

Why do you have this rule? It seems to me like banning organizational announcements will make it much harder to get new initiatives off the ground.

I know that I wrote about this at length somewhere else, but I can't currently find it, so here is a super short summary: 

  • One of the primary goals of the frontpage guidelines is to avoid making the site feel welcoming only to people who share all the social context of the rationality community. I want people to feel welcome if they are good at rationality, and don't want them to feel like their membership is conditional on having lots and lots of social connections. Organizational announcements often assume a massive amount of context and create an in-groupy feeling that I think distracts from the goals of LessWrong.
  • I think it's quite bad if someone shows up to LessWrong and the first thing they see is someone asking them to donate money to them. I think it creates a much more epistemically adversarial environment than is healthy, and generally will make people (justifiably) think that all the other content on the site is just there to sell you these organizations that want money from you. There are few things a website can do to make me trust it less than to ask me for money in the first few minutes of showing up.
  • Eliezer (I think) went through the same line of thinking with the original sequences, which is why he avoided talking about any organizational announcements or any requests for donations during the whole period in which he wrote the sequences. I think this was quite important and good for the site.

Organizational announcements can still get a lot of traction on people's personal blog. Most people who have enough context to be interested in that kind of announcement have the personal blog filter for the frontpage turned off, so I think this doesn't actually hurt organizations very much (and overall I think it creates an environment of higher trust in which people are much less likely to feel spammed or inundated by requests for donations and organizational announcements, which I think is overall better for communicating about projects and getting early-stage projects off the ground).

Edited to add: 

  • Another major component for not having organizational announcements is our attempt at encouraging timeless content on LessWrong, and generally trying to disentangle LessWrong from news and other time-driven content. The vast majority of organizational announcements have almost no value when read a year or two after they were posted, and one of the goals of the frontpage/personal distinction is to create a space where content tries to be timeless and long-lasting.

That makes sense. Thanks for taking the time to answer.

It seems to me like banning organizational announcements will make it much harder to get new initiatives off the ground.

Incidentally, anyone in this space trying to get a new initiative off the ground may want to apply to SurvivalAndFlourishing.org's first funding round. (We'll be providing funding, as well as some administrative support. Applications due by October 1st.)

[Edited to clarify that we won't provide full fiscal sponsorship. We will provide some administrative support via SEE (which is SAF's fiscal sponsor). Projects seeking long-term fiscal sponsorship may want to apply directly to SEE (perhaps after bootstrapping via SAF). See more details on our announcement page.]

Epic. I remember talking to some people about this at EA Global last year, and I'm excited to see that you've continued working on it and are ready to double down.

I've donated & shared this article on FB!