Call for new SIAI Visiting Fellows, on a rolling basis

Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF bay area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.

Now, the new and better version has arrived.  We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long term life paths.  Working with this crowd transformed my world; it felt like I was learning to think.  I wouldn’t be surprised if it can transform yours.

A representative sample of current projects:

  • Research and writing on decision theory, anthropic inference, and other non-dangerous aspects of the foundations of AI;
  • The Peter Platzer Popular Book Planning Project;
  • Editing and publicizing theuncertainfuture.com;
  • Improving the LW wiki, and/or writing good LW posts;
  • Getting good popular writing and videos onto the web, of the sort that improves understanding of AI risks among key groups;
  • Writing academic conference/journal papers to seed academic literatures on questions around AI risks (e.g., takeoff speed, economics of AI software engineering, genie problems, what kinds of goal systems can easily arise and what portion of such goal systems would be foreign to human values; theoretical compsci knowledge would be helpful for many of these questions).

Interested, but not sure whether to apply?

Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”.  That kind of timidity destroys the world, by failing to save it.  So if that’s your situation, send us an email.  Let us be the one to say “no”.  Glancing at an extra application is cheap, and losing out on a capable applicant is expensive.

And if you’re seriously interested in risk reduction but at a later time, or in another capacity -- send us an email anyway.  Coordinated groups accomplish more than uncoordinated groups; and if you care about risk reduction, we want to know.

What we’re looking for

At bottom, we’re looking for anyone who:

  • Is capable (strong ability to get things done);
  • Seriously aspires to rationality; and
  • Is passionate about reducing existential risk.

Bonus points for any (you don’t need them all) of the following traits:

  • Experience with management, for example in a position of responsibility in a large organization;
  • Good interpersonal and social skills;
  • Extraversion, or interest in other people, and in forming strong communities;
  • Dazzling brilliance at math or philosophy;
  • A history of successful academic paper-writing; strategic understanding of journal submission processes, grant application processes, etc.
  • Strong general knowledge of science or social science, and the ability to read rapidly and/or to quickly pick up new fields;
  • Great writing skills and/or marketing skills;
  • Organization, strong ability to keep projects going without much supervision, and the ability to get mundane stuff done in a reliable manner;
  • Skill at implementing (non-AI) software projects, such as web apps for interactive technological forecasting, rapidly and reliably;
  • Web programming skill, or website design skill;
  • Legal background;
  • A history of successfully pulling off large projects or events;
  • Unusual competence of some other sort, in some domain we need, but haven’t realized we need;
  • Cognitive diversity: any respect in which you're different from the typical LW-er, and in which you're more likely than average to notice something we're missing.

If you think this might be you, send a quick email to jasen@intelligence.org.  Include:

  1. Why you’re interested;
  2. What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume or c.v.);
  3. Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.

Our application process is fairly informal, so send us a quick email as initial inquiry and we can decide whether or not to follow up with more application components.

As to logistics: we cover room, board, and, if you need it, airfare, but no other stipend.

Looking forward to hearing from you,
Anna

ETA (as of 3/25/10):  We are still accepting applications, for summer and in general.  Also, you may wish to check out http://www.singinst.org/grants/challenge#grantproposals for a list of some current projects.

 


I'm slowly waking up to the fact that people at the Singularity Institute as well as Less Wrong are dealing with existential risk as a Real Problem, not just a theoretical idea to play with in an academic way. I've read many essays and watched many videos, but the seriousness just never really hit my brain. For some reason I had never realized that people were actually working on these problems.

I'm an 18 year old recent high school dropout, about to nab my GED. I could go to community college, or I could go along with my plan of leading a simple life working a simple job, which I would be content doing. I'm a sort of tabula rasa here: if I wanted to get into the position where I would be of use to the SIAI, what skills should I develop? Which of the 'What we're looking for' traits would be most useful in a few years? (The only thing I'm good at right now is reading very quickly and retaining large amounts of information about various fields: but I rarely understand the math, which is currently very limiting.)

Yorick, and anyone else who is serious about reducing existential risk and is not in our contact network: please email me. anna at singinst dot org. The reason you should email is that empirically, people seem to make much better decisions about what paths will reduce existential risks when in dialog with others. Improved information here can go a long way.

I'll answer anyway, for the benefit of lurkers (but Yorick, don't believe my overall advice. Email me instead, about your specific strengths and situation):

  1. Work on rationality. To help existential risk at all, you need: (a) unusual ability to weigh evidence fairly, in confusing instances and despite the presence of strong emotions; (b) the ability to take far-mode evidence seriously on an emotional and action-based level. (But (b) is only an asset after you have formed careful, robust, evidence-based conclusions. If you're as bad a thinker as 95% of the population, acting on far-mode conclusions can be dangerous, and can make your actions worse.)
  2. Learn one of: math, physics, programming, or possibly analytic philosophy, because they teach useful habits of thought. Programming is perhaps the most useful of these because it can additionally be used to make money.
  3. Learn people skills. Tutoring skills; sales skills; the ability to start and maintain positive conversations with strangers; management skills and experience; social status non-verbals (which one can learn in the pickup community, among other places); observational skills and the ability to understand and make accurate predictions about the people around you; skill at making friends; skill at building effective teams...
  4. Learn to track details, to direct your efforts well within complex projects, and to reliably get things done. Exercise regularly, too.

Note that it's also good to have some preliminary discussion here, moving on to e-mail mainly if personal details come up that one feels unwilling to share in public. If a lot of people publicly post their interest to participate, then that will encourage others to apply as well. Plus it gives people a picture of what sort of other folks they might end up working with. Also, discussing the details of the issue in public will help those who might initially be too shy to send a private e-mail, as they can just read what's been discussed before. Even if you weren't shy as such, others might raise questions you didn't happen to think of. For instance, I think Anna's four points above are good advice for a lot of people, and I'm happy that Yorick posted the comment that prompted this response and didn't just e-mail Anna directly.

(EDIT: Removed a few paragraphs as I realized I'd have to rethink their content.)

I don't feel like having this discussion in public, but Anna's invitation is framed in broad enough terms that I'll be getting in touch.

Where are the non pickup community places to learn social status non-verbals?

I've sent an email your way. Given that email has become a slightly unreliable medium, thanks to the arms race between spam and Bayesian (and other) countermeasures, I'd appreciate an acknowledgement (even if just to say "got it"), here or via email.

Thanks for the heads up. Oddly enough, it was sitting in the spam filter on my SIAI account (without making it through forwarding to my gmail account, where I was checking the spam filter). Yours was the only message caught in the SIAI spam filter, out of 19 who emailed so far in response to this post.

Did you have special reason to expect to be caught in a spam filter?

It happens every so often to email people send me, so I periodically check the spam folder on Gmail; by symmetry I assume it happens to email I send. It's more likely to occur on a first contact, too. And last, I spent a fair bit of time composing that email, getting over the diffidence you're accurately assuming.

Your handle sounds like a brand-name drug ;) e.g. Paxil

I wonder how long I can expect to wait before receiving a response. I sent my email on Wednesday, by the way.

So you want to know f(x) := P(will receive a response|have not received a response in x days) for values of x from 0 to say, 7?

I'm sorry; I still haven't responded to many of them. Somewhere in the 1-3 days range for an initial response, probably.

I suggest you reply to the parent (Anna's) comment, that will show up in her inbox.

I sent an email on January the 10th, and haven't yet got a reply. Has my email made it to you? Granted, it is over a month since this article was posted, so I understand if you are working on things other than applications at this point...

I took part of the 2009 summer program during the vacation of my day job as a software developer in Sweden. This entailed spending five weeks with the smartest and most dedicated people I have ever met, working on a wide array of projects both short- and long-term, some of which were finished by the time I left and some of which are still on-going.

My biggest worry beforehand was that I would not be anywhere near talented enough to participate and contribute in the company of SIAI employees and supporters. That seems not to have occurred, though I don't claim to have anywhere near the talent of most others involved. Some of the things I was involved with during the summer were work on the Singularity Summit website as well as continuing the Uncertain Future project for assigning probability distributions to events and having the conclusions calculated for you. I also worked on papers with Carl Shulman and Nick Tarleton, read a massive amount of papers and books, took trips to San Francisco and elsewhere, played games, discussed weird forms of decision theories and counterfactual everything, etc., etc.

My own comparative advantages seem to be having the focus to keep hacking away at projects, as well as the specialized skills that came from having a CS background and some experience (less than a year though) of working in the software industry. I'm currently writing this from the SIAI house, to which I returned about three weeks ago. This time I mainly focused on getting a job as a software developer in the Bay area (I seem to have succeeded), for the aims of earning money (some of which will go to donations) and also making it easier for me to participate in SIAI projects.

I'd say that the most important factor for people considering applying should be whether they have strong motivations and a high level of interest in the issues that SIAI involves itself with. Agreeing with specific perceived beliefs of the SIAI or people involved with it is not necessary, and the disagreements will be brought out and discussed as thoroughly as you could ever wish for. As long as the interest and motivation is there, the specific projects you want to work on should sort themselves out nicely. My own biggest regret is that I kept lurking for so long before getting in touch with the people here.

Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”.

Well, who can blame them?

Seriously, FYI (where perhaps the Y stands for "Yudkowsky's"): that document (or a similar one) really rubbed me the wrong way the first time I read it. It just smacked of "only the cool kids can play with us". I realize that's probably because I don't run into very many people who think they can easily solve FAI, whereas Eliezer runs into them constantly; but still.

It rubbed me the wrong way when, after explaining for several pages that successful FAI Programmers would have to be so good that the very best programmers on the planet may not be good enough, it added - "We will probably, but not definitely, end up working in Java".

...I don't know if that's a bad joke or a hint that the writer isn't being serious. Well, if it's a joke, it's bad and not funny. Now I'll have nightmares of the best programmers Planet Earth could field failing to write a FAI because they used Java of all things.

It rubbed me the wrong way when, after explaining for several pages that successful FAI Programmers would have to be so good that the very best programmers on the planet may not be good enough, it added - "We will probably, but not definitely, end up working in Java"

I had the same thought -- how incongruous! (Not that I'm necessarily particularly qualified to critique the choice, but it just sounded...inappropriate. Like describing a project to build a time machine and then solemnly announcing that the supplies would be purchased at Target.)

I assume, needless to say, that (at least) that part is no longer representative of Eliezer's current thinking.

I can't understand how it could ever have been part of his thinking. (Java was even worse years ago!)

Not relative to its competitors, surely. Many of them didn't exist back then.

That's true. But insofar as the requirements of the FAI project are objective, independent of PL development in the industry, they should be the main point of reference. Developing your own language is a viable alternative and was even more attractive years ago - that's what I meant to imply.

It depends on whether you want to take advantage of resources like editors, IDEs, refactoring tools, lint tools - and a pool of developers.

Unless you have a very good reason to do so, inventing your own language is a large quantity of work - and one of its main effects is to cut you off from the pool of other developers - making it harder to find other people to work on your project and restricting your choice of programming tools to ones you can roll for yourself.

Anecdotally, half the benefit of inventing your own language is cutting yourself off from the pool of other, inferior developers :-)

Remember that Eliezer's assumption is that he'd be starting with a team of super-genius developers. They wouldn't have a problem with rolling their own tools.

Well, it's not that it's impossible, it's more that it drains off energy from your project into building tools. If your project is enormous, that kind of expense might be justified. Or if you think you can make a language for your application domain which works much better than the best of the world's professional language designers.

However, in most cases, these kinds of proposals are a recipe for disaster. You spend a lot of your project resources pointlessly reinventing the wheel in terms of lint, refactoring, editing and code-generation technology - and you make it difficult for other developers to help you out. I think this sort of thing is only rather rarely a smart move.

This was written circa 2002 when Java was at least worthy of consideration compared to the other options out there.

Yup. The logic at the time went something like, "I want something that will be reasonably fast and scale to lots of multiple processors and runs in a tight sandbox and has been thoroughly debugged with enterprise-scale muscle behind it, and which above all is not C++, and in a few years (note: HAH!) when we start coding, Java will probably be it." There were lots of better-designed languages out there but they didn't have the promise of enterprise-scale muscle behind their implementation of things like parallelism.

Also at that time, I was thinking in terms of a much larger eventual codebase, and was much more desperate to use something that wasn't C++. Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.

Mostly in that era there weren't any good choices, so far as I knew then. Ben Goertzel, who was trying to scale a large AI codebase, was working in a mix of C/C++ and a custom language running on top of C/C++ (I forget which), which I think he had transitioned either out of Java or something else, because nothing else was fast enough or handled parallelism correctly. Lisp, he said at that time, would have been way too slow.

Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.

I'd rather the AI have a very low probability of overwriting its supergoal by way of a buffer overflow.

Proving no buffer overflows would be nothing next to the other formal verification you'd be doing (I hope).

I fully agree that C++ is much, much, worse than Java. The wonder is that people still use it for major new projects today. At least there are better options than Java available now (I don't know what the state of art was in 2002 that well).

If you got together an "above-genius-level" programming team, they could design and implement their own language while they were waiting for your FAI theory. Probably they would do it anyway on their own initiative. Programmers build languages all the time - a majority of today's popular languages started as a master programmer's free time hobby. (Tellingly, Java is among the few that didn't.)

A custom language built and maintained by a star team would be at least as good as any existing general-purpose one, because you would borrow design you liked and because programming language design is a relatively well explored area (incl. such things as compiler design). And you could fit the design to the FAI project's requirements: choosing a pre-existing language means finding one that happens to match your requirements.

Incidentally, all the good things about Java - including the parallelism support - are actually properties of the JVM, not of the Java the language; they're best used from other languages that compile to the JVM. If you said "we'll probably run on the JVM", that would have sounded much better than "we'll probably write in Java". Then you'll only have to contend with the CLR and LLVM fans :-)

I don't think it will mostly be a coding problem. I think there'll be some algorithms, potentially quite complicated ones, that one will wish to implement at high speed, preferably with reproducible results (even in the face of multithreading and locks and such). And there will be a problem of reflecting on that code, and having the AI prove things about that code. But mostly, I suspect that most of the human-shaped content of the AI will not be low-level code.

How's the JVM on concurrency these days? My loose impression was that it wasn't actually all that hot.

I think it's pretty fair to say that no language or runtime is that great on concurrency today. Coming up with a better way to program for many-core machines is probably the major area of research in language design today and there doesn't appear to be a consensus on the best approach yet.

I think a case could be made that the best problem a genius-level programmer could devote themselves to right now is how to effectively program for many-core architectures.
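For readers who haven't used the JVM's concurrency primitives the thread keeps referring to, here is a minimal sketch of the fork-out-work-and-collect-results pattern via `java.util.concurrent` (the class name and the chunking scheme are illustrative, and the lambda syntax is later-era Java than the 2002-vintage discussion above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: split a sum over 1..1000 into four slices and run each slice
// as a task on a fixed thread pool, then combine the partial results.
public class ParallelSum {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Long>> parts = new ArrayList<>();
        for (int chunk = 0; chunk < 4; chunk++) {
            final int start = chunk * 250 + 1;
            // Each task sums one 250-element slice; submit() returns a Future.
            parts.add(pool.submit(() -> {
                long s = 0;
                for (int i = start; i < start + 250; i++) s += i;
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // blocks until each task finishes
        pool.shutdown();
        System.out.println(total); // 1 + 2 + ... + 1000 = 500500
    }
}
```

This is exactly the kind of explicit threads-and-futures model the commenters are calling unsatisfying for many-core machines: the programmer, not the language, decides how to partition the work and where to synchronize.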

My impression is that the JVM is worse at concurrency than every other approach that's been tried so far.

Haskell and other functional programming languages have many promising ideas but aren't widely used in the industry AFAIK.

This presentation gives a good short overview of the current state of concurrency approaches.

Speaking of things that aren't Java but run on the JVM, Scala is one such (really nice) language. It's designed and implemented by one of the people behind the javac compiler, Martin Odersky. The combination of excellent support for concurrency and functional programming would make it my language of choice for anything that I would have used Java for previously, and it seems like it would be worth considering for AI programming as well.

Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.

Exactly -- which is why the sentence sounded so odd.

Well, yes, Yudkowsky-2002 is supposed to sound odd to a modern LW reader.

It's mentioned twice, so I doubt it's a joke.

That's if you want to be an FAI developer and on the Final Programming Team of the End of the World, not if you want to work for SIAI in any capacity whatsoever. If you're writing to myself, rather than Anna, then yes, mentioning e.g. the International Math Olympiad will help to get my attention. (Though I'm certain the document does need updating - I haven't looked at it myself in a long while.)

It does kinda give the impression that a) donors and b) programmers are all that SIAI has a use for, though. It mentions that if you want to help but aren't a genius, sure, you can be a donor, or you can see if you get into a limited number of slots for non-genius programmers, but that's it.

I'm also one of the people who's been discouraged from the thought of being useful for SIAI by that document, though. (Fortunately people have afterwards been giving the impression I might be of some use after all. Submitted an application today.)

Anna, and in general the Vassarian lineage, are more effective cooperators than I am. The people who I have the ability to cooperate with, form a much more restricted set than those who they can cooperate with.

It does kinda give the impression that a) donors and b) programmers are all that SIAI has a use for, though.

I once had that impression too, almost certainly in part from SYWTBASAIP.

SYWTBASAIP always makes me think of Reid Barton -- which I imagine is probably quite a bit higher than EY meant to convey as a lower bound -- so I know what you mean.

After some doubts as to my ability to contribute and the like, I went to be an intern in this year's summer program. It was fun and I'm really glad I went. At the moment, I'm back there as a volunteer, mostly doing various writing tasks, like academic papers.

Getting to talk a lot to people immersed in these ideas has been both educational and motivating, much more so than following things through the internet. So I'd definitely recommend applying.

Also, the house has an awesome library that for some reason isn't being mentioned. :-)

Is that library's catalog available on a site like LibraryThing?

If it isn't, please get one of those visiting fellows to spend as long as it takes entering ISBNs so that others can virtually browse your bookshelves.

I've set up a SIAI account on LibraryThing, for a bunch of reasons even though I've not heard back from MBlume.

http://www.librarything.com/catalog/siai

The heuristic "it's easier to seek forgiveness than permission" seemed to apply, the upvotes on the comments below indicate interest, I wanted to separate my stuff from SIAI's but still have a Web 2.0-ish way to handle it, and information wants to be free.

If this was a mistake on my part, it's easily corrected.

Re anonym's comment, maybe you might like to set up a SIAI/LW LibraryThing account. I'll gladly donate the $25 to make it a lifetime account.

Also an easy way to introduce SIAI to new people who might be interested in learning more (and donating), because librarything recommends similar libraries and shows you which libraries contain a book, etc.

I second Morendil's thanks. This list provides a view of what material is being thought about and discussed by the SIAI volunteers, and I hope that it alleviates some of the concerns of potential applicants who are hesitating.

If it's an option, please make the spreadsheet sortable. It would be much easier to browse if it were sorted by (location, creator), so all math books would be together, and books by the same author on the same topic would be together.

Thanks for making this available though. I enjoyed browsing and already bought one. You might consider putting Amazon links in there with an affiliate tag for SIAI.

Thanks, that's helpful, but the original spreadsheet being sortable would still be very useful, because the librarything doesn't have "shelf", so you can't sort and view all math books together, for example.

I've sorted MBlume's original list so that it displays all the books of the same location together...however some of the places (living room floor/shelf etc.) are a collection of books on different topics. I may sort them out another time.

Here it is: http://spreadsheets.google.com/pub?key=t5Fz_UEo8JLZyEFfUvJVvPA&output=html

And make sure they use a barcode scanner. Given that books tend to have ISBN barcodes, it would be... irrational not to.

(If it seems to you like a matter of knowledge, not rationality, then take a little while to ponder how you could be wrong.)

I'm using (at Andrew Hay's kind suggestion) Delicious Library 2 (sadly only available on Macintosh, but I happen to be an Apple fanboy) which integrates with my webcam to do all my barcode scanning for me.

I had a dream where some friends and I invaded the "Less Wrong Library", and I agree it was most impressive. ...in my dream.
