I took part in the 2009 summer program during a vacation from my day job as a software developer in Sweden. This entailed spending five weeks with the smartest and most dedicated people I have ever met, working on a wide array of projects both short- and long-term, some of which were finished by the time I left and some of which are still ongoing.
My biggest worry beforehand was that I would not be anywhere near talented enough to participate and contribute in the company of SIAI employees and supporters. That worry seems not to have materialized, though I don't claim to have anywhere near the talent of most others involved. Some of the things I was involved with during the summer were work on the Singularity Summit website, as well as continuing the Uncertain Future project for assigning probability distributions to events and having the conclusions calculated for you. I also worked on papers with Carl Shulman and Nick Tarleton, read a massive number of papers and books, took trips to San Francisco and elsewhere, played games, discussed weird forms of decision theory and counterfactual everything, etc., etc.
My own comparative advantages seem to be having the focus to keep hacking away at proje...
I'm slowly waking up to the fact that people at the Singularity Institute as well as Less Wrong are dealing with existential risk as a Real Problem, not just a theoretical idea to play with in an academic way. I've read many essays and watched many videos, but the seriousness just never really hit my brain. For some reason I had never realized that people were actually working on these problems.
I'm an 18 year old recent high school dropout, about to nab my GED. I could go to community college, or I could go along with my plan of leading a simple life working a simple job, which I would be content doing. I'm a sort of tabula rasa here: if I wanted to get into a position where I would be of use to the SIAI, what skills should I develop? Which of the 'What we're looking for' traits would be most useful in a few years? (The only thing I'm good at right now is reading very quickly and retaining large amounts of information about various fields: but I rarely understand the math, which is currently very limiting.)
Yorick, and anyone else who is serious about reducing existential risk and is not in our contact network: please email me. anna at singinst dot org. The reason you should email is that empirically, people seem to make much better decisions about what paths will reduce existential risks when in dialog with others. Improved information here can go a long way.
I'll answer anyway, for the benefit of lurkers (but Yorick, don't believe my overall advice. Email me instead, about your specific strengths and situation):
Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”.
Well, who can blame them?
Seriously, FYI (where perhaps the Y stands for "Yudkowsky's"): that document (or a similar one) really rubbed me the wrong way the first time I read it. It just smacked of "only the cool kids can play with us". I realize that's probably because I don't run into very many people who think they can easily solve FAI, whereas Eliezer runs into them constantly; but still.
It rubbed me the wrong way when, after explaining for several pages that successful FAI Programmers would have to be so good that the very best programmers on the planet may not be good enough, it added - "We will probably, but not definitely, end up working in Java".
...I don't know if that's a bad joke or a hint that the writer isn't being serious. Well, if it's a joke, it's bad and not funny. Now I'll have nightmares of the best programmers Planet Earth could field failing to write a FAI because they used Java of all things.
After some doubts as to my ability to contribute and the like, I went to be an intern in this year's summer program. It was fun and I'm really glad I went. At the moment, I'm back there as a volunteer, mostly doing various writing tasks, like academic papers.
Getting to talk a lot to people immersed in these ideas has been both educational and motivating, much more so than following things through the internet. So I'd definitely recommend applying.
Also, the house has an awesome library that for some reason isn't being mentioned. :-)
This is a bit off topic, but I find it strange that for years I was unable to find many people interested in decision theory and anthropic reasoning (especially a decision theoretic approach to anthropic reasoning) to talk with, and now they're hot topics (relatively speaking) because they're considered matters of existential risk. Why aren't more people working on these questions just because they can't stand not knowing the answers?
[Because they] haven't noticed the difference in style between the relation of logical axioms to logical models and the relation of causal laws to causal processes.
Amplify?
Brilliant decisive reasons are rare for most topics, and most people can't articulate very many of their reasons for most of their choices. Their most common reason would probably be that they found other topics more interesting, and to evaluate that reason Wei would have to understand the reasons for thinking all those other topics interesting. Saying "if you can't prove to me why I'm wrong in ten minutes I must be right" is not a very reliable path to truth.
I'm probably not the best person to explain why decision theory is interesting from an FAI perspective. For that you'd want to ask Eliezer or other SIAI folks. But I think the short answer there is that without a well-defined decision theory for an AI, we can't hope to prove that it has any Friendliness properties.
My own interest in decision theory is mainly philosophical. Originally, I wanted to understand how probabilities should work when there are multiple copies of oneself, either due to mind copying technology, or because all possible universes exist. That led me to ask, "what are probabilities, anyway?" The philosophy of probability is its own subfield in philosophy, but I came to the conclusion that probabilities only have meaning within a decision theory, so the real question I should be asking is what kind of decision theory one should use when there are multiple copies of oneself.
Your own answer is also pretty relevant to FAI. Because anything that confuses you can turn out to contain the black box surprise from hell.
Until you know, you don't know if you need to know, you don't know how much you need to know, and you don't know the penalty for not knowing.
Is it just me, or does this seem a bit backwards? SIAI is trying to make FAI, yet so much time is spent on the risks and benefits of an FAI that doesn't exist. For a task that is estimated to be so dangerous and so world-changing, would it not behoove SIAI to be the first to make FAI? If that is the case, then I am a bit confused as to the strategy SIAI is employing to accomplish the goal of FAI.
Also, if FAI is the primary goal here, then it seems to me that one should be looking not at Less Wrong but at gathering people from places like Google, Intel, IBM, and DARPA... Why would you choose to pull from a predominantly amateur talent pool like LW (sorry to say that, but there it is)?
Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.
I shall also be quite surprised if Goertzel's or Voss's project yields AGI. Code is easy. Code that is actually generally intelligent is hard. Step One is knowing which code to write. It's futile to go on to Step Two until finishing Step One. If anyone tries to tell you otherwise, bear in mind that the advice to rush ahead and write code has told quite a lot of people that they don't in fact know which code to write, but has not actually produced anyone who does know which code to write. I know I can't sit down and write an FAI at this time; I don't need to spend five years writing code in order to collapse my pride.
The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive.
Writing up the math I've already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.
I'm curious to know your reasoning behind this, if you can share it.
It seems to me that the publication of some high-quality technical papers would increase the chances of attracting and keeping the attention of one-in-a-million people like this much more than a rationality book would.
The problem is that even if nothing "impressive" is available at SIAI, there is no other source that offers anything either. Nada. The only way to improve this situation is to work on the problem. Criticism would be constructive if you suggested a way of improving the situation, e.g. organizing a new team that could be expected to get to FAI more reliably than SIAI. Merely arguing about status won't help to solve the problem.
You keep ignoring the distinction between AGI and FAI, which doesn't add sanity to this conversation. You may disagree that there is a difference, but that's distinct from implying that people who believe there is a difference should also act as if there is none. To address the latter, you must directly engage this disagreement.
I agree with Nesov and can offer a personal example here. I have a crypto design that was only "published" to a mailing list and on my homepage, and it still got eighty-some citations according to Google Scholar.
Also, just because you (Eliezer) don't like playing status games, doesn't mean it's not rational to play them. I hate status games too, but I can get away with ignoring them since I can work on things that interest me without needing external funding. Your plans, on the other hand, depend on donors, and most potential donors aren't AI or decision theory experts. What do they have to go on except status? What Nesov calls "a bureaucratic formality in the funding/hiring process" is actually a human approximation to group rationality, I think.
TDT was explained in enough detail for Dai and some others to get it.
It's explained in enough detail for me to get an intuitive understanding of it, and to obtain some inspirations and research ideas to follow up. But it's not enough for me to try to find flaws in it. I think that should be the standard of detail in scientific publication: the description must be detailed enough that if the described idea or research were to have a flaw, then a reader would be able to find it from the description.
It might not make sense to a lay audience but any philosophically competent fellow who's read the referenced books could reconstruct TDT out of Ingredients of Timeless Decision Theory.
Ok, but what if TDT is flawed? In that case, whoever is trying to reconstruct TDT would just get stuck somewhere before they got to a coherent theory, unless they recreated the same flaw by coincidence. If they do get stuck, how can they know or convince you that it's your fault, and not theirs? Unless they have super high motivation and trust in you, they'll just give up and do something else, or never attempt the reconstruction in the first place.
I participated in the 2008 summer intern program and visited the 2009 program several times and thought it was a lot of fun and very educational. The ideas that I bounced off of people at these programs still inform my writing and thinking now.
"Getting good popular writing and videos on the web, of sorts that improve AI risks understanding for key groups;"
Though good popular writing is, of course, very important, I think we sometimes overestimate the value of producing summaries/rehashings of earlier writing by Vinge, Kurzweil, Eliezer, Michael Vassar and Anissimov, etc.
I have a (probably stupid) question. I have been following Less Wrong for a little over a month, and I've learned a great deal about rationality in the meantime. My main interest, however, is not rationality, it is in creating FAI. I see that the SIAI has an outline of a research program, described here: http://www.singinst.org/research/researchareas.
Is there an online community that is dedicated solely to discussing friendly AI research topics? If not, is the creation of one being planned? If not, why not? I realize that the purpose of these SIAI fellowsh...
I really like what SIAI is trying to do, the spirit that it embodies.
However, I am getting more skeptical of any projections or projects based on anything other than good old-fashioned scientific knowledge (my own included).
You can make scientific progress toward AI if you copy human architecture to some extent, by making predictions about how the brain works and organises itself. However, I don't see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path? For example, what evidence from the real world wou...
When you say 'rotating,' what time frame do you have in mind? A month? A year? Are there set sessions, like the summer program, or are they basically whenever someone wants to show up?
"Working with this crowd transformed my world; it felt like I was learning to think. I wouldn’t be surprised if it can transform yours."
I was there during the summer of 2008 and 2009, and I wholeheartedly agree with this.
"Improving the LW wiki, and/or writing good LW posts;"
Does anyone have data on how many people actually use the LW wiki? If few people use it, then we should find out why and fix that; if it can't be fixed, we should avoid wasting further time on it. If many people use it, of course, we should ask for their comments on what could be improved.
What kind of leeway are the fellows given in pursuing their own projects? I have an AI project I am planning to work on after I finish my PhD; it would be fun to do it at SIAI, as opposed to my father's basement.
So just to make sure I understand correctly: successful applicants will spend a month with the SIAI in the Bay Area. Board and airfare are paid but no salary can be offered.
I may not be the sort of person you're looking for, but taking a month off work with no salary would be difficult for me to manage. No criticism of the SIAI intended, who are trying to achieve the best outcomes with limited funds.
How long will this opportunity be available? I'm very interested, but I probably won't have a large enough block of free time for a year and a half.
Logistics question: Is the cost to SIAI approximately 1k per month? (aside from the limited number of slots, which is harder to quantify)
Minor editing thing: theuncertainfuture.com links to http://lesswrong.com/theuncertainfuture.com, not http://theuncertainfuture.com/.
Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF bay area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.
Now, the new and better version has arrived. We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long term life paths. Working with this crowd transformed my world; it felt like I was learning to think. I wouldn’t be surprised if it can transform yours.
A representative sample of current projects:
Interested, but not sure whether to apply?
Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”. That kind of timidity destroys the world, by failing to save it. So if that’s your situation, send us an email. Let us be the one to say “no”. Glancing at an extra application is cheap, and losing out on a capable applicant is expensive.
And if you’re seriously interested in risk reduction but at a later time, or in another capacity -- send us an email anyway. Coordinated groups accomplish more than uncoordinated groups; and if you care about risk reduction, we want to know.
What we’re looking for
At bottom, we’re looking for anyone who:
Bonus points for any (you don’t need them all) of the following traits:
If you think this might be you, send a quick email to jasen@intelligence.org. Include:
Our application process is fairly informal, so send us a quick email as initial inquiry and we can decide whether or not to follow up with more application components.
As to logistics: we cover room, board, and, if you need it, airfare, but no other stipend.
Looking forward to hearing from you,
Anna
ETA (as of 3/25/10): We are still accepting applications, for summer and in general. Also, you may wish to check out http://www.singinst.org/grants/challenge#grantproposals for a list of some current projects.