I believe that most people hoping to do independent academic research vastly underestimate both the amount of prior work done in their field of interest, and the advantages of working with other very smart and knowledgeable people. Note that it isn't just about working with other people, but with other very smart people. That is, there is a difference between "working at a university / research institute" and "working at a top university / research institute". (For instance, if you want to do AI research in the U.S., you probably want to be at MIT, Princeton, Carnegie Mellon, Stanford, CalTech, or UC Berkeley. I don't know about other countries.)
Unfortunately, my general impression is that most people on LessWrong are mostly unaware of the progress made in statistical machine learning (presumably the brand of AI that most LWers care about) and cognitive science in the last 20 years (I mention these two fields because I assume they are the most popular on LW, and also because I know the most about them). And I'm not talking about impressive-looking results that dodge around the real issues, I'm talking about fundamental progress towards resolving the key problems...
A good overview would fill up a post on its own, but some relevant topics are given below. I don't think any of it is behind a paywall, but if it is, let me know and I'll link to another article on the same topic. In cases where I learned about the topic by word of mouth, I haven't necessarily read the provided paper, so I can't guarantee the quality for all of these. I generally tried to pick papers that either gave a survey of progress or solved a specific, clearly interesting problem. As a result you might have to do some additional reading to understand some of the articles, but hopefully this is a good start until I get something more organized posted.
Learning:
Online concept learning: rational rules for concept learning [a somewhat idealized situation but a good taste of the sorts of techniques being applied]
Learning categories: Bernoulli mixture model for document classification (a rough code sketch of this one follows the list), spatial pyramid matching for images
Learning category hierarchies: nested Chinese restaurant process, hierarchical beta process
Learning HMMs (hidden Markov models): HDP-HMMs. This is pretty new, so the details haven't been hammered out, but the article should give you a taste of how people are approaching th...
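Since the Bernoulli mixture model for document classification came up in the list above, here is a minimal sketch of that idea, assuming nothing beyond numpy: EM for a mixture of Bernoulli distributions over binary document-term vectors. This is an illustration of the general technique, not code from any of the linked papers, and all names in it are mine.

```python
import numpy as np

def fit_bernoulli_mixture(X, k, n_iter=50, seed=0, eps=1e-9):
    """EM for a mixture of k Bernoulli components over binary data X (n_docs x n_words)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                   # mixing weights
    mu = rng.uniform(0.25, 0.75, size=(k, d))  # per-component word probabilities
    for _ in range(n_iter):
        # E-step: responsibilities, computed in log space for numerical stability
        log_p = (X @ np.log(mu + eps).T
                 + (1 - X) @ np.log(1 - mu + eps).T
                 + np.log(pi + eps))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights and word probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X + eps) / (nk[:, None] + 2 * eps)
    return pi, mu, r

# Toy check: two planted word-usage patterns should be recovered from binary data.
rng = np.random.default_rng(1)
X = np.vstack([rng.random((20, 6)) < [.9, .9, .9, .1, .1, .1],
               rng.random((20, 6)) < [.1, .1, .1, .9, .9, .9]]).astype(float)
pi, mu, r = fit_bernoulli_mixture(X, k=2)
print(np.round(mu, 2))  # rows should approximate the two generating patterns
```

Roughly speaking, the same E-step/M-step skeleton carries over to the fancier models in the list; the nonparametric versions (Chinese restaurant process, beta process) mostly replace the fixed k with a prior over the number of components.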
WFG, please quit with the 'increase existential risk' idea. Allowing Eliezer to claim moral high ground here makes the whole situation surreal.
A (slightly more) sane response would be to direct your altruistic punishment towards the SIAI specifically. They are, after all, the group who is doing harm (to you, according to your values). Opposing them makes sense (given your premises).
After several years as a post-doc I am facing a similar choice.
If I understand correctly you have no research experience so far. I'd strongly suggest completing a doctorate because:
You may also be able to continue as a post-doc with almost the same freedom. I have done this for 5 years. It cannot last forever, though, and the longer you go on, the more people will expect you to devote yourself to grant applications, teaching and management. That is why I'm quitting.
Ron Gross's The Independent Scholar's Handbook has lots of ideas like this. A lot of the details in it won't be too useful, since it is mostly about history and the humanities, but quite a bit will be. It is also a bit old, so it misses more recent developments; there was almost no internet in 1993.
I'm putting the finishing touches on a future Less Wrong post about the overwhelming desirability of casually working in Australia for 1-2 years vs "whatever you were planning on doing instead". It's designed for intelligent people who want to earn more money, have more free time, and have a better life than they would realistically be able to get in the US or any other 1st world nation without a six-figure, part-time career... something which doesn't exist. My world-saving article was actually just a prelim for this.
What's frustrating is that I would have had no idea it was deleted, and would just have assumed it wasn't interesting to anyone, had I not checked after reading the above. I'd much rather be told to delete the relevant portions of the comment; let's at least have precise censorship!
Wow. Even the people being censored don't know it. That's kinda creepy!
His comment led me to discover that quite a long comment I made a little while ago had been deleted entirely.
How did you work out that it had been deleted? Just by logging out, looking and trying to remember where you had stuff posted?
Consider taking a job as a database/web developer at a university department. This gets you around journal paywalls, and is a low-stress job (assuming you have or can obtain above-average coding skills) that leaves you plenty of time to do your research. (My wife has such a job.) I'm not familiar with freelance journalism at all, but I'd still guess that going the software development route is lower risk.
Some comments on your list of advantages/disadvantages:
A couple other advantages of the non-traditional path:
Well I guess this is our true point of disagreement. I went to the effort of finding out a lot, went to SIAI and Oxford to learn even more, and in the end I am left seriously disappointed by all this knowledge. In the end it all boils down to:
"most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed, and you almost certainly fail to have an effect anyway. And by the way the future is an impending train wreck"
I feel quite strongly that this knowledge is not a worthy thing to have sunk 5 years of my life into getting. I don't know, XiXiDu, you might prize such knowledge, including all the specifics of how that works out exactly.
If you really strongly value the specifics of this, then yes you probably would on net benefit from the censored knowledge, the knowledge that was never censored because I never posted it, and the knowledge that I never posted because I was never trusted with it anyway. But you still probably won't get it, because those who hold it correctly infer that the expected value of releasing it is strongly negative from an altruist's perspective.
The future is probably an impending train wreck. But if we can save the train, then it'll grow wings and fly up into space while lightning flashes in the background and Dragonforce play a song about fiery battlefields or something. We're all stuck on the train anyway, so saving it is worth a shot.
I hate to see smart people who give a shit losing to despair. This is still the most important problem and you can still contribute to fixing it.
TL;DR: I want to give you a hug.
most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed,
So? They're just kids!
(or)
He glanced over toward his shoulder, and said, "That matter to you?"
Caw!
He looked back up and said, "Me neither."
The largest disadvantage to not having, essentially, an apprenticeship is the stuff you don't learn.
Now, if you want to research something where all you need is a keen wit, and there's not a ton of knowledge for you to pick up before you start... sure, go ahead. But those topics are few and far between. (EDIT: oh, LW-ish stuff. Meh. Sure, then, I guess. I thought you meant researching something hard >:DDDDD
No, but really, if smart people have been doing research there for 50 years and we don't have AI, that means that "seems easy to make progress" is a dirty lie. It may mean that other people haven't learned much to teach you, though - you should put some actual effort (get responses from at least two experts) into finding out if this is the case)
Usually, an apprenticeship will teach you:
What needs to be done in your field.
How to write, publicize and present your work. The communication protocols of the community. How to access the knowledge of the community.
How to use all the necessary equipment, including the equipment that builds other equipment.
How to be properly rigorous - a hard one in most fields; you have to make it instinctual rather than just known.
The subtle tricks an experienced researcher uses to actually do research - all sorts of things you might not have noticed on your own.
And more!
Another idea is the "Bostrom Solution", i.e. be so brilliant that you can find a rich guy to just pay for you to have your own institute at Oxford University.
Then there's the "Reverse Bostrom Solution": realize that you aren't Bostrom-level brilliant, but that you could accrue enough money to pay for an institute for somebody else who is even smarter and would work on what you would have worked on. (FHI costs $400k/year, which isn't such a huge amount as to be unattainable by Kaj or a few Kaj-like entities collaborating)
Maybe. The disadvantage is lag time, of course. Discount rate for Singularity is very high. Assume that there are 100 years to the singularity, and that P(success) is linearly decreasing in lag time; then every second approximately 25 galaxies are lost, assuming that the entire 80 billion galaxies' fate is decided then.
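The figure checks out under the stated assumptions: a year is about $3.15 \times 10^7$ seconds, so

$$\frac{80 \times 10^9 \ \text{galaxies}}{100 \ \text{yr} \times 3.15 \times 10^7 \ \text{s/yr}} \approx 25 \ \text{galaxies per second}.$$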
25 galaxies per second. Wow.
Most people wouldn't dispute the first half of your comment. What they might take issue with is this:
Yes, that means we have to trust Eliezer.
The problem is that we have to defer to Eliezer's (and, by extension, SIAI's) judgment on such issues. Many of the commenters here think that this is not only bad PR for them, but also a questionable policy for a "community blog devoted to refining the art of human rationality."
Most people wouldn't dispute the first half of your comment. What they might take issue with is this:
Yes, that means we have to trust Eliezer.
If you are going to quote and respond to that sentence, which anticipates people objecting to trusting Eliezer to make those judgments, you should also quote and respond to my response to that anticipation (i.e., the next sentence):
But I have no reason to doubt Eliezer's honesty or intelligence in forming those expectations.
Also, I am getting tired of objections framed as predictions that others would make the objections. It is possible to have a reasonable discussion with people who put forth their own objections, explain their own true rejections, and update their own beliefs. But when you are presenting the objections you predict others will make, it is much harder, even if you are personally convinced, to predict that these nebulous others will also be persuaded by my response. So please, stick your own neck out if you want to complain about this.
An important academic option: get tenure at a less reputable school. In the States at least there are tons of universities that don't really have huge research responsibilities (so you won't need to worry about pushing out worthless papers, preparing for conferences, peer reviewing, etc), and also don't have huge teaching loads. Once you get tenure you can cruise while focusing on research you think matters.
The down side is that you won't be able to network quite as effectively as if you were at a more prestigious university and the pay isn't quite as good.
Please pardon my prying,
No problem, and I welcome more such questions.
but as you've spent more time with SIAI, have you seen tendencies toward this sort of thing? Public declarations, competitions/pressure to prove devotion to reducing existential risks, scolding for not toeing the party line, etc.
No; if anything, I see explicit advocacy, as Carl describes, against natural emergent fanaticism (see below), and people becoming less fanatical to the extent that they're influenced by group norms. I don't see emergent individual fanaticism generating significant unhealthy group dynamics like these. I do see understanding and advocacy of indirect utilitarianism as the proper way to 'shut up and multiply'. I would be surprised if I saw any of the specific things you mention clearly going on, unless non-manipulatively advising people on how to live up to ideals they've already endorsed counts. I and others have at times felt uncomfortable pressure to be more altruistic, but this is mostly pressure on oneself — having more to do with personal fanaticism and guilt than group dynamics, let alone deliberate manipulation — and creating a sense of pressure is generally recognized as harmf...
But for the most part the system seems to be set up so that you first spend a long time working for someone else, researching their ideas, after which you can lead your own group, but then most of your time will be spent on applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors also need to spend time teaching, so that's another time sink.
This depends on the field, university, and maybe country. In many cases, doing your own research is the main focus f...
I would choose that knowledge if there was a chance that it wouldn't find out about it. As far as I understand your knowledge of the dangerous truth, it just increases the likelihood of suffering; it doesn't make it guaranteed.
I don't understand your reasoning here -- bad events don't get a "flawless victory" badness bonus for being guaranteed. A 100% chance of something bad isn't much worse than a 90% chance.
One big disadvantage is that you won't be interacting with other researchers from whom you can learn.
Research seems to be an insiders' game. You only ever really see the current state of research in informal settings like seminars and lab visits. Conference papers and journal articles tend to give strange, skewed, out-of-context projections of what's really going on, and books summarise important findings long after the fact.
The compelling argument for me is that knowing about bad things is useful to the extent that you can do something about them, and it turns out that people who don't know anything (call them "non-cognoscenti") will probably free-ride their way to any benefits of action on the collective-action problem that is at issue here, whilst avoiding drawing any particular attention to themselves, and thus avoiding the risks.
Vladimir Nesov doubts this prima facie, i.e. he asks "how do you know that the strategy of being a completely inert player is best?".
-- to which I answer, "if you want to be the first monkey shot into space, then good luck" ;D
Do you also think that global warming is a hoax, that nuclear weapons were never really that dangerous, and that the whole concept of existential risks is basically a self-serving delusion?
Also, why are the folks that you disagree with the only ones that get to be described with all-caps narrative tropes? Aren't you THE LONE SANE MAN who's MAKING A DESPERATE EFFORT to EXPOSE THE TRUTH about FALSE MESSIAHS and the LIES OF CORRUPT LEADERS and SHOW THE WAY to their HORDES OF MINDLESS FOLLOWERS to AN ENLIGHTENED FUTURE? Can't you describe anything with all-caps narrative tropes if you want?
Not rhetorical questions, I'd actually like to read your answers.
To me, $200,000 for a charity seems to be pretty much the smallest possible amount of money. Can you find any charitable causes that receive less than this?
Basically, you are saying that SIAI DOOM fearmongering is a trick to make money. But really, it fails to satisfy several important criteria:
it is shit at actually making money. I bet you that there are "save the earthworm" charities that make more money.
it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.
it is not optimized for believability. In fact it is almost optimized for anti-believability, "rapture of the nerds", much public ridicule, etc.
Being in a similar position (also as far as aversion to moving to e.g. the US is concerned), I decided to work part time (roughly 1/5 of the time or even less) in the software industry and spend the remainder of the day studying relevant literature, leveling up etc. for working on the FAI problem. Since I'm not quite out of the university system yet, I'm also trying to build some connections with our AI lab staff and a few other interested people in academia, but with no intention to actually join their show. It would eat away almost all my time, so I could wo...
There is the minor detail that it really helps not to hate each and every individual second of your working life in the process. A goal will only pull you along to a certain degree.
(Computer types know all the money is in the City. I did six months of it. I found the people I worked with and the people whose benefit I worked for to be excellent arguments for an unnecessarily bloody socialist revolution.)
Conditioning on yourself deeming it optimal to make a metaphorical omelet by breaking metaphorical eggs, metaphorical eggs will deem it less optimal to remain vulnerable to metaphorical breakage by you than if you did not deem it optimal to make a metaphorical omelet by breaking metaphorical eggs; therefore, deeming it optimal to break metaphorical eggs in order to make a metaphorical omelet can increase the difficulty you find in obtaining omelet-level utility.
This thread raises the question of how many biologists and medical researchers are on here. Due to our specific cluster I expect a strong leaning towards the IT people. So AI research gets disproportionate recognition, while medical research, including direct life extension, falls by the wayside.
Speaking as someone who is in grad school now, even with prior research, the formal track of grad school is very helpful. I am doing research that I'm interested in. I don't know if I'm a representative sample in that regard. It may be that people have more flexibility in math than in other areas. Certainly my anecdotal impression is that people in some areas such as biology don't have this degree of freedom. I'm also learning more about how to research and how to present my results. Those seem to be the largest advantages. Incidentally, my impression is that for grad school, at least in many areas, taking a semester or two off if very stressed isn't treated that badly if one is otherwise doing productive research.
The above deleted comment referenced some details of the banned post. With those details removed, it said:
...(Note, this comment reacts to this thread generally, and other discussion of the banning)
The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared.
I realize that you are describing how people generally react to this sort of thing, but this knee-jerk stupid reaction is one of the misapplied heuristics we ought to be able to notice and overcome.
So far, one p
There is a big mismatch here between "sending an email to a blogger" and "increase existential risk by one in a million". All of the strategies for achieving existential risk increases that large are either major felonies, or require abusing a political office as leverage. When you first made the threat, I got angry at you on the assumption that you realized this. But if all you're threatening to do is send emails, well, I guess that's your right.
(I would have liked to reply to the deleted comment, but you can't reply to deleted comments so I'll reply to the repost.)
I don't think Roko should have been requested to delete his comment. I don't think Roko should have conceded to deleting his comment.
The correct reaction when someone posts something scandalous like
...I was once criticized by a senior singinst member for not being prepared to be tortured or raped for th
Roko may have been thinking of [just called him, he was thinking of it] a conversation we had when he and I were roommates in Oxford while I was visiting the Future of Humanity Institute, and frequently discussed philosophical problems and thought experiments. Here's the (redeeming?) context:
As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.
Usually this doesn't work out well, as t...
I find this whole line of conversation fairly ludicrous, but here goes:
Number 1. Time-inconsistency: we have different reactions about an immediate certainty of some bad than a future probability of it. So many people might be willing to go be a health worker in a poor country where aid workers are commonly (1 in 10,000) raped or killed, even though they would not be willing to be certainly attacked in exchange for 10,000 times the benefits to others. In the actual instant of being tortured anyone would break, but people do choose courses of action that carry risk (every action does, to some extent), so the latter is more meaningful for such hypotheticals.
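To spell out the expected-value bookkeeping behind that example (my notation, not in the original): write $U_H$ for the (negative) utility of being attacked and $B$ for the benefit to others. Then

$$E[\text{risky}] = 10^{-4} U_H + B, \qquad E[\text{certain}] = U_H + 10^4 B = 10^4 \cdot E[\text{risky}],$$

so the certain deal is just the risky one scaled up by a factor of $10^4$: the benefit-to-harm ratio is identical, which is exactly why accepting one while refusing the other is a time/certainty inconsistency rather than a difference in values.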
Number 2. I have driven and flown thousands of kilometers in relation to existential risk, increasing my chance of untimely death in a car accident or plane crash, so obviously I am willing to take some increased probability of death. I think I would prefer a given chance of being tortured to a given chance of death, so obviously I care enough to take at least some tiny risk from what I said above. As I also said above, I'm not willing to make very big sacrifices (big probabilities of such nasty personal outcomes) for tiny shifts ...
Do you understand the math behind the Roko post deletion?
Yes, his post was based on (garbled versions of) some work I had been doing at FHI, which I had talked about with him while trying to figure out some knotty sub-problems.
What do you think about the Roko post deletion?
I think the intent behind it was benign, at least in that Eliezer had his views about the issue (which is more general, and not about screwed-up FAI attempts) previously, and that he was motivated to prevent harm to people hearing the idea and others generally. Indeed, he was explicitly motivated enough to take a PR hit for SIAI.
Regarding the substance, I think there are some pretty good reasons for thinking that the expected value (with a small probability of a high impact) of the info for the overwhelming majority of people exposed to it would be negative, although that estimate is unstable in the face of new info.
It's obvious that the deletion caused more freak-out and uncertainty than anticipated, leading to a net increase in people reading and thinking about the content compared to the counterfactual with no deletion. So regardless of the substance about the info, clearly it was a mistake to delete (w...
Well, look, I deleted it of my own accord, but only after being prompted that it was a bad thing to have posted. Can we just drop this? It makes me look like even more of a troublemaker than I already look like, and all I really want to do is finish the efficient charity competition then get on with life outside teh intenetz.
Will you at least publicly state that you precommit, on behalf of CEV, to not apply negative incentives in this case? (Roko, Jul 24, 2010 1:37 PM)
This is very important. If the SIAI is the organisation to solve the friendly AI problem and implement CEV then it should be subject to public examination, especially if they ask for money.
The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested.
If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead.
This strikes me as a demand for particular proof. SIAI is small (and was much smaller until the last year or two), the set of people engaged in FAI research is smaller, Eliezer has chosen to focus on writing about rationality over research for nearly four years, and FAI is a huge problem, in which any specific subproblem should be expected to be underdeveloped at this early stage. And while I and others expect work to speed up in the near future with Eliezer's attention and better organization, yes, we probably are dead.
The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans.
Somewhat nitpickingly, this is a reason for FAI in general. CEV is attractive mostly for moving as much work from the designers to the FAI as possible, reducing the potential for uncorrectable error, and being fairer than letting...
To the best of my knowledge, SIAI has not planned to do anything, under any circumstances, which would increase the probability of you or anyone else being tortured for the rest of infinity.
Supporting SIAI should not, to the best of my knowledge, increase the probability of you or anyone else being tortured for the rest of infinity.
Thank you.
I removed that sentence. I meant that I didn't believe that the SIAI plans to harm someone deliberately. Although I believe that harm could be a side-effect, and that they would rather harm a few beings than allow some paperclip maximizer to take over.
You can call me a hypocrite because I'm in favor of animal experiments to support my own survival. But I'm not sure if I'd like to have someone leading an AI project who thinks like me. Take that sentence to reflect my inner conflict. I see why one would favor torture over dust specks, but I don't like such decisions. I'd rather have the universe end now, or have everyone turned into paperclips, than have to torture beings (especially if I am the being).
I feel uncomfortable that I don't know what will happen because there is a policy of censorship being favored when it comes to certain thought experiments. I believe that even given negative consequences, transparency is the way to go here. If the stakes are this high, people who believe will do anything to get what they want. That Yudkowsky claims that they are working for the benefit of humanity doesn't mean it is true. Surely I'd write that and many articles and papers that make it appear this way, if I wanted to shape the future to my liking.
Depending on what you're planning to research, lack of access to university facilities could also be a major obstacle. If you have a reputation for credible research, you might be able to collaborate with people within the university system, but I suspect that making the original break in would be pretty difficult.
While it's not geared specifically towards individuals trying to do research, the (Virtual) Employment Open Thread has relevant advice for making money with little work.
What (dis)advantages does this have compared to the traditional model?
I think this thread perfectly illustrates one disadvantage of doing research in an unstructured environment. It is so easy to become distracted from the original question by irrelevant, but bright and shiny distractions. Having a good academic adviser cracking the whip helps to keep you on track.
855 comments so far, with no sign of slowing down!
you have to be very clever to come up with a truly dangerous thought -- and if you do, and still decide to share it, he'll delete your comments
This is a good summary.
Of course, what he actually did was not delete the thread
Eh, what? He did, and that's what the whole scandal was about. If you mean that he did not successfully delete the thread from the whole internet, then yes.
Also see my other comment.
I'm speaking of convincing people who don't already agree with them. SIAI and LW look silly now in ways they didn't before.
There may be, as you posit, a good and convincing explanation for the apparently really stupid behaviour. However, to convince said outsiders (who are the ones with the currencies of money and attention), the explanation has to actually be made to said outsiders in an examinable step-by-step fashion. Otherwise they're well within rights of reasonable discussion not to be convinced. There's a lot of cranks vying for attention and money, and an organisation has to clearly show itself as better than that to avoid losing.
See, that doesn't make sense to me. It sounds more like an initiation rite or something... not a thought experiment about quantum billionaires...
I can't picture EY picking up the phone and saying "delete that comment! wouldn't you willingly be tortured to decrease existential risk?"
... but maybe that's a fact about my imagination, and not about the world :p
I am doing something similar, except working as a freelance software developer. My mental model is that in both the traditional academic path and the freelance path, you are effectively spending a lot of your time working for money. In academia, the "dirty work" is stuff like teaching, making PowerPoint presentations (ugh), keeping your supervisor happy, jumping through random formatting hoops to get papers published, and then going to conferences to present the papers. For me, the decisive factor is that software development is actually quite fun, while academic money work is brain-numbing.
Ideally, I'd like to save the world. One way to do that involves contributing academic research, which raises the question of what's the most effective way of doing that.
The traditional wisdom says if you want to do research, you should get a job in a university. But for the most part the system seems to be set up so that you first spend a long time working for someone else, researching their ideas, after which you can lead your own group, but then most of your time will be spent on applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors also need to spend time teaching, so that's another time sink.
I suspect I would have more time to actually dedicate to research, and I could get to it quicker, if I took a part-time job and did the research in my spare time. E.g. the recommended rates for a freelance journalist in Finland would allow me to spend a week each month doing work and three weeks doing research, of course assuming that I can pull off the freelance journalism part.
What (dis)advantages does this have compared to the traditional model?
Some advantages:
Some disadvantages:
EDIT: Note that while I certainly do appreciate comments specific to my situation, I posted this over at LW and not Discussion because I was hoping the discussion would also be useful for others who might be considering an academic path. So feel free to also provide commentary that's US-specific, say.