Ideally, I'd like to save the world. One way to do that involves contributing to academic research, which raises the question of what the most effective way of doing that is.

The traditional wisdom says that if you want to do research, you should get a job in a university. But for the most part, the system seems to be set up so that you first spend a long time working for someone else, researching their ideas. After that you can lead your own group, but then most of your time will be spent applying for grants and on other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors also need to spend time teaching, so that's another time sink.

I suspect I would have more time to actually dedicate to research, and could start doing it sooner, if I took a part-time job and did the research in my spare time. E.g. the recommended rates for a freelance journalist in Finland would allow me to spend one week each month working and three weeks doing research, assuming of course that I can pull off the freelance journalism part.

What (dis)advantages does this have compared to the traditional model?

Some advantages:

  • Can spend more time on actual research.
  • A lot more freedom with regard to what kind of research one can pursue.
  • Cleaner mental separation between money-earning job and research time (less frustration about "I could be doing research now, instead of spending time on this stupid administrative thing").
  • Easier to take time off from research if feeling stressed out.

Some disadvantages:

  • Harder to network effectively.
  • Need to get around journal paywalls somehow.
  • Journals might be biased against freelance researchers.
  • Easier to take time off from research if feeling lazy.
  • Harder to combat akrasia.
  • It might actually be better to spend some time doing research under others before doing it on your own.

EDIT: Note that while I certainly do appreciate comments specific to my situation, I posted this over at LW and not Discussion because I was hoping the discussion would also be useful for others who might be considering an academic path. So feel free to also provide commentary that's US-specific, say.


I believe that most people hoping to do independent academic research vastly underestimate both the amount of prior work done in their field of interest, and the advantages of working with other very smart and knowledgeable people. Note that it isn't just about working with other people, but with other very smart people. That is, there is a difference between "working at a university / research institute" and "working at a top university / research institute". (For instance, if you want to do AI research in the U.S., you probably want to be at MIT, Princeton, Carnegie Mellon, Stanford, CalTech, or UC Berkeley. I don't know about other countries.)

Unfortunately, my general impression is that most people on LessWrong are mostly unaware of the progress made in statistical machine learning (presumably the brand of AI that most LWers care about) and cognitive science in the last 20 years (I mention these two fields because I assume they are the most popular on LW, and also because I know the most about them). And I'm not talking about impressive-looking results that dodge around the real issues, I'm talking about fundamental progress towards resolving the key problems...

9Danny_Hintze13y
This might not even be a significant problem when the time does come around. High fluid intelligence only lasts for so long, and thus using more crystallized intelligence later on in life to guide research efforts rather than directly performing research yourself is not a bad strategy if the goal is to optimize for the actual research results.
4jsteinhardt13y
Those are roughly my thoughts as well, although I'm afraid that I only believe this to rationalize my decision to go into academia. While the argument makes sense, there are definitely professors who express frustration with their position. What does seem like pretty sound logic is that if you could get better results without a research group, you wouldn't form a research group. So you probably won't run into the problem of achieving suboptimal results from administrative overhead (you could always just hire fewer people), but you might run into the problem of doing work that is less fun than it could be. Another point is that plausibly some other profession (corporate work?) would have less administrative overhead per unit of efficiency, but I don't actually believe this to be true.
7nhamann13y
Could you point me towards some articles here? I fully admit I'm unaware of most of this progress, and would like to learn more.

A good overview would fill up a post on its own, but some relevant topics are given below. I don't think any of it is behind a paywall, but if it is, let me know and I'll link to another article on the same topic. In cases where I learned about the topic by word of mouth, I haven't necessarily read the provided paper, so I can't guarantee the quality for all of these. I generally tried to pick papers that either gave a survey of progress or solved a specific clearly interesting problem. As a result you might have to do some additional reading to understand some of the articles, but hopefully this is a good start until I get something more organized up.

Learning:

Online concept learning: rational rules for concept learning [a somewhat idealized situation but a good taste of the sorts of techniques being applied]

Learning categories: Bernoulli mixture model for document classification, spatial pyramid matching for images

Learning category hierarchies: nested Chinese restaurant process, hierarchical beta process

Learning HMMs (hidden Markov models): HDP-HMMs — this is pretty new, so the details haven't been hammered out, but the article should give you a taste of how people are approaching th...
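As a concrete taste of the "Bernoulli mixture model for document classification" item above, here is a toy sketch of fitting such a mixture with EM in plain Python. The six-"word", two-topic corpus is invented purely for illustration; real document classification uses large vocabularies, smoothing, and multiple random restarts:

```python
import math
import random

def bernoulli_mixture_em(docs, k, iters=50, seed=0):
    """Fit a k-component Bernoulli mixture to binary feature vectors with EM.

    docs: list of equal-length 0/1 lists (e.g. word-presence vectors).
    Returns (mixing weights, per-component Bernoulli parameters,
    per-document responsibilities).
    """
    rng = random.Random(seed)
    d = len(docs[0])
    # Random asymmetric init so the components can break symmetry.
    theta = [[rng.uniform(0.25, 0.75) for _ in range(d)] for _ in range(k)]
    pi = [1.0 / k] * k
    resp = []
    for _ in range(iters):
        # E-step: responsibility of component j for doc x, proportional to
        # pi_j * prod_i theta_ji^x_i * (1 - theta_ji)^(1 - x_i).
        resp = []
        for x in docs:
            logs = []
            for j in range(k):
                lp = math.log(pi[j])
                for xi, t in zip(x, theta[j]):
                    t = min(max(t, 1e-6), 1 - 1e-6)  # clamp to avoid log(0)
                    lp += math.log(t) if xi else math.log(1 - t)
                logs.append(lp)
            m = max(logs)
            ws = [math.exp(l - m) for l in logs]  # numerically stable softmax
            s = sum(ws)
            resp.append([w / s for w in ws])
        # M-step: re-estimate mixing weights and Bernoulli parameters.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            pi[j] = nj / len(docs)
            for i in range(d):
                theta[j][i] = sum(r[j] * x[i] for r, x in zip(resp, docs)) / nj
    return pi, theta, resp

# Toy corpus: two "topics" with disjoint word sets.
docs = [[1, 1, 1, 0, 0, 0]] * 10 + [[0, 0, 0, 1, 1, 1]] * 10
pi, theta, resp = bernoulli_mixture_em(docs, k=2)
labels = [max(range(2), key=lambda j: r[j]) for r in resp]
print(labels)
```

With this cleanly separated data, the two document types end up assigned to different mixture components (which component gets which label depends on the random initialization).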

2Perplexed13y
I'm not planning to do AI research, but I do like to stay no more than ~10 years out of date on progress in fields like this, at least at the intelligent-outsider level of understanding. So how do I go about getting, and staying, almost up to date in these fields? Is MacKay's book a good place to start on machine learning? How do I get an unbiased survey of cognitive science? Are there blogs that (presuming you follow the links) can keep you up to date on what is getting a buzz?
3jsteinhardt13y
I haven't read MacKay myself, but it looks like it hits a lot of the relevant topics. You might consider checking out Tom Griffiths' website, which has a reading list as well as several tutorials.
1sark13y
We should try to communicate with long letters (snail mail) more. Academics seem to have done that a lot in the past. From what I have seen these exchanges seem very productive, though this could be a sampling bias. I don't see why there aren't more 'personal communication' cites, except for them possibly being frowned upon.
1jsteinhardt13y
Why use snail mail when you can use skype? My lab director uses it regularly to talk to other researchers.
3sark13y
Because it is written. Which makes it good for communicating complex ideas. The tradition behind it also lends it an air of legitimacy. Researchers who don't already have a working relationship with each other will take each other's letters more seriously.
3jsteinhardt13y
Upvoted for the good point about communication. Not sure I agree with the legitimacy part (what is p(Crackpot | Snail Mail) compared to p(Crackpot | Email)? I would guess higher).
2Sniffnoy13y
What I'm now wondering is, how does using email vs. snail mail affect the probability of using green ink, or its email equivalent...
1sark13y
Heh you are probably right. It just seemed strange to me how researchers cannot just communicate with each other as long as they have the same research interests. My first thought was that it might have been something to do with status games, where outsiders are not allowed. I suppose some exchanges require rapid and frequent feedback. But then, like you mentioned, wouldn't Skype do?
1jsteinhardt13y
I'm not sure what the general case looks like, but the professors who I have worked with (who all have the characteristic that they do applied-ish research at a top research university) are both constantly barraged by more e-mails than they can possibly respond to. I suspect that as a result they limit communication to sources that they know will be fruitful. Other professors in more theoretical fields (like pure math) don't seem to have this problem, so I'm not sure why they don't do what you suggest (although some of them do). And I am not sure that all professors run into the same problem as I have described, even in applied fields.
0Desrtopa13y
"In the past" as in before they had alternative methods of long distance communication, or after?

(Shrugs.)

Your decision. The Singularity Institute does not negotiate with terrorists.

WFG, please quit with the 'increase existential risk' idea. Allowing Eliezer to claim moral high ground here makes the whole situation surreal.

A (slightly more) sane response would be to direct your altruistic punishment towards the SIAI specifically. They are, after all, the group who is doing harm (to you according to your values). Opposing them makes sense (given your premises.)


After several years as a post-doc I am facing a similar choice.

If I understand correctly you have no research experience so far. I'd strongly suggest completing a doctorate because:

  • you can use that time to network and establish a publication record
  • most advisors will allow you as much freedom as you can handle, particularly if you can obtain a scholarship so you are not sucking their grant money. Choose your advisor carefully.
  • you may well get financial support that allows you to work full time on your research for at least 4 years with minimal accountability
  • if you want, you can practice teaching and grant applications to taste how onerous they would really be
  • once you have a doctorate and some publications, it probably won't be hard to persuade a professor to offer you an honorary (unpaid) position which gives you an institutional affiliation, library access, and maybe even a desk. Then you can go ahead with freelancing, without most of the disadvantages you cite.

You may also be able to continue as a post-doc with almost the same freedom. I have done this for 5 years. It cannot last forever, though, and the longer you go on, the more people will expect you to devote yourself to grant applications, teaching and management. That is why I'm quitting.

5Kaj_Sotala13y
Huh. That's a fascinating idea, one which had never occurred to me. I'll have to give this suggestion serious consideration.

Ron Gross's The Independent Scholar's Handbook has lots of ideas like this. A lot of the details in it won't be too useful, since it is mostly about history and the humanities, but quite a bit will be. It is also a bit too old to include more recent developments, since there was almost no internet in 1993.

4James_Miller13y
Or become a visiting professor in which you teach one or two courses a year in return for modest pay, affiliation and library access.

Dude, don't be an idiot. Really.

I'm putting the finishing touches on a future Less Wrong post about the overwhelming desirability of casually working in Australia for 1-2 years vs "whatever you were planning on doing instead". It's designed for intelligent people who want to earn more money, have more free time, and have a better life than they would realistically be able to get in the US or any other 1st world nation without a six-figure, part-time career... something which doesn't exist. My world saving article was actually just a prelim for this.

Are you going to accompany the "this is cool" part with a "here's how" part? I estimate that would cause it to influence an order of magnitude more people, by removing an inconvenience that looks at least trivial and might be greater.

4David_Gerard13y
I'm now thinking of why Australian readers should go to London and live in a cramped hovel in an interesting place. I feel like I've moved to Ankh-Morpork.
1Mardonius13y
Simple! Tell them they too can follow the way of Lu-Tze, The Sweeper! For is it not said, "Don't knock a place you've never been to"
3erratio13y
As someone already living in Australia and contemplating a relocation to the US for study purposes, I would be extremely interested in this article
1David_Gerard13y
Come to England! It's small, cramped and expensive! The stuff here is amazing, though. (And the GBP is taking a battering while the AUD is riding high.)
0Desrtopa13y
I was under the impression that England was quite difficult to emigrate to?
0David_Gerard13y
My mother's English, so I'm British by paperwork. Four-year working or study visas for Australians without a British parent are not impossible and can also be converted to a working one or even permanent residency if whatever hoops are in place at the time happen to suit.
2diegocaleiro13y
Hope face. Let's see if you can beat my next 2 years in Brazil..... I've been hoping for something to come along (trying to defeat my status quo bias) but it has been really hard to find something comparable. In fact, if this comment is upvoted enough, I might write a "How to be effective from wherever you are currently outside 1st world countries" post...... because if only I knew, life would be just, well, perfect. I assume many other latinos, africans, filipinos, and slavic fellows feel the same way!
0lukeprog13y
Louie? I was thinking about this years ago and would love to know more details. Hurry up and post it! :)
0katydee13y
Color me very interested!

What's frustrating is that I would have had no idea it was deleted, and would just have assumed it wasn't interesting to anyone, had I not checked after reading the above. I'd much rather be told to delete the relevant portions of the comment; let's at least have precise censorship!

Wow. Even the people being censored don't know it. That's kinda creepy!

This comment led me to discover that quite a long comment I made a little bit ago had been deleted entirely.

How did you work out that it had been deleted? Just by logging out, looking and trying to remember where you had stuff posted?

I think it's a standard tool: trollish comments look like being ignored to the trolls. But I think it's impolite to delete comments made in good faith without notification and usable guidelines for cleaning up and reposting. (Hint hint.)

5Jack13y
I only made one comment on the subject and I was rather confused that it was being ignored. I also knew I might have said too much about the Roko post and actually included a sentence saying that if I crossed the line I'd appreciate being told to edit it instead of having the entire thing deleted. So I just checked that one comment in particular. If other comments of mine have been deleted I wouldn't know about it, though this was the only comment in which I have discussed the Roko post.
5[anonymous]13y
I doubt that this is a deliberate feature.

Consider taking a job as a database/web developer at a university department. This gets you around journal paywalls, and is a low-stress job (assuming you have or can obtain above-average coding skills) that leaves you plenty of time to do your research. (My wife has such a job.) I'm not familiar with freelance journalism at all, but I'd still guess that going the software development route is lower risk.

Some comments on your list of advantages/disadvantages:

  • Harder to network effectively. - I guess this depends on what kind of research you want to do. For the areas I've been interested in, networking does not seem to matter much (unless you count participating in online forums as networking :).
  • Journals might be biased against freelance researchers. - I publish my results online, informally, and somehow they've usually found an interested audience. Also, the journals I'm familiar with require anonymous submissions. Is this not universal?
  • Harder to combat akrasia. - Actually, might be easier.

A couple other advantages of the non-traditional path:

  • If you get bored you can switch topics easily.
  • I think it's crazy to base one's income on making research progress. How do you stay o
...

Well, I guess this is our true point of disagreement. I went to the effort of finding out a lot, went to SIAI and Oxford to learn even more, and in the end I am left seriously disappointed by all this knowledge. In the end it all boils down to:

"most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed, and you almost certainly fail to have an effect anyway. And by the way the future is an impending train wreck"

I feel quite strongly that this knowledge is not a worthy thing to have sunk 5 years of my life into getting. I don't know, XiXiDu, you might prize such knowledge, including all the specifics of how that works out exactly.

If you really strongly value the specifics of this, then yes you probably would on net benefit from the censored knowledge, the knowledge that was never censored because I never posted it, and the knowledge that I never posted because I was never trusted with it anyway. But you still probably won't get it, because those who hold it correctly infer that the expected value of releasing it is strongly negative from an altruist's perspective.

The future is probably an impending train wreck. But if we can save the train, then it'll grow wings and fly up into space while lightning flashes in the background and Dragonforce play a song about fiery battlefields or something. We're all stuck on the train anyway, so saving it is worth a shot.

I hate to see smart people who give a shit losing to despair. This is still the most important problem and you can still contribute to fixing it.

TL;DR: I want to give you a hug.

-4Roko13y
I disagree with this argument. Pretty strongly. No selfish incentive to speak of.

most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed,

So? They're just kids!

(or)

He glanced over toward his shoulder, and said, "That matter to you?"

Caw!

He looked back up and said, "Me neither."

4Roko13y
I mean I guess I shouldn't complain that you don't find this bothers you, because you are, in fact, helping me by doing what you do and being very good at it, but that doesn't stop it being demotivating for me! I'll see what I can do regarding quant jobs.
1[anonymous]13y
I liked the first response better.
5katydee13y
This isn't meant as an insult, but why did it take you 5 years of dedicated effort to learn that?
6Roko13y
Specifics. Details. The lesson of science is that details can sometimes change the overall conclusion. Also some amount of nerdyness meaning that the statements about human nature weren't obvious to me.
4timtyler13y
That doesn't sound right to me. Indeed, it sounds as though you are depressed :-( Unsolicited advice over the public internet is rather unlikely to help - but maybe focus for a bit on what you want - and the specifics of how to get there.
3Jack13y
Upvoted for the excellent summary!
5katydee13y
I'm curious about the "future is an impending train wreck" part. That doesn't seem particularly accurate to me.
3Roko13y
Maybe it will all be OK. Maybe the trains fly past each other on separate tracks. We don't know. There sure as hell isn't a driver, though. All the inside-view evidence points to bad things, with the exception that Big Worlds could turn out nicely. Or horribly.
-1timtyler13y
Perhaps try this one: The Rational Optimist: How Prosperity Evolves

The largest disadvantage to not having, essentially, an apprenticeship is the stuff you don't learn.

Now, if you want to research something where all you need is a keen wit, and there's not a ton of knowledge for you to pick up before you start... sure, go ahead. But those topics are few and far between. (EDIT: oh, LW-ish stuff. Meh. Sure, then, I guess. I thought you meant researching something hard >:DDDDD

No, but really, if smart people have been doing research there for 50 years and we don't have AI, that means that "seems easy to make progress" is a dirty lie. It may mean that other people haven't learned much to teach you, though - you should put some actual effort (get responses from at least two experts) into finding out if this is the case)

Usually, an apprenticeship will teach you:

  • What needs to be done in your field.

  • How to write, publicize and present your work. The communication protocols of the community. How to access the knowledge of the community.

  • How to use all the necessary equipment, including the equipment that builds other equipment.

  • How to be properly rigorous - a hard one in most fields, you have to make it instinctual rather than just known.

  • The subtle tricks an experienced researcher uses to actually do research - all sorts of things you might not have noticed on your own.

  • And more!

Another idea is the "Bostrom Solution", i.e. be so brilliant that you can find a rich guy to just pay for you to have your own institute at Oxford University.

Then there's the "Reverse Bostrom Solution": realize that you aren't Bostrom-level brilliant, but that you could accrue enough money to pay for an institute for somebody else who is even smarter and would work on what you would have worked on. (FHI costs $400k/year, which isn't such a huge amount as to be unattainable by Kaj or a few Kaj-like entities collaborating)

5shokwave13y
Sounds like a good bet even if you are brilliant. Make money, use money to produce academic institute, do your research in concert with academics at your institute. This solves all problems of needing to be part of academia, and also solves the problem of academics doing lots of unnecessary stuff - at your institute, academics will not be required to do unnecessary stuff.

Maybe. The disadvantage is lag time, of course. Discount rate for Singularity is very high. Assume that there are 100 years to the singularity, and that P(success) is linearly decreasing in lag time; then every second approximately 25 galaxies are lost, assuming that the entire 80 billion galaxies' fate is decided then.

25 galaxies per second. Wow.

8PeerInfinity13y
I'm surprised that no one has asked Roko where he got these numbers from. Wikipedia says that there are about 80 billion galaxies in the "observable universe", so that part is pretty straightforward. Though there's still the question of why all of them are being counted, when most of them probably aren't reachable with slower-than-light travel. But I still haven't found any explanation for the "25 galaxies per second". Is this the rate at which the galaxies burn out? Or the rate at which something else causes them to be unreachable? Is it the number of galaxies, multiplied by the distance to the edge of the observable universe, divided by the speed of light? Calculating... Wikipedia says that the comoving distance from Earth to the edge of the observable universe is about 14 billion parsecs (46 billion light-years short scale, i.e. 4.6 × 10^10 light years) in any direction. Google Calculator says 80 billion galaxies / 46 billion light years = 1.73 galaxies per year, or 5.48 × 10^-8 galaxies per second, so no, that's not it. If I'm going to allow my mind to be blown by this number, I would like to know where the number came from.
2Caspian13y
I also took a while to understand what was meant, so here is my understanding of the meaning. Assumptions: There will be a singularity in 100 years. If the proposed research is started now, it will be a successful singularity, e.g. friendly AI. If the proposed research isn't started by the time of the singularity, it will be an unsuccessful (negative) singularity, but still a singularity. The probability of the successful singularity linearly decreases with the time when the research starts, from 100 percent now to 0 percent in 100 years' time. A 1 in 80 billion chance of saving 80 billion galaxies is equivalent to definitely saving 1 galaxy, and the linearly decreasing chance of a successful singularity affecting all of them is equivalent to a linearly decreasing number being affected. 25 galaxies per second is the rate of that decrease.
2Roko13y
I meant if you divide the number of galaxies by the number of seconds to an event 100 years from now. Yes, not all reachable. Probably need to discount by an order of magnitude for reachability at lightspeed.
0FAWS13y
Hmm, by the second wikipedia link there is no basis for the 80 billion galaxies since only a relatively small fraction of the observable universe (4.2%?) is reachable if limited by the speed of light, and if not the whole universe is probably at least 10^23 times larger (by volume or by radius?).
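The figure and the discounts discussed in this subthread can be checked with a few lines of arithmetic. A minimal sketch, taking the thread's own numbers on faith (the 80-billion-galaxy count, the 100-year window, Roko's order-of-magnitude reachability discount, and FAWS's 4.2% reachable fraction), none independently verified:

```python
# Back-of-envelope check of the "25 galaxies per second" figure.
GALAXIES = 80e9                      # observable-universe estimate cited above
SECONDS = 100 * 365.25 * 24 * 3600   # seconds in the 100-year window

rate = GALAXIES / SECONDS
print(f"{rate:.1f} galaxies/second")                     # ≈ 25

# Reachability discounts mentioned in the thread:
print(f"{rate / 10:.1f} (Roko: ~1 order of magnitude)")  # ≈ 2.5
print(f"{rate * 0.042:.2f} (FAWS: 4.2% reachable)")      # ≈ 1
```

So the headline number is just total galaxies divided by seconds until the assumed decision point; the reachability discounts only pull it down to somewhere around one galaxy per second.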
5shokwave13y
Guh. Every now and then something reminds me of how important the Singularity is. Time to reliable life extension is measured in lives per minute, time to Singularity is measured in galaxies per second.
1MartinB13y
Now thats a way to eat up your brain.
1Roko13y
Well conservatively assuming that each galaxy supports lives at 10^9 per sun per century (1/10th of our solar system), that's already 10^29 lives per second right there. And assuming utilization of all the output of the sun for living, i.e. some kind of giant spherical shell of habitable land, we can add another 12 orders of magnitude straight away. Then if we upload people that's probably another 10 orders of magnitude. Probably up to 10^50 lives per second, without assuming any new physics could be discovered (a dubious assumption). If instead we assume that quantum gravity gives us as much of an increase in power as going from newtonian physics to quantum mechanics did, we can pretty much slap another 20 orders of magnitude onto it, with some small probability of the answer being "infinity".
1XFrequentist13y
In what I take to be a positive step towards viscerally conquering my scope neglect, I got a wave of chills reading this.
0[anonymous]13y
What's your P of "the fate of all 80 billion galaxies will be decided on Earth in the next 100 years"?
0Vladimir_Nesov13y
About 10% (if we ignore existential risk, which is a way of resolving the ambiguity of "will be decided"). Multiply that by opportunity cost of 80 billion galaxies.
1David_Gerard13y
Could you please detail your working to get to this 10% number? I'm interested in how one would derive it, in detail.
0Vladimir_Nesov13y
I read the question as asking about the probability that we'll be finishing an FAI project in the next 100 years. Dying of engineered virus doesn't seem like an example of "deciding the fate of 80 billion galaxies", although it's determining that fate. FAI looks really hard. Improvements in mathematical understanding to bridge comparable gaps in understanding can take at least many decades. I don't expect a reasonable attempt at actually building a FAI anytime soon (crazy potentially world-destroying AGI projects go in the same category as engineered viruses). One possible shortcut is ems, that effectively compress the required time, but I estimate that they probably won't be here for at least 80 more years, and then they'll still need time to become strong enough and break the problem. (By that time, biological intelligence amplification could take over as a deciding factor, using clarity of thought instead of lots of time to think.)
-1[anonymous]13y
My question has only a little bit to do with the probability that an AI project is successful. It has mostly to do with P(universe goes to waste | AI projects are unsuccessful). For instance, couldn't the universe go on generating human utility after humans go extinct?
2ata13y
How? By coincidence? (I'm assuming you also mean no posthumans, if humans go extinct and AI is unsuccessful.)
2[anonymous]13y
Aliens. I would be pleased to learn that something amazing was happening (or was going to happen, long "after" I was dead) in one of those galaxies. Since it's quite likely that something amazing is happening in one of those 80 billion galaxies, shouldn't I be pleased even without learning about it? Of course, I would be correspondingly distressed to learn that something horrible was happening in one of those galaxies.
0Roko13y
Some complexities regarding "decided" since physics is deterministic, but hand waving that aside, I'd say 50%.
1[anonymous]13y
With high probability, many of those galaxies are already populated. Is that irrelevant?
-1Roko13y
I disagree. I claim that the probability of >50% of the universe being already populated (using the space of simultaneity defined by a frame of reference comoving with earth) is maybe 10%.
-1[anonymous]13y
"Already populated" is a red herring. What's the probability that >50% of the universe will ever be populated? I don't see any reason for it to be sensitive to how well things go on Earth in the next 100 years.
1Roko13y
I think it is likely that we are the only spontaneously-created intelligent species in the entire 4-manifold that is the universe, space and time included (excluding species which we might create in the future, of course).
1[anonymous]13y
I'm curious to know how likely, and why. But do you agree that aliens are relevant to evaluating astronomical waste?
0timtyler13y
That seems contrary to the http://en.wikipedia.org/wiki/Self-Indication_Assumption Do you have a critique - or a supporting argument?
5Roko13y
Yes, I have a critique. Most of anthropics is gibberish. Until someone makes anthropics work, I refuse to update on any of it. (Apart from the bits that are commonsensical enough to derive without knowing about "anthropics", e.g. that if your fishing net has holes 2 inches big, don't expect to catch fish smaller than 2 inches wide.)
3timtyler13y
I don't think you can really avoid anthropic ideas - or the universe stops making sense. Some anthropic ideas can be challenging - but I think we have got to try. Anyway, you did the critique - but didn't go for a supporting argument. I can't think of very much that you could say. We don't have very much idea yet about what's out there - and claims to know such things just seem over-confident.
1Roko13y
Basically Rare Earth seems to me to be the only tenable solution to Fermi's paradox.
0timtyler13y
Fermi's paradox implying no aliens surely applies within-galaxy only. Many galaxies are distant, and intelligent life forming there concurrently (or long before us) is quite compatible with it not having arrived on our doorsteps yet - due to the speed of light limitation. If you think we should be able to at least see life in distant galaxies, then, in short, not really - or at least we don't know enough to say yea or nay on that issue with any confidence yet.
0Roko13y
The Andromeda Galaxy is 2.5 million light-years away. The universe is about 1250 million years old. Therefore that's not far enough away to protect us from colonizing aliens travelling at 0.5c or above.
2timtyler13y
The universe is about 13,750 million years old. The Fermi argument suggests that if there were intelligent aliens in this galaxy, they should probably have filled it by now - unless they originated very close to us in time, which seems unlikely. The argument applies much more weakly to other galaxies, because they are much further away, and they are separated from each other by huge regions of empty space. Also, the Andromeda Galaxy is just one galaxy. Say only one galaxy in 100 has intelligent life, and the Andromeda Galaxy isn't among them. That bumps the required distance to be travelled up to 10 million light years or so. Even within this galaxy, the Fermi argument is not that strong. Maybe intelligent aliens formed in the last billion years, and haven't made it here yet - because space travel is tricky, and 0.1c is about the limit. The universe is only about 14 billion years old, and for some of that time there were not too many second-generation stars. The odds are against there being aliens nearby - but they are not that heavily stacked. For other galaxies, the argument is much, much less compelling.
0[anonymous]13y
There are strained applications of anthropics, like the doomsday argument. "What happened here might happen elsewhere" is much more innocuous.
1[anonymous]13y
There are some more practical and harmless applications as well. In Nick Bostrom's Anthropic Bias, for example, there is an application of the Self-Sampling Assumption to traffic analysis.
1timtyler13y
Bostrom says: "Cars in the next lane really do go faster"
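The traffic example can be illustrated with a toy Monte Carlo model (my own illustrative sketch, not the analysis from Anthropic Bias): if two lanes carry the same flow of cars, the slower lane is proportionally denser, so a randomly sampled driver is more likely to be in it.

```python
import random

# Two lanes with equal flow: density is proportional to 1/speed,
# so sampling a driver at random over-represents the slow lane --
# the self-sampling effect behind "the next lane really is faster".
V_SLOW, V_FAST = 20.0, 30.0                          # lane speeds (arbitrary units)
p_slow = (1 / V_SLOW) / (1 / V_SLOW + 1 / V_FAST)    # = 0.6

random.seed(0)
own, other = [], []
for _ in range(100_000):
    if random.random() < p_slow:                     # driver sampled at random
        own.append(V_SLOW); other.append(V_FAST)
    else:
        own.append(V_FAST); other.append(V_SLOW)

mean_own = sum(own) / len(own)
mean_other = sum(other) / len(other)
print(mean_own, mean_other)   # ~24 vs ~26: the typical driver's own lane is slower
```

The naive average lane speed is 25, but the typical driver sees about 24 in their own lane and 26 in the other, with no illusion involved.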
0Vladimir_Nesov13y
I agree.
2[anonymous]13y
Even Nick Bostrom, who is arguably the leading expert on anthropic problems, rejects SIA for a number of reasons (see his book Anthropic Bias). That alone is a pretty big blow to its credibility.
0timtyler13y
That is curious. Anyway, the self-indication assumption seems fairly straightforward (as much as any anthropic reasoning is, anyway). The critical material from Bostrom on the topic I have read seems unpersuasive. He doesn't seem to "get" the motivation for the idea in the first place.
0Kevin13y
If you think there is a significant probability that an intelligence explosion is possible or likely, then that question is sensitive to how well things go on Earth in the next 100 years.
4[anonymous]13y
However likely they are, I expect intelligence explosions to be evenly distributed through space and time. If 100 years from now Earth loses by a hair, there are still plenty of folks around the universe who will win or have won by a hair. They'll make whatever use of the 80 billion galaxies that they can--will they be wasting them? If Earth wins by a hair, or by a lot, we'll be competing with those folks. This also significantly reduces the opportunity cost Roko was referring to.
-1timtyler13y
That seems like a rather exaggerated sense of importance. It may be a fun fantasy in which the fate of the entire universe hangs in the balance in the next century - but do bear in mind the disconnect between that and the real world.
5shokwave13y
Out of curiosity: what evidence would convince you that the fate of the entire universe does hang in the balance?
2Manfred13y
No human-comparable aliens, for one. Which seems awfully unlikely, the more we learn about solar systems.
-2timtyler13y
"Convince me" - with some unspecified level of confidence? That is not a great question :-| We lack knowlegde of the existence (or non-existence) of aliens in other galaxies. Until we have such knowledge, our uncertainty on this matter will necessarily be high - and we should not be "convinced" of anything.
1shokwave13y
What evidence would convince you, with 95% confidence, that the fate of the universe hangs in the balance in this next century on Earth? You may specify evidence such as "strong evidence that we are completely alone in the universe" even if you think it is unlikely we will get such evidence.
-2timtyler13y
I did get the gist of your question the first time - and answered accordingly. The question takes us far into counter-factual territory, though.
1shokwave13y
I was just curious to see if you rejected the fantasy on principle, or if you had other reasons.
1Larks13y
Unfortunately, FHI seems to have filled the vacancies it advertised earlier this month.
1Alexandros13y
Are you talking about these? (http://www.fhi.ox.ac.uk/news/2010/vacancies) This seems odd, the deadline for applications is on Jan 12th.
0Larks13y
Oh yes - strange, I swear it said no vacancies...
0Roko13y
Sure, so this favors the "Create a new James Martin" strategy.
10[anonymous]13y

Most people wouldn't dispute the first half of your comment. What they might take issue with is this:

Yes, that means we have to trust Eliezer.

The problem is that we have to defer to Eliezer's (and, by extension, SIAI's) judgment on such issues. Many of the commenters here think that this is not only bad PR for them, but also a questionable policy for a "community blog devoted to refining the art of human rationality."

Most people wouldn't dispute the first half of your comment. What they might take issue with is this:

Yes, that means we have to trust Eliezer.

If you are going to quote and respond to that sentence, which anticipates people objecting to trusting Eliezer to make those judgments, you should also quote and respond to my response to that anticipation (i.e., the next sentence):

But I have no reason to doubt Eliezer's honesty or intelligence in forming those expectations.

Also, I am getting tired of objections framed as predictions that others would make the objections. It is possible to have a reasonable discussion with people who put forth their own objections, explain their own true rejections, and update their own beliefs. But when you are presenting the objections you predict others will make, it is much harder, even if you are personally convinced, to predict that these nebulous others will also be persuaded by my response. So please, stick your own neck out if you want to complain about this.

3[anonymous]13y
That's definitely a fair objection, and I'll answer: I personally trust Eliezer's honesty, and he is obviously much smarter than myself. However, that doesn't mean that he's always right, and it doesn't mean that we should trust his judgment on an issue until it has been discussed thoroughly. I agree. The above paragraph is my objection.
2JGWeissman13y
The problem with a public thorough discussion in these cases is that once you understand the reasons why the idea is dangerous, you already know it, and don't have the opportunity to choose whether to learn about it. If you trust Eliezer's honesty, then though he may make mistakes, you should not expect him to use this policy as a cover for banning posts as part of some hidden agenda.
6[anonymous]13y
That's definitely the root of the problem. In general, though, if we are talking about FAI, then there shouldn't be a dangerous idea. If there is, then it means we are doing something wrong. I don't think he's got a hidden agenda; I'm concerned about his mistakes. Though I'm not astute enough to point them out, I think the LW community as a whole is.
5JGWeissman13y
I have a response to this that I don't actually want to say, because it could make the idea more dangerous to those who have heard about it but are currently safe due to not fully understanding it. I find that predicting that this sort of thing will happen makes me reluctant to discuss this issue, which may explain why of those who are talking about it, most seem to think the banning was wrong. Given that there has been one banned post, I think that his mistakes are much less of a problem than overwrought concern about his mistakes.
1[anonymous]13y
If you have a reply, please PM me. I'm interested in hearing it.
1JGWeissman13y
Are you interested in hearing it if it does give you a better understanding of the dangerous idea that you then realize is in fact dangerous?
0[anonymous]13y
It may not matter anymore, but yes, I would still like to hear it.
0JGWeissman13y
In this case, the same point has been made by others in this thread.
2Vladimir_Nesov13y
Why do you believe that? FAI is full of potential for dangerous ideas. In its full development, it's an idea with the power to rewrite 100 billion galaxies. That's gotta be dangerous.
14[anonymous]13y

Let me try to rephrase: correct FAI theory shouldn't have dangerous ideas. If we find that the current version does have dangerous ideas, then this suggests that we are on the wrong track. The "Friendly" in "Friendly AI" should mean friendly.

Pretty much correct in this case. Roko's original post was, in fact, wrong; correctly programmed FAIs should not be a threat.

(FAIs shouldn't be a threat, but a theory to create a FAI will obviously have at least potential to be used to create uFAIs. FAI theory will have plenty of dangerous ideas.)

6XiXiDu13y
I want to highlight at this point how you think about similar scenarios: That isn't very reassuring. I believe that if you had the choice of either letting a Paperclip maximizer burn the cosmic commons or torture 100 people, you'd choose to torture 100 people. Wouldn't you? They are always a threat to some beings. For example, beings who oppose CEV or other AIs. Any FAI who would run a human version of CEV would be a potential existential risk to any alien civilisation. If you accept all this possible oppression in the name of what is subjectively friendliness, how can I be sure that you don't favor torture for some humans that support CEV, in order to ensure it? After all, you already allow for the possibility that many beings are being oppressed or possibly killed.
4wedrifid13y
This seems to be true and obviously so.
-1Vladimir_Nesov13y
Narrowness. You can parry almost any statement like this, by posing a context outside its domain of applicability.
0[anonymous]13y
Another pointless flamewar. This part makes me curious though: There are two ways I can interpret your statement: a) you know a lot more about decision theory than you've disclosed so far (here, in the workshop and elsewhere); b) you don't have that advanced knowledge, but won't accept as "correct" any decision theory that leads to unpalatable consequences like Roko's scenario. Which is it?
8Vladimir_Nesov13y
From my point of view, and as I discussed in the post (this discussion got banned with the rest, although it's not exactly on that topic), the problem here is the notion of "blackmail". I don't know how to formally distinguish that from any other kind of bargaining, and the way in which Roko's post could be wrong that I remember required this distinction to be made (it could be wrong in other ways, but that I didn't notice at the time and don't care to revisit). (The actual content edited out and posted as a top-level post.)
2cousin_it13y
(I seem to have a talent for writing stuff, then deleting it, and then getting interesting replies. Okay. Let it stay as a little inference exercise for onlookers! And please nobody think that my comment contained interesting secret stuff; it was just a dumb question to Eliezer that I deleted myself, because I figured out on my own what his answer would be.) Thanks for verbalizing the problems with "blackmail". I've been thinking about these issues in the exact same way, but made no progress and never cared enough to write it up.
4Perplexed13y
Perhaps the reason you are having trouble coming up with a satisfactory characterization of blackmail is that you want a definition with the consequence that it is rational to resist blackmail and therefore not rational to engage in blackmail. Pleasant though this might be, I fear the universe is not so accommodating. Elsewhere VN asks how to unpack the notion of a status-quo, and tries to characterize blackmail as a threat which forces the recipient to accept less utility than she would have received in the status quo. I don't see any reason in game theory why such threats should be treated any differently than other threats. But it is easy enough to define the 'status-quo'. The status quo is the solution to a modified game - modified in such a way that the time between moves increases toward infinity and the current significance of those future moves (be they retaliations or compensations) is discounted toward zero. A player who lives in the present and doesn't respond to delayed gratification or delayed punishment is pretty much immune to threats (and to promises).
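The discounting point here can be sketched numerically (the numbers and function are my own illustration, not from any formal treatment):

```python
# Sketch of the claim that a player who discounts delayed punishment
# toward zero is effectively immune to threats.  All values hypothetical.

def comply_is_better(cost_of_complying, punishment, delay, discount):
    """True if paying the blackmailer now beats eating the punishment,
    where the punishment arrives `delay` periods later and the player
    discounts future disutility by `discount` per period."""
    discounted_punishment = punishment * discount ** delay
    return cost_of_complying < discounted_punishment

# A near-term punishment still bites:
print(comply_is_better(10, 100, delay=1, discount=0.5))   # True: 10 < 50
# A distant punishment, heavily discounted, loses its force:
print(comply_is_better(10, 100, delay=10, discount=0.5))  # False: 10 > ~0.1
```

As the delay grows (or the discount factor shrinks toward zero), no finite punishment can motivate compliance, which is the sense in which a purely present-focused player cannot be threatened.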
7David_Gerard13y
On RW it's called Headless Chicken Mode, when the community appears to go nuts for a time. It generally resolves itself once people have the yelling out of their system. The trick is not to make any decisions based on the fact that things have gone into headless chicken mode. It'll pass. [The comment this is in reply to was innocently deleted by the poster, but not before I made this comment. However, I think I'm making a useful point here, so would prefer to keep this comment.]
2Jack13y
This is certainly the case with regard to the kind of decision theoretic thing in Roko's deleted post. I'm not sure if it is the case with all ideas that might come up while discussing FAI.
-16Vladimir_Nesov13y

An important academic option: get tenure at a less reputable school. In the States at least there are tons of universities that don't really have huge research responsibilities (so you won't need to worry about pushing out worthless papers, preparing for conferences, peer reviewing, etc), and also don't have huge teaching loads. Once you get tenure you can cruise while focusing on research you think matters.

The downside is that you won't be able to network quite as effectively as you could at a more prestigious university, and the pay isn't quite as good.

3utilitymonster13y
Don't forget about the ridiculous levels of teaching you're responsible for in that situation. Lots worse than at an elite institution.
3Jordan13y
Not necessarily. I'm not referring to no-research universities, which do have much higher teaching loads (although still not ridiculous. Teaching 3 or 4 classes a semester is hardly strenuous). I'm referring to research universities that aren't in the top 100, but which still push out graduate students. My undergrad alma mater, Kansas University, for instance. Professors teach 1 or 2 classes a semester, with TA support (really, when you have TAs, teaching is not real work). They are still expected to do research, but the pressure is much less than at a top 50 school.

I pointed out to Roko by PM that his comment couldn't be doing his cause any favors, but did not ask him to delete it, and would have discouraged him from doing so.

2waitingforgodel13y
I can't be sure, but it sounded from: like he'd gotten a stronger message from someone high up in SIAI -- though of course, I probably like that theory because of the Bayesian Conspiracy aspects. Would you mind PM'ing me (or just posting) the message you sent? Also, does the above fit with your experiences at SIAI? I find it hard, but not impossible, to believe that Roko just described something akin to standard hiring procedure, and would very much like to hear an inside (and presumably saner) account.
9MichaelAnissimov13y
Most people who actually work full-time for SIAI are too busy to read every comments thread on LW. In some cases, they barely read it at all. The wacky speculation here about SIAI is very odd -- a simple visit in most cases would eliminate the need for it. Surely more than a hundred people have visited our facilities in the last few years, so plenty of people know what we're really like in person. Not very insane or fanatical or controlling or whatever generates a good comic book narrative.
6Nick_Tarleton13y
PMed the message I sent. Certainly not anything like standard hiring procedure.
8waitingforgodel13y
Thanks Nick. Please pardon my prying, but as you've spent more time with SIAI, have you seen tendencies toward this sort of thing? Public declarations, competitions/pressure to prove devotion to reducing existential risks, scolding for not toeing the party line, etc. I've seen evidence of fanaticism, but have always been confused about what the source is (did they start that way, or were they molded?). Basically, I would very much like to know what your experience has been as you've gotten closer to SIAI. I'm sure I'm not the only (past, perhaps future) donor who would appreciate the air being cleared about this.

Please pardon my prying,

No problem, and I welcome more such questions.

but as you've spent more time with SIAI, have you seen tendencies toward this sort of thing? Public declarations, competitions/pressure to prove devotion to reducing existential risks, scolding for not toeing the party line, etc.

No; if anything, I see explicit advocacy, as Carl describes, against natural emergent fanaticism (see below), and people becoming less fanatical to the extent that they're influenced by group norms. I don't see emergent individual fanaticism generating significant unhealthy group dynamics like these. I do see understanding and advocacy of indirect utilitarianism as the proper way to 'shut up and multiply'. I would be surprised if I saw any of the specific things you mention clearly going on, unless non-manipulatively advising people on how to live up to ideals they've already endorsed counts. I and others have at times felt uncomfortable pressure to be more altruistic, but this is mostly pressure on oneself — having more to do with personal fanaticism and guilt than group dynamics, let alone deliberate manipulation — and creating a sense of pressure is generally recognized as harmf... (read more)

7Larks13y
I was there for a summer and don't think I was ever even asked to donate money.
0waitingforgodel13y
Ahh. I was trying to ask about Cialdini-style influence techniques.
6Roko13y
Very little, if any.
0wedrifid13y
What exactly is Roko's cause by your estimation? I wasn't aware he had one, at least in the secretive sense.
2Nick_Tarleton13y
I meant SIAI.

But for the most part the system seems to be set up so that you first spend a long time working for someone else and research their ideas, after which you can lead your own group, but then most of your time will be spent on applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors need to also spend time doing teaching, so that's another time sink.

This depends on the field, university, and maybe country. In many cases, doing your own research is the main focus f... (read more)

I would choose that knowledge if there was the chance that it wouldn't find out about it. As far as I understand your knowledge of the dangerous truth, it just increases the likelihood of suffering, it doesn't make it guaranteed.

I don't understand your reasoning here -- bad events don't get a "flawless victory" badness bonus for being guaranteed. A 100% chance of something bad isn't much worse than a 90% chance.
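The underlying point is just that expected disutility is linear in probability. A minimal numerical sketch (the disutility value is an arbitrary placeholder):

```python
# Expected disutility scales linearly with probability: certainty adds
# no extra "badness bonus" over a 90% chance -- only one-ninth more
# expected harm.
U_BAD = -1000.0                      # arbitrary disutility of the bad outcome

def expected_utility(p_bad):
    return p_bad * U_BAD

eu_90 = expected_utility(0.9)
eu_100 = expected_utility(1.0)
print(eu_90, eu_100)                 # -900.0 -1000.0
```

So moving from 90% to 100% changes the expected outcome by only a tenth of the total stake, whatever that stake is.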

0XiXiDu13y
I said that I wouldn't want to know it if a bad outcome was guaranteed. But if it would make a bad outcome possible, but very, very unlikely to actually occur, then the utility I assign to knowing the truth would outweigh the very unlikely possibility of something bad happening.
0Roko13y
No, dude, you're wrong

One big disadvantage is that you won't be interacting with other researchers from whom you can learn.

Research seems to be an insiders' game. You only ever really see the current state of research in informal settings like seminars and lab visits. Conference papers and journal articles tend to give strange, skewed, out-of-context projections of what's really going on, and books summarise important findings long after the fact.

4Danny_Hintze13y
At the same time, however, you might be able to interact with researchers more effectively. For example, you could spend some of those research weeks visiting selected labs and seminars and finding out what's up. It's true that this would force you to be conscientious about opportunities and networking, but that's not necessarily a bad thing. Networks formed with a very distinct purpose are probably going to outperform those that form more accidentally. You wouldn't be as tied down as other researchers, which could give you an edge in getting the ideas and experiences you need for your research, while simultaneously making you more valuable to others when necessary (For example, imagine if one of your important research contacts needs two weeks of solid help on something. You could oblige whereas others with less fluid obligations could not.)

The compelling argument for me is that knowing about bad things is useful to the extent that you can do something about them, and it turns out that people who don't know anything (call them "non-cognoscenti") will probably free-ride their way to any benefits of action on the collective-action-problem that is at issue here, whilst avoiding drawing any particular attention to themselves ==> avoiding the risks.

Vladimir Nesov doubts this prima facie, i.e. he asks "how do you know that the strategy of being a completely inert player is best?".

-- to which I answer, "if you want to be the first monkey shot into space, then good luck" ;D

-1timtyler13y
This is the "collective-action-problem" - where the end of the world arrives - unless a select band of heroic messiahs arrive and transport everyone to heaven...? That seems like a fantasy story designed to manipulate - I would council not getting sucked in.

No, this is the "collective-action-problem" - where the end of the world arrives - despite a select band of decidedly amateurish messiahs arriving and failing to accomplish anything significant.

You are looking at those amateurs now.

-3timtyler13y
The END OF THE WORLD is probably the most frequently-repeated failed prediction of all time. Humans are doing spectacularly well - and the world is showing many signs of material and moral progress - all of which makes the apocalypse unlikely. The reason for the interest here seems obvious - the Singularity Institute's funding is derived largely from donors who think it can help to SAVE THE WORLD. The world must first be at risk to enable heroic Messiahs to rescue everyone. The most frequently-cited projected cause of the apocalypse: an engineering screw-up. Supposedly, future engineers are going to be so incompetent that they accidentally destroy the whole world. The main idea - as far as I can tell - is that a bug is going to destroy civilisation. Also - as far as I can tell - this isn't the conclusion of analysis performed on previous engineering failures - or on the effects of previous bugs - but rather is wild extrapolation and guesswork. Of course it is true that there may be a disaster, and END OF THE WORLD might arrive. However there is no credible evidence that this is likely to be a probable outcome. Instead, what we have appears to be mostly a bunch of fear mongering used for fundraising aimed at fighting the threat. That gets us into the whole area of the use and effects of fear mongering. Fearmongering is a common means of psychological manipulation, used frequently by advertisers and marketers to produce irrational behaviour in their victims. It has been particularly widely used in the IT industry - mainly in the form of fear, uncertainty and doubt. Evidently, prolonged and widespread use is likely to help to produce a culture of fear. The long-term effects of that are not terribly clear - but it seems to be dubious territory. I would counsel those using fear mongering for fund-raising purposes to be especially cautious of the harm this might do. It seems like a potentially dangerous form of meme warfare. Fear targets circuits in the human brai

Do you also think that global warming is a hoax, that nuclear weapons were never really that dangerous, and that the whole concept of existential risks is basically a self-serving delusion?

Also, why are the folks that you disagree with the only ones that get to be described with all-caps narrative tropes? Aren't you THE LONE SANE MAN who's MAKING A DESPERATE EFFORT to EXPOSE THE TRUTH about FALSE MESSIAHS and the LIES OF CORRUPT LEADERS and SHOW THE WAY to their HORDES OF MINDLESS FOLLOWERS to AN ENLIGHTENED FUTURE? Can't you describe anything with all-caps narrative tropes if you want?

Not rhetorical questions; I'd actually like to read your answers.

2multifoliaterose13y
I laughed aloud upon reading this comment; thanks for lifting my mood.
1timtyler13y
Tim on global warming: http://timtyler.org/end_the_ice_age/ 1-line summary - I am not too worried about that either. Global warming is far more the subject of irrational fear-mongering than machine intelligence is. It's hard to judge how at risk the world was from nuclear weapons during the cold war. I don't have privileged information about that. After Japan, we have not had nuclear weapons used in anger or war. That doesn't give much in the way of actual statistics to go on. Whatever estimate is best, confidence intervals would have to be wide. Perhaps ask an expert on the history of the era this question. The END OF THE WORLD is not necessarily an idea that benefits those who embrace it. If you consider the stereotypical END OF THE WORLD placard carrier, they are probably not benefitting very much personally. The benefit associated with the behaviour accrues mostly to the END OF THE WORLD meme itself. However, obviously, there are some people who benefit. 2012 - and all that. The probability of the END OF THE WORLD soon - if it is spelled out exactly what is meant by that - is a real number which could be scientifically investigated. However whether the usual fundraising and marketing campaigns around the subject illuminate that subject more than they systematically distort it seems debatable.
3Desrtopa13y
This is a pretty optimistic way of looking at it, but unfortunately it's quite unfounded. Current scientific consensus is that we've already released more than enough greenhouse gases to avert the next glacial period. Melting the ice sheets and thus ending the ice age entirely is an extremely bad idea if we do it too quickly for global ecosystems to adapt.
-2timtyler13y
We don't even really understand what causes the glacial cycles yet. This is an area where there are multiple competing hypotheses. I list four of these on my site. So, since we don't have a proper understanding of the mechanics involved with much confidence yet, we don't yet know what it would take to prevent them. Here's what Dyson says on the topic: I do not believe this is contrary to any "scientific consensus" on the topic. Where is this supposed "scientific consensus" of which you speak? Melting the ice caps is inevitably an extremely slow process - due to thermal inertia. It is also widely thought to be a runaway positive feedback cycle - and so probably a phenomenon that it would be difficult to control the rate of.
1Desrtopa13y
Melting of the icecaps is now confirmed to be a runaway positive feedback process pretty much beyond a shadow of a doubt. Within the last few years, melting has occurred at a rate that exceeded the upper limits of our projection margins. Have you performed calculations on what it would take to avert the next glacial period on the basis of any of the competing models, or did you just assume that ice ages are bad, so preventing them is good and we should thus work hard to prevent reglaciation? There's a reason why your site is the first and possibly only result in online searches for support of preventing glaciation, and it's not because you're the only one to think of it.
1timtyler13y
There are others who share my views - e.g.: * http://www.theregister.co.uk/2007/08/14/freeman_dyson_climate_heresies/ * http://www.guardian.co.uk/environment/2002/dec/05/comment.climatechange * http://www.stanford.edu/~moore/Boon_To_Man.html * http://www.telegraph.co.uk/news/uknews/1563054/Global-warming-is-good-and-is-not-our-fault.html
2Desrtopa13y
Why is glacial melting being difficult to control a point in favor of increasing greenhouse gas emissions? It's true that climate change models are limited in their ability to project climate change accurately, although they're getting better all the time. Unfortunately, the evidence currently suggests that they're undershooting actual warming rates even at their upper limits. The pro-warming arguments on your site essentially boil down to "warm earth is better than cold earth, so we should try to warm the earth up." Regardless of the relative merits of a warmer or colder planet though, rapid change of climate is a major burden on ecosystems. Flooding and forest fires are relatively trivial effects; it's mass extinction events that are a real matter of concern.
0timtyler13y
That is hard to parse. You are asking why I think the rate of runaway positive feedback cycles is difficult to control? That is because that is often their nature. You talk as though I am denying warming is happening. HUH? Right. So, if you want a stable climate, you need to end the yo-yo glacial cycles - and end the ice age. A stable climate is one of the benefits of doing that. I have a section entitled "Climate stablity" in my essay. To quote from it: * http://timtyler.org/end_the_ice_age/
2Desrtopa13y
I have no idea how you got that out of my question. It's obvious why runaway positive feedback cycles would be hard to control; the question I asked is why this in any way supports global warming not being dangerous. That was not something I meant to imply. My point is that you seem to have decided that it's better for our earth to be warm than cold, and thus that it's good to approach that state, but not done any investigation into whether what we're doing is a safe means of accomplishing that end; rather you seem to have assumed that we cannot do too much. Most of the species on earth today have survived through multiple glaciation periods. Our ecosystems have that plasticity, because those species that were not able to cope with the rapid cooling periods died out. Global warming could lead to a stable climate, but it's also liable to cause massive extinction in the process as climate zones shift in ways that they haven't in millions of years, at a rate far outside the tolerances of many ecosystems. When it comes to global climate, there are really no "better" or "worse" states. Species adapt to the way things are. Cretaceous organisms are adapted to Cretaceous climates, Cenozoic organisms are adapted to Cenozoic climates, and either would have problems dealing with the other's climate. Humans more often suffer problems from being too cold than too hot, but we've scarcely had time to evolve since we left near-equatorial climates. We're adapted to be comfortable in hotter climates than the ones in which most people live today, but the species we rely on are mostly adapted to deal with the climates they're actually in, with cooling periods lying within the tolerances of ecosystems that have been forced to deal with them recently in their evolutionary history.
0timtyler13y
There most certainly are - from the perspective of individuals, groups, or species.
2Desrtopa13y
From the perspective of species, "better" is generally "maintain ecosystem status quo" and "worse" is everything else, except for cases where they come out ahead due to competitors suffering more heavily from the changes.
1timtyler13y
For most possible changes, a good rule of thumb is on average that half the agents affected do better than average, and half the agents affected do worse than average. Fitness is relative - and that's just what it means to consider an average value. I go into all this in more detail on: http://timtyler.org/why_everything_is_controversial/
0Desrtopa13y
Roughly half of agents may have a better than average response to the change, but when rapid ecosystem changes occur, the average species response is negative. Particularly when accompanied by other forms of ecosystem pressure (which humanity is certainly exerting) rapid changes in climate tend to be accompanied by extinction spikes and decreases in species diversity.
0timtyler13y
I am not sure I am following. You are saying that such changes are bad - because they drive species towards extinction? If you look at: http://alife.co.uk/essays/engineered_future/ ...you will see that I expect the current mass extinction to intensify tremendously. However, I am not clear about how or why that would be bad. Surely it is a near-inevitable result of progress.
0Desrtopa13y
Rapid change drives species to extinction at a rate liable to endanger the function of ecosystems we rely on. Massive extinction events are in no way an inevitable consequence of improving the livelihoods of humans, although I'm not optimistic about our prospects of actually avoiding them. Loss of a large percentage of the species on earth would hurt us, both in practical terms and as a matter of widely shared preference. As a species, we would almost certainly survive anthropogenic climate change even if it caused a runaway mass extinction event, but that doesn't mean that it's not an outcome that would be better to avoid if possible. Frankly, I don't expect legislation or social agitation ever to have an adequate impact in halting anthropogenic global warming; unless we come up with some really clever hack, the battle is going to be lost, but that doesn't mean that we shouldn't be aware of what we stand to lose, and take notice if any viable means of avoiding it arises.
1timtyler13y
The argument suggesting that we should move away from the "cliff edge" of reglaciation is that it is dangerous hanging around there - and we really don't want to fall off. You seem to be saying that we should be cautious about moving too fast - in case we break something. Very well, I agree entirely - so: let us study the whole issue while moving as rapidly away from the danger zone as we feel is reasonably safe.
4Desrtopa13y
As I already noted, as best indicated by our calculations we have already overshot the goal of preventing the next glaciation period. Moving away from the danger zone at a reasonably safe pace would mean a major reduction in greenhouse gas emissions.
6timtyler13y
We don't know that. The science of this isn't settled. The Milankovitch hypothesis of glaciation is more band-aid than theory. See: http://en.wikipedia.org/wiki/Milankovitch_cycles#Problems CO2 apparently helps - but even that is uncertain. I would want to see a very convincing case that we are far enough from the edge for the risk of reglaciation to be over before advocating hanging around on the reglaciation cliff-edge. Short of eliminating the ice caps, it is difficult to imagine what would be convincing. Those ice caps are potentially major bad news for life on the planet - and some industrial CO2 is little reassurance - since that could relatively quickly become trapped inside plants and then buried.
4Desrtopa13y
The global ice caps have been around for millions of years now. Life on earth is adapted to climates that sustain them. They do not constitute "major bad news for life on this planet." Reglaciation would pose problems for human civilization, but the onset of glaciation occurs at a much slower rate than the warming we're already subjecting the planet to, and as such even if raising CO2 levels above what they've been since before the glaciations began in the Pleistocene were not enough to prevent the next round, it would still be a less pressing issue. On a geological time scale, the amount of CO2 we've released could quickly be trapped in plants and buried, but with the state of human civilization as it is, how do you suppose that would actually happen quickly enough to be meaningful for the purposes of this discussion?
1timtyler13y
The ice age is a pretty major problem for the planet. Huge ice sheets obliterate most life on the northern hemisphere continents every 100 thousand years or so. Re: reglaciation being slow - the last reglaciation looked slower than the last melt. The one before that happened at about the same speed. However, they both look like runaway positive feedback processes. Once the process has started it may not be easy to stop it. Thinking of reglaciation as "not pressing" seems like a quick way to get reglaciated. Humans have got to intervene in the planet's climate and warm it up in order to avoid this disaster. Leaving the climate alone would be a recipe for reglaciation. Pumping CO2 into the atmosphere may have saved us from disaster already, may save us from disaster in the future, may merely be a step in the right direction - or may be pretty ineffectual. However, it is important to realise that humans have got to take steps to warm the planet up - otherwise our whole civilisation may be quickly screwed. We don't know that industrial CO2 will protect us from reglaciation - since we don't yet fully understand the latter process - though we do know that it devastates the planet like clockwork, and so has an astronomical origin. The atmosphere has a CO2 decay function with an estimated half-life time of somewhere between 20 and 100 years. It wouldn't vanish overnight - but a lot of it could go pretty quickly if civilisation problems resulted in a cessation of production.
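Tim's half-life claim can be sketched numerically. As a toy model (a single exponential with the comment's assumed 20-100 year half-life range; real carbon-cycle models track several reservoirs and decay far less cleanly), the airborne fraction of a CO2 pulse after t years is 0.5^(t / half-life):

```python
# Toy exponential-decay sketch of atmospheric CO2, using the comment's
# assumed half-life range of 20-100 years. This is illustrative only:
# real carbon-cycle models do not follow a single exponential.

def remaining_fraction(years, half_life):
    """Fraction of an initial CO2 pulse still airborne after `years`."""
    return 0.5 ** (years / half_life)

# A century out, the two ends of the assumed range diverge sharply:
fast = remaining_fraction(100, 20)   # 0.5^5, about 3% remaining
slow = remaining_fraction(100, 100)  # 0.5^1, 50% remaining
```

Even within the comment's own assumed range, the two ends give very different pictures a century out - about 3% remaining versus 50% - which is why the half-life uncertainty matters to the argument.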
2NancyLebovitz13y
If reglaciation starts, could it be stopped by sprinkling coal dust on some of the ice?
0timtyler13y
Hopefully - if we have enough of a civilisation at the time. Reglaciation seems likely to only really be a threat after a major disaster or setback - I figure. Otherwise, we can just adjust the climate controls. The chances of such a major setback may seem slender - but perhaps are not so small that we can afford to be blasé about the matter. What we don't want is to fall down the stairs - and then be kicked in the teeth. I discuss possible therapeutic interventions on: http://timtyler.org/tundra_reclamation/ The main ones listed are planting northerly trees and black ground sheets.
0[anonymous]13y
We don't know great many things, but what to do right now, we must decide right now, based on whatever we happen to know. (To address the reason for Desrtopa's comment, if not any problem with your comment on this topic I'm completely ignorant about.)
0timtyler13y
If you are concerned about loss of potentially valuable information in the form of species extinction, global warming seems like total fluff. Look instead to habitat destruction and decimation, farming practices, and the redistribution of pathogens, predators and competitors by humans.
0Desrtopa13y
I do look at all these issues. I've spoken at conferences about how they receive too little attention relative to the danger they pose. That doesn't mean that global warming does not stand to cause major harm, and going on the basis of the content of your site, you don't seem to have invested adequate effort into researching the potential dangers, only the potential benefits.
-6timtyler13y
-1timtyler13y
Global warming seems a lot less dangerous than reglaciation. Actually, I expect us to master climate control fairly quickly. That is another reason why global warming is a storm in a teacup. However, the future is uncertain. We might get unlucky - and be hit by a fair-sized meteorite. If that happens, reglaciation is about the last thing we would want for dessert.
3nshepperd13y
"Fairly quickly"? What if we don't? Do you expect reglaciation to occur within the next 100 years, 200 years? If not we can wait until we have the knowledge to pull off climate control safely. (And if we do get hit by an asteroid, the last thing we probably want is runaway climate change started when we didn't know what we were doing either.)
-3timtyler13y
If things go according to plan, we get climate control - and then need to worry little about either warming or reglaciation. The problem is things not going according to plan. Indeed. The "runaway climate change" we are scheduled for is reglaciation. The history of the planet is very clear on this topic. That is exactly what we don't want. A disaster followed by glaciers descending over the northern continents could make a mess of civilisation for quite a while. Warming, by contrast, doesn't represent a significant threat - living systems including humans thrive in warm conditions.
4Desrtopa13y
Living systems including humans also thrive in cold conditions. Most species on the planet today have persisted through multiple glaciation periods, but not through pre-Pleistocene level warmth or rapid warming events. Plus, the history of the Pleistocene, in which our record of glaciation exists, contains no events of greenhouse gas release and warming comparable to the one we're in now; this is not business as usual on the track to reglaciation. Claiming that the history of the planet is very clear that we're headed for reglaciation is flat out misleading. Last time the world had CO2 levels as high as they are now, it wasn't going through cyclical glaciation.
-2timtyler13y
Most species on the planet are less than 2.5 million years old?!? I checked and found: "The fossil record suggests an average species lifespan of about five million years" and "Average species lifespan in fossil record: 4 million years." (search for sources). So, I figure your claim is probably factually incorrect. However, isn't it a rather meaningless statistic anyway? It depends on how often lineages speciate. That actually says very little about how long it takes to adapt to an environment.
2Desrtopa13y
The average species age is necessarily lower than the average species duration. Additionally, the fossil record measures species in paleontological terms, a paleontological "species" is not a species in biological terms, but a group which cannot be distinguished from each other by fossilized remains. Paleontological species duration sets the upper bound on biological species duration; in practice, biological species duration is shorter. Species originating more than 2.5 million years ago which were not capable of enduring glaciation periods would have died out when they occurred. The origin window for species without adaptations to cope is the last ten thousand years. Any species with a Pleistocene origin or earlier has persisted through glaciation periods.
0Vaniver13y
Allow me to try: There are positive feedback cycles which appear to be going in runaway mode. Why is this evidence for "things are going to get better" rather than "things are going to get worse"? Your argument as a whole - "we need to get above this variability regime into a stable regime" - answers why the runaway positive feedback loop would be desirable, but does not convincingly establish (the part I've read, at least, you may do this elsewhere) that the part above the current variability is actually a stable attractor, instead of us shooting up to Venus's climate (or something less extreme but still regrettable for humans).
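Vaniver's distinction - a runaway positive feedback that settles into a higher stable attractor versus one that just keeps going - can be illustrated with a toy iteration (the two maps below are made up for illustration and have nothing to do with actual climate dynamics):

```python
# Toy illustration of the attractor question: positive feedback away
# from one equilibrium is compatible with either landing in a higher
# stable state or running away entirely. The dynamics are invented.

def iterate(f, x0, steps=100):
    """Apply the map f repeatedly, starting from x0."""
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

# Map with two fixed points: x=0 (unstable) and x=1 (stable attractor).
bounded = lambda x: x + 0.5 * x * (1 - x)

# Pure positive feedback with no upper fixed point: runs away.
runaway = lambda x: 1.5 * x

settled = iterate(bounded, 0.01)   # converges toward the attractor at 1.0
blown_up = iterate(runaway, 0.01)  # grows without bound
```

Both systems show the same early exponential growth away from the lower equilibrium; only the global shape of the map decides whether that growth is evidence of an attractor above or of a runaway - which is exactly the gap the comment points at.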
0timtyler13y
Well, we already know what the planet is like when it is not locked into a crippling ice age. Ice-cap free is how the planet has spent the vast majority of its history. We have abundant records about that already.
0timtyler13y
That's the whole "ice age: bad / normal planet: good" notion. I figure a planet locked into a crippling era of catastrophic glacial cycles is undesirable.
1Vladimir_Nesov13y
So the real problem here is weakness of arguments, since they lack explanatory power by being able to "explain" too much.
7Roko13y
Point of fact: the negative singularity isn't a superstimulus for evolved fear circuits: current best-guess would be that it would be a quick painless death in the distant future (30 years+ by most estimates, my guess 50 years+ if ever). It doesn't at all look like how I would design a superstimulus for fear.
4timtyler13y
It typically has the feature that you, all your relatives, friends and loved-ones die - probably enough for most people to seriously want to avoid it. Michael Vassar talks about "eliminating everything that we value in the universe". Maybe better super-stimuli could be designed - but there are constraints. Those involved can't just make up the apocalypse that they think would be the most scary one. Despite that, some positively hell-like scenarios have been floated around recently. We will have to see if natural selection on these "hell" memes results in them becoming more prominent - or whether most people just find them too ridiculous to take seriously.
3wedrifid13y
Yes, you can only look at them through a camera lens, as a reflection in a pool or possibly through a ghost! ;)
2Roko13y
I think you're trying to fit the facts to the hypothesis. Negative singularity in my opinion is at least 50 years away. Many people I know will already be dead by then, including me if I die at the same point in life as the average of my family. And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church which is using hell as a superstimulus, or even compared to campaigns to help puppies (about $10bn in total as far as I can see). It is also not well-optimized to be believable.
4XiXiDu13y
It doesn't work. Jehovah's Witnesses don't even believe in a hell, and they are gaining a lot of members each year and donations are on the rise. Donations are not even mandatory; you are just asked to donate if possible. The only incentive they use is positive incentive. People will do anything for their country if it asks them to give their life. Suicide bombers also do not blow themselves up because of negative incentive, but because they are promised help and money for their families. Also, some believe that they will enter paradise. Negative incentive makes many people reluctant. There is much less crime in the EU than in the U.S., which has the death penalty. Here you get out of jail after max. ~20 years, and there's almost no violence in jails either.
0wedrifid13y
I take it that you would place (t(positive singularity) | positive singularity) a significant distance further still? This got a wry smile out of me. :)
0Roko13y
(t(positive singularity) | positive singularity) I'm going to say 75 years for that. But really, this is becoming very much total guesswork. I do know that AGI -ve singularity won't happen in the next 2 decades and I think one can bet that it won't happen after that for another few decades either.
0wedrifid13y
It's still interesting to hear your thoughts. My hunch is that the difficulty of the -ve --> +ve step is much harder than the 'singularity' step so I would expect the time estimates to reflect that somewhat. But there are all sorts of complications there and my guesswork is even more guess-like than yours! If you find anyone who is willing to take you up on a bet of that form given any time estimate and any odds then please introduce them to me! ;)
0Roko13y
Many plausible ways to S^+ involve something odd or unexpected happening. WBE might make computational political structures, i.e. political structures based inside a computer full of WBEs. This might change the way humans cooperate. Suffices to say that FAI doesn't have to come via the expected route of someone inventing AGI and then waiting until they invent "friendliness theory" for it.
-1timtyler13y
Church and cute puppies are likely worse causes, yes. I listed animal charities in my "Bad causes" video. I don't have their budget at my fingertips - but SIAI has raked in around 200,000 dollars a year for the last few years. Not enormous - but not trivial. Anyway, my concern is not really with the cash, but with the memes. This is a field adjacent to one I am interested in: machine intelligence. I am sure there will be a festival of fear-mongering marketing in this area as time passes, with each organisation trying to convince consumers that its products will be safer than those of its rivals. "3-laws-safe" slogans will be printed. I note that Google's recent chrome ad was full of data destruction images - and ended with the slogan "be safe". Some of this is potentially good. However, some of it isn't - and is more reminiscent of the Daisy ad.

To me, $200,000 for a charity seems to be pretty much the smallest possible amount of money. Can you find any charitable causes that receive less than this?

Basically, you are saying that SIAI DOOM fearmongering is a trick to make money. But really, it fails to satisfy several important criteria:

  • it is shit at actually making money. I bet you that there are "save the earthworm" charities that make more money.

  • it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.

  • it is not optimized for believability. In fact it is almost optimized for anti-believability, "rapture of the nerds", much public ridicule, etc.

6Roko13y
A moment's googling finds this: http://www.buglife.org.uk/Resources/Buglife/Buglife%20Annual%20Report%20-%20web.pdf ($863 444) I leave it to readers to judge whether Tim is flogging a dead horse here.
5wedrifid13y
Not the sort of thing that could, you know, give you nightmares?
5Roko13y
The sort of thing that could give you nightmares is more like the stuff that is banned. This is different than the mere "existential risk" message.
-6timtyler13y
2NancyLebovitz13y
I think your disapproval of animal charities is based on circular logic, or at least an unproven premise. You seem to be saying that animal causes are unworthy recipients of human effort because animals aren't humans. However, people care about animals because of the emotional effects of animals. They care about people because of the emotional effects of people. I don't think it's proven that people only like animals because the animals are super-stimuli. I could be mistaken, but I think that a more abstract utilitarian approach grounds out in some sort of increased enjoyment of life, or else it's an effort to assume a universe-eye's view of what's ultimately valuable. I'm inclined to trust the former more. What's your line of argument for supporting charities that help people?
1timtyler13y
I usually value humans much more than I value animals. Given a choice between saving a human or N non-human animals, N would normally have to be very large before I would even think twice about it. Similar values are enshrined in law in most countries.
1wedrifid13y
To the extent that the law accurately represents the values of the people it governs, charities are not necessary. Values enshrined in law are by necessity irrelevant. (Noting by way of pre-emption that I do not require that laws should fully represent the values of the people.)
0timtyler13y
I do not agree. If the law says that killing a human is much worse than killing a dog, that is probably a reflection of the views of citizens on the topic.
0wedrifid13y
And yet this is not contrary to my point. Charity operates, only needs to operate, on areas that laws do not already create a solution for. If there was a law specifying that dying kids get trips to Disneyland and visits by popstars then there wouldn't be a "Make A Wish Foundation".
0timtyler13y
You said the law was "irrelevant" - but there's a sense in which we can see consensus human values about animals by looking at what the law dictates as punishment for their maltreatment. That is what I was talking about. It seems to me that the law has something to say about the issue of the value of animals relative to humans. For the most part, animals are given relatively few rights under the law. There are exceptions for some rare ones. Animals are routinely massacred in huge numbers by humans - including some smart mammals like pigs and dolphins. That is a broad reflection how relatively-valuable humans are considered to be.
0shokwave13y
And once it's enshrined in law, it no longer matters whether citizens think killing a human is worse or better than killing a dog. I think that is what wedrifid was noting.
0multifoliaterose13y
You may be interested in Alan Dawrst's essays on animal suffering and animal suffering prevention.
1Airedale13y
I believe the numbers are actually higher than $200,000. SIAI's 2008 budget was about $500,000. 2006 was about $400,000 and 2007 was about $300,000 (as listed further in the linked thread). I haven't researched to see if gross revenue numbers or revenue from donations are available. Curiously, Guidestar does not seem to have 2009 numbers for SIAI, or at least I couldn't find those numbers; I just e-mailed a couple people at SIAI asking about that. That being said, even $500,000, while not trivial, seems to me a pretty small budget.
0timtyler13y
Sorry, yes, my bad. $200,000 is what they spent on their own salaries.
9steven046113y
I wonder what fraction of actual historical events a hostile observer taking similar liberties could summarize to also sound like some variety of "a fantasy story designed to manipulate".
-1timtyler13y
I don't know - but believing inaction is best is rather common - and there are pages all about it - e.g.: http://en.wikipedia.org/wiki/Learned_helplessness

Being in a similar position (also as far as aversion to moving to e.g. the US is concerned), I decided to work part time (roughly 1/5 of the time or even less) in the software industry and spend the remainder of the day studying relevant literature, leveling up etc. for working on the FAI problem. Since I'm not quite out of the university system yet, I'm also trying to build some connections with our AI lab staff and a few other interested people in academia, but with no intention to actually join their show. It would eat away almost all my time, so I could wo...

Kaj, why don't you add the option of getting rich in your 20s by working in finance, then paying your way into research groups in your late 30s? PalmPilot guy, uh Jeff Hawkins essentially did this. Except he was an entrepreneur.

3Kaj_Sotala13y
That doesn't sound very easy.
7wedrifid13y
Sounds a heck of a lot easier than doing an equivalent amount of status grabbing within academic circles over the same time. Money is a lot easier to game and status easier to buy.

There is the minor detail that it really helps not to hate each and every individual second of your working life in the process. A goal will only pull you along to a certain degree.

(Computer types know all the money is in the City. I did six months of it. I found the people I worked with and the people whose benefit I worked for to be excellent arguments for an unnecessarily bloody socialist revolution.)

2wedrifid13y
For many people that is about half way between the Masters and PhD degrees. ;) If only being in a university was a guarantee of an enjoyable working experience.
1Roko13y
Curious, why did it bother you that you disliked the people you worked with? Couldn't you just be polite to them and take part in their jokes/socialgames/whatever? They're paying you handsomely to be there, after all? Or was it a case of them being mean to you?
2David_Gerard13y
No, just loathsome. And the end product of what I did and finding the people I was doing it for loathsome.
3Roko13y
I dunno, "loathsome" sounds a bit theoretical to me. Can you be specific?
5CronoDAS13y
One of my brother's co-workers at Goldman Sachs has actively tried to sabotage his work. (Goldman Sachs runs on a highly competitive "up or out" system; you either get promoted or fired, and most people don't get promoted. If my brother lost his job, his coworker would be more likely to keep his.)
3Roko13y
I don't understand: he tried to sabotage his cowerker's work, or his own?
8sfb13y
CronoDAS's Brother's Co-worker tried to sabotage CronoDAS's Brother's work.
0TheOtherDave13y
"Hamlet, in love with the old man's daughter, the old man thinks."
2David_Gerard13y
Not without getting political. Fundamentally, I didn't feel good about what I was doing. And I was just a Unix sysadmin. This was just a job to live, not a job taken on in the furtherance of a larger goal.
3Roko13y
Agreed. Average Prof is a nobody at 40, average financier is a millionaire. shrugs
0Hul-Gil12y
The average financier is a millionaire at 40?! What job is this, exactly?
2sark13y
Thank you for this. This was a profound revelation for me.
6Manfred13y
Upvoted for comedy.
4Roko13y
Also, you can get a PhD in a relevant mathy discipline first, thereby satisfying the condition of having done research. And the process of dealing with the real world enough to make money will hopefully leave you with better anti-akrasia tactics, better ability to achieve real-world goals, etc. You might even be able to hire others.
2Roko13y
I don't think you need to be excessively rich. $1-4M ought to be enough. Edit: oh, I forgot, you live in Scandinavia, with a taxation system so "progressive" that it has an essential singularity at $100k. Might have to move to the US.
1Kaj_Sotala13y
I'm afraid that's not really an option for me, due to various emotional and social issues. I already got horribly homesick during just a four month visit.
3Vaniver13y
Alaska might be a reasonable Finland substitute, weather-wise, but the other issues will be difficult to resolve (if you're moving to the US to make a bunch of money, Alaska is not the best place to do it). One of my favorite professors was a Brazilian who went to graduate school at the University of Rochester. Horrified (I used to visit my ex in upstate New York, and so was familiar with the horrible winters that take up 8 months of the year without the compensations that convince people to live in Scandinavia), I asked him how he liked the transition - and he said that he loved it, and it was the best time of his life. I clarified that I was asking about the weather, and he shrugged and said that in academia, you absolutely need to put the ideas first. If the best place for your research is Antarctica, that's where you go. The reason why I tell this story is that this is what successful professors look like, and only one tenth of the people who go to graduate school end up as professors. If you would be outcompeted by this guy instead of this guy, keep that in mind when deciding you want to enter academia. And if you want to do research outside of academia, doing it well requires more effort than research done inside academia.
1Kaj_Sotala13y
It's not the weather: I'd actually prefer a warmer climate than Finland has. It's living in a foreign culture and losing all of my existing social networks. I don't have a problem with putting in a lot of work, but to be able to put in a lot of work, my life needs to be generally pleasant otherwise, and the work needs to be at least somewhat meaningful. I've tried the "just grit your teeth and toil" mentality, and it doesn't work - maybe for someone else it does, but not for me.
5Vaniver13y
The first part is the part I'm calling into question, not the second. Of course you need to be electrified by your work. It's hard to do great things when you're toiling instead of playing. But your standards for general pleasantness are, as far as I can tell, the sieve for a lot of research fields. As an example, it is actually harder to be happy on a grad student/postdoc salary; instead of it being shallow to consider that a challenge, it's shallow-mindedness to not recognize that that is a challenge. It is actually harder to find a mate and start a family while an itinerant academic looking for tenure. (Other examples abound; two should be enough for this comment.) If you're having trouble leaving your network of friends to go to grad school / someplace you can get paid more, then it seems likely that you will have trouble with the standard academic life or standard corporate life. While there are alternatives, those tend not to play well with doing research, since the alternative tends to take the same kind of effort that you would have put into research. I should comment that I think a normal day job plus research on the side can work out but should be treated like writing a novel on the side- essentially, the way creative literary types play the lottery.
1diegocaleiro13y
It's living in a foreign culture and losing all of my existing social networks.

Of course it is! I am in the same situation. Just finished undergrad in philosophy. But here life is completely optimized for happiness:

1) No errands
2) Friends filtered through 15 years for intelligence, fun, beauty, awesomeness.
3) Love, commitment, passion, and just plain sex with the one, and the others.
4) Deep knowledge of the free culture available
5) Ranking high in the city (São Paulo's) social youth hierarchy
6) Cheap services
7) Family and acquaintances network.
8) Freedom timewise to write my books
9) Going to the park 10 min walking
10) Having been to, and having friends who were in the US, and knowing for a fact that life just is worse there....

This is how much fun I have; the list's impact is the only reason I'm considering not going to study, get FAI faster, get anti-ageing faster. If only life were just a little worse... I would be in a plane towards posthumanity right now. So how good does a life have to be for you to be forgiven for not working for what really matters? Help me folks!
1Roko13y
Well, you wanna make an omelet, you gotta break some eggs!

Conditioning on yourself deeming it optimal to make a metaphorical omelet by breaking metaphorical eggs, metaphorical eggs will deem it less optimal to remain vulnerable to metaphorical breakage by you than if you did not deem it optimal to make a metaphorical omelet by breaking metaphorical eggs; therefore, deeming it optimal to break metaphorical eggs in order to make a metaphorical omelet can increase the difficulty you find in obtaining omelet-level utility.

5JGWeissman13y
Many metaphorical eggs are not [metaphorical egg]::Utility maximizing agents.
2Clippy13y
True, and to the extent that is not the case, the mechanism I specified would not activate.
1Strange713y
Redefining one's own utility function so as to make it easier to achieve is the road that leads to wireheading.
4Clippy13y
Correct. However, the method I proposed does not involve redefining one's utility function, as it leaves terminal values unchanged. It simply recognizes that certain methods of achieving one's pre-existing terminal values are better than others, which leaves the utility function unaffected (it only alters instrumental values). The method I proposed is similar to pre-commitment for a causal decision theorist on a Newcomb-like problem. For such an agent, "locking out" future decisions can improve expected utility without altering terminal values. Likewise, a decision theory that fully absorbs such outcome-improving "lockouts" so that it outputs the same actions without explicit pre-commitment can increase its expected utility for the same utility function.
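Clippy's pre-commitment analogy can be made concrete with a toy expected-value calculation on the standard Newcomb setup (the payoff amounts and the 0.99 predictor accuracy below are illustrative assumptions, not part of the original comment):

```python
# Toy expected-value sketch of the pre-commitment point on a
# Newcomb-like problem. Payoffs and predictor accuracy are assumed
# purely for illustration.

BOX_A = 1_000          # transparent box, always contains $1,000
BOX_B = 1_000_000      # opaque box, filled iff one-boxing was predicted

def expected_value(one_box, predictor_accuracy=0.99):
    """Expected payoff given the agent's (possibly locked-in) choice."""
    p_filled = predictor_accuracy if one_box else 1 - predictor_accuracy
    if one_box:
        return p_filled * BOX_B
    return BOX_A + p_filled * BOX_B

committed = expected_value(one_box=True)    # roughly $990,000 in expectation
flexible = expected_value(one_box=False)    # roughly $11,000 in expectation
```

Under these assumptions the locked-in one-boxer expects roughly ninety times the payoff of the agent who keeps the two-boxing option open - with no change to what the agent terminally values, which is the point of the comment: the "lockout" alters instrumental choices, not the utility function.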
1Larks13y
Do you have any advice for getting into Quant work? (I'm a second year maths student at Oxford, don't know much about the city).
8[anonymous]13y
An advice sheet for mathematicians considering becoming quants. It's not a path that interests me, but if it was I think I'd find this useful.
0katydee13y
Are there any good ways of getting rich that don't involve selling your soul?
8RHollerith13y
Please rephrase without using "selling your soul".

Are there any good ways of getting rich that don't involve a Faustian exchange with Lucifer himself?

3Alicorn13y
Pfft. No good ways.
3katydee13y
Without corrupting my value system, I suppose? I'm interested in getting money for reasons other than my own benefit. I am not fully confident in my ability to enter a field like finance without either that changing or me getting burned out by those around me.
5gwern13y
As well ask if there are hundred-dollar bills lying on sidewalks. EDIT: 2 days after I wrote this, I was walking down the main staircase in the library and lying on the central landing, highly contrasted against the floor, in completely clear view of 4 or 5 people who walked past it, was a dollar bill. I paused for a moment reflecting on the irony that sometimes there are free lunches - and picked it up.

This thread raises the question of how many biologists and medical researchers are on here. Due to our specific cluster I expect a strong leaning towards IT people. So AI research gets disproportionate recognition, while medical research, including direct life extension, falls by the wayside.

Speaking as someone who is in grad school now, even with prior research, the formal track of grad school is very helpful. I am doing research that I'm interested in. I don't know if I'm a representative sample in that regard. It may be that people have more flexibility in math than in other areas. Certainly my anecdotal impression is that people in some areas such as biology don't have this degree of freedom. I'm also learning more about how to research and how to present my results. Those seem to be the largest advantages. Incidentally, my impression is that for grad school, at least in many areas, taking a semester or two off if very stressed isn't treated that badly if one is otherwise doing productive research.

0Matt_Simpson13y
I'm in grad school in statistics and am in the same boat. It doesn't seem that difficult to do research on something you're interested in while still in grad school. In a nutshell, choose your major professor wisely. (And make sure the department is large enough that there are plenty of options)

The above deleted comment referenced some details of the banned post. With those details removed, it said:

(Note, this comment reacts to this thread generally, and other discussion of the banning)

The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared.

I realize that you are describing how people generally react to this sort of thing, but this knee-jerk stupid reaction is one of the misapplied heuristics we ought to be able to notice and overcome.

So far, one p

... (read more)
4David_Gerard13y
Strange LessWrong software fact: this showed up in my reply stream as a comment consisting only of a dot ("."), though it appears to be a reply to a reply to me.
0JGWeissman13y
It also shows up on my user page as a dot. Before I edited it to be just a dot, it showed up in your comment stream and my user page with the original complete content.

There is a big mismatch here between "sending an email to a blogger" and "increase existential risk by one in a million". All of the strategies for achieving existential risk increases that large are either major felonies, or require abusing a political office as leverage. When you first made the threat, I got angry at you on the assumption that you realized this. But if all you're threatening to do is send emails, well, I guess that's your right.

LW is a place where you'll get useful help on weeding out mistakes in your plan to blow up the world, it looks like.

For Epistemic Rationality!

That reminds me of the joke about the engineer in the French revolution.

1waitingforgodel13y
Are you joking? Do you have any idea what a retarded law can do to existential risks?
7jimrandomh13y
P(law will pass|it is retarded && its sole advocate publicly described it as retarded) << 10^-6
-10waitingforgodel13y

(I would have liked to reply to the deleted comment, but you can't reply to deleted comments so I'll reply to the repost.)

  • EDIT: Roko reveals that he was actually never asked to delete his comment! Disregard parts of the rest of this comment accordingly.

I don't think Roko should have been requested to delete his comment. I don't think Roko should have conceded to deleting his comment.

The correct reaction when someone posts something scandalous like

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for th

... (read more)

Roko may have been thinking of [just called him, he was thinking of it] a conversation we had when he and I were roommates in Oxford while I was visiting the Future of Humanity Institute, and frequently discussed philosophical problems and thought experiments. Here's the (redeeming?) context:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as t... (read more)

3waitingforgodel13y
First off, great comment -- interesting, and complex. But, some things still don't make sense to me... Assuming that what you described led to: 1. How did precommitting enter into it? 2. Are you prepared to be tortured or raped for the cause? Have you precommitted to it? 3. Have other SIAI people you know of talked about this with you, have other SIAI people precommitted to it? 4. What do you think of others who do not want to be tortured or raped for the cause? Thanks, wfg

I find this whole line of conversation fairly ludicrous, but here goes:

Number 1. Time-inconsistency: we have different reactions about an immediate certainty of some bad than a future probability of it. So many people might be willing to go be a health worker in a poor country where aid workers are commonly (1 in 10,000) raped or killed, even though they would not be willing to be certainly attacked in exchange for 10,000 times the benefits to others. In the actual instant of being tortured anyone would break, but people do choose courses of action that carry risk (every action does, to some extent), so the latter is more meaningful for such hypotheticals.

Number 2. I have driven and flown thousands of kilometers in relation to existential risk, increasing my chance of untimely death in a car accident or plane crash, so obviously I am willing to take some increased probability of death. I think I would prefer a given chance of being tortured to a given chance of death, so obviously I care enough to take at least some tiny risk from what I said above. As I also said above, I'm not willing to make very big sacrifices (big probabilities of such nasty personal outcomes) for tiny shifts ... (read more)

8waitingforgodel13y
This sounds very sane, and makes me feel a lot better about the context. Thank you very much. I very much like the idea that top SIAI people believe that there is such a thing as too much devotion to the cause (and, I'm assuming, actively talk people who are above that level down as you describe doing for Roko). As someone who has demonstrated impressive sanity around these topics, you seem to be in a unique position to answer these questions with an above-average level-headedness: 1. Do you understand the math behind the Roko post deletion? 2. What do you think about the Roko post deletion? 3. What do you think about future deletions?

Do you understand the math behind the Roko post deletion?

Yes, his post was based on (garbled versions of) some work I had been doing at FHI, which I had talked about with him while trying to figure out some knotty sub-problems.

What do you think about the Roko post deletion?

I think the intent behind it was benign, at least in that Eliezer had his views about the issue (which is more general, and not about screwed-up FAI attempts) previously, and that he was motivated to prevent harm to people hearing the idea and others generally. Indeed, he was explicitly motivated enough to take a PR hit for SIAI.

Regarding the substance, I think there are some pretty good reasons for thinking that the expected value (with a small probability of a high impact) of the info for the overwhelming majority of people exposed to it would be negative, although that estimate is unstable in the face of new info.

It's obvious that the deletion caused more freak-out and uncertainty than anticipated, leading to a net increase in people reading and thinking about the content compared to the counterfactual with no deletion. So regardless of the substance about the info, clearly it was a mistake to delete (w... (read more)

Less Wrong has been around for 20 months. If we can rigorously carve out the stalker/PIN/illegality/spam/threats cases I would be happy to bet $500 against $50 that we won't see another topic banned over the next 20 months.

That sounds like it'd generate some perverse incentives to me.

6TheOtherDave13y
Just to be clear: he recognizes this by comparison with the alternative of privately having the poster delete it themselves, rather than by comparison to not-deleting. Or at least that was my understanding. Regardless, thanks for a breath of clarity in this thread. As a mostly disinterested newcomer, I very much appreciated it.
2CarlShulman13y
Well, if counterfactually Roko hadn't wanted to take it down I think it would have been even more of a mistake to delete it, because then the author would have been peeved, not just the audience/commenters.
6TheOtherDave13y
Which is fine. But Eliezer's comments on the subject suggest to me that he doesn't think that. More specifically, they suggest that he thinks the most important thing is that the post not be viewable, and if we can achieve that by quietly convincing the author to take it down, great, and if we can achieve it by quietly deleting it without anybody noticing, great, and if we can't do either of those then we achieve it without being quiet, which is less great but still better than leaving it up. And it seemed to me your parenthetical could be taken to mean that he agrees with you that deleting it would be a mistake in all of those cases, so I figured I would clarify (or let myself be corrected, if I'm misunderstanding).
-3waitingforgodel13y
I should have taken this bet
6Eliezer Yudkowsky13y
Your post has been moved to the Discussion section, not deleted.
0[anonymous]13y
Looking at your recent post, I think Alicorn had a good point.
2TimFreeman12y
I agree with your main point, but the thought experiment seems to be based on the false assumption that the risk of being raped or murdered is smaller than 1 in 10K if you stay at home. Wikipedia guesstimates that 1 in 6 women in the US are on the receiving end of attempted rape at some point, so someone who goes to a place with a 1 in 10K chance of being raped or murdered has probably improved their personal safety. To make a better thought experiment, I suppose you have to talk about the marginal increase in rape or murder rate when working in the poor country when compared to staying home, and perhaps you should stick to murder since the rape rate is so high.
0wedrifid13y
You lost me at 'ludicrous'. :)
5waitingforgodel13y
but he won me back by answering anyway <3
0[anonymous]13y
How so?
2Bongo13y
Thanks!
1multifoliaterose13y
Great comment Carl!
8Nick_Tarleton13y
Roko was not requested to delete his comment. See this parallel thread. (I would appreciate it if you would edit your comment to note this, so readers who miss this comment don't have a false belief reinforced.) (ETA: thanks) Agreed (and I think the chance of wfg's reposts being deleted is very low, because most people get this). Unfortunately, I know nothing about the alleged event (Roko may be misdescribing it, as he misdescribed my message to him) or its context.
1Bongo13y
Roko said he was asked. You didn't ask him but maybe someone else did?
4Nick_Tarleton13y
Roko's reply to me strongly suggested that he interpreted my message as requesting deletion, and that I was the cause of him deleting it. I doubt anyone at SIAI would have explicitly requested deletion.
6Roko13y
I can confirm that I was not asked to delete the comment but did so voluntarily.

I think you are too trigger-happy.

-9waitingforgodel13y
0Perplexed13y
I'm wondering whether you, Nick, have learned anything from this experience - something perhaps about how attempting to hide something is almost always counterproductive? Of course, Roko contributed here by deleting the message, you didn't create this mess by yourself. But you sure have helped. :)

Well, look, I deleted it of my own accord, but only after being prompted that it was a bad thing to have posted. Can we just drop this? It makes me look like even more of a troublemaker than I already look like, and all I really want to do is finish the efficient charity competition then get on with life outside teh intenetz.

Will you at least publicly state that you precommit, on behalf of CEV, to not apply negative incentives in this case? (Roko, Jul 24, 2010 1:37 PM)

This is very important. If the SIAI is the organisation to solve the friendly AI problem and implement CEV then it should be subject to public examination, especially if they ask for money.

9David_Gerard13y
The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested. If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead. This is rather disappointing, as if CEV is possible then a non-artificial general intelligence should be able to implement it, at least partially. And we have those. The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans. However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is. Addendum: I see others have been asking "but what do you actually mean?" for a couple of years now.

The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested.

If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead.

This strikes me as a demand for particular proof. SIAI is small (and was much smaller until the last year or two), the set of people engaged in FAI research is smaller, Eliezer has chosen to focus on writing about rationality over research for nearly four years, and FAI is a huge problem, in which any specific subproblem should be expected to be underdeveloped at this early stage. And while I and others expect work to speed up in the near future with Eliezer's attention and better organization, yes, we probably are dead.

The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans.

Somewhat nitpickingly, this is a reason for FAI in general. CEV is attractive mostly for moving as much work from the designers to the FAI as possible, reducing the potential for uncorrectable error, and being fairer than letting... (read more)

2David_Gerard13y
It wasn't intended to be - more incredulity. I thought this was a really important piece of the puzzle, so expected there'd be something at all by now. I appreciate your point: that this is a ridiculously huge problem and SIAI is ridiculously small. I meant that, as I understand it, CEV is what is fed to the seed AI. Or the AI does the work to ascertain the CEV. It requires an intelligence to ascertain the CEV, but I'd think the ascertaining process would be reasonably set out once we had an intelligence on hand, artificial or no. Or the process to get to the ascertaining process. I thought we needed the CEV before the AI goes FOOM, because it's too late after. That implies it doesn't take a superintelligence to work it out. Thus: CEV would have to be a process that mere human-level intelligences could apply. That would be a useful process to have, and doesn't require first creating an AI. I must point out that my statements on the subject are based in curiosity, ignorance and extrapolation from what little I do know, and I'm asking (probably annoyingly) for more to work with.
6Nick_Tarleton13y
"CEV" can (unfortunately) refer to either CEV the process of determining what humans would want if we knew more etc., or the volition of humanity output by running that process. It sounds to me like you're conflating these. The process is part of the seed AI and is needed before it goes FOOM, but the output naturally is neither, and there's no guarantee or demand that the process be capable of being executed by humans.
3David_Gerard13y
OK. I still don't understand it, but I now feel my lack of understanding more clearly. Thank you! (I suppose "what do people really want?" is a large philosophical question, not just undefined but subtle in its lack of definition.)
6Roko13y
I have received assurances that SIAI will go to significant efforts not to do nasty things, and I believe them. Private assurances given sincerely are, in my opinion, the best we can hope for, and better than we are likely to get from any other entity involved in this. Besides, I think that XiXiDu, et al are complaining about the difference between cotton and silk, when what is actually likely to happen is more like a big kick in the teeth from reality. SIAI is imperfect. Yes. Well done. Nothing is perfect. At least cut them a bit of slack.
1timtyler13y
What?!? Open source code - under a permissive license - is the traditional way to signal that you are not going to run off into the sunset with the fruits of a programming effort. Private assurances are usually worth diddly-squat by comparison.
2Roko13y
I think that you don't realize just how bad the situation is. You want that silken sheet. Rude awakening methinks. Also, open-source is not necessarily good for FAI in any case.
5XiXiDu13y
I don't think that you realize how bad it is. I'd rather have the universe being paperclipped than supporting the SIAI if that means that I might be tortured for the rest of infinity!

To the best of my knowledge, SIAI has not planned to do anything, under any circumstances, which would increase the probability of you or anyone else being tortured for the rest of infinity.

Supporting SIAI should not, to the best of my knowledge, increase the probability of you or anyone else being tortured for the rest of infinity.

Thank you.

5XiXiDu13y
But imagine there was a person a level above yours who went to create some safeguards for an AGI. That person would tell you that you can be sure that the safeguards s/he plans to implement will benefit everyone. Are you just going to believe that? Wouldn't you be worried and demand that their project be supervised? You are in a really powerful position because you are working for an organisation that might influence the future of the universe. Is it really weird to be skeptical and ask for reassurance of their objectives?
0[anonymous]13y
Logical rudeness is the error of rejecting an argument for reasons other than disagreement with it. Does your "I don't think so" mean that you in fact believe that SIAI (possibly) plans to increase the probability of you or someone else being tortured for the rest of eternity? If not, what does this statement mean?

I removed that sentence. I meant that I didn't believe that the SIAI plans to harm someone deliberately. Although I believe that harm could be a side-effect and that they would rather harm a few beings than allowing some Paperclip maximizer to take over.

You can call me a hypocrite because I'm in favor of animal experiments to support my own survival. But I'm not sure if I'd like to have someone leading an AI project who thinks like me. Take that sentence to reflect my inner conflict. I see why one would favor torture over dust specks but I don't like such decisions. I'd rather have the universe to end now, or having everyone turned into paperclips, than having to torture beings (especially if I am the being).

I feel uncomfortable that I don't know what will happen because there is a policy of censorship being favored when it comes to certain thought experiments. I believe that even given negative consequences, transparency is the way to go here. If the stakes are this high, people who believe will do anything to get what they want. That Yudkowsky claims that they are working for the benefit of humanity doesn't mean it is true. Surely I'd write that and many articles and papers that make it appear this way, if I wanted to shape the future to my liking.

2Vladimir_Nesov13y
I apologize. I realized my stupidity in interpreting your comment a few seconds after posting the reply (which I then deleted).
-1timtyler13y
Better yet, you could use a kind of doublethink - and then even actually mean it. Here is W. D. Hamilton on that topic: * Discriminating Nepotism - as reprinted in: Narrow Roads of Gene Land, Volume 2 Evolution of Sex, p.356.
-2timtyler13y
In TURING'S CATHEDRAL, George Dyson writes: I think many people would like to be in that group - if they can find a way to arrange it.
2shokwave13y
Unless AI was given that outcome (cheerful, contented people etc) as a terminal goal, or that circle of people was the best possible route to some other terminal goal, both of which are staggeringly unlikely, Dyson suspects wrongly. If you think he suspects rightly, I would really like to see a justification. Keep in mind that AGIs are currently not being built using multi-agent environment evolutionary methods, so any kind of 'social cooperation' mechanism will not arise.
-5timtyler13y
0katydee13y
That doesn't really strike me as a stunning insight, though. I have a feeling that I could find many people who would like to be in almost any group of "cheerful, contented, intellectually and physically well-nourished people."
0sketerpot13y
This all depends on what the AI wants. Without some idea of its utility function, can we really speculate? And if we speculate, we should note those assumptions. People often think of an AI as being essentially human-like in its values, which is problematic.
-2timtyler13y
It's a fair description of today's more successful IT companies. The most obvious extrapolation for the immediate future involves more of the same - but with even greater wealth and power inequalities. However, I would certainly also counsel caution if extrapolating this out more than 20 years or so.
3[anonymous]13y
Currently, there are no entities in physical existence which, to my knowledge, have the ability to torture anyone for the rest of eternity. You intend to build an entity which would have that ability (or if not for infinity, for a googolplex of subjective years). You intend to give it a morality based on the massed wishes of humanity - and I have noticed that other people don't always have my best interests at heart. It is possible - though unlikely - that I might so irritate the rest of humanity that they wish me to be tortured forever. Therefore, you are, by your own statements, raising the risk of my infinite torture from zero to a tiny non-zero probability. It may well be that you are also raising my expected reward enough for that to be more than counterbalanced, but that's not what you're saying - any support for SIAI will, unless I'm completely misunderstanding, raise the probability of infinite torture for some individuals.
7Eliezer Yudkowsky13y
See the "Last Judge" section of the CEV paper. As Vladimir observes, the alternative to SIAI doesn't involve nothing new happening.
4[anonymous]13y
That just pushes the problem along a step. IF the Last Judge can't be mistaken about the results of the AI running AND the Last Judge is willing to sacrifice the utility of the mass of humanity (including hirself) to protect one or more people from being tortured, then it's safe. That's very far from saying there's a zero probability.
4ata13y
If the Last Judge peeks at the output and finds that it's going to decide to torture people, that doesn't imply abandoning FAI, it just requires fixing the bug and trying again.
4Vladimir_Nesov13y
Just because AGIs have capability to inflict infinite torture, doesn't mean they have a motive. Also, status quo (with regard to SIAI's activity) doesn't involve nothing new happening.
8[anonymous]13y
I explained that he is planning to supply one with a possible motive (namely that the CEV of humanity might hate me or people like me). It is precisely because of this that the problem arises. A paperclipper, or any other AGI whose utility function had nothing to do with humanity's wishes, would have far less motive to do this - it might kill me, but it really would have no motive to torture me.
-5timtyler13y
7XiXiDu13y
How so? I've just reread some of your comments on your now deleted post. It looks like you honestly tried to get the SIAI to put safeguards into CEV. Given that the idea spread to many people by now, don't you think it would be acceptably to discuss the matter before one or more people take it serious or even consider to implement it deliberately?
0Roko13y
I don't think it is a good idea to discuss it. I think that the costs outweigh the benefits. The costs are very big. Benefits marginal.
5Perplexed13y
Ok by me. It is pretty obvious by this point that there is no evil conspiracy involved here. But I think the lesson remains: if you delete something, even if it is just because you regret posting it, you create more confusion than you remove.
2waitingforgodel13y
I think the question you should be asking is less about evil conspiracies, and more about what kind of organization SIAI is -- what would they tell you about, and what would they lie to you about.
4XiXiDu13y
If the forbidden topic were made public (and people believed it), it would result in a steep rise of donations towards the SIAI. That alone is enough to conclude that the SIAI is not trying to hold back something that would discredit it as an organisation concerned with charitable objectives. The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence. Making the subject matter public already harmed some people and could harm more in the future.
9David_Gerard13y
But the forbidden topic is already public. All the effects that would follow from it being public would already follow. THE HORSE HAS BOLTED. It's entirely unclear to me what pretending it hasn't does for the problem or the credibility of the SIAI.
3XiXiDu13y
It is not as public as you think. If it was then people like waitingforgodel wouldn't ask about it. I'm just trying to figure out how to behave without being able to talk about it directly. It's also really interesting on many levels.

It is not as public as you think.

Rather more public than a long forgotten counterfactual discussion collecting dust in the blog's history books would be. :P

3David_Gerard13y
Precisely. The place to hide a needle is in a large stack of needles. The choice here was between "bad" and "worse" - a trolley problem, a lose-lose hypothetical - and they appear to have chosen "worse".
8wedrifid13y
I prefer to outsource my needle-keeping security to Clippy in exchange for allowing certain 'bending' liberties from time to time. :)
5David_Gerard13y
Upvoted for LOL value. We'll tell Clippy the terrible, no good, very bad idea with reasons as to why this would hamper the production of paperclips. "Hi! I see you've accidentally the whole uFAI! Would you like help turning it into paperclips?"
4wedrifid13y
Brilliant.
0[anonymous]13y
Frankly, Clippy would be better than the Forbidden Idea. At least Clippy just wants paperclips.
3TheOtherDave13y
Of course, if Clippy were clever he would then offer to sell SIAI a commitment to never release the UFAI in exchange for a commitment to produce a fixed number of paperclips per year, in perpetuity. Admittedly, his mastery of human signaling probably isn't nuanced enough to prevent that from sounding like blackmail.
7David_Gerard13y
I really don't see how that follows. Will more of the public take it seriously? As I have noted, so far the reaction from people outside SIAI/LW has been "They did WHAT? Are they IDIOTS?" That doesn't make it not stupid or not counterproductive. Sincere stupidity is not less stupid than insincere stupidity. Indeed, sincere stupidity is more problematic in my experience as the sincere are less likely to back down, whereas the insincere will more quickly hop to a different idea. Citation needed. Citation needed.
6XiXiDu13y
I sent you another PM.
4David_Gerard13y
Hmm, okay. But that, I suggest, appears to have been a case of reasoning oneself stupid. It does, of course, account for SIAI continuing to attempt to secure the stable doors after the horse has been dancing around in a field for several months taunting them with "COME ON IF YOU THINK YOU'RE HARD ENOUGH." (I upvoted XiXiDu's comment here because he did actually supply a substantive response in PM, well deserving of a vote, and I felt this should be encouraged by reward.)
4waitingforgodel13y
I wish I could upvote twice
0[anonymous]13y
A kind of meta-question: is there any evidence suggesting that one of the following explanations of the recent deletion is better than another? * That an LW moderator deleted Roko's comment. * That Roko was asked to delete it, and complied. * That Roko deleted it himself, without prompting.
-8waitingforgodel13y

Depending on what you're planning to research, lack of access to university facilities could also be a major obstacle. If you have a reputation for credible research, you might be able to collaborate with people within the university system, but I suspect that making the original break in would be pretty difficult.

For those curious: we do agree, but he went to quite a bit more effort in showing that than I did (and is similarly more convincing).

No, the rationale for deletion was not based on the possibility that his exact, FAI-based scenario could actually happen.

4wedrifid13y
What was the grandparent?
5ata13y
Hm? Did my comment get deleted? I still see it.
3komponisto13y
I noticed you removed the content of the comment from the record on your user page. I would have preferred you not do this; those who are sufficiently curious and know about the trick of viewing the user page ought to have this option.
2Vladimir_Nesov13y
Only if you disagree with correctness of moderator's decision.
1komponisto13y
Disagreement may be only partial. One could agree to the extent of thinking that viewing of the comment ought to be restricted to a more narrowly-filtered subset of readers.
0Vladimir_Nesov13y
Yes, this is a possible option, depending on the scope of the moderator's decision. Banning comments from a discussion, even if they are backed up and publicly available elsewhere, is still an effective tool in shaping the conversation.
1ata13y
There was nothing particularly important or interesting in it, just a question I had been mildly curious about. I didn't think there was anything dangerous about it either, but, as I said elsewhere, I'm willing to take Eliezer's word for it if he thinks it is, so I blanked it. Let it go.
9komponisto13y
I know why you did it. My intention is to register disagreement with your decision. I claim it would have sufficed to simply let Eliezer delete the comment, without you yourself taking additional action to further delete it, as it were. I could do without this condescending phrase, which unnecessarily (and unjustifiably) imputes agitation to me.
3ata13y
Sorry, you're right. I didn't mean to imply condescension or agitation toward you; it was written in a state of mild frustration, but definitely not at or about your post in particular.
2wedrifid13y
Weird. I see: How does Eliezer's delete option work exactly? It stays visible to the author? Now I'm curious.
0ata13y
Yes, I've been told that it was deleted but that I still see it since I'm logged in. In that case I won't repeat what I said in it, partly because it'll just be deleted again but mainly because I actually do trust Eliezer's judgment on this. (I didn't realize that I was saying more than I was supposed to.) All I'll say about it is that it did not actually contain the question that Eliezer's reply suggests he thought it was asking, but it's really not important enough to belabor the point.
2waitingforgodel13y
yep, this one is showing as deleted
1[anonymous]13y
What? Did something here get deleted?

No, I think you're nitpicking to dodge the question, and looking for a more convenient world.

I think at this point it's clear that you really can't be expected to give a straight answer. Well done, you win!

-6Vladimir_Nesov13y
[-][anonymous]13y3

While it's not geared specifically towards individuals trying to do research, the (Virtual) Employment Open Thread has relevant advice for making money with little work.

If you had a paper that was good enough to get published if you were a professor then the SIAI could probably find a professor to co-author with you.

Google Scholar has greatly reduced the benefit of having access to a college library.

8sketerpot13y
That depends on the field. Some fields are so riddled with paywalls that Google Scholar is all but useless; others, like computer science, are much more progressive.

What (dis)advantages does this have compared to the traditional model?

I think this thread perfectly illustrates one disadvantage of doing research in an unstructured environment. It is so easy to be pulled away from the original question by irrelevant but bright and shiny distractions. Having a good academic adviser cracking the whip helps to keep you on track.

855 comments so far, with no sign of slowing down!

you have to be very clever to come up with a truly dangerous thought -- and if you do, and still decide to share it, he'll delete your comments

This is a good summary.

Of course, what he actually did was not delete the thread

Eh what? He did, and that's what the whole scandal was about. If you mean that he did not successfully delete the thread from the whole internet, then yes.

Also see my other comment.

Yeah, I thought about that as well. Trying to suppress it made it much more popular and gave it a lot of credibility. If they decided to act in such a way deliberately, that would be fascinating. But that sounds like one crazy conspiracy theory to me.

7David_Gerard13y
I don't think it gave it a lot of credibility. Everyone I can think of who isn't an AI researcher or LW regular who's read it has immediately thought "that's ridiculous. You're seriously concerned about this as a likely consequence? Have you even heard of the Old Testament, or Harlan Ellison? Do you think your AI will avoid reading either?" Note: not the idea itself, but that SIAI took the idea so seriously it suppressed it and keeps trying to. This does not make SIAI look more credible, but less, because it looks strange. These are the people running a site about refining the art of rationality; that makes discussion of this apparent spectacular multi-level failure directly on-topic. It's also become a defining moment in the history of LessWrong and will be in every history of the site forever. Perhaps there's some Xanatos retcon by which this can be made to work.
4XiXiDu13y
I just have a hard time believing that they could be so wrong, people who write essays like this. That's why I allow for the possibility that they are right and that I simply do not understand the issue. Can you rule out that possibility? And if that was the case, what would it mean to spread it even further? You see, that's my problem.
7David_Gerard13y
Indeed. On the other hand, humans frequently use intelligence to do much stupider things than they could have done without that degree of intelligence. Previous brilliance means that future strange ideas should be taken seriously, but not that the future ideas must be even more brilliant because they look so stupid. Ray Kurzweil is an excellent example - an undoubted genius of real achievements, but also now undoubtedly completely off the rails and well into pseudoscience. (Alkaline water!)
2timtyler13y
Ray on alkaline water: http://glowing-health.com/alkaline-water/ray-kurzweil-alkaine-water.html
2David_Gerard13y
See, RationalWiki is a silly wiki full of rude people. But one thing we know a lot about is woo. That reads like a parody of woo.
2[anonymous]13y
Scary.
-1shokwave13y
I don't think that's credible. Eliezer has focused much of his intelligence on avoiding "brilliant stupidity", orders of magnitude more so than any Kurzweil-esque example.
3David_Gerard13y
So the thing to do in this situation is to ask them: "excuse me wtf are you doin?" And this has been done. So far there's been no explanation, nor even acknowledgement of how profoundly stupid this looks. This does nothing to make them look smarter. Of course, as I noted, a truly amazing Xanatos retcon is indeed not impossible.
5TheOtherDave13y
There is no problem. If you observe an action (A) that you judge so absurd that it casts doubt on the agent's (G) rationality, then your confidence (C1) in G's rationality should decrease. If C1 was previously high, then your confidence (C2) in your judgment of A's absurdity should decrease. So if someone you strongly trust to be rational does something you strongly suspect to be absurd, the end result ought to be that your trust and your suspicions are both weakened.

Then you can ask yourself whether, after that modification, you still trust G's rationality enough to believe that there exist good reasons for A. The only reason it feels like a problem is that human brains aren't good at this. It sometimes helps to write it all down on paper, but mostly it's just something to practice until it gets easier.

In the meantime, what I would recommend is giving some careful thought to why you trust G, and why you think A is absurd, independent of each other. That is: what's your evidence? Are C1 and C2 at all calibrated to observed events? If you conclude at the end of it that one or the other is unjustified, your problem dissolves and you know which way to jump. No problem. If you conclude that they are both justified, then your best bet is probably to assume the existence of either evidence or arguments that you're unaware of (more or less as you're doing now)... not because "you can't rule out the possibility" but because it seems more likely than the alternatives. Again, no problem.

And the fact that other people don't end up in the same place simply reflects the fact that their prior confidence was different, presumably because their experiences were different and they don't have perfect trust in everyone's perfect Bayesianness. Again, no problem... you simply disagree.

Working out where you stand can be a useful exercise. In my own experience, I find it significantly diminishes my impulse to argue the point past where anything new is being said.
5Vaniver13y
Another thing: rationality is best expressed as a percentage, not a binary. I might look at the virtues and say "wow, I bet this guy only makes mistakes 10% of the time! That's fantastic!"- but then when I see something that looks like a mistake, I'm not afraid to call it that. I just expect to see fewer of them.
0timtyler13y
What issue? The forbidden one? You are not even supposed to be thinking about that! For penance, go and say 30 "Hail Yudkowskys"!
2shokwave13y
You could make a similar comment about cryonics: "Everyone I can think of who isn't a cryonics project member or LW regular who's read [hypothetical cryonics proposal] has immediately thought 'that's ridiculous. You're seriously considering this possibility?'" "People think it's ridiculous" is not always a good argument against it.

Consider that whoever made the decision probably made it according to consequentialist ethics; the consequences of people taking the idea seriously would be worse than the consequences of censorship. As many consequentialist decisions tend to, it failed to take into account the full consequences of breaking with deontological ethics ("no censorship" is a pretty strong injunction). But LessWrong is maybe the one place on the internet you could expect not to suffer for breaking from deontological ethics. Again, strange from a deontologist's perspective. If you're a deontologist, okay, your objection to the practice has been noted.

The perfect Bayesian consequentialist, however, would look at the decision, estimate the chances of the decision-maker being irrational (their credibility), and promptly revise their probability estimate of 'bad idea is actually dangerous' upwards, enough to approve of the censorship. Nothing strange there. You appear to be downgrading SIAI's credibility because it takes seriously an idea that you don't; I don't think you have enough evidence to conclude that they are reasoning imperfectly.

I'm speaking of convincing people who don't already agree with them. SIAI and LW look silly now in ways they didn't before.

There may be, as you posit, a good and convincing explanation for the apparently really stupid behaviour. However, to convince said outsiders (who are the ones with the currencies of money and attention), the explanation has to actually be made to said outsiders in an examinable step-by-step fashion. Otherwise they're well within rights of reasonable discussion not to be convinced. There's a lot of cranks vying for attention and money, and an organisation has to clearly show itself as better than that to avoid losing.

0shokwave13y
By the time a person can grasp the chain of inference, and by the time they are consequentialist and Aumann-agreement-savvy enough for it to work on them, they probably wouldn't be considered outsiders. I don't know if there's a way around that. It is unfortunate.
7David_Gerard13y
To generalise your answer: "the inferential distance is too great to show people why we're actually right." This does indeed suck, but is indeed not reasonably avoidable. The approach I would personally try is furiously seeding memes that make the ideas that will help close the inferential distance more plausible. See selling ideas in this excellent post.
5TheOtherDave13y
For what it's worth, I gather from various comments he's made in earlier posts that EY sees the whole enterprise of LW as precisely this "furiously seeding memes" strategy. Or at least that this is how he saw it when he started; I realize that time has passed and people change their minds. That is, I think he believes (or believed) that understanding this particular issue depends on understanding FAI theory, which depends on understanding cognition (or at least on dissolving common misunderstandings about cognition) and rationality, and that this site (and the book he's working on) are the best way he knows of to spread the memes that lead to the first step on that chain. I don't claim here that he's right to see it that way, merely that I think he does. That is, I think he's trying to implement the approach you're suggesting, given his understanding of the problem.
4David_Gerard13y
Well, yes. (I noted it as my approach, but I can't see another one to approach it with.) Which is why throwing LW's intellectual integrity under the trolley like this is itself remarkable.
3TheOtherDave13y
Well, there's integrity, and then there's reputation, and they're different. For example, my own on-three-minutes-thought proposed approach is similar to Kaminsky's, though less urgent. (As is, I think, appropriate... more people are working on hacking internet security than on, um, whatever endeavor it is that would lead one to independently discover dangerous ideas about AI. To put it mildly.) I think that approach has integrity, but it won't address the issues of reputation: adopting that approach for a threat that most people consider absurd won't make me seem any less absurd to those people.
6David_Gerard13y
However, discussion of the chain of reasoning is on-topic for LessWrong (discussing a spectacularly failed local chain of reasoning and how and why it failed), and continued removal of bits of the discussion does constitute throwing LessWrong's integrity in front of the trolley.
3Vaniver13y
There are two things going on here, and you're missing the other, important one. When a Bayesian consequentialist sees someone break a rule, they perform two operations- reduce the credibility of the person breaking the rule by the damage done, and increase the probability that the rule-breaking was justified by the credibility of the rule-breaker. It's generally a good idea to do the credibility-reduction first. Keep in mind that credibility is constructed out of actions (and, to a lesser extent, words), and that people make mistakes. This sounds like captainitis, not wisdom.
0Jack13y
Aside: Why would it matter?
-2Vaniver13y
You have three options, since you have two adjustments to do and you can use old or new values for each (but only three because you can't use new values for both).* Adjusting credibility first (i.e. using the old value of the rule's importance to determine the new credibility, then the new value of credibility to determine the new belief in the claim) is the defensive play, and it's generally a good idea to behave defensively.

For example, let's say your neighbor Tim (credibility .5) tells you that there are aliens out to get him (prior probability 1e-10, say).

- If you adjust both using the old values, you get that Tim's credibility has dropped massively, but your belief that aliens are out to get Tim has risen massively.
- If you adjust the action first (where the 'rule' is "don't believe in aliens having practical effects"), your belief that aliens are out to get Tim rises massively, and then your estimate of Tim's credibility drops only slightly.
- If you adjust Tim's credibility first, you find that his credibility has dropped massively, and thus when you update the probability that aliens are out to get Tim it only bumps up slightly.

*You could iterate this a bunch of times, but that seems silly.
1Jack13y
Er, any update that doesn't use the old values for both is just wrong. If you use new values you're double-counting the evidence.
0Vaniver13y
I suppose that could be the case; I'm trying to unpack what exactly I'm thinking of when I think of 'credibility.' I can see strong arguments for either approach, depending on what 'credibility' is. Originally I was thinking of something along the lines of "prior probability that a statement they make will be correct," but as soon as you know the content of the statement, that's not really relevant; so now I'm imagining something along the lines of "how much I weight unlikely statements made by them," or, more likely for a real person, "how much effort I put into checking their statements." And so for the first one, it doesn't make sense to update the credibility: if someone previously trustworthy tells you something bizarre, you weight it highly. But for the second one, it does make sense to update the credibility first: if someone previously trustworthy tells you something bizarre, you should immediately become more skeptical of that statement and subsequent ones.
4Will_Sawin13y
But no more skeptical than is warranted by your prior probability. Let's say that if aliens exist, a reliable Tim has a 99% probability of saying they do. If they don't, he has a 1% probability of saying they do. An unreliable Tim has a 50/50 shot in either situation. My prior was 50/50 reliable/unreliable and 1,000,000/1 don't exist/exist, so prior weights:

- reliable, exist: 1
- unreliable, exist: 1
- reliable, don't exist: 1,000,000
- unreliable, don't exist: 1,000,000

Updates after he says they do:

- reliable, exist: 0.99
- unreliable, exist: 0.5
- reliable, don't exist: 10,000
- unreliable, don't exist: 500,000

So we now believe approximately 50 to 1 that he's unreliable, and 510,000 to 1.49, or about 342,000 to 1, that they don't exist. This is what you get if you decide each of the new values based on the old.
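Will_Sawin's arithmetic above can be checked with a short script. The 99%/1%/50% likelihoods and the 50/50 and 1,000,000:1 priors are his numbers; the dictionary layout and variable names are mine, a minimal sketch rather than anything canonical:

```python
# Joint prior weights over (Tim's reliability, aliens exist?), per the comment:
# 50/50 reliable vs. unreliable, 1,000,000:1 against aliens existing.
prior = {
    ("reliable", "exist"): 1,
    ("unreliable", "exist"): 1,
    ("reliable", "dont_exist"): 1_000_000,
    ("unreliable", "dont_exist"): 1_000_000,
}

# P(Tim says "aliens exist" | hypothesis): a reliable Tim tracks the truth
# (99% / 1%), an unreliable Tim is a coin flip either way.
likelihood = {
    ("reliable", "exist"): 0.99,
    ("reliable", "dont_exist"): 0.01,
    ("unreliable", "exist"): 0.5,
    ("unreliable", "dont_exist"): 0.5,
}

# Bayesian update on the evidence "Tim said aliens exist":
# multiply each prior weight by the likelihood of that evidence.
posterior = {h: w * likelihood[h] for h, w in prior.items()}

# Marginalize to get the two posterior odds ratios from the comment.
unreliable = sum(w for (r, _), w in posterior.items() if r == "unreliable")
reliable = sum(w for (r, _), w in posterior.items() if r == "reliable")
dont_exist = sum(w for (_, e), w in posterior.items() if e == "dont_exist")
exist = sum(w for (_, e), w in posterior.items() if e == "exist")

print(f"odds Tim is unreliable: {unreliable / reliable:.0f} to 1")
print(f"odds aliens don't exist: {dont_exist / exist:.0f} to 1")
```

Both printed ratios reproduce the figures in the comment: roughly 50 to 1 that Tim is unreliable, and roughly 342,000 to 1 that the aliens don't exist. The key point, as noted below, is that every posterior weight is old prior times likelihood; the evidence is counted exactly once.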
0Vaniver13y
Thanks for working that out; that made clearer to me what I think I was confused about before. What I was imagining by "update credibility based on their statement" was configuring your credibility estimate to the statement in question, but rather than 'updating', that's just doing a lookup to figure out what Tim's credibility is for this class of statements.

Looking at shokwave's comment again with a clearer mind: when you estimate the chances that the decision-maker is irrational, I feel you need to include the fact that you disagree with them now (my original position of playing defensively), instead of just looking at your past. Why? Because it reduces the chances you get stuck in a trap: if you agree with Tim on propositions 1-10 and disagree on proposition 11, you might say "well, Tim might know something I don't, I'll change my position to agree with his." Then, when you disagree on proposition 12, you look back at your history and see that you agree with Tim on everything else, so maybe he knows something you don't. Now, even though you changed your position on proposition 11, you probably did decrease Tim's credibility; maybe you have stored "we agreed on 10 (or 10.5 or whatever) of 11 propositions."

So, when we ask "does SIAI censor rationally?" it seems like we should take the current incident into account before we decide whether or not to take their word on their censorship. It's also rather helpful to ask that narrower question, instead of "is SIAI rational?", because general rationality does not translate to competence in narrow situations.
1shokwave13y
This is a subtle part of Bayesian updating. The question "does SIAI censor rationally?" is different from "was SIAI's decision to censor this case made rationally?" (it is different because in the second case we have some weak evidence that it was not: namely, that we as rationalists would not have made the decision they did).

We used our prior for "SIAI acts rationally" to derive the probability of "SIAI censors rationally" (as you astutely pointed out, general rationality is not perfectly transitive), and then used "SIAI censors rationally" as our prior for the calculation of "did SIAI censor rationally in this case". After our calculation, "did SIAI censor rationally in this case" is necessarily going to be lower in probability than our prior "SIAI censors rationally." Then we can re-assess "SIAI censors rationally" in light of the fact that one of the cases of rational censorship has a higher level of uncertainty (now, our resolved disagreement is weaker evidence that SIAI does not censor rationally). That will revise "SIAI censors rationally" downwards, but not down to the level of "did SIAI censor rationally in this case".

To use your Tim's-propositions example, you would want your estimation of proposition 12 to depend not only on how much you disagreed with him on prop 11, but also on how much you agreed with him on props 1-10. Perfect Bayesian Aumann agreement isn't binary about agreement; it would continue to increase the value of "stuff Tim knows that you don't" until it's easier to reduce the value of "Tim is a perfect Bayesian reasoner about aliens". In other words, at about prop 13-14 the hypothesis "Tim is stupid with respect to aliens existing" would occur to you, and at prop 20 "Tim is stupid WRT aliens" and "Tim knows something I don't WRT aliens" would be equally likely.
3timtyler13y
It was left up for ages before the censorship. The Streisand effect is well known. Yes, this is a crazy kind of marketing stunt - but also one that shows Yu'El's compassion for the tender and unprotected minds of his flock - his power over the other participants - and one that adds to the community folklore.

See, that doesn't make sense to me. It sounds more like an initiation rite or something... not a thought experiment about quantum billionaires...

I can't picture EY picking up the phone and saying "delete that comment! wouldn't you willingly be tortured to decrease existential risk?"

... but maybe that's a fact about my imagination, and not about the world :p

I am doing something similar, except working as a freelance software developer. My mental model is that in both the traditional academic path and the freelance path, you are effectively spending a lot of your time working for money. In academia, the "dirty work" is stuff like teaching, making PowerPoint presentations (ugh), keeping your supervisor happy, jumping through random formatting hoops to get papers published, and then going to conferences to present the papers. For me, the decisive factor is that software development is actually quite fun, while academic money work is mind-numbing.

How hard is it to live off the dole in Finland? Also, non-academic research positions in think tanks and the like (including, of course, SIAI).

5Kaj_Sotala13y
Not very hard in principle, but I gather it tends to be rather stressful, with things like payments failing to arrive when they're supposed to every now and then. Also, I couldn't avoid the feeling of being a leech, justified or not. Non-academic think tanks are a possibility, but for Singularity-related matters I can't think of any others than the SIAI, and their resources are limited.
3[anonymous]13y
Many people would steal food to save the lives of the starving, and that's illegal. Working within the national support system to increase the chance of saving everybody/everything? If you would do the first, you should probably do the second. But you need to weigh the plausibility of the get-rich-and-fund-institute option, including the positive contributions of the others you could potentially hire.