I blew through all of MoR in about 48 hours, and in an attempt to learn more about the science and philosophy that Harry espouses, I've been reading the sequences and Eliezer's posts on Less Wrong. Eliezer has written extensively about AI, rationality, quantum physics, singularity research, etc. I have a question: how correct has he been?  Has his interpretation of quantum physics predicted any subsequently-observed phenomena?  Has his understanding of cognitive science and technology allowed him to successfully anticipate the progress of AI research, or has he made any significant advances himself? Is he on the record predicting anything, either right or wrong?   

Why is this important: when I read something written by Paul Krugman, I know that he has a Nobel Prize in economics, and I know that he has the best track record of any top pundit in the US in terms of making accurate predictions.  Meanwhile, I know that Thomas Friedman is an idiot.  Based on this track record, I believe things written by Krugman much more than I believe things written by Friedman.  But if I hadn't read Friedman's writing from 2002-2006, then I wouldn't know how terribly wrong he has been, and I would be too credulous about his claims.  

Similarly, reading Mike Darwin's predictions about the future of medicine was very enlightening.  He was wrong about nearly everything.  So now I know to distrust claims that he makes about the pace or extent of subsequent medical research.  

Has Eliezer offered anything falsifiable, or put his reputation on the line in any way?  "If X and Y don't happen by Z, then I have vastly overestimated the pace of AI research, or I don't understand quantum physics as well as I think I do," etc etc.

160 comments

best track record of any top pundit in the US

The study linked to meant next to nothing in my eyes. It studied political predictions made on TV by political hacks during the 2007-2008 election cycle. Guess what? In an election cycle in which liberals beat conservatives, the liberal predictions came true more often than the conservative ones.

Reminds me of the reported models of mortgage securities, created using data from boom times only.

Krugman was competing with a bunch of other political hacks and columnists. I doubt that accuracy is the highest motivation for any of them. The political hacks want to curry support, and the columnists want to be invited on tv and have their articles read. I'd put at least 3 motivations above accuracy for that crowd: manipulate attitudes, throw red meat to their natural markets, and entertain. It's Dark Arts, all the way, all the time.

This objection is not entirely valid, at least when it comes to Krugman. Krugman scored 17/19 mainly on economic predictions, and one of the two he got wrong looks like a pro-Republican prediction.

From their executive summary:

According to our regression analysis, liberals are better predictors than conservatives—even when taking out the Presidential and Congressional election questions.

From the paper:

Krugman...primarily discussed economics...

buybuydandavis: That's a good point. I didn't read the whole thing, as the basic premise seemed flawed. That does seem like real information about Krugman's accuracy. I'd still wonder about the independence of the predictions, though. Did the paper authors address that issue at all? I saw noise about "statistical significance", so I assumed not. Are the specific predictions available online? It seemed like they had a large sample of predictions, so I doubt they were in the paper. This gives my estimate of Krugman a slight uptick, but without the actual predictions and results, this data can't do much more for me.

That is awesome! They actually put out their data.

Pretty much, Krugman successfully predicted that the downturn would last a while (2-6,8,15), made some obvious statements (7,9,12,16,17,18), was questionably supported on one (11), was unfairly said to miss another (14), hit on a political prediction (10), and missed on another (13).

He was 50-50 or said nothing, except for successfully predicting that the downturn would take at least a couple of years, which wasn't going out too far on a limb itself.

So I can't say that I'm impressed much with the authors of the study, as their conclusions about Krugman seem like a gross distortion to me, but I am very impressed that they released their data. That was civilized.

Yes, and the paper had several other big problems. For example, it didn't treat mild belief and certainty differently; someone who merely suspected Hillary might be the Democratic nominee was treated as harshly as someone who was 100% sure the Danish were going to invade.

Worse, people get marked down for making conditional predictions whose antecedent was not satisfied! And then they have the audacity to claim that they've discovered that making conditional predictions predicts low accuracy.

They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?

it didn't treat mild belief and certainty differently;

It did. Per the paper, the confidences of the predictions were rated on a scale from 1 to 5, where 1 is "No chance of occurring" and 5 is "Definitely will occur". They didn't use this in their top-level rankings because they felt it was "accurate enough" without that, but they did use it in their regressions.

Worse, people get marked down for making conditional predictions whose antecedent was not satisfied!

They did not. Per the paper, those were simply thrown out (as people do on PredictionBook).

They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?

I agree here, mostly. Looking through the predictions they've marked as hedging, some seem like sophistry but some seem like reasonable expressions of uncertainty; if they couldn't figure out how to properly score them they should have just left them out.

If you think you can improve on their methodology, the full dataset is here: .xls.

Incidentally, the best way to make conditional predictions is to convert them to explicit disjunctions. For example, in November I wanted to predict that "If Mitt Romney loses the primary election, Barack Obama will win the general election." This is actually logically equivalent to "Either Mitt Romney or Barack Obama will win the 2012 Presidential Election," barring some very unlikely events, so I posted that instead, and so I won't have to withdraw the prediction when Romney wins the primary.
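The claimed equivalence is easy to check by enumerating outcomes. Here is a minimal Python sketch, assuming a strict two-party race (the "barring some very unlikely events" caveat above):

```python
# Minimal sketch: enumerate outcomes to check that the conditional
# "If Romney loses the primary, Obama wins the general" is equivalent
# to the disjunction "Either Romney or Obama wins the general",
# assuming a strict two-party race.
from itertools import product

def implies(p, q):
    # Material implication: p -> q is (not p) or q.
    return (not p) or q

for romney_nominated, obama_wins in product([True, False], repeat=2):
    # Two-party assumption: if Obama loses, the Republican nominee wins.
    romney_wins_general = romney_nominated and not obama_wins
    conditional = implies(not romney_nominated, obama_wins)
    disjunction = romney_wins_general or obama_wins
    assert conditional == disjunction
```

The only case the two statements disagree on is a third-party win while Romney is the nominee, which is exactly the "very unlikely events" the original comment brackets off.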

Douglas_Knight: While that may be best with current PB, I think conditional predictions are useful. If you are only interested in truth values and not the strength of the prediction, then it is logically equivalent, but the number of points you get is not the same. The purpose of a conditional prediction is to take a conditional risk. If Romney is nominated, you get a gratuitous point for this prediction. Of course, simply counting predictions is easy to game, which is why we like to indicate the strength of the prediction, as you do with this one on PB. But turning a conditional prediction into an absolute prediction changes its probability and thus its effect on your calibration score. To a certain extent, it amounts to double counting the prediction about the hypothesis.
drethelin: This is less specific than the first prediction. The second version loses the part where you predict Obama will beat Romney.
Randaly: The first version doesn't have that part either: he's predicting that if Romney gets eliminated in the primaries, i.e. Gingrich, Santorum, or Paul is the Republican nominee, then Obama will win.
drethelin: You're right, I misread.
Larks: Sure, so we learn about how confidence is correlated with binary accuracy. But they don't take into account that being very confident and wrong should be penalised more than being slightly confident and wrong. (On the conditionals: I misread; you are right.)
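Larks's complaint is exactly what a proper scoring rule fixes. A minimal sketch using the Brier score, with an invented mapping from the paper's 1-5 confidence scale to probabilities (the mapping is purely for illustration, not from the paper):

```python
# Sketch of a proper scoring rule (the Brier score), under which a
# confident miss costs more than a hedged one. The mapping from the
# paper's 1-5 confidence scale to probabilities is invented here
# purely for illustration.
confidence_to_prob = {1: 0.05, 2: 0.25, 3: 0.50, 4: 0.75, 5: 0.95}

def brier(confidence, outcome):
    """Squared error between stated probability and the 0/1 outcome (lower is better)."""
    p = confidence_to_prob[confidence]
    return (p - outcome) ** 2

hedged_miss = brier(4, 0)   # 0.5625
certain_miss = brier(5, 0)  # 0.9025 -- being sure and wrong costs more
assert certain_miss > hedged_miss
```

Under a rule like this, hedged predictions can simply be scored rather than thrown out or penalised wholesale.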
TimS: That made me giggle.
[anonymous]: Why do you think this? It doesn't seem true at all to me. Looking at the spreadsheet, there are many judgements left blank with the phrase "conditional not met." They are not counted in the total number of predictions.
perpetualpeace1: OK, I don't want to get off-topic. EY doesn't practice the Dark Arts (at least, I hope not). A lot of what EY writes makes sense to me. And I'd like to believe that we'll be sipping champagne on the other side of the galaxy when the last star in the Milky Way burns out (and note that I'm not saying that he's predicting that will happen). But I'm not a physicist or AI researcher, so I want some way to know how much to trust what he writes. Is anything that he's said or done falsifiable? Has he ever publicly made his beliefs pay rent? I want to believe in a friendly AI future... but I'm not going to believe for the sake of believing.
Eugine_Nier: Previous discussion of Krugman's accuracy here [http://lesswrong.com/lw/7z9/1001_predictionbook_nights/509v].

I suggest looking at his implicit beliefs, not just explicit predictions. For example, he must have thought that writing HPMoR was a good use of time, and therefore must have (correctly) predicted that it would be quite popular if he was to write it. On the other hand, his decision to delay publishing his TDT document seems to imply some rather wrong beliefs.

I think at least a large part of the slowness on publishing TDT is due to his procrastination habits with publishable papers, which he openly acknowledges.

Although this may be a personal problem of his, it doesn't say anything against the ideas he has.

Strange7: Talking about his own procrastination rather than producing results could be taken as evidence of incapacity and willingness to bullshit.

For example, he must have thought that writing HPMoR was a good use of time, and therefore must have (correctly) predicted that it would be quite popular if he was to write it.

Isn't the simpler explanation that he just enjoys writing fiction?

jschulter: The enjoyment of the activity factors into whether it is a good use of time.

Relevant: Don't Revere The Bearer Of Good Info, Argument Screens Off Authority. Sequences are to a significant extent a presentation of a certain selection of standard ideas, which can be judged on their own merit.

torekp: By all means check out the arguments, but argument doesn't screen off authority, for roughly the reasons Robin Hanson mentions in that thread. Few human beings ever have verbalizable access to all their evidence, and you would be a fool not to take expertise into account so far as you can determine it, even for unusually well-documented arguments. I encourage you to keep scrutinizing EY's track record, and those of any other contributors you often read. (BTW, I upvoted VN's comment, to help ever so slightly in this endeavor.)
wedrifid: This being the case, it indicates that the authority is screened off imperfectly, to the degree that the arguments are presented. For example, if the arguments presented by the authority are flawed, the degree of trust in their authority is drastically reduced. After all, part of the information represented by the awareness of their authority is the expectation that their thinking wouldn't be terrible when they tried to express it. If that turns out to be false, the earlier assumption must be discarded.
fubarobfusco: Yep. Moreover, readers can actually look up the sources for the ideas in the Sequences: the people who actually did the science or proved the math. They're mostly cited. Just off the top of my head: Jaynes' probability theory (with Good, Cox, et al. as influences there), Pearl's causality, Kahneman and Tversky's heuristics-and-biases program, Tooby and Cosmides on evolutionary psychology, Axelrod's Prisoner's Dilemma, Korzybski's map-vs.-territory distinction, ...

I'm going through all of Eliezer's comments looking for predictions by using search terms like "predict" and "bet". Just letting people know so that we don't duplicate effort.

gwern: What have you found so far besides the stuff recorded in the bet registry [http://wiki.lesswrong.com/wiki/Bets_registry], like my recent bet with him on MoR winning a Hugo?
wedrifid: From the registry: So if Eliezer doesn't want to pay up, he has to make his excuses really boring or decline to make them altogether.
TheOtherDave: Making them boring enough that people who are already hypervigilant on this issue won't be interested in them would be quite a challenge, actually.
Luke_A_Somers: "We got real lucky."
Oscar_Cunningham: Lots of random posts. Also, I just realised that that page only covers Eliezer's stuff since the OB to LW move, so I'm not even getting most of his stuff.
komponisto: I don't think that's true. All OB comments were imported, and EY was one of the lucky ones who got them imported under their actual LW username.
XiXiDu: He is talking about the script made by Wei_Dai. It only fetches the LW content.
komponisto: Doesn't that script just retrieve all the comments on the userpage and put them on one big page?
XiXiDu: Exactly. But for some reason it breaks down at 2009.
Oscar_Cunningham: The page only goes down to 24 February 2009, which was around the time of the move. In his posts there's a gap of a week before he mentions LW by name, so it seems likely that the cut-off was at the move. In any case, that page doesn't have all the comments.
Grognor: You're not letting the page fully load. It loads them chronologically backwards.
Oscar_Cunningham: That page is formed by a script of Wei_Dai's that for some reason breaks down in 2009, very close to when the site moved.
Wei_Dai: It's probably hitting an out-of-memory error, due to how many posts and comments Eliezer has.

I think it is very important to keep in mind that you can't judge whether someone is good at predicting simply by dividing the number of predictions that turned out to be correct by the total number made.

I predict that tomorrow will be Friday. Given that today is Thursday, that's not so impressive. So my point is that it is more important to look at how difficult a prediction was to make and what the reasoning behind it is.

I would also look to see whether the predictions improve over time: whether the person actually learns from past mistakes and applies those lessons to the new predictions he or she makes from that point on.

Here are two recent bets he made, both successful: a prediction against the propagation of any information faster than c, and, in a nearby thread, a similar bet against the so-called proof of the inconsistency of Peano Arithmetic.

And here's a prediction against the discovery of Higgs Boson, as yet undetermined.

Also got Amanda Knox right. As others said, he made bad predictions about transhumanist technologies as a teen, although he would say this was before he benefited from learning about rationality and disavowed them (in advance).

Although I agree that he got the Amanda Knox thing right, I don't think it actually counts as a prediction - he wasn't saying whether or not the jury would find her innocent the second time round, and as far as I can tell no new information came out after Eliezer made his call.

As I pointed out in the postmortem, there were at least 3 distinct forms of information that one could update on well after that survey was done.

Except for the Higgs prediction, which has a good chance of being proven wrong this year or next, the other two are in line with what experts in the area had predicted, so they can hardly be used to justify EY's mad prediction skillz.

buybuydandavis: Which makes it a wonderfully falsifiable prediction.

Suppose it is falsified. What conclusions would you draw from it? I.e. what subset of his teachings will be proven wrong? Obviously none.

His justification, "I don't think the modern field of physics has its act sufficiently together to predict that a hitherto undetected quantum field is responsible for mass," is basically the personal opinion of a non-expert. While it would make for an entertaining discussion, a discovery of the Higgs boson should not affect your judgement of his work in any way.

I'll update by putting more trust in mainstream modern physics - my probability that something like string theory is true would go way up after the detection of a Higgs boson, as would my current moderate credence in dark matter and dark energy. It's not clear how much I should generalize beyond this to other academic fields, but I probably ought to generalize at least a little.

jschulter: I'm not sure that this should be the case, as the Higgs is a Standard Model prediction and string theory is an attempt to extend that model. The accuracy of the former has little to say about whether the latter is sensible or accurate. For a concrete example, this is like allowing the accuracy of Newtonian mechanics (via, say, some confirmed prediction about the existence of a planetary body based on anomalous orbital data) to influence your confidence in General Relativity before the latter had predicted Mercury's precession or the Michelson-Morley experiment had been done. EDIT: Unless of course you were initially under the impression that there were flaws in the basic theory which would make the extension fall apart, which I just realized may have been the case for you regarding the Standard Model.
siodine: What would a graph of your trust in mainstream modern physics over the last decade or so look like? And how about other academic fields?
[anonymous]: One of these things is not like the others; one of these things doesn't belong. Lots of physicists will acknowledge that string "theory" (sorry, I cannot bring myself to call it a theory without scare quotes) is highly speculative. With 10^500 free parameters, it has no more predictive power than "The woman down the street is a witch; she did it." On the other hand, there's no way of explaining current cosmological observations without recurring to dark matter and dark energy (what the hell was wrong with calling it "cosmological constant", BTW? who renamed it, and why?), short of replacing GR with something much less elegant (read: "higher Kolmogorov complexity"). Seriously, for things measured as 0.222±0.026 and 0.734±0.029 [http://en.wikipedia.org/wiki/Wilkinson_Microwave_Anisotropy_Probe#Seven-year_data_release] to not actually exist, we would have to be missing something big.
buybuydandavis: Unless your prior for his accuracy on quantum physics is very strong, you should update your estimate of his accuracy upward when he makes accurate predictions, particularly where he would be right and a lot of pros would be wrong.
shminux: Not at all. The QM sequence predicts nothing about the Higgs and has no original predictions anyway, nothing that could falsify it, at any rate. In general, if the situation where "he would be right and a lot of pros would be wrong" happens unreasonably frequently, you might want to "update your prior for his accuracy up" because he might have uncommonly good intuition, but that would probably apply to all his predictions across the board.
ArisKatsaris: And btw, I'd also bet on the side of its not being found. I have no significant physics knowledge, and am just basing this on the fact that it hasn't been found yet: people who think a discovery is really close seem to not be properly updating upwards the probability of its absence, given all previous failures to locate it.
Mitchell_Porter: They found something at 125 GeV last year. Most people think it is a Higgs, though perhaps not exactly according to the standard-model template. ETA: When talking about Higgs particles one needs to distinguish between possibilities. In the Standard Model, the Higgs gives mass to the W and Z bosons in one way, and mass to the fermions in another way. "Everyone" believes in the first mechanism, but the second is not so straightforward. For example, the Higgs could couple in the second way just to the top quark, and then the other fermions could get their masses indirectly, through virtual processes involving the top. And that's just one of many, many possibilities. Even if they have now found the Higgs mass, it will still take years to measure its couplings.
ArisKatsaris: True, I'm not claiming that they're highly extraordinary or surprising predictions; I'd probably have bet on Eliezer's side on both of them, after all.

Rather than talking about Eliezer as if he's not here and trying to make complicated inferences based on his writings, it would be much easier to come up with a list of topics we would like Eliezer to make predictions about, and then just ask him.

Or, alternatively: Eliezer, are there any particularly important predictions you got right/wrong in the past? And what important predictions of yours are still pending?

drethelin: You don't see anything wrong with error-checking a person by asking them?

There's a lot wrong with it, but I think it's better than trying to track implicit predictions based on things Eliezer wrote/did.

Regardless, the much better test of accuracy is asking for current predictions, which I also proposed in the parent comment.

I know it's not much as far as verifiable evidence goes, but this one is pretty interesting.

When Eliezer writes about QM he's not trying to do novel research or make new predictions. He's just explaining QM as it is currently understood with little to no math. Actual physicists have looked over his QM sequence and said that it's pretty good. Source, see the top comment in particular.

I'm pretty interested in the AI part of the question though.

RobertLumley: Maybe I'm missing something. I'll fully concede that a transhuman AI could easily get out of the box. But that experiment doesn't even seem remotely similar. The gatekeeper has meta-knowledge that it's an experiment, which makes it completely unrealistic. That being said, I'm shocked Eliezer was able to convince them.

That being said, I'm shocked Eliezer was able to convince them.

Humans are much, much dumber and weaker than they generally think they are. (LessWrong teaches this very well, with references.)

iDante: I agree it's not perfect, but I don't think it is completely unrealistic. From the outside we see the people as foolish for not taking what we see as a free $10. They obviously had a reason, and that reason was their conversation with Eliezer. Swap out the $10 for the universe NOT being turned into paperclips, make Eliezer orders of magnitude smarter, and it seems at least plausible that he could get out of the box.
RobertLumley: Except you must also add the potential benefits of AI to the other side of the equation. In this experiment, the Gatekeeper has literally nothing to gain by letting Eliezer out, which is what confuses me.
arundelo: The party simulating the Gatekeeper has nothing to gain, but the Gatekeeper has plenty to gain. (E.g., a volcano lair with cat(girls|boys).) Eliezer carefully distinguishes between a role and the party simulating that role in the description of the AI box experiment linked above [http://yudkowsky.net/singularity/aibox]. In the instances of the experiment where the Gatekeeper released the AI, I assume that the parties simulating the Gatekeeper were making a good-faith effort to roleplay what an actual gatekeeper would do.
RobertLumley: I guess I just don't trust that most gatekeeper simulators would actually make such an effort. But obviously they did, since they let him out.
loup-vaillant: Maybe not. According to the official protocol [http://yudkowsky.net/singularity/aibox], the Gatekeeper is allowed to drop out of character.
Strange7: I suspect that the earlier iterations of the game, before EY stopped being willing to play, involved Gatekeepers who did not fully exploit that option.
XiXiDu: I am not. I would be shocked if he was able to convince wedrifid. I doubt it though.
bogdanb: Isn't that redundant? (Sorry for nit-picking; I'm just wondering if I'm missing something.)

I, non-programmer, have a question my programmer friends asked me: why can't they find any of Eliezer's papers that have any hard math in them at all? According to them, it's all words words words. Excellent words, to be sure, but they'd like to see some hard equations, code, and the like, stuff that would qualify him as a researcher rather than a "mere" philosopher. What should I link them to?

The paper on TDT has some words that mean math, but the hard parts are mostly not done.

Eliezer is deliberately working on developing basics of FAI theory rather than producing code, but even then either he spends little time writing it down or he's not making much progress.

Randaly: The SI folk say that they are deliberately not releasing the work they've done that's directly related to AGI. Doing so would speed up the development of an AGI without necessarily speeding up the development of an FAI, and therefore increase existential risk. ETA: Retracted; couldn't find my source for this.

AFAIK, nothing of the kind is publicly available. The closest thing to it is probably his Intuitive Explanation of Bayes' Theorem; however, Bayes' Theorem is high-school math. (His Cartoon Guide to Löb's Theorem might also be relevant, although they may think it's just more words.) Two relevant quotes by Eliezer:

On some gut level I’m also just embarrassed by the number of compliments I get for my math ability (because I’m a good explainer and can make math things that I do understand seem obvious to other people) as compared to the actual amount of advanced math knowledge that I have (practically none by any real mathematician’s standard).

My current sense of the problems of self-modifying decision theory is that it won’t end up being Deep Math, nothing like the proof of Fermat’s Last Theorem—that 95% of the progress-stopping difficulty will be in figuring out which theorem is true and worth proving, not the proof. (Robin Hanson spends a lot of time usefully discussing which activities are most prestigious in academia, and it would be a Hansonian observation, even though he didn’t say it AFAIK, that complicated proofs are prestigious but it’s much more important to figure out which theorem to prove.)

Source for both

Gangsta_Trippin: Viewtifully phrased...

On quantum mechanics: the many-worlds interpretation probably shouldn't be referred to as "his" interpretation (it was around before he was born, etc.), and there's been some experimental work defending it (for example, entangling 10^4 particles and not seeing a certain kind of collapse), but it's not very strong evidence.

He also made a lot of super-wrong plans/predictions over a decade ago. But on the other hand, that was a while ago.

perpetualpeace1: For the record, what were those predictions? What are your sources?
bramflakes: http://wiki.lesswrong.com/wiki/Yudkowsky%27s_coming_of_age
alex_zag_al: In case you haven't seen it, a quote from the homepage of his website, yudkowsky.net:
Manfred: Well, what bramflakes said, and also the highly overconfident (see: conjunction fallacy [http://lesswrong.com/lw/ji/conjunction_fallacy], among others) "plan to singularity": http://yudkowsky.net/obsolete/plan.html

The cognitive bias stuff is pretty good popular science writing. That's worth quite a bit to the world.

"Has his interpretation of quantum physics predicted any subsequently-observed phenomena?"

Interpretations of quantum mechanics are untestable, including the MWI. MWI is also not "his," it's fairly old.

I think the test for EY's articles isn't the predictions he makes, of which I don't recall many, but what you can do with the insight he offers. For example, does it allow you to deal with your own problems better? Has it improved your instrumental rationality, including your epistemic rationality?

For example, does it allow you to deal with your own problems better?

I don't agree with this at all. I could become a Christian, and then believe that all of my problems are gone because I have an eternity in heaven waiting for me simply because I accepted Jesus Christ as my savior. Christianity makes few falsifiable predictions. I want to hold EY up to a higher standard.

buybuydandavis: Yes, a person who has misevaluated the evidence of how much their beliefs help has misevaluated the evidence of how much their beliefs help. Let us hope we can do better.
faul_sname: And let us do better by evaluating the evidence on a more objective level.
David_Gerard: Except the standard you posit is not the standard EY holds himself to, per the quote in JMIV's comment [http://lesswrong.com/r/discussion/lw/bvg/a_question_about_eliezer/6ehn].
buybuydandavis: Please state how you think his statement contradicts mine.
David_Gerard: Hm, on rereading I'm not sure it actually does. Thank you.

My understanding is that for the most part SI prefers not to publish the results of their AI research, for reasons akin to those discussed here. However they have published on decision theory, presumably because it seems safer than publishing on other stuff and they're interested in attracting people with technical chops to work on FAI:

http://singinst.org/blog/2010/11/12/timeless-decision-theory-paper-released/

I would guess EY sees himself as more of a researcher than a forecaster, so you shouldn't be surprised if he doesn't make as many predictions as Pau...

perpetualpeace1: OK. If that is the case, then I think a fair question to ask is what his major achievements in research have been. But secondly, a lot of the discussion on LW and most of EY's research presupposes certain things happening in the future. If AI is actually impossible, then trying to design a friendly AI is a waste of time (or, alternately, if AI won't be developed for 10,000 years, then developing a friendly AI is not an urgent matter). What evidence can EY offer that he's not wasting his time, to put it bluntly?

If AI is actually impossible, then trying to design a friendly AI is a waste of time

No: if our current evidence suggests that AI is impossible, and does so sufficiently strongly to outweigh the large downside of a negative singularity, then trying to design a friendly AI is a waste of time.

Even if it turns out that your house doesn't burn down, buying insurance wasn't necessarily a bad idea. What is important is how likely it looked beforehand, and the relative costs of the outcomes.

Claiming that AI constructed in a world of physics is impossible is equivalent to saying that intelligence in a world of physics is impossible. This would require humans to work by dualism.

Of course, this is entirely separate from feasibility.

3JoshuaZ10yI would think that anyone claiming that AI is impossible would have the burden pretty strongly on their shoulders. However, if one was instead saying that a fast-take off was impossible or extremely unlikely there would be more of a valid issue.
4perpetualpeace110yIf his decision theory had a solid theoretical background, but turned out to be terrible when actually implemented, how would we know? Has there been any empirical testing of his theory?
7TimS10yWhat does a decision theory that "has a solid theoretical background but turns out to be terrible" look like when implemented?
7perpetualpeace110yYou play a game that you could either win or lose. One person follows, so far as he or she is able, the tenets of timeless decision theory. Another person makes a decision by flipping a coin. The coin-flipper outperforms the TDTer.
6Larks10yI'm pretty confident that if we played an iterated prisoners dilemma, your flipping a coin each time and my running TDT, I would win. This is, however, quite a low bar.
8David_Gerard10yI took it as a statement of what would prove the theory false, rather than a statement of something believed likely.
0JoshuaZ10yI think that would only be true if there were more than 2 players. Won't random coin flip and TDT be tied in the two player case?
6Luke_A_Somers10yNo. TDT, once it figures out it's facing a coin flipper, defects 100% of the time and runs away with it.
4Larks10yNope, TDT will defect every time.
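This is easy to check by simulation (a sketch; the payoff matrix T=5, R=3, P=1, S=0 is the conventional one, assumed here since the thread doesn't fix one). Against an opponent whose moves ignore yours, "running TDT" reduces to always defecting:

```python
import random

# Standard prisoner's dilemma payoffs (assumed values: T=5, R=3, P=1, S=0).
# PAYOFF[(my_move, their_move)] -> (my_score, their_score)
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play_ipd(rounds=10_000, seed=0):
    """Always-defect (what TDT plays once it knows the opponent is
    unresponsive) versus a fair-coin flipper."""
    rng = random.Random(seed)
    defector_score = flipper_score = 0
    for _ in range(rounds):
        flip = rng.choice("CD")
        d, f = PAYOFF[("D", flip)]
        defector_score += d
        flipper_score += f
    return defector_score, flipper_score

d, f = play_ipd()
print(d > f)  # the defector comes out ahead
```

Per round the defector expects 0.5*5 + 0.5*1 = 3 points while the coin-flipper expects 0.5*0 + 0.5*1 = 0.5, so the gap grows linearly with the number of rounds.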
4[anonymous]10yI'm not sure such a poor theory would survive having a solid theoretical background.
0shokwave10yActually, I think the proper case is "Two players play a one-player game they can either win or lose. One person follows, so far as able, the tenets of TDT. The other decides by flipping a coin. The coin-flipper outperforms the TDTer." I mention this because lots of decision theories struggle against a coin flipping opponent: tit-for-tat is a strong IPD strategy that does poorly against a coin-flipper.
2Random83210yIs there any decision strategy that can do well (let's define "well" as "better than always-defect") against a coin-flipper in IPD? Any decision strategy more complex than always-defect requires the assumption that your opponent's decisions can be at least predicted, if not influenced.
2dlthomas10yNo, of course not. Against any opponent whose output has nothing to do with your previous plays (or expected plays, if they get a peek at your logic), one should clearly always defect.
0Random83210yNot if their probability of cooperation is so high that the expected value of cooperation remains higher than that of defecting. Or if their plays can be predicted, which satisfies your criterion (nothing to do with my previous plays) but not mine. If someone defects every third time with no deviation, then I should defect whenever they defect. If they defect randomly one time in sixteen, I should always cooperate. (of course, always-cooperate is not more complex than always-defect.) ...I swear, this made sense when I did the numbers earlier today.
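For what it's worth, the numbers come out the same way for any cooperation rate: under the standard payoffs (T=5, R=3, P=1, S=0, assumed here), defection strictly dominates against an opponent whose play is independent of yours. A quick check:

```python
# T, R, P, S: standard prisoner's dilemma payoffs (assumed values).
T, R, P, S = 5, 3, 1, 0

def best_reply(q):
    """Expected payoffs against an opponent who cooperates with
    probability q, independently of anything we do."""
    ev_cooperate = q * R + (1 - q) * S
    ev_defect = q * T + (1 - q) * P
    return ev_cooperate, ev_defect

# Even a 15/16 cooperator is best met with defection, since T > R and P > S:
c, d = best_reply(15 / 16)
print(d > c)  # True for every q in [0, 1]
```

The gap is ev_defect - ev_cooperate = q(T-R) + (1-q)(P-S) = 1 + q, which is positive for every q; no cooperation rate alone justifies cooperating, only responsiveness to your own play does.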
6orthonormal10yPermit me to substitute your question: TDT seems pretty neat philosophically, but can it actually be made to work as computer code? Answer: Yes. [http://wiki.lesswrong.com/wiki/Decision_theory#Sequence_by_orthonormal_.28Decision_Theories:_A_Semi-Formal_Analysis.29] (Sorry for the self-promotion, but I'm proud of myself for writing this up.) The only limiting factor right now is that nobody can program an efficient theorem-prover (or other equivalently powerful general reasoner), but that's not an issue with decision theory per se. (In other words, if we could implement Causal Decision Theory well, then we could implement Timeless Decision Theory well.) But in any case, we can prove theorems about how TDT would do if equipped with a good theorem-prover.

I thought about it some more, and the relevant question is: how do we gauge his abilities, and his aptitude at them? Are there statistical methods we can use (e.g. SPR)? What would the outcome be? How can we deduce his utility function?

Normally, when one has e.g. high mathematical aptitude, or programming aptitude, or the like, as a teenager one still has to work on it and train (the brain undergoes significant synaptic pruning at about 20 years of age, limiting your opportunity to improve afterwards), and regardless of the final... (read more)

Meanwhile, I know that Thomas Friedman is an idiot.

It might well be that he's hopelessly biased and unreflective on every significant issue, but this neither says a lot about his purely intellectual capabilities, nor does it automatically make any claim he makes wrong; he might turn out to be biased in just the right way.

(I haven't ever heard of him before, just applying the general principles of evaluating authors.)

0[anonymous]10yI read Thomas Friedman as self-parody.

Threads like that make me want to apply Bayes theorem to something.

You start with probability 0.03 that Eliezer is a sociopath - the baseline. Then you do Bayesian updates on answers to questions like: Does he ascribe grandiose importance to himself, or is he generally modest/in line with actual accomplishments? Does he have grand plans out of line with his qualifications and prior accomplishments, or are his plans proportionate? Is he talking people into giving him money as a source of income? Is he known to do very expensive altruistic stuff that is larger than s... (read more)

It seems you are talking about high-functioning psychopaths, rather than psychopaths according to the diagnostic DSM-IV criteria. Thus the prior should be different from 0.03. Assuming a high-functioning psychopath is necessarily a psychopath then it seems it should be far lower than 0.03, at least from looking at the criteria:

A) There is a pervasive pattern of disregard for and violation of the rights of others occurring since age 15 years, as indicated by three or more of the following: failure to conform to social norms with respect to lawful behaviors, as indicated by repeatedly performing acts that are grounds for arrest; deception, as indicated by repeatedly lying, use of aliases, or conning others for personal profit or pleasure; impulsiveness or failure to plan ahead; irritability and aggressiveness, as indicated by repeated physical fights or assaults; reckless disregard for safety of self or others; consistent irresponsibility, as indicated by repeated failure to sustain consistent work behavior or honor financial obligations; lack of remorse, as indicated by being indifferent to or rationalizing having hurt, mistreated, or stolen from another. B) The individual is at least age 18 years. C) There is evidence of conduct disorder with onset before age 15 years. D) The occurrence of antisocial behavior is not exclusively during the course of schizophrenia or a manic episode."

0semianonymous10yHe is a high IQ individual, though. That is rare on its own. There are smart people who pretty much maximize their personal utility only.
9DanielVarga10yBe aware that the conditional independence of these features (Naive Bayes Assumption) is not true.
0semianonymous10yThey are not independent - the sociopathy (or lesser degree, narcissism) is a common cause.

I am talking about conditional independence. Let us assume that the answer to your first two questions is true, and now you have a posterior of 0.1 that he is a sociopath. Next you want to update on the third claim, "Is he talking people into giving him money as a source of income?". You have to estimate the ratio of people for whom the third claim is true, and you have to do it for two groups. But the two groups are not sociopaths versus non-sociopaths. Rather, sociopaths for whom the first two claims are true versus non-sociopaths for whom the first two claims are true. You don't have any data that would help you to estimate these numbers.

-3semianonymous10yThe 'sociopath' label is not a well-identified brain lesion; it is a predictor for behaviours; the label is used to decrease computational overhead by quantizing the quality (and to reduce communication overhead). One could in principle go without this label and directly predict the likelihood of an unethical self-serving act based on prior observed behaviour, and that is ideally better but more computationally expensive and may result in a much higher failure rate. This exchange is, by the way, why I do not think much of 'rationality' as presented here. It is incredibly important to be able to identify sociopaths; if your decision theory does not permit you to identify sociopaths as you strive for a rigour you can't reach, then you will be taken advantage of.
0lavalamp10yI think that's an overreaction... It's not that you can't do the math, it's that you have to be very clear on what numbers go where and understand which you have to estimate and which can be objectively measured.
-3semianonymous10yPeople do it selectively, though. When someone does an IQ test and gets a high score, you assume that person has high IQ, for instance, and don't postulate the existence of 'low-IQ people who solved the first two problems on the test', who would then be more likely to solve other, different problems, while having 'low IQ', and ultimately score high while having 'low IQ'.

To explain the issue here in intuitive terms: let's say we have the hypothesis that Alice owns a cat, and we start with the prior probability of a person owning a cat (let's say 1 in 20), and then update on the evidence: she recently moved from an apartment building that doesn't allow cats to one that does (3 times more likely if she has a cat than if she doesn't), she regularly goes to a pet store now (7 times more likely if she has a cat than if she doesn't), and when she goes out there's white hair on her jacket sleeves (5 times more likely if she has a cat than if she doesn't). Putting all of these together by Bayes' Rule, we end up 85% confident she has a cat, but in fact we're wrong: she has a dog. And thinking about it in retrospect, we shouldn't have gotten 85% certainty of cat ownership. How did we get so confident in a wrong conclusion?

It's because, while each of those likelihoods is valid in isolation, they're not independent: there are a big chunk of people who move to pet-friendly apartments and go to pet stores regularly and have pet hair on their sleeves, and not all of them are cat owners. Those people are called pet owners in general, but even if we didn't know tha... (read more)
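The arithmetic of the naive update above can be reproduced in a few lines (the prior and likelihood ratios are the hypothetical ones from the cat example):

```python
# Hypothetical numbers from the cat-owner example above.
prior = 1 / 20                      # base rate of cat ownership
likelihood_ratios = [3, 7, 5]       # pet-friendly move, pet store, white hair

odds = prior / (1 - prior)          # prior odds: 1/19
for lr in likelihood_ratios:
    odds *= lr                      # naive update: treats each clue as independent
posterior = odds / (1 + odds)

print(round(posterior, 2))  # 0.85
```

Multiplying the three ratios as if they were independent is exactly the double-counting at issue: much of the evidence each feature carries is already carried by the others.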

-2semianonymous10yThe cat is defined independently of the owner's traits; that is the difference between the cat and IQ or any other psychological measure. If we were to say 'pet', the formula would have worked, even better if we had a purely black-box classifier splitting people into those who have the bunch of traits vs those who don't, regardless of the cause (a pet, a cat, a weird fetish for pet-related stuff). It is however the case that narcissism does match sociopathy, to the point that the difference between the two is not very well defined. Anyhow we can restate the problem and consider it a guess at the properties of the utility function, adding extra verbiage. The analogy to the math problems is good, but what we are compensating for is miscommunication, status gaming, and such, by normal people. I would suggest, actually, not the Bayesian approach, but a statistical prediction rule or trained neural network.
0othercriteria10yGiven the asymptotic efficiency of the Bayes decision rule in a broad range of settings, those alternatives would give equivalent or less accurate classifications if enough training data (and computational power) were available. If this argument is not familiar, you might want to consult Chapter 2 of The Elements of Statistical Learning [http://www-stat.stanford.edu/~tibs/ElemStatLearn/].
6lavalamp10yI don't think you understood DanielVarga's point. He's saying that the numbers available for some of those features already have an unknown amount of the other features factored in. In other words, if you update on each feature separately, you'll end up double-counting an unknown amount of the data. (Hopefully this explanation is reasonably accurate.) http://en.wikipedia.org/wiki/Conditional_independence [http://en.wikipedia.org/wiki/Conditional_independence]
-2semianonymous10yI did understand his point. The issue is that psychological traits are defined as whatever is behind the correlation - brain lesion A, or brain lesion B, or a weird childhood, or the like. They are very broad and are defined to include the 'other features'. It is probably better to drop the word 'sociopath' and just say 'selfish' - but then it is not immediately apparent why e.g. arrogance not backed by achievements is predictive of selfishness, even though it very much is, as it is a false signal of capability.
2lavalamp10yI don't think it matters how it is defined... One still shouldn't double count the evidence.
0semianonymous10yYou can eliminate the evidence that you consider double-counted, for example grandiose self-worth and grandiose plans - though those need to both be present, because grandiose self-worth without grandiose plans would just indicate some sort of miscommunication (and the self-worth metric is more subjective), and each alone is a much poorer indicator than the two combined. In any case, accurate estimation of anything of this kind is very difficult. In general one just adopts a strategy such that sociopaths would not have a sufficient selfish payoff for cheating it; altruism is a far cheaper signal for non-selfish agents. In very simple terms: if you give someone $3 for donating $4 to a very well verified charity, those who value $4 in charity above $1 in pocket will accept the deal. You just ensure that there is no selfish gain in transactions, and you're fine. If you don't adopt an anti-cheat strategy, you will be found and exploited with very high confidence, since unlike the iterated prisoner's dilemma, cheaters get to choose whom to play with, and get to make signals that induce easily cheatable agents to play with them; a bad strategy is far more likely to be exploited than any conservative estimate would suggest.
5Mitchell_Porter10yCan you name one person working in AI, commercial or academic, whose career is centered on the issue of AI safety? Whose actual research agenda (and not just what they say in interviews) even acknowledges the fact that artificial intelligence is potentially the end of the human race, just as human intelligence was the end of many other species?
2gwern10yI noticed in a HN comment Eliezer claimed to have gotten a vasectomy; I wonder if that's consistent or inconsistent with sociopathy? I can come up with plausible stories either way.
8David_Gerard10yGiven the shortage of sperm donors [*], that strikes me as possibly foolish if IQ is significantly heritable. [*] I know there is in the UK - is there in California?
9gwern10yMy basic conclusion [http://www.gwern.net/Notes#the-morality-of-sperm-donation] is that it's not worth trying because there is an apparent over-supply in the US and you're highly unlikely to get any offspring.
9David_Gerard10yWhereas in the UK, if your sperm is good then you're pretty much certain to. (I recently donated sperm in the UK. They're ridiculously grateful. That at age 18 any offspring are allowed to know who the donor is seems to have been, in itself, enough to tremendously reduce the donation rate. So if you're in the UK, male, smart and healthy, donate sperm and be sure to spread your genes.)
8gwern10yReally? Pardon me if I'm wrong, but I was under the impression that you were in your 30s or 40s, which in the US would damage your chances pretty badly. Perhaps I should amend my essay if it's really that easy in the UK, because the difficulty of donating is the main problem (the next 2 problems are estimating the marginal increase in IQ by one donating, and then estimating the value of said marginal increase in IQ). Do you get notified when some of your sperm actually gets used or is it blind?

I'm 45, started this donation cycle at 44. Limit in the UK is 40-45 depending on clinic. I went to KCH, that link has all the tl;dr you could ever use on the general subject.

I thought I said this in email before ... the UK typically has ~500 people a year wanting sperm, but only ~300 donors' worth of sperm. So donate and it will be used if they can use it.

They don't notify, but I can inquire about it later and find out if it's been used. This will definitely not be for at least six months. The sperm may be kept and used up to about 10 years, I think.

My incentive for this was that I wanted more children but the loved one doesn't (having had two others before). The process is sort of laborious and long winded, and I didn't get paid. (Some reimbursement is possible, but it's strictly limited to travel expenses, and I have a monthly train ticket anyway so I didn't bother asking.) Basically it's me doing something that feels to me like I've spread my genes and is a small social good - and when I said this was my reason for donating, they said that's the usual case amongst donors (many of whom are gay men who want children but are, obviously, quite unlikely to have them in the usual sort... (read more)

5gwern10yI see, that's remarkably different from everything I've found about US donating. Thanks for summarizing it. Could you estimate your total time, soup to nuts, travel time and research included? I'm guessing perhaps 10-20 hours.
4David_Gerard10yResearch ... a few hours. Say three. Email exchange: not much. Visits: 2.5 hours travel time each journey (KCH is on the other side of London from E17), which was one two-hour appointment for "why are you doing this?", blood test and test sperm donation, a one-hour "are you absolutely OK with the ethical details of this?" (which leads me to think that people donating then changing their mind, which you can do any time until the donation is actually used, is a major pain in the backside for them), and four visits so far for actual donations (about 15 min each). Total including travel, which was most of it: 22 hours, if I've counted correctly.
1MixedNuts10yNote that Melinda Gates fits the same criteria about as well.
4semianonymous10yShe did expensive altruistic stuff that was more costly than the expected self-interested payoff, though; actions that are more expensive to fake than the win from faking are a very strong predictor of non-psychopathy. The distinction between a psychopath who is genuinely altruistic and a non-psychopath is that of a philosophical zombie vs a human.
7MixedNuts10yEliezer either picked a much less lucrative career than he could have gotten with the same hours and enjoyment because he wanted to be altruistic, or I'm mistaken about career prospects for good programmers, or he's a dirty rotten conscious liar about his ability to program.
8semianonymous10yPeople don't gain the ability to program out of thin air... everyone able to program has a long list of various working projects that they trained on. In any case, programming is real work: it is annoying, it takes training, it takes education, it slaps your ego on the nose just about every time you hit compile after writing any interesting code. And newbies are grossly mistaken about their abilities. You can't trust anyone to measure their own skills accurately, let alone report them.
3MixedNuts10yAre you claiming (a non-negligible probability) that Eliezer would be a worse programmer if he'd decided to take up programming instead of AI research (perhaps because he would have worked on boring projects and given up?), or that he isn't competent enough to get hired as a programmer now?
7TheOtherDave10yIs it clear that he would have gotten the same enjoyment out of a career as a programmer?

First, please avoid even a well-intentioned discussion of politics here (Krugman is as political as it gets), as you will likely alienate a large chunk of your readers.

As for predictions, you are probably out of luck for anything convincing. Note that AI and singularity research are not expected to provide any useful predictions for decades, and the quantum sequence was meant as background for other topics, not as original research. Cognitive science and philosophy are the two areas where one can conceivably claim that EY made original contributions ... (read more)

We may need to update just how much of a mind-killer politics is here. The Krugman discussion stayed civil and on-topic.

(I had a longish comment here defending the politics taboo, then decided to remove it, because I've found that in the past the responses to defenses of the politics taboo, and the responses to those responses, have done too much damage to justify making such defenses. Please, though, don't interpret my silence from now on as assent to what looks to me like the continuing erosion or maybe insufficiently rapid strengthening of site quality norms.)

9praxis10yCivility and topicality of a discussion isn't a measure of how mind-killed that discussion is. I personally very much doubt that I could have discussed Krugman rationally, had I entered the discussion, though I certainly would have been polite about it. This has no consequence on whether politics is genuinely a mind-killer. I include this disclaimer because it has just occurred to me that (ironically) perhaps the "politics is a mind-killer" issue might be becoming LW's first really political issue, and prompt all the standard arguments-as-soldiers failures of rationality.
5NancyLebovitz10yWhat do you mean by mind-killing? Maybe no forward movement towards better understanding?
2NancyLebovitz10yI really did mean "how much of a mind-killer". We can handle mentions of politics better than some think, but (for example) voting vs. not voting + which candidate is the better/less worse choice would be a lot harder.

I think at LW, uttering rote admonitions has become a bigger mindkiller than politics itself.

Thomblake and I both noted that "politics is the mindkiller" is the mindkiller a few months ago. It would be nice if we could possibly ease off a bit on behaving quite so phobically about actually practical matters that people would be interested in applying rationality to, if we can stop it turning the site into a sea of blue and green.

-1ChrisHallquist10yNice post, but I think even that may not go far enough. Eliezer's original post didn't distinguish carefully between gratuitous "digs" and using political examples to illustrate a point. In this thread, if the success of political commentators in making predictions is a topic perpetualpeace1 knows well, it isn't necessarily wrong for him to use it as an example. If a substantial portion of LW readers are stopping reading when they encounter a thought on politics they dislike, it might be worth confronting that problem directly.

One reason not to use political examples to illustrate a (nonpolitical) point is that it invites a lot of distracting nitpicking from those who identify with the targeted political group.

But another is that if you're trying to make a normative point to a broad audience, then alienating one subset and elevating another — for no good reason — is a losing strategy.

For instance, if you want to talk to people about improving rationality, and you use an example that revolves around some Marxists being irrational and some Georgists being rational, then a lot of the Marxists in the audience are just going to stop listening or get pissed off. But also, a lot of the Georgists are going to feel that they get "rationality points" just for being Georgists.

7shminux10yThat's why I did not repeat the cached rote admonition, but tried to explain why I did not think it was a good idea in this case. I'm happy to have been proven wrong in this particular instance, probably because most regulars here know to steer clear from getting sucked into "right vs left" debates.

[comment deleted]

[This comment is no longer endorsed by its author]
3TheOtherDave10yI think your (b) is what's most relevant here. That is, I generally interpret the recurring "what if your thought leader is wrong?" threads that pop up here as an expression of the expectation that reasoning cannot be judged (either in the specific context, or more generally) except through evaluation of the results it endorses after they become actual. There are varying interpretations I make of that expectation, depending on specifics and on how charitable I'm feeling. Some of those interpretations I even agree with... for example, I would agree that it's much easier for me to fool myself into inaccurately thinking a line of reasoning either makes sense or doesn't make sense, than it is for me to fool myself into inaccurately thinking that a specific prediction either happened or didn't happen. (Both are possible, and I've likely done both in my time, but the latter is more difficult.)