(At this point, I fear that I must recurse into a subsequence; but if all goes as planned, it really will be short.)

I once lent Xiaoguang "Mike" Li my copy of "Probability Theory: The Logic of Science".  Mike Li read some of it, and then came back and said:

"Wow... it's like Jaynes is a thousand-year-old vampire."

Then Mike said, "No, wait, let me explain that—" and I said, "No, I know exactly what you mean."  It's a convention in fantasy literature that the older a vampire gets, the more powerful they become.

I'd enjoyed math proofs before I encountered Jaynes.  But E.T. Jaynes was the first time I picked up a sense of formidability from mathematical arguments.  Maybe because Jaynes was lining up "paradoxes" that had been used to object to Bayesianism, and then blasting them to pieces with overwhelming firepower—power being used to overcome others.  Or maybe the sense of formidability came from Jaynes not treating his math as a game of aesthetics; Jaynes cared about probability theory, it was bound up with other considerations that mattered, to him and to me too.

For whatever reason, the sense I get of Jaynes is one of terrifying swift perfection—something that would arrive at the correct answer by the shortest possible route, tearing all surrounding mistakes to shreds in the same motion.  Of course, when you write a book, you get a chance to show only your best side.  But still.

It spoke well of Mike Li that he was able to sense the aura of formidability surrounding Jaynes.  It's a general rule, I've observed, that you can't discriminate between levels too far above your own. E.g., someone once earnestly told me that I was really bright, and "ought to go to college".  Maybe anything more than around one standard deviation above you starts to blur together, though that's just a cool-sounding wild guess.

So, having heard Mike Li compare Jaynes to a thousand-year-old vampire, one question immediately popped into my mind:

"Do you get the same sense off me?" I asked.

Mike shook his head.  "Sorry," he said, sounding somewhat awkward, "it's just that Jaynes is..."

"No, I know," I said.  I hadn't thought I'd reached Jaynes's level. I'd only been curious about how I came across to other people.

I aspire to Jaynes's level.  I aspire to become as much the master of Artificial Intelligence / reflectivity, as Jaynes was master of Bayesian probability theory.  I can even plead that the art I'm trying to master is more difficult than Jaynes's, making a mockery of deference.  Even so, and embarrassingly, there is no art of which I am as much the master now, as Jaynes was of probability theory.

This is not, necessarily, to place myself beneath Jaynes as a person—to say that Jaynes had a magical aura of destiny, and I don't.

Rather I recognize in Jaynes a level of expertise, of sheer formidability, which I have not yet achieved.  I can argue forcefully in my chosen subject, but that is not the same as writing out the equations and saying:  DONE.

For so long as I have not yet achieved that level, I must acknowledge the possibility that I can never achieve it, that my native talent is not sufficient.  When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself.  Marcello thought for a moment and said "John Conway—I met him at a summer math camp."  Darn, I thought, he thought of someone, and worse, it's some ultra-famous old guy I can't grab.  I inquired how Marcello had arrived at the judgment.  Marcello said, "He just struck me as having a tremendous amount of mental horsepower," and started to explain a math problem he'd had a chance to work on with Conway.

Not what I wanted to hear.

Perhaps, relative to Marcello's experience of Conway and his experience of me, I haven't had a chance to show off on any subject that I've mastered as thoroughly as Conway had mastered his many fields of mathematics.

Or it might be that Conway's brain is specialized off in a different direction from mine, and that I could never approach Conway's level on math, yet Conway wouldn't do so well on AI research.

Or...

...or I'm strictly dumber than Conway, dominated by him along all dimensions.  Maybe, if I could find a young proto-Conway and tell them the basics, they would blaze right past me, solve the problems that have weighed on me for years, and zip off to places I can't follow.

Is it damaging to my ego to confess that last possibility?  Yes.  It would be futile to deny that.

Have I really accepted that awful possibility, or am I only pretending to myself to have accepted it?  Here I will say:  "No, I think I have accepted it."  Why do I dare give myself so much credit?  Because I've invested specific effort into that awful possibility.  I am blogging here for many reasons, but a major one is the vision of some younger mind reading these words and zipping off past me.  It might happen, it might not.

Or sadder:  Maybe I just wasted too much time on setting up the resources to support me, instead of studying math full-time through my whole youth; or I wasted too much youth on non-mathy ideas.  And this choice, my past, is irrevocable.  I'll hit a brick wall at 40, and there won't be anything left but to pass on the resources to another mind with the potential I wasted, still young enough to learn.  So to save them time, I should leave a trail to my successes, and post warning signs on my mistakes.

Such specific efforts predicated on an ego-damaging possibility—that's the only kind of humility that seems real enough for me to dare credit myself.  Or giving up my precious theories, when I realized that they didn't meet the standard Jaynes had shown me—that was hard, and it was real.  Modest demeanors are cheap.  Humble admissions of doubt are cheap.  I've known too many people who, presented with a counterargument, say "I am but a fallible mortal, of course I could be wrong" and then go on to do exactly what they planned to do previously.

You'll note that I don't try to modestly say anything like, "Well, I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field."

Because I do know... that's not how it works.

The Level Above Mine

357 comments

In a few years, you will be as embarrassed by these posts as you are today by your former claims of being an Algernon, or that a logical paradox would make an AI go gaga, the tMoL argumentation you mentioned in the last few days, the Workarounds for the Laws of Physics, Love and Life Just Before the Singularity, and so on and so forth. Ask yourself: Will I have to delete this, too?

And the person who told you to go to college was probably well-meaning, and not too far from the truth. Was it Ben Goertzel?

Despite all fallibility of memory, I would be shocked to learn that I had ever claimed that a logical paradox would make an AI go gaga. Where are you getting this from?

Ben's never said anything like that to me. The comment about going to college was from an earnest ordinary person, not acquainted with me. And no, I didn't snap at them, or laugh out loud; it was well-intentioned advice. Going to college is a big choice for a lot of people, and this was someone who met me, and saw that I was smart, and thought that I seemed to have the potential to go to college.

Which is to imply that if there's a level above Jaynes, it may be that I won't understand it until I reach Jaynes's level - to me it will all just look like "going to college". If I recall my timeline correctly, I didn't comprehend Jaynes's level until I had achieved the level of thinking naturalistically; before that time, to achieve a reductionist view of intelligence was my whole aspiration.

Although I've never communicated with you in any form, and hence don't know what it's like for you to answer a question of mine, or correct a misconception (you have, but gradually), or outright refute a strongly held belief...or dissolve a Wrong Question...

...You're still definitely the person who strikes me, above all else, as an inhuman genius.

Unfortunately for my peace of mind and ego, people who say to me "You're the brightest person I know" are noticeably more common than people who say to me "You're the brightest person I know, and I know John Conway". Maybe someday I'll hit that level. Maybe not.

Until then... I do thank you, because when people tell me that sort of thing, it gives me the courage to keep going and keep trying to reach that higher level.

Seriously, that's how it feels.

-3[anonymous]
I think maybe Being the Smartest Person is a fundamentally bad, unhelpful motivator, and you should get some cognitive therapy. Of course, you would immediately conclude (correctly) that you are smarter than your mental health professional and stop listening (stupidly and non-volitionally) to them. So this is probably a road you're going to have to walk. Here's hoping you don't have a horrible self- or other-destructive flameout.

You are the brightest person I know. And I know Dan Dennett, Max Tegmark, Robert Trivers, Marcello, Minsky, Pinker and Omohundro.

Unfortunately, those are non-math geniuses, so that speaks only for some sub-areas of cognition which, being less strictly categorizable than the clearly scalable domain of math, are not subject to your proposed rule that "one standard deviation above you, they blur".

2JackEmpty
"Know" in the sense EY used it != have read, watched interviews, etc. I took it to mean more personal interaction (even if through comments online).
2lessdazed
Especially since "know of" exists as a common phrase to cover the meaning "have read, watched interviews, etc."

I have had classes with them, asked questions, and met them personally. I should have anticipated disbelief. And yes, I didn't notice that I categorized Marcello as non-math, sorry Marcello!

Oh. Cool! Less disbelief, more illusion of transparency.

If a randomly selected person says, "I know X (academically) famous people," I myself usually assume they mean through impersonal means.

Update'd. Carry on :D

1lessdazed
Non-math geniuses who grok and advocate for unpopular reductionism are in one sense greater than mere superheroes who know the math.
-3[anonymous]
In another sense, non-math geniuses advocating for reductionism are no better than the anti-vaccine lobby.
5Luke_A_Somers
What sense is that?
3JohnWittle
The sense in which they did not arrive at their beliefs by starting with sane priors which did not presuppose reductionism, and then updating on evidence until they independently discovered reductionism. I disagree with the grandparent, however: I believe that (most) non-math-geniuses advocating for reductionism are more akin to Einstein believing in General Relativity before any novel predictions had been verified: recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs.
-2whowhowho
The "absurdity" of non-reductionism seems to have evaded Robert Laughlin, Jaorn Lanier and a bunch of other smart people.
9JohnWittle
I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs". Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic. Can you explain to me how it might work?

Edit: I googled "Robert Laughlin Reductionism" and actually found a longish paper he wrote about reductionism and his beliefs. I have some criticisms. Yudkowsky has a great refutation, at The Futility of Emergence, of using the description "emergent" to describe phenomena.

Every time Laughlin makes the specific claim that reductionism makes worse predictions than a belief in "emergent phenomena", in which "organizational structure" is an additional property that all of reality must have, in addition to "mass" and "velocity", he cites himself. He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right and reductionism was wrong.

He goes on to say that reductionism is popular because you can always examine a system by looking at its internal mechanisms, but you can't always examine a system by looking at it from a "higher" perspective. A good example, he says, is the genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him. He would rather suppose that the universe contains rules like "when a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism" than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-redu
1CCC
It's a bit of an aside to your main point, but there are good arguments to support the assertion that DNA is only a partial recipe for an organism, such as a human. The remaining information is present in the environment of the mother's womb in other forms; for example, where there's an ambiguity in the DNA with regards to the folding of a certain protein, other proteins present in the womb may correct any incorrectly folded samples.

To look at your main point: if I were to present an argument against reductionism, I would point to the personal computer. This is a device constructed in order to run software; that is, to follow a list of instructions that manipulate binary data. Once you have a list of all the instructions that the computer can follow, and what these instructions do, a thorough electrical analysis of the computer's circuitry isn't going to provide much new information; and it will be a lot more complicated, and harder to understand. There's a conceptual point, there, at the level of individual software instructions, where further reductionism doesn't help to understand the phenomenon, and does make the analysis more complicated, and harder to work with.

A thorough electrical analysis is, of course, useful if one wishes to confirm that the stated behaviour of the basic software commands is both correctly stated, and free of unexpected side-effects. However, an attempt to describe (say) the rendering of a JPEG image in terms of which transistors are activated at which point is likely a futile exercise.
-3Morendil
Well, yes - but that arises from the fact that such devices are man-made, and (out of respect to our brains' limitations) designed to isolate the layers of explanation from one another - to obviate the need for a fully reductionistic account. The argument will not apply to things not man-made.
2CCC
The entire science of psychology is based on the idea that it is useful to apply high-level rules to the neural functioning of the human brain. If I decide to eat a cookie, then I explain it in high-level terms: I was hungry, the cookie smelt delicious. An analysis in terms of the effect of airborne particles originating from the cookie on my nasal passages, and subsequent alterations in the pattern of neural activations in my brain, can give a far more complicated answer to the question of why I ate the cookie; but, again, I don't see how such a more complicated analysis would be better. If I want to understand my motivations more fully, I can do so in terms of mental biases, subconscious desires, and so forth, rather than a neuron-by-neuron analysis of my own brain.

And while it is technically true that I, as a human, am man-made (specifically, that I was made by my parents), a similar argument could be raised for any animal. Such situations are rare, but not entirely unknown.
4JohnWittle
I disagree with your entire premise. I think we should pin down this concept of "levels of perspective" with some good jargon at some point, but regardless...

You can look at a computer from the level of perspective of "there are windows on the screen and I can move the mouse around. I can manipulate files on the hard drive with the mouse and the keyboard, and those changes will be reflected inside information boxes in the windows." This is the perspective most people see a computer from, but it is not a complete description of a computer (i.e. if someone unfamiliar with the concept of computers heard this description, they could not build a computer from base materials).

You might also see the perspective, "There are many tiny dots of light on a flat surface, lit up in various patterns. Those patterns are caused by electricity moving in certain ways through silicon wires arranged in certain ways." This is, I think, one level lower, but an unfamiliar person could not build a computer from scratch from this description.

Another level down, the description might be: "There is a CPU, which is composed of hundreds of thousands of transistors, arranged into logic gates such that when electricity is sent through them you can perform meaningful calculations. These calculations are written in files using a specific instruction set ("assembly language"). The files are stored on a disk in binary, with the disk containing many cesium atoms arranged in a certain order, which have either an extra electron or do not, representing 1 and 0 respectively. When the CPU needs to temporarily store a value useful in its calculations, it does so in the RAM, which is like the disk except much faster and smaller. Some of the calculations are used to make certain square-shaped lights on a large flat surface blink in certain ways, which provides arbitrary information to the user."

We are getting to the point where an unfamiliar human might be able to recreate a computer from scratch, and th
-3whowhowho
That's a fusion of reductionism and determinism. Reductionism isn't necessarily false in an indeterministic universe. What is more pertinent is being able to predict higher-level properties and laws from lower-level properties and laws (synchronously, in the latter case).
4JohnWittle
No it isn't? I did not mean you would be able to make predictions which came true 100% of the time. I meant that your subjective anticipation of possible outcomes would be equal to the probability of those outcomes, maximizing both precision and accuracy.
-3whowhowho
Yes it is. "A property of a system is said to be emergent if it is in some sense more than the 'sum' of the properties of the system's parts. An emergent property is said to be dependent on some more basic properties (and their relationships and configuration), so that it can have no separate existence. However, a degree of independence is also asserted of emergent properties, so that they are not identical to, or reducible to, or predictable from, or deducible from their bases. The different ways in which the independence requirement can be satisfied lead to various sub-varieties of emergence." -- WP

Still determinism, not reductionism. In a universe where:

* (1a) there are lower-level properties,
* (1b) operating according to a set of deterministic laws,
* (2a) there are also higher-level properties,
* (2b) irreducible to and unpredictable from the lower-level properties and laws,
* (2c) which follow their own deterministic laws,

you would be able to predict the future with complete accuracy, given both sets of laws and two sets of starting conditions. Yet the universe being described is explicitly non-reductionistic.
1Kindly
I'm a bit confused. What exactly defines a "higher-level" property, if not that it can be reduced to lower-level properties?
-3whowhowho
E.g.: being macroscopic, featuring only in the special sciences.
2JohnWittle
All this means is that, in addition to the laws which govern low-level interactions, there are different laws which govern high-level interactions. But they are still laws of physics; they just sound like "when these certain particles are arranged in this particular manner, make them behave like this, instead of how the low-level properties say they should behave". Such laws are still fundamental laws, on the lowest level of the universe. They are still part of the code for reality. But you are right about that, which is what I said: ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest-level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.
-3whowhowho
Microphysical laws map microphysical states to other microphysical states. Top-down causation maps macrophysical states to microphysical states. In the sense that they are irreducible, yes. In the sense that they are concerned only with microphysics, no. "Deterministic" typically means that an unbounded agent will achieve probabilities of 1.0.
1JohnWittle
Can you name any examples of such a phenomenon? Oh, well in that case quantum physics throws determinism out the window for sure. I still think there's something to be said for correctly assigning subjective probabilities to your anticipations such that 100% of the time you think something will happen with a 50% chance, it happens half the time, i.e. you are correctly calibrated. An unbounded agent in our universe would be able to achieve such absolutely correct calibration; that's all I meant to imply.
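In symbols, the calibration condition just described is the standard one (this is a restatement of the sentence above, not a new claim):

$$\Pr(\text{event occurs} \mid \text{you assigned it probability } p) = p \qquad \text{for all } p \in [0,1].$$

An unbounded agent could satisfy this exactly; bounded agents can only approximate it.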
-1CCC
You are right; my example was a bad one, and it does not support the point that I thought it supported. The mere fact that something takes unreasonably long to calculate does not mean that it is not an informative endeavour. (I may have been working from a bad definition of reductionism.)

Um. I suspect that this may have been poorly phrased. If I have a lump of carbon, quite a bit of water, and a number of other elements, and I just throw them together in a pile, they're unlikely to do much: there may be a bit of fizzing, some parts might dissolve in the water, but that's about it. Yet if I reorganise the same matter into a human, I have an organisation of matter that is able to enter into a debate about reductionism, which I don't think can be predicted by looking at the individual chemical elements alone. But that behaviour might still be predictable from looking at the matter, organised in that way, at its most basic level of perspective (given sufficient computing resources). Hence, I suspect that it is not a counter-example.
7EHeller
Not true. There is a reason no one uses quarks to describe chemistry. It's futile to describe what's happening in superfluid helium in terms of individual particle movement. Far better to use a two-fluid model, and vortices.
7Morendil
Let me amend that: the argument will not necessarily apply to things not man-made. There is a categorical difference in this respect between man-made things and the rest, and my intent was to say: "if you're going to put up an argument against reductionism, don't use examples of man-made things". Whereas we have good reasons to bar "leaky abstractions" from our designs, Nature labors under no such constraint. If it turns out that some particular process that happens in superfluid helium can be understood only by referring to the quark level, we are not allowed to frown at Nature and say "oh, poor design; go home, you're drunk".

For instance, it turns out we can almost describe the universe in the Newtonian model with its relatively simple equations, a nice abstraction if it were non-leaky, but anomalies like the precession of Mercury turn up that require us to use General Relativity instead, and take it into account when building our GPS systems.

The word "futile" in this context strikes me as wishful thinking, projecting onto reality our parochial notion of how complicated a reductionistic account of the universe "should" be. Past experience tells us that small anomalies sometimes require the overthrow of entire swathes of science, in the name of reductionism: there keep turning up cases where science considers it necessary, not futile, to work things out in terms of the lower levels of description.
1EHeller
I think you are making a bad generalization when you turn to Newtonian mechanics vs. general relativity. There are important ways in which mesons and hadrons are emergent from quarks that have no correspondence to the relationship between Newtonian mechanics and GR. As length scales increase, quarks go from being loosely bound fundamental degrees of freedom to not-even-good-degrees-of-freedom. At 'normal' length scales, free quarks aren't even allowed. The modern study of materials is also full of examples of emergence (it underlies much work on renormalization groups), although it's farther from my expertise, so the only example to spring to mind was liquid helium.
5TheOtherDave
As an aside to an aside, I wonder how much information about the DNA reading frame could in principle be extracted from the DNA of a female organism, given the knowledge (or the assumption) that mature females can gestate a zygote? Almost all possible reading frames would be discardable on the grounds that the resulting organism would not be able to gestate a zygote, of course, but I don't have any intuitive sense of how big the remaining search space would be.

And as a nod towards staying on topic: well, it will, and it won't. If what I mostly care about is the computer's behavior at the level of instructions, then sure, understanding the instructions gets me most of the information that I care about. Agreed. OTOH, if what I mostly care about is the computer's behavior at the level of electrical flows through circuits (for example, if I'm trying to figure out how to hack the computer without an input device by means of electrical induction, or confirm that it won't catch fire in ordinary use), then a thorough electrical analysis of the computer's circuitry provides me with tons of indispensable new information.

What counts as "information" in a colloquial sense depends a lot on my goals. It might be useful to taboo the word in this discussion.
0CCC
My intuition says "very, very big". Consider: depending on womb conditions, the percentage of information expressed in the baby which is encoded in the DNA might change. As an extreme example, consider a female creature whose womb completely ignores the DNA of the zygote, creating instead a perfect clone of the mother. Such an example makes it clear that the search space is at least as large as the number of possible female creatures that are able to produce a perfect clone of themselves. I accept your point. Such an analysis does provide a more complete view of the computer, which is useful in some circumstances.
1TheOtherDave
Sure, I agree that one permissible solution is a decoder which produces an organism capable of cloning itself. And while I'm willing to discard as violating the spirit of the thought experiment decoder designs which discard the human DNA in its entirety and create a predefined organism (in much the same sense that I would discard any text-translation algorithm that discarded the input text and printed out the Declaration of Independence as a legitimate translator of the input text), there's a large space of possibilities here.
3CCC
Would you be willing to consider, i.e. not discard, a decoder that used the human DNA as merely a list of indexes, downloading the required genes from some sort of internal lookup table? By changing the lookup table, one can dramatically change the resulting organism; and having a different result for every viable human DNA is merely a result of having a large enough lookup table. It would be, to extend your metaphor, like a text-translation algorithm that returned the Declaration of Independence if given Alice in Wonderland as input, and returned Alice in Wonderland if given Hamlet.
2TheOtherDave
(considers) I would like to say "no", but can't think of any coherent reason to discard such a design. Yeah, OK; point made.
-5EHeller
-1whowhowho
I did not say that non-reductionism is absurd. I said that "recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs". Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.

One formulation of reductionism is that natural laws can be ordered in a hierarchy, with the higher-level laws being predictable from, or reducible to, the lower ones. So emergentism, in the cognate sense, not working would be that stack of laws failing to collapse down to the lowest level. There are two claims there: one contentious, one not. That there are multiply-realisable, substrate-independent higher-level laws is not contentious. For instance, wave equations have the same form for water waves, sound waves and so on. The contentious claim is that this is ipso facto top-down causation. Substrate-independent laws are still reducible to substrates, because they are predictable from the behaviour of their substrates.

I don't see how that refutes the above at all. For one thing, Laughlin and Ellis do have detailed examples of emergent laws (in their rather weak sense of "emergent"). For another, they are not calling on emergence itself as doing any explaining. "Emergence isn't explanatory" doesn't refute "emergence is true". For a third, I don't see any absurdity here. I see a one-word-must-have-one-meaning assumption that is clouding the issue. But where a problem is so fuzzily defined that it is hard even to identify the "sides", one can't say that one side is "absurd".

Neither is supposed to make predictions. Each can be considered a methodology for finding laws, and it is the laws that do the predicting. Each can also be seen as a meta-level summary of the laws so far found. EY can't do that for MWI either. Maybe it isn't all about prediction.

That's robustly true. Genetic code has to be interpreted by a cellular environment. There are no self-decoding codes. Reudcti
2Oscar_Cunningham
Marcello is non-math?

For what it's worth, I've worked on a project and had lunch with Conway, and your ideas seem more prescient than his. But being a mathematician, I know people who are in turn far above Conway's level.

So how does it work, in your opinion? Because “I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field,” sounds suspiciously similar to how Hamming asserts that it works in “You and Your Research.” I guess you have a different belief about how doing important things in your chosen field works, but I don't see that you've explained that belief here or anywhere else that I've seen.

I don't suppose Marcello is related to Nadja and Josh Herreshoff?

I don't know if it helps, but while I've appreciated the things I've learned from you, my limited interaction with you hasn't made me think you're the brightest person I know. I think of you as more or less at my level — maybe a couple of standard deviations above or below, I can’t really tell. Certainly you're sharp enough that I'd enjoy hanging out with you. (Let me know the next time you're in Argentina.)

P.S. the impugnment of your notability has now been removed from your Wikipedia page, apparently as a result of people citing you in their papers.

3logicaldash
I too would like to hear "how it works," because if I don't know how Eliezer thinks it works, it just sounds like he's defining the problem of Being a Great Researcher in the most intimidating way possible. Whatever way that may be. Inflating the problem like that is bad practice, for much the same reason that cheap gestures of humility are bad practice. I'm commenting on a two-year-old post, so I guess I shouldn't expect a response, but this post is linked from the getting-started page, so I was a bit disappointed that it ended with what looks a lot like a handwave at humility.

Wait wait wait wait. Eliezer...are you saying that you DON'T know everything????

~runs off and weeps in a corner in a fetal position~

CatAI (1998): "Precautions"/"The Prime Directive of AI"/"Inconsistency problem".

My memory may fail me, and the relevant archives don't go back that far, but I recall Ben (and/or possibly other people) suggesting that you go to college, or at least enroll in a grad program in AI, on the Extropy chat list around 1999/2000. I think these suggestions were related to, but not solely based on, your financial situation at that time (which ultimately led to the creation of the SIAI, so maybe we should be glad it turned out the way it did, even if, in my opinion, following the advice would have been beneficial to you and your work).

[+]Yep-80
[-]Eric5170

I definitely see the "levels" phenomenon very often. Most people I meet who see me play a musical instrument (or 5 or 10 different ones) think I must be a genius at music - unless they're a musician, then they recognize me as an amateur with enough money to buy interesting instruments and enough skill to get a basic proficiency at them quickly.

And even with standard measures of intellect like rationality or math... I don't know that many of my friends who have read any of this blog would recognize you as being smarter than me, despite the fact that you're enough levels above me that my opinion of you is pretty much what "Not You" said above.

I can keep up with most of your posts, but to be able to keep up with a good teacher, and to be that good teacher, is a gap of at least a few levels. But aspiring to your level (though I may not reach it) has probably been the biggest motivator for me to practice the art. I certainly won't be the one who zips by you, but you've at least pulled me up to a level where I might be able to guide one who will down a useful path.

Up to now there never seemed to be a reason to say this, but now that there is:

Eliezer Yudkowsky, afaict you're the most intelligent person I know. I don't know John Conway.

Your faith in math is misplaced. The sort of math smarts you are obsessed with just isn't that correlated with intellectual accomplishment. For accomplishment outside of math, you must sacrifice time that could be spent honing your math skills, to actually think about other things. You could be nearly the smartest math-type guy anyone you meet knows, and still not accomplish much if math is not the key to your chosen subject.

It's interesting, actually. You're motivated by other people's low opinions of you -- this pressure you feel in your gut to prove Caledonian et al wrong -- so you've taken what is probably fairly standard human machinery and tried to do something remarkable with it.

My question is, are you still motivated by the doubt you feel about your native abilities, or have you passed into being compelled purely by your work?

Perhaps the truly refulgent (before they had so become) reached a progression tipping point at which they realized (right or wrong, ironically) that they were essentially beyond comparison, and hence stopped comparing.

Then they could allocate the scarce resources of time and thought exclusively to the problems they were addressing, thus actually attaining a level that truly was beyond comparison.

Jaynes was a really smart guy, but no one can be a genius all the time. He did make at least one notable blunder in Bayesian probability theory -- a blunder he could have avoided if only he'd followed his own rules for careful probability analysis.

You come across as very intelligent when you stick to your areas of expertise, like probability theory, AI and cognitive biases, but some of your more tangential stuff can seem a little naive. Compared to the other major poster on this blog, Robin, I'd say you come across as smarter but less "wise", if that means anything to you. I'm not even a huge fan of the notion of "wisdom", but if there's something you're missing, I think that's it.

[-]Rob350

If you haven't read it, Simonton's Origins of Genius draws a nice distinction between mental agility and long-term intellectual significance, and explores the correlation between the two. Not a terribly well-written book, but certainly thought-provoking.

@EY: We are the cards we are dealt, and intelligence is the unfairest of all those cards. More unfair than wealth or health or home country, unfairer than your happiness set-point. People have difficulty accepting that life can be that unfair, it's not a happy thought. "Intelligence isn't as important as X" is one way of turning away from the unfairness, refusing to deal with it, thinking a happier thought instead. It's a temptation, both to those dealt poor cards, and to those dealt good ones. Just as downplaying the importance of money is ... (read more)

4faul_sname
It's simply dissolving some cognitive illusions he shouldn't have had in the first place, but that most of us have probably had at some point in our lives. If you've got intelligence at 2 standard deviations above average, and you overestimate your own intelligence by one standard deviation (which is probably a pretty common mistake, and if anything underestimates the effect), then you'll see that you're probably the most intelligent person you interact with on a regular basis. If you're out at 3 standard deviations, it may not be until college that you see that some of your fellow students, or at least some of your professors, are indisputably smarter than you. If you're out at 4 or 5 standard deviations, as I imagine Eliezer is (I myself can't honestly peg myself past 3.5 standard deviations, which means I'm probably around 2 standard deviations above average and can't really distinguish beyond 2 standard deviations above my own level), I have some difficulty imagining what that must be like; only that even in the things you read you won't find many minds as formidable as (your perception of) your own, and even rarer will be minds that clearly surpass your own. But I think he is in the camp of trying to improve human intelligence (or at least human rationality; gwern seems to be the better poster child for improving human intelligence). Hence the sequences.
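As a rough illustration of how quickly those levels thin out, here is a minimal sketch, assuming purely for illustration that ability is normally distributed (the distributional assumption is mine, not the comment's; `scipy`'s normal survival function gives the upper-tail fraction):

```python
from scipy.stats import norm

# Illustrative only: assumes a normal distribution of ability,
# which is a modeling assumption, not a claim from the thread.
for sd in [2, 3, 4, 5]:
    tail = norm.sf(sd)  # fraction of the population more than `sd` SDs above the mean
    print(f"{sd} SD above the mean: about 1 in {round(1 / tail):,} people")
```

On those illustrative numbers (about 1 in 44 at 2 SD, 1 in 741 at 3 SD, 1 in 31,574 at 4 SD), someone four or five standard deviations out could go a very long time without encountering a clearly more formidable mind, which is the experience the comment describes.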
-3Peterdjones
Is a home-schooled person well positioned to judge that sort of thing? They're the smartest kid in a class of one.
5lavalamp
Not sure how homeschooling is relevant here, but speaking as a homeschooled person: it goes both ways, you're also the stupidest person in a class of one.
3Kawoomba
Sidenote: I'd homeschool my kids if it were allowed where I live.
2lavalamp
(This seems like the wrong thread for a protracted discussion but I'm happy to say more in an open thread or via PM if you want to hear more, although it sounds like it's a moot point for you.)
1Kawoomba
(I do want to hear more, go ahead using any means you'd like.)
3lavalamp
OK: http://lesswrong.com/r/discussion/lw/gbw/open_thread_january_1631_2013/8bdp

Eliezer, I've been watching you with interest since 1996 due to your obvious intelligence and "altruism." From my background as a smart individual with over twenty years managing teams of Ph.D.s (and others with similar non-degreed qualifications) solving technical problems in the real world, you've always struck me as near but not at the top in terms of intelligence. Your "discoveries" and developmental trajectory fit easily within the bounds of my experience of myself and a few others of similar aptitudes, but your (sheltered) arrogance has always stood out. I wish you continued progress, not so much in ever-sharper analysis, but in ever more effective synthesis of the leading-edge subjects you pursue.

How much do you worry about age 40? Is that just based on your father? Conway passed 40 before Marcello was born.

If not, why aren't you in the camp of those who wish to improve human intelligence?

I'll take this one because I'm almost certain Eliezer would answer the same way.

Working on AI is a more effective way of increasing the intelligence of the space and matter around us than increasing human intelligence is. The probability of making substantial progress is higher.

3Kingreaper
I disagree. Human intelligence is clearly misoptimised for many goals, and I see no clear evidence that it's easier to design a new intelligence from scratch than to optimise the human one. They have very different possible effects: "FOOM!" vs. "We are awaiting GFDCA [Genetics, Food, Drugs and Cybernetics Administration] approval of this new implant/chimerism/genehack". So the average impact of human-optimisation may be lower, but my probability estimate for human-improvement tech is much higher.

Wow, chill out, Eliezer. You're probably among the top 10, certainly in the top 20, most-intelligent people I've met. That's good enough for anything you could want to do. You are ranked high enough that luck, money, and contacts will all be more important factors for you than some marginal increase in intelligence.

First, same question as Douglas: what is it with the brick wall at 40?

Second: This is another great post; it's rare for people to expose their thoughts about themselves in such an open way. Congratulations!

Regarding your ability, I'm just a regular guy (studied math in college), but your writings are the most inspiring I've ever read. So much self-reflection about intelligence and the thinking process. The insight about how certain mental processes feel is totally new to me. You have helped me a lot to identify my own blind spots and mistakes. Now I can look... (read more)

I second Robin's comment.

A friend of mine, Steve Jordan, once asked me just how smart I thought he and I were.  I answered that I think that no-one is really as smart as the two of us both think we are.  You see, for many many people it is possible to choose a weighting scheme among the dozen or so factors that contribute to intellectual work such that they are the best.  You simply define the vector to their point on the "efficient aptitude frontier" as "real intelligence".  A dozen or so people associated with this blog and/or with SIAI, and a smaller number who aren't, appear to me to be on points of the "known to Michael Vassar efficient aptitude frontier", though not necessarily equally mission-critical points.  For my "save the world dream team" I would pick a 25-year-old Steve Jobs over a 25-year-old Terence Tao, though I'd like both of course.

Manuel, "enroll in a grad program for AI" != "you're smart, you should go to college".

Kragen, the short answer is, "It's easy to talk about the importance of effort if you happen to be Hamming." If you can make the ante for the high-stakes table, then you can talk about how little the ante counts for, and the importance of playing your cards well. But if you can't make the ante...

Robin, it's not blind faith in math or math for the sake of impressiveness, but a specific sense that the specific next problems I have to solve, will require more math than I've used up to this point. Not Andrew J. Wiles math, but Jaynes doesn't use Wiles-math either. I quite share your prejudice against math for the sake of looking impressive, because that gets you the wrong math. (Formality isn't about Precision?)

Ken, it's exclusively my work that gives me the motivation to keep working on something for years, but things like pride can give me the motivation to keep working on something for the next minute. I'll take whatever sources of motivation I can get (er, that aren't outright evil, of course).

Douglas, yes, my father changed at 40. But one of my primary sources... (read more)