(At this point, I fear that I must recurse into a subsequence; but if all goes as planned, it really will be short.)
I once lent Xiaoguang "Mike" Li my copy of "Probability Theory: The Logic of Science". Mike Li read some of it, and then came back and said:
"Wow... it's like Jaynes is a thousand-year-old vampire."
Then Mike said, "No, wait, let me explain that—" and I said, "No, I know exactly what you mean." It's a convention in fantasy literature that the older a vampire gets, the more powerful they become.
I'd enjoyed math proofs before I encountered Jaynes. But E.T. Jaynes was the first time I picked up a sense of formidability from mathematical arguments. Maybe because Jaynes was lining up "paradoxes" that had been used to object to Bayesianism, and then blasting them to pieces with overwhelming firepower—power being used to overcome others. Or maybe the sense of formidability came from Jaynes not treating his math as a game of aesthetics; Jaynes cared about probability theory, it was bound up with other considerations that mattered, to him and to me too.
For whatever reason, the sense I get of Jaynes is one of terrifying swift perfection—something that would arrive at the correct answer by the shortest possible route, tearing all surrounding mistakes to shreds in the same motion. Of course, when you write a book, you get a chance to show only your best side. But still.
It spoke well of Mike Li that he was able to sense the aura of formidability surrounding Jaynes. It's a general rule, I've observed, that you can't discriminate between levels too far above your own. E.g., someone once earnestly told me that I was really bright, and "ought to go to college". Maybe anything more than around one standard deviation above you starts to blur together, though that's just a cool-sounding wild guess.
So, having heard Mike Li compare Jaynes to a thousand-year-old vampire, one question immediately popped into my mind:
"Do you get the same sense off me?" I asked.
Mike shook his head. "Sorry," he said, sounding somewhat awkward, "it's just that Jaynes is..."
"No, I know," I said. I hadn't thought I'd reached Jaynes's level. I'd only been curious about how I came across to other people.
I aspire to Jaynes's level. I aspire to become as much the master of Artificial Intelligence / reflectivity, as Jaynes was master of Bayesian probability theory. I can even plead that the art I'm trying to master is more difficult than Jaynes's, making a mockery of deference. Even so, and embarrassingly, there is no art of which I am as much the master now, as Jaynes was of probability theory.
This is not, necessarily, to place myself beneath Jaynes as a person—to say that Jaynes had a magical aura of destiny, and I don't.
Rather I recognize in Jaynes a level of expertise, of sheer formidability, which I have not yet achieved. I can argue forcefully in my chosen subject, but that is not the same as writing out the equations and saying: DONE.
For so long as I have not yet achieved that level, I must acknowledge the possibility that I can never achieve it, that my native talent is not sufficient. When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself. Marcello thought for a moment and said "John Conway—I met him at a summer math camp." Darn, I thought, he thought of someone, and worse, it's some ultra-famous old guy I can't grab. I inquired how Marcello had arrived at the judgment. Marcello said, "He just struck me as having a tremendous amount of mental horsepower," and started to explain a math problem he'd had a chance to work on with Conway.
Not what I wanted to hear.
Perhaps, relative to Marcello's experience of Conway and his experience of me, I haven't had a chance to show off on any subject that I've mastered as thoroughly as Conway had mastered his many fields of mathematics.
Or it might be that Conway's brain is specialized off in a different direction from mine, and that I could never approach Conway's level on math, yet Conway wouldn't do so well on AI research.
...or I'm strictly dumber than Conway, dominated by him along all dimensions. Maybe, if I could find a young proto-Conway and tell them the basics, they would blaze right past me, solve the problems that have weighed on me for years, and zip off to places I can't follow.
Is it damaging to my ego to confess that last possibility? Yes. It would be futile to deny that.
Have I really accepted that awful possibility, or am I only pretending to myself to have accepted it? Here I will say: "No, I think I have accepted it." Why do I dare give myself so much credit? Because I've invested specific effort into that awful possibility. I am blogging here for many reasons, but a major one is the vision of some younger mind reading these words and zipping off past me. It might happen, it might not.
Or sadder: Maybe I just wasted too much time on setting up the resources to support me, instead of studying math full-time through my whole youth; or I wasted too much youth on non-mathy ideas. And this choice, my past, is irrevocable. I'll hit a brick wall at 40, and there won't be anything left but to pass on the resources to another mind with the potential I wasted, still young enough to learn. So to save them time, I should leave a trail to my successes, and post warning signs on my mistakes.
Such specific efforts predicated on an ego-damaging possibility—that's the only kind of humility that seems real enough for me to dare credit myself. Or giving up my precious theories, when I realized that they didn't meet the standard Jaynes had shown me—that was hard, and it was real. Modest demeanors are cheap. Humble admissions of doubt are cheap. I've known too many people who, presented with a counterargument, say "I am but a fallible mortal, of course I could be wrong" and then go on to do exactly what they planned to do previously.
You'll note that I don't try to modestly say anything like, "Well, I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field."
Because I do know... that's not how it works.
In a few years, you will be as embarrassed by these posts as you are today by your former claims of being an Algernon, or that a logical paradox would make an AI go gaga, the tMoL argumentation you mentioned these last days, the Workarounds for the Laws of Physics, Love and Life Just Before the Singularity, and so on and so forth. Ask yourself: Will I have to delete this, too?
And the person who told you to go to college was probably well-meaning, and not too far from the truth. Was it Ben Goertzel?
Despite all fallibility of memory, I would be shocked to learn that I had ever claimed that a logical paradox would make an AI go gaga. Where are you getting this from?
Ben's never said anything like that to me. The comment about going to college was from an earnest ordinary person, not acquainted with me. And no, I didn't snap at them, or laugh out loud; it was well-intentioned advice. Going to college is a big choice for a lot of people, and this was someone who met me, and saw that I was smart, and thought that I seemed to have the potential to go to college.
Which is to imply that if there's a level above Jaynes, it may be that I won't understand it until I reach Jaynes's level - to me it will all just look like "going to college". If I recall my timeline correctly, I didn't comprehend Jaynes's level until I had achieved the level of thinking naturalistically; before that time, to achieve a reductionist view of intelligence was my whole aspiration.
Although I've never communicated with you in any form, and hence don't know what it's like for you to answer a question of mine, or correct a misconception (you have, but gradually), or outright refute a strongly held belief...or dissolve a Wrong Question...
...You're still definitely the person who strikes me as inhumanly genius - above all else.
Unfortunately for my peace of mind and ego, people who say to me "You're the brightest person I know" are noticeably more common than people who say to me "You're the brightest person I know, and I know John Conway". Maybe someday I'll hit that level. Maybe not.
Until then... I do thank you, because when people tell me that sort of thing, it gives me the courage to keep going and keep trying to reach that higher level.
Seriously, that's how it feels.
You are the brightest person I know. And I know Dan Dennett, Max Tegmark, Robert Trivers, Marcello, Minsky, Pinker and Omohundro.
Unfortunately, those are non-math geniuses, so that speaks only for some sub-areas of cognition which, being less strictly categorizable than the clearly scalable domain of math, are not subject to your proposed rule of "one standard deviation above you, they blur".
I have had classes with them, asked questions, and met them personally. I should have anticipated disbelief. And yes, I didn't notice that I categorized Marcello as non-math; sorry, Marcello!
Oh. Cool! Less disbelief, more illusion of transparency.
If a randomly selected person says, "I know X (academically) famous people," I usually assume they mean through impersonal means.
Update'd. Carry on :D
For what it's worth, I've worked on a project and had lunch with Conway, and your ideas seem more prescient than his. But being a mathematician, I know people who are in turn far above Conway's level.
So how does it work, in your opinion? Because “I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field,” sounds suspiciously similar to how Hamming asserts that it works in “You and Your Research.” I guess you have a different belief about how doing important things in your chosen field works, but I don't see that you've explained that belief here or anywhere else that I've seen.
I don't suppose Marcello is related to Nadja and Josh Herreshoff?
I don't know if it helps, but while I've appreciated the things I've learned from you, my limited interaction with you hasn't made me think you're the brightest person I know. I think of you as more or less at my level — maybe a couple of standard deviations above or below, I can’t really tell. Certainly you're sharp enough that I'd enjoy hanging out with you. (Let me know the next time you're in Argentina.)
P.S. the impugnment of your notability has now been removed from your Wikipedia page, apparently as a result of people citing you in their papers.
Wait wait wait wait. Eliezer...are you saying that you DON'T know everything????
~runs off and weeps in a corner in a fetal position~
CatAI (1998): "Precautions"/"The Prime Directive of AI"/"Inconsistency problem".
My memory may fail me, and the relevant archives don't go back that far, but I recall Ben (and/or possibly other people) suggesting that you go to college, or at least enroll in a grad program in AI, on the Extropy chat list around 1999/2000. I think these suggestions were related to, but not solely based on, your financial situation at that time (which ultimately led to the creation of the SIAI, so maybe we should be glad it turned out the way it did, even if, in my opinion, following the advice would have been beneficial to you and your work).
I definitely see the "levels" phenomenon very often. Most people I meet who see me play a musical instrument (or 5 or 10 different ones) think I must be a genius at music - unless they're a musician, then they recognize me as an amateur with enough money to buy interesting instruments and enough skill to get a basic proficiency at them quickly.
And even with standard measures of intellect like rationality or math... I don't know that many of my friends who have read any of this blog would recognize you as being smarter than me, despite the fact that you're enough levels above me that my opinion of you is pretty much what "Not You" said above.
I can keep up with most of your posts, but to be able to keep up with a good teacher, and to be that good teacher, is a gap of at least a few levels. But aspiring to your level (though I may not reach it) has probably been the biggest motivator for me to practice the art. I certainly won't be the one who zips by you, but you've at least pulled me up to a level where I might be able to guide one who will down a useful path.
Up to now there never seemed to be a reason to say this, but now that there is:
Eliezer Yudkowsky, afaict you're the most intelligent person I know. I don't know John Conway.
Your faith in math is misplaced. The sort of math smarts you are obsessed with just isn't that correlated with intellectual accomplishment. For accomplishment outside of math, you must sacrifice time that could be spent honing your math skills to actually think about other things. You could be nearly the smartest math-type guy anyone you meet knows, and still not accomplish much if math is not the key to your chosen subject.
It's interesting, actually. You're motivated by other people's low opinions of you -- this pressure you feel in your gut to prove Caledonian et al. wrong -- so you've taken what is probably fairly standard human machinery and tried to do something remarkable with it.
My question is, are you still motivated by the doubt you feel about your native abilities, or have you passed into being compelled purely by your work?
Perhaps the truly refulgent (before they had so become) reached a progression tipping point at which they realized (right or wrong, ironically) that they were essentially beyond comparison, and hence stopped comparing.
Then they could allocate the scarce resources of time and thought exclusively to the problems they were addressing, thus actually attaining a level that truly was beyond comparison.
Jaynes was a really smart guy, but no one can be a genius all the time. He did make at least one notable blunder in Bayesian probability theory -- a blunder he could have avoided if only he'd followed his own rules for careful probability analysis.
You come across as very intelligent when you stick to your areas of expertise, like probability theory, AI and cognitive biases, but some of your more tangential stuff can seem a little naive. Compared to the other major poster on this blog, Robin, I'd say you come across as smarter but less "wise", if that means anything to you. I'm not even a huge fan of the notion of "wisdom", but if there's something you're missing, I think that's it.
If you haven't read it, Simonton's Origins of Genius draws a nice distinction between mental agility and long-term intellectual significance, and explores the correlation between the two. Not a terribly well-written book, but certainly thought-provoking.
@EY: We are the cards we are dealt, and intelligence is the unfairest of all those cards. More unfair than wealth or health or home country, unfairer than your happiness set-point. People have difficulty accepting that life can be that unfair, it's not a happy thought. "Intelligence isn't as important as X" is one way of turning away from the unfairness, refusing to deal with it, thinking a happier thought instead. It's a temptation, both to those dealt poor cards, and to those dealt good ones. Just as downplaying the importance of money is ... (read more)
Eliezer, I've been watching you with interest since 1996 due to your obvious intelligence and "altruism." From my background as a smart individual with over twenty years managing teams of Ph.D.s (and others with similar non-degreed qualifications) solving technical problems in the real world, you've always struck me as near but not at the top in terms of intelligence. Your "discoveries" and developmental trajectory fit easily within the bounds of my experience of myself and a few others of similar aptitudes, but your (sheltered) arrogance has always stood out. I wish you continued progress, not so much in ever-sharper analysis, but in ever more effective synthesis of the leading-edge subjects you pursue.
How much do you worry about age 40? Is that just based on your father? Conway passed 40 before Marcello was born.
I'll take this one because I'm almost certain Eliezer would answer the same way.
Working on AI is a more effective way of increasing the intelligence of the space and matter around us than increasing human intelligence is. The probability of making substantial progress is higher.
Wow, chill out, Eliezer. You're probably among the top 10, certainly in the top 20, most-intelligent people I've met. That's good enough for anything you could want to do. You are ranked high enough that luck, money, and contacts will all be more important factors for you than some marginal increase in intelligence.
First, same question as Douglas: what is it with the brick wall at 40?
Second: This is another great post; it's rare for people to expose their thoughts about themselves in such an open way. Congratulations!
Regarding your ability, I'm just a regular guy (studied math in college), but your writings are the most inspiring I've ever read. So much self-reflection about intelligence and the thinking process. The insight about how certain mental processes feel is totally new to me. You have helped me a lot to identify my own blind spots and mistakes. Now I can look... (read more)
I second Robin's comment.
A friend of mine, Steve Jordan, once asked me just how smart I thought he and I were. I answered that I think that no one is really as smart as the two of us both think we are. You see, for many, many people it is possible to choose a weighting scheme among a dozen or so factors that contribute to intellectual work such that they are the best. You simply define the vector to their point on the "efficient aptitude frontier" as "real intelligence". A dozen or so people associated with this blog and/or with SIAI and a... (read more)
Manuel, "enroll in a grad program for AI" != "you're smart, you should go to college".
Kragen, the short answer is, "It's easy to talk about the importance of effort if you happen to be Hamming." If you can make the ante for the high-stakes table, then you can talk about how little the ante counts for, and the importance of playing your cards well. But if you can't make the ante...
Robin, it's not blind faith in math or math for the sake of impressiveness, but a specific sense that the specific next problems I have to solve, will require more math than I've used up to this point. Not Andrew J. Wiles math, but Jaynes doesn't use Wiles-math either. I quite share your prejudice against math for the sake of looking impressive, because that gets you the wrong math. (Formality isn't about Precision?)
Ken, it's exclusively my work that gives me the motivation to keep working on something for years, but things like pride can give me the motivation to keep working on something for the next minute. I'll take whatever sources of motivation I can get (er, that aren't outright evil, of course).
Douglas, yes, my father changed at 40. But one of my primary sources... (read more)
I'm curious if this is still your sense, and if so, what kind of math are you talking about?
My sense is that currently the main problems in FAI are philosophical. Skill in math is obviously very useful, but secondary to skill in philosophy, because most of the time it's still "I have no idea how to approach this problem" instead of "Oh, if I can just solve this math problem, everything will be clear".
Marcello observed "In terms of philosophical intuition, you are head and shoulders above Conway." Making progress in FAI theory seems to require a combination of rationality, good philosophical intuition, math talent, motivation, and prerequisite background knowledge. (Am I leaving out anything?) Out of these, perhaps good philosophical intuition is rarest, in large part because we don't know how to teach it (or screen for it at a young age). Is this a problem you've considered?
Did you read the rest of that thread where I talked about how in cryptography we often used formalizations of "security" that were discovered to be wrong years later, and that's despite having hundreds of people in the research community constantly trying to attack each other's ideas? I don't see how formalizing Friendliness could be not just easier and less error prone than formalizing security, but so much so that just one person is enough to solve all the problems with high confidence of correctness.
I mean questions like your R1 and R2, your "nonperson predicate", how to distinguish between moral progress and moral error / value drift, anthropic reasoning / "reality fluid". Generally, all the problems that need to be solved for building an FAI besides the math and the programming.
Yes, formalizing Friendliness is not the sort of thing you'd want one person doing. I agree. I don't consider that "philosophy", and it's the sort of thing other FAI team members would have to be able to check. We probably want at least one high-grade actual cryptographer.
Of the others, the nonperson predicate and the moral-progress parts are the main ones where it'd be unusually hard to solve and then tell that it had been solved correctly. I would expect both of those to be factorable-out, though - that all or most of the solution could just be published outright. (Albeit recent experience with trolls makes me think that no insight enabling conscious simulations should ever be published; people would write suffering conscious simulations and run them just to show off... how confident they were that the consciousness theory was wrong, or something. I have a newfound understanding of the utter... do-anything-ness of trolls. This potentially makes it hard to publicly check some parts of the reasoning behind a nonperson predicate.) Anthropic reasoning / "reality fluid" is the sort of thing I'd expect to be really obvious in retrospect once solved. R1 and R2 should be both obvious in retrospect, and publishable.
I have hopes that an upcoming post on the Lob Problem will offer a much more concrete picture of what some parts of the innards of FAI development and formalizing look like.
In principle, creating a formalization of Friendliness consists of two parts, conceptualizing Friendliness, and translating the concept into mathematical language. I'm using "philosophy" and "formalizing Friendliness" interchangeably to refer to both of these parts, whereas you seem to be using "philosophy" to refer to the former and "formalizing Friendliness" for the latter.
I guess this is because you think you can do the first part, then hand off the second part to others. But in reality, constraints about what kinds of concepts can be expressed in math and what proof techniques are available means that you have to work from both ends at the same time, trying to jointly optimize for philosophical soundness and mathematical feasibility, so there is no clear boundary between "philosophy" and "formalizing".
(I'm inferring this based on what happens in cryptography. The people creating new security concepts, the people writing down the mathematical formalizations, and the people doing the proofs are usually all the same, I think for the above reason.)
My psychological model says that all trolls are of that kind; some trolls just work harder than others. They all do damage in exchange for attention and the joy of seeing others upset, while exercising the limitless human ability to persuade themselves it's okay. If you make it possible for them to do damage on their home computers, with no chance of being arrested, and with other people being visibly upset about it, a large number will opt to do so. The amount of suffering they create can be arbitrarily great, so long as they can talk themselves into believing it doesn't matter, and other people are visibly upset enough to give them the attention-reward.
4chan would have entire threads devoted to building worse hells. Yes. Seriously. They really would. And then they would instantiate those hells. So if you ever have an insight that constitutes incremental progress toward being able to run lots of small, stupid, suffering conscious agents on a home computer, shut up. And if somebody actually does it, don't be upset on the Internet.
They really would at that. It seems you are concerned here about malicious actual trolls specifically. I suppose if the technology and knowledge were disseminated to that degree (before something actually foomed), then that would be the most important threat. My first thoughts had gone towards researchers with the capabilities and interest to research this kind of technology themselves, who are merely callous and indifferent to the suffering of their simulated conscious 'guinea pigs' for the aforementioned reasons.
At what level of formalization does this kind of 'incremental progress' start to count? I ask because your philosophical essays on reductionism, consciousness, and zombies seem like incremental progress towards that end (but which I certainly wouldn't consider a mistake to publish, or a net risk).
Why do you always have to ask subtly hard questions? I can just see your smug face, smiling that smug smile of yours with that slight tilt of the head as we squirm trying to rationalize something up quick.
Here's my crack at it: They don't have what we currently think is the requisite code structure to "feel" in a meaningful way, but of course we are too confused to articulate the reasons much further.
Vassar - your English is encrypted - more an assumption of intelligence than a sign.
EY - I admire your work. Along with Robin this is the best Show in Town and I will miss it, when it stops.
I actually doubt whether you are accomplishing anything - but this does not seem so important to me, because the effort itself is worthwhile. And we are educated along the way.
This is a youthful blog with youthful worries. From the vantage point of age worrying about intelligence seems like a waste of time and unanswerable to boot.
But those are the stones in your shoes.
Can you be concrete and specific about where Eliezer is or has been arrogant?
"Most intelligent people I've met" is not informative, we need to give quantitative estimates. My estimate is calibrated based on knowing people who passed various screenings, such as math, physics and programming contests (including at international level), test results on screening exams to top universities, performance in hard university courses, people starting to grasp research and programming, etc. Based on population of regions covered by various screenings, and taking age, gender and different background into account, I can approximately ... (read more)
My own potential intelligence does worry me fairly often. I am currently studying to become an engineer and hope to work on some of the awesome ideas I read about on sites like this. The thing is, though, I wasted the first twenty-three years of my life. I am currently twenty-five years old and I have been forced to pretty much start from scratch on everything from social skills to education, and after two years I think I am making some headway. I am even starting to understand what Eliezer talks about in all these posts and apply it to my own life as bes... (read more)
Let me give a shout out to my 1:50 peeps! I can't even summarize what EY has notably accomplished beyond highlighting how much more likely he is to accomplish something. All I really want is for Google to stop returning pages that are obviously unhelpful to me, or for a machine to disentangle how the genetic code works, or a system that can give absolute top notch medical advice, or something better than the bumbling jackasses[choose any] that manage to make policy in our country. Give me one of those things and you will be one in a million, baby.
I suppose you could google "(arrogant OR arrogance OR modesty) eliezer yudkowsky" and have plenty to digest. Note that the arrogance at issue is neither dishonest nor unwarranted, but it is an impairment, and a consequence of trade-offs which, from within a broader context, probably wouldn't be taken in the same way.
That's as far as I'm willing to entertain this line of inquiry, which ostensibly neutral request for facts appears to belie an undercurrent of offense.
Okay, I realize you're going to read that and say, "It's obviously not good enough for things requiring superhuman intelligence!"
I meant that, if you compare your attributes to those of other humans, and you sort those attributes, with the one that presents you the most trouble in attaining your goal at the top, intelligence will not be near the top of that list for you, for any goal.
I suppose you could google "(arrogant OR arrogance OR modesty) eliezer yudkowsky" and have plenty to digest.
Well, I was asking you, not Google. But it seems that you are not willing to stand behind your words, making claims and then failing to provide evidence when asked. Referring to a third party is an evasive maneuver. Show us your cards!
That's as far as I'm willing to entertain this line of inquiry, which ostensibly neutral request for facts appears to belie an undercurrent of offense.
That's your supposition.
Eliezer, can you clarify what you mean by "You'll note that I don't try to modestly say anything like, "Well, I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field."
Because I do know... that's not how it works."
Vladimir Nesov: thanks for your comment. I found it insightful.
You say 'That's not how it works.' But I think that IS how it works!
If progress were only ever made by people as smart as E.T. Jaynes, humanity would never have gotten anywhere. Even with fat tails, intelligence is still roughly normally distributed, and there just aren't that many 6 sigma events. The vast majority of scientific progress is incremental, notwithstanding that it's only the revolutionary achievements that are salient.
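To put a rough number on "there just aren't that many": under a literal standard-normal model (a simplification for illustration; real-world intelligence tails are fatter than this), the upper-tail probability at six sigma is about one in a billion, so a population of eight billion would contain only a handful of such outliers. A quick sketch:

```python
import math

def normal_upper_tail(sigma: float) -> float:
    """P(Z > sigma) for a standard normal variable Z, via the complementary error function."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p6 = normal_upper_tail(6)
expected = 8e9 * p6  # expected count in a population of ~8 billion

print(f"P(Z > 6 sigma) = {p6:.2e}")         # ≈ 9.87e-10, about one in a billion
print(f"Expected 6-sigma individuals: {expected:.1f}")  # ≈ 7.9 worldwide
```

Hence the conclusion above: incremental progress by the merely very smart has to account for the vast majority of scientific output, because six-sigma minds are, under any reasonable model, vanishingly rare.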
The real question is, do you want Friendly A.I. to be achieved? Or do you just want friendly A.I. to be achieved by YOU? There'... (read more)
I find myself, except in the case of people with obvious impairments, completely unable to determine how intelligent someone is by interacting with them. Sometimes I can determine who is capable of performing specific tasks, but I have little confidence in my ability to assess "general intelligence".
To some extent, this is because different people have acquired different skills. Archimedes of Syracuse may have been the greatest mathematician in history, but he wouldn't be able to pass the exams in a high school calculus class. Obviously, the reas... (read more)
I believe that you don't really understand something until you can explain it to someone else, and have them understand it, too.
There's basically two reasons to get called arrogant. One is acting like you're better when you aren't. The other is refusing to politely pretend that the inferential chasm is small. Given where E is and where the mass of humanity are, if I had to make blind-guess assignments for 100 accusers picked at random, and I assigned them all into the "inferential distance" bin, I don't think I'd be wrong once. So, a person asking to be put, or to put some accuser into the "undeserved airs" bin, had better show some sharp evidence!
"Math is a game for the young."
"Perhaps it is the fear of being too late that is causing you distress. Perhaps you fear that humanity is going to be destroyed because you didn't build an FAI soon enough. Perhaps you fear that your life will end some 10,000 years sooner than you'd like."
Humanity's alleged demise is not the only possible way he could be too late. I wonder where Eliezer would turn his attention if someone (or some group) solved the problems of FAI before him.
Eliezer has written a number of times about how comparing your intelligence and rationality to those aroun... (read more)
Eliezer: It seems to me that uncertainty about your abilities is dwarfed by uncertainty about the difficulty of the problem.
Doug S: The median college graduate in a technical field would probably test in the 95th percentile on most IQ tests, and at the 98th percentile on tests weighted heavily towards non-vocabulary crystallized g.
Eliezer: Not sure to what extent this helps or answers your questions, but I increasingly as of late find that much of my current "cached wisdom" seems to be derived from stuff you've said.
As far as actually finding the next generation or whatever, maybe some people here who know how ought to start some "private school for the gifted" that is explicitly meant to act almost like a Bayes Dojo, training people up in really precise thinking?
While Conway has a huge jump on you in mathematical ability, and I'm pretty sure you're not going to catch up to him, rest assured that you are not strictly dumber than Conway in every respect.
You should bear in mind how the statement "Maybe anything more than around one standard deviation above you starts to blur together, though that's just a cool-sounding wild guess" might apply to me. If your guess is literally true, then, because math is my strong suit, high mathematical ability is the smartest kind of smart that I can detect at all. For m... (read more)
Eliezer: Look on the bright side, you haven't yet relegated yourself to being a mere administrator and occasional sounding board for others' AI research projects! Ego subjugation is a bitch, but it can have minor rewards of self-satisfaction when actions driven by pressure-free buckshot mental synthesis actually bear fruit. I don't envy that it's of no help to you that the luxury of being carefree relies on the knowledge that smarter people are doing the heavy lifting, and today you're at the top tier of that brain chain!
Maksym: We actually do need, quite badly, someone to translate all this OB stuff, though maybe it's desirable to wait for the book. Still, someone should be presenting it. As for convincing smart college students, there are three fairly separate barriers here: those to rationality, those of information, and those to action. I recommend working on barriers to rationality and action first and in conjunction, belief second, and letting people find the info themselves. Politics is the natural subject to frame as rationality. Simply turn every conversation where ... (read more)
Dude, you honestly make me ill sometimes. You spoke nothing of the circumstances that got these people to where they are or where they came from. There are people just as "sparkly" and some smarter than these people who have not had the opportunity that these people have. You are blinded by your arrogance and are locked in the present time. You are a smart guy, but you would have a lot to gain in building interpersonal wisdom.
The sparkle you describe is meaningless; non-sparkling borderline-autistic types do just as fine work as the most invigoratingly sparkling individuals. I choose to sparkle through my work, in quiet solitude, not through swaying my limbs excitedly, motor-mouthing like a sports commentator on amphs.
It's a benefit for me to read this post having not read your others, because I can give you an untainted view of it. You are too concerned with intelligence. As long as you stay in this state, you are unusable, and pass up opportunities to become usable.
Snap out of it. Accept that there are more intelligent people than you, and they are not flailing, they just get on with it.
Again, I have difficulty understanding why so many people place such a high value on 'intelligence' for its own sake, as opposed to as a means to an end. If Eliezer is worried that he does not have enough mathematical intelligence to save the universe from someone else's misdesigned AI, then this is indeed a problem for him, but only because the universe will not be saved. If someone else saves the universe instead, Eliezer should not mind, and should go back to writing sci-fi novels. Why should Eliezer's ego cry at the thought of being upstaged? He shoul... (read more)
Of course I want there to be someone smarter than me to take over, from an altruistic perspective. Or even from just a selfish perspective of being scared, wanting a vacation, and feeling a bit isolated.
And of course if that actually happened, it would be a severe blow to my ego.
And so long as I can do the expected-utility-maximizing thing and invest the appropriate amount of effort into preparing for the possibility without betting the whole farm on it, I have no intention of hacking at my emotions on either score.
I know how you feel, in a couple ways. My high-school guidance counselor looked at my middle school transcript and told me I might realistically aspire to go to a UC school (as opposed to a school in the Cal State system). (I ended up going to Harvard and Caltech.) On the other hand, the year I finished my Ph.D. (at the age of 29) one of my college acquaintances, a brilliant mathematician, became one of the youngest full professors in the history of Princeton University, and when my Ph.D. advisor was 29 he had already been a professor at Caltech for sev... (read more)
Do other people agree? If so, what do you propose distinguishes between intelligence/mathematical ability and athletic ability?
It is possible for a person to produce an accurate evaluation of a subset of their own intellectual skills, but certain skills cannot be evaluated, because presumptions about those skills are required for the evaluation to take place. You should not ask questions about subjects in which you presume you already know the answers, and you cannot ask questions about subjects where answers must be presumed in order to be able to ask at all.
Lara, I don't think they value it "for its own sake" as opposed to as a means to an end; rather, they see it as a necessary condition for achieving their ends, and are worried they don't have what it takes. Nothing but an anxiety trip.
And of course, there's also the ego thing -- when people build superiority over others into their self-image. This is counterproductive, of course. When someone else demonstrates that they're "smarter" than you by offering unexpected insight, you don't fatalistically wallow in jealous misery; you listen to... (read more)
I understand the anxiety issues of "Do I have what it takes to accomplish this..."
I don't understand why the existence of someone else who can would damage Eliezer's ego. I can observe that many other people's sense of self is violated if they find out that someone else is better at something they thought they were the best at-- the football champion at HS losing their position at college, etc. However, in order for this to occur, the person needs to 1) in fact misjudge their relative superiority to others, and 2) value the supe... (read more)
I have no idea if it's a natural human quality. It's surely one of my qualities. It's not that I would permit my mind to think verbal thoughts like "How good it is to be above others." But there's a zest in being the best. It feels good to complete a difficult race and it feels good to win a gold medal; they are separate, different good feelings. I can imagine people who would only care about having completed the challenge, but they wouldn't be me.
Since my mind doesn't want whatever I choose it to want, I accept that both desires are a part ... (read more)
I can imagine people who would only care about having completed the challenge, but they wouldn't be me.
I'm not sure there are any people like this who are capable of occasionally winning. OTOH, the prospect of never winning might force someone to rationalize themselves into this position.
The proof is in the math and/or in the protopudding, is it not? There are people/groups who already have either or both. If you have neither, what's your sense of relative achievement/skill/IQ based on?
What (math and/or prototype) do you have? If none, what do you plan to have, and when? It seems you'd have to blaze past those who already have their stuff out in the real world behaving ever more AGI-ishly by the day, to meet your criteria for success. A tall order, to be sure.
Someone else wrote
This is a youthful blog with youthful worries. From the vantage point of age worrying about intelligence seems like a waste of time and unanswerable to boot.
and I find this observation insightful, and even a bit understated.
Increasingly, as one ages, one worries more about what one DOES, rather than about abstract characterizations of one's capability.
Obviously, one reason these sorts of questions about comparative general intelligence are unanswerable is that "general intelligence" is not really a rigorously def... (read more)
Achieving great things seems always to be a mixture of general intelligence, specialized intelligence, wise choice of the right problems to work on, and personality properties like persistence ...
With a pinch of being in the right place at the right time, bake at 350 for 10-30 years.
I kind of disagree with you. First, what we call "general intelligence" is itself a form of specialized intelligence: a specialization in optimizing for successful outcomes in real time in our apparent reality. So the mix you recommend for "achieving great things" would itself be "general intelligence", not general intelligence plus something else (other than luck).
Since most people who "achieve great things" seem to me to be playing life at least in part as a poker game (they don't seem to put all their cards out on the ta... (read more)
Increasingly, as one ages, one worries more about what one DOES, rather than about abstract characterizations of one's capability.
This definitely happened to me. Between the ages of about 10 - 14, I was utterly obsessed with finding out what my IQ was. Somehow, somewhere along the way, I'd picked up the notion that Smartness in quantity was the most important thing a person could possibly have.
And it drove me frankly batty not knowing how much Smartness I had, because (a) I was insecure and felt like I needed to find out I had a "high enough" ... (read more)
Eliezer, don't think to yourself that you only have until you are 40. As somebody else noted and you didn't acknowledge, Marcello was not yet born when Conway passed 40. You mentioned your father, and I don't know the specifics, but surely you know that plenty of people have done great work, sometimes their best, past 40, and that with every passing year, due to advances in health, medicine, etc., "youth" extends further and further into our life.
And as another poster mentioned, I have almost no doubt that Von Neumann would have blown Einstein (p... (read more)
Supporting Ben Goertzel's comment:
Michael Shermer revised his book, Why People Believe Weird Things, to contain a chapter called "Why Smart People Believe Weird Things". In it, he quotes studies by Hudson, Getzels, and Jackson showing that "creativity and intelligence are relatively orthogonal (i.e., unrelated statistically) at high levels of intelligence. Intuitively, it seems like the more intelligent people are the more creative they will be. In fact, in almost any profession significantly affected by intelligence, once you are at a certain level ... (read more)
Actually RU, that's a good approximation for many/most professions, but not all that good an approximation.
gives more detail, showing a significant marginal impact from, at the least, 99.99th percentile math achievement at age 12 relative to merely 99.8th percentile math achievement at age 12.
Is this study talking about Nobel Prize winners - or better yet, Fields Medal-winning mathematicians? Or just authors or something? I'm about ready to say "I defy the data; what about von Neumann?" Maybe there are people who can achieve through diligence what others achieve by genius, but to say that genius doesn't help at all... I defy the data.
(If you told me that IQ didn't make a difference past 140, I'd be quite willing to believe that IQ tests don't work past 140. Richard Feynman's measured IQ was 137, which as John K Clark observed, says more about IQ tests than it does about Feynman.)
Feynman's measured IQ was 123, not 137. And we already know that IQ tests do not measure vitally important aspects of cognition -- in Feynman's case especially, he was quite strong in those aspects while being weak in the aspects measured. (At least, I know that. What the rest of you know is less certain.)
This is one of the primary reasons why people who think we can use IQ scores as a representation for the higher-level aspects we can't measure well (because they're supposedly correlated with IQ) are wrong. (I'm looking at you, Vassar.)
IQ tests do not... (read more)
You don't even know that. This sort of thing is why no one here likes you. Here, let me provide some more details about that IQ score you put such weight on as a criticism. To quote a previous comment of mine on this topic:
There's another aspect of the shortcomings of IQ tests that people might not be aware of. Cognition is quite flexible, and abstract problem-solving ability can be met by many combinations of underlying, modular capacities. A person lacking in certain respects can make up for the lack, at the price, perhaps, of thinking a little more slowly.
Take me for an example. On the WISC-III IQ test, my combined score is 145. There are two composite scores that the combined score is made up of, the verbal score (I got 155, the maximum possible on that test) and the performance score (I got 125). There are also a number of different individual capacity scores. On most, I scored above the 95th percentile. On two or three, I scored right in the middle, and in one (visual short-term memory) I scored in the first percentile.
Let me repeat that. I scored in the first percentile for the capacity to keep visual information in my short-term memory. (I scored in the 97th for aural short term memory, and 99.9th for linguistic.) How does that change how I solve problems, how I think about the world? Well, I perform many tasks about twice as slowly (but just as accurately) as others with my composite IQ. I hav... (read more)
Gentlemen - Let me propose that the heart of serious intellectual achievement is synthesis, creativity, simplicity.
These are factors that actually increase with age and are not "IQ" or "g" driven. In fact I believe Edward de Bono argued that creativity drops at IQ 125 or so: maybe because people begin to fall into an "expert trap," where they have to maintain their previous work and expert status more than anything else.
Creativity need not decline with age at all - if you can avoid common habit errors.
My objection to Vassar... (read more)
I don't believe IQ tests measure everything. There's a certain feeling when being creative, and when completing these tests I have not felt it, so I don't think it's measuring it.
Also, I am not sure intelligence is general. At the level of ordinary life it certainly is, but geniuses are always geniuses at something, e.g. maths, physics, composing. Why aren't they geniuses at everything?
I think you're on the right path, frelkins, but this?
all these "tests" are highly flawed and biased - they consistently disfavor certain people and favor others.
How does the latter follow at all? If we had a test that measures everything you think constitutes real intelligence, it would consistently disfavor certain people and favor others. It would disfavor stupid people and favor smart people. That's the point of an intelligence test.
Does anyone have a reputable source for Feynman's 137? Google makes it look very concentrated in this group, probably the result of a single confabulation.
Sykes and Gleick's biographies both give 12x. Sykes quotes Feynman's sister remembering sneaking into the records as a child. This seems important to me: Feynman didn't just fabricate the 12x.
Math smarts are not the most important thing. Basic reasoning skills are vital (even if they are based on heuristics that are sometimes wrong), management skills are extremely important, intelligence augmentation skills are a must, touchtyping is very useful, etc.
Overall you should think not in terms of competitiveness (whether you are smarter than everybody else), but in terms of co-operation (how you can complement others, how they can contribute their skills to complement yours).
And for the record, I don't think you are the smartest person I know (although you are very smart). I suspect that I may have a better skillset than you do. :)
Since this is now kinda on-topic... I don't think Eliezer Yudkowsky is considerably more intelligent than I am. I'm aware of the Dunning-Kruger effect, but the interesting part is that I simply don't find any way to overcome this. I'm fairly intelligent, but since people around here regard my barely-MENSA (probably not even that) level of IQ as a minimum requirement to even read this blog, the situation I'm in is fairly interesting. I see repeated claims of super-intelligence, but I can see just someone who has had a few more years to hone his skills and who has was... (read more)
a friend of mine thought this was relevant: “Mediocrity knows nothing higher than itself, but talent instantly recognizes genius.” - Conan Doyle
I find the idea that there are a lot of more intelligent people in the world than me comforting, especially in my chosen fields. Not because I feel this gives me an excuse to slack off and let them do the hard work, but because competition seems to drive me and keep me happier than anything else. Since finding lesswrong and related sites where people discuss AI, programming, and rationality, my efforts have improved considerably. I am far from competing with most of the people here, particularly you, but at least I have mental patterns I can model to improve.
I know people with greater mental horsepower than you, but none of them ever persisted at any problems that are hard enough to test the limits of their abilities.
I doubt that Jaynes became Jaynes by aspiring to a level. Too bad we can't ask him.
Don't despair of surpassing Jaynes. He, and a great many others, have given you a leg up that Jaynes never had. People seem formidable because they're practiced in mental kung fu that you don't know. Darwin is remembered for an idea you can teach an 8 year old today.
I suspect you and Luke do not share a referent for "better philosophy" here. In particular, I doubt either Luke or Eliezer would agree that the ability to write clearly, or to analyze and formulate arguments for purposes of compellingly engaging with existing arguments in the tradition of analytic philosophy, is the rare skill that Luke is talking about.
Trying to have a conversation about how hard it is to find an X without common referents for X is not likely to lead anywhere productive.
You're right, I should say more about what I mean by "Eliezer-level philosophical ability." Clearly, I don't mean "writing clarity," as many of my favorite analytic philosophers write more clearly than Eliezer does.
It'll take me some time to prepare that explanation. For now, let me show some support for your comment by linking to another example of Eliezer being corrected by a professional philosopher.
I'm not claiming that clarity isn't a benefit, and as far as I can tell nobody else is either.
I agree that it's not hard to write "someone who can do philosophy well in the LessWrongian style".
And sometimes one person can miscommunicate all by themselves.
Also because it irritates me that this site is scattered with comments at anything from -3 to +15 (not exact figures) that criticize cryonics/ASI/other things lots of us believe in, LW policies, or EY, and then talk about how they're going to get downvoted into oblivion for speaking out against the consensus.
[Edited for formatting.]
EDIT: there goes another conversation. Thank you karma toll.
I'm not downvoting for disagreement, I'm downvoting for absurd claims without any damn evidence. If you had provided, say, an example of a LW user who is better at philosophy - as opposed to a terminology quibble - then I would not have downvoted even if I didn't think it was sufficient.... (read more)
This restriction applies as intended, don't evade it.
(You now have minus 250 30-day Karma, so I'll start banning/hiding some of your comments (they will remain accessible from your user page).)
I won't play the definitional games -- you yourself talked about "low-level trolling" which you excused as "teasing", and so you could have used your definition, whatever it is. But you didn't; and instead you avoided promising not to troll or stating that you've not trolled before.
I won't respond to you again, at least until such a promise has been made, and perhaps not even then.
This link seems not to answer the comment. Is this mistaken, or did EY use that fallacy?
Given the fairly uniform negative reaction to your posts, judging by your 30-day karma, you may want to consider looking for a forum where you will be better appreciated.
You accused Eliezer of committing the noncentral fallacy, you did not demonstrate that he committed it.
Eliezer posited a categorization of "trolls" defined by the practice of deliberately pissing people off on the internet, of which people who incite others to commit suicide are an extreme example. If this is the uniting quality of "trolls," then it's not unreasonable to conclude that we don't want any of them in the community, not just the more extreme examples.
Not all diseases will kill you or make you wish you were dead, so it may ... (read more)
"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we draw a random number between 1 and a million; the prior for any particular number between 1 and a million is straightforwardly very low - one in a million, of course. Now, I have just generated the number 493250 using random.org. Surely this prior of 1 in a million that I have generated any specific number like 493250 is low enough to not be overcome ... (read more)
When did I claim no one at SI held your views? That would've been hard since you refused to use standard terminology like SIA or SSA which I could then go 'ah yes, that's Bostrom's current view'.... (read more)
When did I claim no one at SI held your views on anthropics? And I really don't think anthropics could be called straightforward by anyone.
Congratulations, you understood the point. Similarly, decent arguments are highly diagnostic of philosophical ability... (read more)
What evidence gave you this impression?
How much of CSA have you read? Search for the sweet-spot just before Luke discovered LW and you should find high level philosophy going on.
Whatever caused your slide into jadedness?
By arse standards, most philosophy grads can't find LW.
(Sorry, what was this permutation meant to accomplish?)
I chose that profession as my example because I know a lot more computer programmers than members of any other single profession.
Almost all the computer programmers I know are not self-obsessed jerks.
I'll answer your questions when you answer mine.
Says the person whose whole argument of opposition to compatibilism was basically the cry "but where is the choice?!?"
There are always higher levels. If nothing else, you can invent them yourself.
That's what came to mind after reading this post, after reflexively comparing how intelligent I think I am to how intelligent I perceive the author and commenters to be.
Another thing that came to mind was a grumpy sense that the whole issue had not been framed in a useful way, and an urge to meddle with how the ideas are arranged.
When I see someone's work who is at higher levels than my own current understanding and abilities allow me to achieve, (notice I am not phrasing that ... (read more)
Can anyone tell me whether Jaynes' book can be read and understood without any particular formal training? I do know the basic concepts of probability, and I usually score around the 85th percentile on math tests... And how hard/time-consuming exactly will the book be? I am employed in a somewhat high pressure job on a full time basis...
Maybe not in your field, but that is how it usually works, isn't it?
(the rest of this comment is basically an explanation of comparative advantage)
Anybody can take the load off of someone smarter, by doing the easiest tasks that have been taking their time.
As a most obvious example, a brilliant scientist's secretary. A... (read more)
Just reminded me of a Lord Acton quote: "Judge character at its worst, but talent at its best." (Paraphrased from memory.)
The 'thousand-year-old vampire' impression could be close to the truth. My understanding is that people like Jaynes think very long hours, and can clock as much relevant brain time by their thirties as a person of lesser mental endurance might clock in centuries. And it is entirely possible that Jaynes had done as much math by the time he wrote the book (correct math, checked and verified) as a hobbyist would in a thousand years.
A contrarian view about Jaynes' super-smartness, from David Chapman.
Apparently, Jaynes "was completely confused about the relationship between probability theory and logic," and "there's strong evidence that when people tried to de-confuse him, he pig-headedly refused to listen."
Honestly, my favorite thread I've read so far. I lived a similar scenario so many times, and while I doubt any of my "level above mine" models are anywhere near Jaynes, I'm very proud knowing I did manage to catch up and even surpass some. In some part, thanks to Less Wrong and Rationality: From AI to Zombies.
So thank you (Eliezer and many others on this blog) for sharing your experience and knowledge. You're some of my best teachers.
All this time, and I've never thought once that Eliezer could be thinking about other people nearly the exact way I thought about him.