Ben_Goertzel
Ben_Goertzel has not written any posts yet.

Vassar wrote:
I think it somewhat unlikely there are creationists at your level (Richard Smalley included) and would be astounded if there were any at mine. Well... I mean avowed and sincere biblical literalists; there might be all sorts of doctrines that could be called creationist.
I have no clear idea what you mean by "level" in the above...
IQ?
Demonstrated scientific or mathematical accomplishments?
Degree of agreement with your belief system? ;-)
-- Ben G
Eliezer said:
To all claiming that the judgment is too subtle to carry out, agree or disagree: "Someone could have the knowledge and intelligence to synthesize a mind from scratch on current hardware, reliably as an individual rather than by luck as one member of a mob, and yet be a creationist."
Strongly agree.
I'm not making any specific judgments about the particular Creationist you have in mind here (and I'm pretty sure I know who you mean)... but I see no reason to believe that Creationism renders an individual unable to solve the science and engineering problems involved in creating AGI. Understanding mind is one thing ... beliefs about cosmogony are another...
I note... (read more)
Eliezer: One comment is that I don't particularly trust your capability to assess the insights or mental capabilities of people who think very differently from you. It may be that the people whose intelligence you most respect (whom you rate as residing on "high levels", to quasi-borrow your terminology) are those who are extremely talented at the kind of thinking you personally most value. Yet there may be many different sorts of intelligent human thinking, some of which you may not excel at, may understand relatively little of, and may not be particularly good at assessing in others. And it's not yet clear whether the style of intelligence that... (read 408 more words →)
First, a comment on a small, specific point you made: I have met a large number of VCs during the last 11 years, and in terms of intelligence and insight I really found them to be all over the map. Some brilliant, wide-ranging thinkers ... some narrow-minded morons. Hard to generalize.
Regarding happiness, if you're not familiar with it you might want to look at the work on flow and optimal experience:
http://www.amazon.com/Flow-Psychology-Experience-Mihaly-Csikszentmihalyi/dp/0060920432
which is likely relevant to why many successful CEOs would habitually feel happy...
Also, there have been many psychological studies of the impact of wealth on happiness, and one result I remember is that, once a basic level of wealth that avoids... (read more)
Someone else wrote:
"This is a youthful blog with youthful worries. From the vantage point of age, worrying about intelligence seems like a waste of time and unanswerable to boot."
and I find this observation insightful, and even a bit understated.
Increasingly, as one ages, one worries more about what one DOES, rather than about abstract characterizations of one's capability.
Obviously, one reason these sorts of questions about comparative general intelligence are unanswerable is that "general intelligence" is not really a rigorously defined concept -- as you well know! And the rigorous definitions that have been proposed (e.g. in Legg and Hutter's writing, or my earlier writings, etc.) are basically... (read more)
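(As a rough sketch of what such a rigorous definition looks like -- quoting from memory, so see the Legg and Hutter papers for the precise formulation -- their universal intelligence measure scores an agent \pi by its simplicity-weighted expected reward across all computable environments:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the agent's expected cumulative reward in \mu. Since K is uncomputable and the sum ranges over all computable environments, the measure is a theoretical yardstick rather than a practical test.)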
As my name has come up in this thread I thought I'd briefly chime in. I do believe it's reasonably likely that a human-level AGI could be created in a period of, let's say, 3-10 years, based on the OpenCogPrime design (see http://opencog.org/wiki/OpenCog_Prime). I don't claim any kind of certitude about this, it's just my best judgment at the moment.
So far as I can recall, all projections I have ever made about the potential of my own work to lead to human-level (or greater) AGI have been couched in terms of what could be achieved if an adequate level of funding were provided for... (read 462 more words →)
I think there is a well-understood, rather common phrase for the approach of "thinking about AGI issues and trying to understand them, because you don't feel you know enough to build an AGI yet."
This is quite simply "theoretical AI research", and it occupies a nontrivial fraction of the academic AI research community's effort today.
Your (Eliezer's) motivations for pursuing theoretical rather than practical AGI research are a little different from the usual ones -- but the basic idea of trying to understand the issues theoretically, mathematically, and conceptually before messing with code is not terribly odd...
Personally I think both theoretical and practical AGI research are valuable, and I'm glad both are being pursued.
I'm a bit of... (read more)