A deeply satisfying view on intelligence here:

http://www.insidescience.org/content/physicist-proposes-new-way-think-about-intelligence/987/


Yes, I see that now. Still, it is an important open question, and the whole raison d'être of MIRI, FAI, and so on hangs on it.

If they (the authors) are basically right, then it's a game changer. I think they are.

If they (the authors) are basically right, then it's a game changer.

This is true of all new ideas about A(G)I, including past ones that fizzled, which is all of them so far. One might conclude that this one is likely to fizzle, except that there are anthropic issues about alternate histories in which one of these advances foomed instead of fizzling. I am not sure how to handle that.

Is there any reason to think that this new idea has something that all previous ideas lacked?

I wouldn't say that all those ideas fizzled. They brought us some great results; don't forget to give credit to those who deserve it.

But if you want to understand seeing, you have to understand optics. If you want to understand motion, you have to understand mechanics. You have to understand the physics behind it; no amount of biology, physiology, or anthropology is enough. The same goes for flying: it is aerodynamics that enables flight.

Animals have always been mere users of the underlying physics, and clumsy users at that. They were not the inventors of breathing (oxidation), swimming (moving through liquids), and so on. Evolution carved animal shapes into the surrounding physics.

It is likely that one has to understand the physics behind thinking to really understand and replicate it.

I won't go into details here. It is an open question whether maximizing entropy is really necessary and sufficient for a process to be intelligent. Perhaps only some subset of possible futures has to be kept open, not the whole set, or the condition needs some other refinement.
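For concreteness, here is a minimal sketch of the core idea in the linked article: an agent that, at each step, picks the action keeping the entropy of its reachable futures as high as possible. This is a toy, not the authors' actual method; the one-dimensional world, the horizon, and the rollout count are invented for illustration.

```python
import random
from collections import Counter
from math import log

# Toy world: an agent on a 1-D line with walls at 0 and SIZE-1.
SIZE, HORIZON, ROLLOUTS = 11, 6, 500
ACTIONS = (-1, 0, +1)

def step(pos, move):
    """Apply a move, clamping at the walls."""
    return min(max(pos + move, 0), SIZE - 1)

def future_state_entropy(pos):
    """Monte Carlo estimate (in nats) of the Shannon entropy of the
    end-state distribution of HORIZON-step random walks from pos."""
    ends = Counter()
    for _ in range(ROLLOUTS):
        p = pos
        for _ in range(HORIZON):
            p = step(p, random.choice(ACTIONS))
        ends[p] += 1
    return -sum((n / ROLLOUTS) * log(n / ROLLOUTS) for n in ends.values())

def entropic_move(pos):
    """Pick the action whose successor state keeps the most futures open."""
    return max(ACTIONS, key=lambda a: future_state_entropy(step(pos, a)))

pos = 1  # start next to a wall
for _ in range(10):
    pos = step(pos, entropic_move(pos))
print(pos)  # typically drifts toward the middle, where future entropy is highest
```

Note that the agent has no goal in the usual sense; moving toward the open middle falls out of the entropy criterion alone, which is roughly the sort of emergent behavior the linked article describes.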

But it very likely takes thermodynamics to really understand the matter. "Cognition" is not a very fruitful term, and many others are not either; it is the wrong level at which to describe the problem, I think.

Seems like this could be another basic AI drive, but would still be orthogonal to most of human value.