@Aron, wow, from your initial post I thought I was giving advice to an aspiring undergraduate, glad to realize I'm talking to an expert :-)
Personally I continually bump up against performance limitations. This is often due to bad coding on my part and the overuse of Matlab for loops, but I still have the strong feeling that we need faster machines. In particular, I think full intelligence will require processing VAST amounts of raw unlabeled data (video, audio, etc.), and that will require fast machines. The application of statistical learning techniques to vast unlabeled data streams is about to open new doors. My take on this idea is spelled out better here.
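Much of that for-loop slowdown is interpreter overhead rather than a hardware limit, and vectorizing usually recovers it. A minimal sketch in Python/NumPy (the same idea applies to Matlab's built-in vectorized operations; the function names here are just illustrations):

```python
import numpy as np

# Naive per-element loop, analogous to an unvectorized Matlab for loop.
def dot_loop(a, b):
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

# Vectorized version: one call into optimized native code.
def dot_vectorized(a, b):
    return float(np.dot(a, b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
assert dot_loop(a, b) == dot_vectorized(a, b) == 32.0
```

On large arrays the vectorized call is typically orders of magnitude faster, which is worth squeezing out before blaming the machine.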
Aron, I don't think anyone really knows the general requirements for AGI, and therefore nobody knows what (if any) kind of specialized hardware is necessary. But if you're a hardware guy and you want something to work on, you could read Pearl's book (mentioned above) and find ways to implement some of the more computationally intensive inference algorithms in hardware. You might also want to look up the work by Geoff Hinton et al. on restricted Boltzmann machines and try to implement the associated algorithms in hardware.
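For readers who haven't seen the algorithm in question, here is a minimal sketch of one contrastive-divergence (CD-1) weight update for a binary restricted Boltzmann machine, in Python/NumPy. The layer sizes and learning rate are arbitrary illustrations on my part, and biases are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.1):
    """One CD-1 weight update for a binary RBM (biases omitted)."""
    # Positive phase: hidden activations given the data vector.
    ph0 = sigmoid(v0 @ W)                      # p(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sampled hidden states
    # Negative phase: one Gibbs step back to a "reconstruction".
    pv1 = sigmoid(h0 @ W.T)                    # p(v=1 | h0)
    ph1 = sigmoid(pv1 @ W)                     # p(h=1 | reconstruction)
    # Contrastive-divergence gradient estimate.
    return W + lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))

W = rng.normal(scale=0.01, size=(6, 4))   # 6 visible, 4 hidden units
v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
W = cd1_step(W, v)
```

The appeal for hardware people is that the inner loop is nothing but matrix multiplies, sigmoids, and random bits.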
Eliezer, of course in order to construc...
I mean that a superintelligent AI should be able to induce the Form of the Good from extensive study of humans, human culture, and human history. The problem is not much different in principle from inducing the concept of "dog" from many natural images, or the concept of "mass" from extensive experience with physical systems.
@Eliezer - I think Shane is right. "Good" abstractions do exist, and are independent of the observer. The value of an abstraction relates to its ability to allow you to predict the future. For example, "mass" is a good abstraction, because when coupled with a physical law it allows you to make good predictions.
If we assume a superintelligent AI, we have to assume that the AI has the ability to discover abstractions. Human happiness is one such abstraction. Understanding the abstraction "happiness" allows one to predict certain...
"Yeah? Let's see your aura of destiny, buddy."
I don't want to see your aura of destiny. I just want to see your damn results! :-)
In my view, the creation of an artificial intelligence (friendly or otherwise) would be a much more significant achievement than Einstein's, for the following reason. Einstein had a paradigm: physics. AI has no paradigm. There is no consensus about what the important problems are. In order to "solve" AI, one not only has to answer a difficult problem, one has to begin by defining the problem.
This may be nitpicking and I agree with your overarching point, but I think you're drawing a false dichotomy between Science and Bayes. Science is the process of constructing theories to explain data. The theory must optimize a tradeoff between two terms:
1) ability to explain data
2) compactness of the theory
If one is willing to ignore or gloss over the second requirement, the process becomes nonsense. One can easily construct a theory of astrology which explains the motion of the planets, the weather, the fates of lovers, and violence in the Middle East. It just won't be a compact theory. So Science and Bayes are one and the same.
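The tradeoff can be made concrete with a toy minimum-description-length comparison in Python. The 0.5·log2(n) cost per fitted parameter is one common convention, not the only one:

```python
import math

def description_length(heads, tails, p, n_params):
    """Total code length in bits: theory cost plus data cost under the theory."""
    n = heads + tails
    theory_bits = 0.5 * n_params * math.log2(n)   # cost of stating the parameters
    data_bits = -(heads * math.log2(p) + tails * math.log2(1 - p))
    return theory_bits + data_bits

heads, tails = 70, 30
fair = description_length(heads, tails, p=0.5, n_params=0)     # "fair coin" theory
biased = description_length(heads, tails, p=0.7, n_params=1)   # fitted theory

# The fitted theory pays ~3.3 bits for its extra parameter but explains
# the data so much better that its total code length is shorter.
assert biased < fair
```

An astrology-style theory is the opposite extreme: enough free parameters to "explain" anything, so its theory term swamps whatever it saves on the data term.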
I suggest a lot of caution in thinking about how entropy appears in thermodynamics and information theory. All of statistical mechanics is based on the concept of energy, which has no analogue in information theory. Some people would suggest that for this reason the two quantities should not be called by the same term.
the "temperature" isn't a uniform speed of all the molecules, it's an average speed of the molecules, which in turn corresponds to a predictable statistical distribution of speeds
I assume you know this, but some readers may not: te...
Prof. Jaynes would doubtless be surprised by the power of algorithms such as Markov Chain Monte Carlo, importance sampling, and particle filtering. The latter method is turning out to be one of the most fundamental and powerful tools in AI and robotics. A particle filter-like process has also been proposed to lie at the root of cognition, see Lee and Mumford "Hierarchical Bayesian Inference in the Visual Cortex".
The central difficulty with Bayesian reasoning is its deep, deep intractability. Some probability distributions just can't be modeled, other than by random sampling.
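As a concrete instance of random sampling rescuing an otherwise awkward computation, here is a small importance-sampling estimate of a rare-event probability in plain Python. The shifted proposal N(3,1) is my own illustrative choice:

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def estimate_tail(n_samples=200_000, threshold=3.0, seed=0):
    """Estimate P(X > threshold) for X ~ N(0,1) by sampling from N(threshold,1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(threshold, 1.0)          # draw from the proposal
        if x > threshold:
            # Importance weight: target density over proposal density.
            total += normal_pdf(x) / normal_pdf(x, mu=threshold)
    return total / n_samples

est = estimate_tail()
# The true value is about 0.00135; a naive Monte Carlo estimator would
# need millions of samples to see even a handful of such events.
assert 0.0012 < est < 0.0015
```

Particle filtering applies the same weighting trick sequentially, resampling the "particles" at each step so that computation concentrates where the probability mass is.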
Another way to think about probabilities of 0 and 1 is in terms of code length.
Shannon told us that if we know the probability distribution of a stream of symbols, then the optimal code length for a symbol X is: l(X) = -log p(X)
If you consider that an event has zero probability, then there's no point in assigning a code to it (codespace is a conserved quantity, so if you want to get short codes you can't waste space on events that never happen). But if you think the event has zero probability, and then it happens, you've got a problem - system crash or som...
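The probability/code-length relationship is easy to play with directly; a short Python sketch using base-2 logs, so lengths come out in bits:

```python
import math

def code_length(p):
    """Optimal code length in bits for an event of probability p."""
    if p == 0.0:
        return math.inf   # "never happens": no codespace is reserved for it
    return -math.log2(p)

assert code_length(0.5) == 1.0    # a fair coin flip costs one bit
assert code_length(0.25) == 2.0
assert code_length(1.0) == 0.0    # a certain event needs no code at all
assert code_length(0.0) == math.inf
```

The limiting cases mirror the argument above: probability 1 corresponds to a zero-length code, and probability 0 to an infinite one, which is why an event you coded as impossible leaves the decoder with nothing to fall back on when it happens anyway.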
That which I cannot eliminate may be well worth reducing.
I wish this basically obvious point were more widely appreciated. I've participated in dozens of conversations which go like this:
Me: "Government is based on the principle of coercive violence. Coercive violence is bad. Therefore government is bad."
Person: "Yeah, but we can't get rid of government, because we need it for roads, police, etc."
Me: "$%&*@#!! Of course we can't get rid of it entirely, but that doesn't mean it isn't worth reducing!"
Great post. I encourage you to expand on the idea of the Quantitative Way as applied to areas such as self improvement and everyday life.
If you want to object to Objectivism (hah) you should do so by discussing the ideas themselves, perhaps by citing passages that highlight basic ideas of the theory. Details of Rand's personal life are irrelevant. Hug the query.
There is an interesting kernel of an idea here: how can one establish a self-renewing philosophy? How can an intellectual leader construct a set of principles which specifically allow for their own revision? Of course, this is very similar to the question of how one can construct a Friendly AI, and the question of how one can construct a Friendly government.
Persons interested in the concept of super-stimuli should note the work of music scientist Phillip Dorrell, who argues that music is a super-stimulus for language:
He also speculates that software developers will soon be able to construct algorithms to produce "strong" music, that is, music which is better than any thus far created by humans. This will bring about obvious addiction problems, similar to those mentioned above relating to video games.
If you think about this for a while, you'll begin to realize how much of our civilization is based on lost purposes.
Why do men wear ties? Why do we build houses out of the one substance that rots and burns? Why do we type with QWERTY keyboards? Why isn't the Department of Defense also responsible for homeland security (what does "defense" mean, if not homeland security)?
I am suspicious of attempts to define intelligence for the following reason. Too often, they lead the definer down a narrow and ultimately fruitless path. If you define intelligence as the ability to perform some function XYZ, then you can sit down and start trying to hack together a system that does XYZ. Almost invariably this will result in a system that achieves some superficial imitation of XYZ and very little else.
Rather than attempting to define intelligence and move in a determined path toward that goal, we should look around for novel insights and ...