Consider getting back into neuroscience!
AGI as a project is trying to make machines that can do what brains do. One great way to help that project is to study how brains themselves work. Many key ideas in AI come from neuroscience or psychology, and there are plenty of labs out there studying the brain with AI in mind.
Why am I telling you this? You claim that you'd like to be an AI researcher, but later you imply that you're new to computer programming. As mentioned in some of the comments, this is likely to present a large barrier to "pure" AI research in an academic CS department, corporation, etc. A computational psychology or neuroscience lab is likely to be much more forgiving for a newbie programmer (though you'll still have to learn). The major things that graduate programs in computational neuro look for are strong math skills, an interest in the mind/brain, and a bit of research experience. It sounds like you've got the first two covered, and the last can be gained by joining an academic lab as a research assistant on a temporary basis.
If you're considering giving academia another shot, it's worth thinking about neuro and psych (and indeed linguistics and cognitive science, or a different/better philosophy program) as well as computer science and pure AI.
Good luck!
Seems to me we've got a gen-u-ine semantic misunderstanding on our hands here, Tim :)
My understanding of these ideas is mostly taken from reinforcement learning theory in AI (a la Sutton & Barto 1998). In general, an agent is determined by a policy pi that gives the probability that the agent will take a particular action in a particular state, P = pi(s,a). In the most general case, pi can also depend on time, and is typically quite complicated, though usually not complex ;).
Any computable agent operating over any possible state and action space can be represented by some function pi, though typically folks in this field deal in Markov Decision Processes since they're computationally tractable. More on that in the book, or in a longer post if folks are interested. It seems to me that when you say "utility function", you're thinking of something a lot like pi. If I'm wrong about that, please let me know.
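To make the distinction concrete, here's a minimal sketch of an agent defined by nothing more than a tabular pi(s,a). Everything in it (the states, actions, and probabilities) is invented for illustration, not taken from Sutton & Barto:

```python
import random

# A toy two-state, two-action world. The policy pi is just a table
# mapping (state, action) pairs to probabilities; nothing is assumed
# about how the agent arrived at these numbers.
pi = {
    ("sunny", "walk"): 0.9, ("sunny", "stay_in"): 0.1,
    ("rainy", "walk"): 0.2, ("rainy", "stay_in"): 0.8,
}

def act(state, actions=("walk", "stay_in")):
    """Sample an action from the policy's distribution for this state."""
    weights = [pi[(state, a)] for a in actions]
    return random.choices(actions, weights=weights)[0]

print(act("rainy"))  # usually "stay_in"
```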
When folks in the RL field talk about "utility functions", generally they've got something a little different in mind. Some agents, but not all of them, determine their actions entirely using a time-invariant scalar function U(s) over the state space. U takes in a future state of the world and outputs the reward that the agent can expect to receive upon reaching that state (loosely, "how much the agent likes s"). Since each action in general leads to a range of different future states with different probabilities, you can use U(s) to get an expected utility U'(a,s):
U'(a,s) = sum over s' of p(s,a,s') * U(s'),
where s is the state you're in, a is the action you take, s' ranges over the possible future states, and p(s,a,s') is the probability that action a taken in state s will lead to state s'. Once your agent has a U', some simple decision rule over it is enough to determine the agent's policy. There are a bunch of cool things about agents that do this, one of which (not the most important) is that their behavior is much easier to predict. This is because behavior is determined entirely by U, a function over just the state space, whereas pi is over the conjunction of the state and action spaces. From a limited sample of behavior, you can get a good estimate of U(s), and use it to predict future behavior, including in regions of state and action space that you've never actually observed. If your agent doesn't use this cool U(s) scheme, the only general way to learn pi is to actually watch the thing behave in every possible region of action and state space. This, I think, is why von Neumann was so interested in specifying exactly when an agent could and could not be treated as a utility-maximizer.
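To show how much structure the U(s) scheme buys you, here's a toy continuation of the sketch above (again, all numbers invented): a scalar U over future states plus a transition model yields U'(a,s), and a simple greedy rule over U' then fixes the whole policy:

```python
# Utility over states: one scalar per state ("how much the agent likes s'").
U = {"dry": 1.0, "wet": -2.0}

# Transition model: p[(s, a)][s_next] = probability that taking action a
# in state s lands you in s_next. Numbers are invented for illustration.
p = {
    ("rainy", "walk"):    {"dry": 0.3, "wet": 0.7},
    ("rainy", "stay_in"): {"dry": 1.0},
}

def expected_utility(s, a):
    """U'(a,s) = sum over s' of p(s,a,s') * U(s')."""
    return sum(prob * U[s_next] for s_next, prob in p[(s, a)].items())

def greedy_action(s, actions=("walk", "stay_in")):
    """One simple decision rule over U': take whichever action maximizes it."""
    return max(actions, key=lambda a: expected_utility(s, a))

print(greedy_action("rainy"))  # "stay_in": 1.0 beats 0.3*1.0 + 0.7*(-2.0) = -1.1
```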
Hopefully that makes some sense, and doesn't just look like an incomprehensible jargon-filled snow job. If folks are interested in this stuff, I can write a longer article that will (hopefully) be a lot clearer.
Introduction to Neuroscience
Recommendation: Neuroscience: Exploring the Brain by Bear, Connors, and Paradiso
Reasons: BC&P is simply better written, clearer, and more intelligible than its competitors, Neuroscience by Dale Purves and Principles of Neural Science by Eric Kandel. Purves covers almost the same ground but isn't written nearly as well, often just listing facts without really attempting to synthesize them and build understanding of theory. Bear is better than Purves in every regard. Kandel is the Bible of the discipline: at 1400 pages, it goes into way more depth than either of the others, and way more depth than you need or will be able to absorb if you're just starting out. It is quite well written, but it should be treated more like an encyclopedia than a textbook.
I also can't help recommending Theoretical Neuroscience by Peter Dayan and Larry Abbott, a fantastic introduction to computational neuroscience; Bayesian Brain, a review of the state of the art in Bayesian modeling of neural systems; and Neuroeconomics by Paul Glimcher, a survey of the state of the art in that field, which is perhaps the most relevant of all of these to LW-type interests. The second two are the only books of their kind; the first has competitors in Computational Explorations in Cognitive Neuroscience by Randall O'Reilly and Fundamentals of Computational Neuroscience by Thomas Trappenberg, but I've not read either in enough depth to make a definitive recommendation.
Theoretical Neuroscience by Dayan and Abbott is a fantastic introduction to comp neuro, from single-neuron models like Hodgkin-Huxley, through integrate-and-fire and connectionist nets (including Hopfield nets), up to perceptrons and reinforcement learning models. It requires some comfort with calculus.
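For a taste of the kind of model the book starts with, here's a minimal leaky integrate-and-fire sketch. The parameter values are my own toy choices, not the book's:

```python
# Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R * I.
# When V crosses threshold, the neuron "spikes" and V is reset.
tau = 20.0        # membrane time constant (ms)
V_rest = -65.0    # resting potential (mV)
V_reset = -65.0   # post-spike reset potential (mV)
V_thresh = -50.0  # spike threshold (mV)
R, I = 10.0, 2.0  # membrane resistance (MOhm) and input current (nA)
dt, T = 0.1, 200.0  # integration timestep and total duration (ms)

V, spike_times = V_rest, []
for step in range(int(T / dt)):
    V += dt / tau * (-(V - V_rest) + R * I)  # forward-Euler update
    if V >= V_thresh:
        spike_times.append(step * dt)  # record the spike time...
        V = V_reset                    # ...and reset the membrane
print(f"{len(spike_times)} spikes in {T:.0f} ms")
```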
Computational Explorations in Cognitive Neuroscience by Randall O'Reilly purports to cover similar material at a slightly more basic level, including lots of programming exercises. I've only skimmed it, but it looks pretty good. It's kind of old, though; supposedly Randy's working on a new edition that should be out soon.
You can construct a set of values and a utility function to fit your observed behavior, no matter how your brain produces that behavior.
I'm deeply hesitant to jump into a debate that I don't know the history of, but...
Isn't it pretty generally understood that this is not true? The utility theory folks showed that an agent's behavior can be captured by a numerical utility function iff the agent's preferences conform to certain axioms, and Allais and others have shown that human behavior emphatically does not.
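For concreteness, here's a quick sketch of the classic Allais setup (the standard payoffs and probabilities, in millions). The typical human pattern, preferring A over B but D over C, can't be produced by any single utility function u; the code below just checks a couple of candidates:

```python
def eu(gamble, u):
    """Expected utility of a gamble: sum of probability * u(payoff)."""
    return sum(prob * u(x) for prob, x in gamble)

# The classic Allais gambles (payoffs in millions of dollars).
A = [(1.00, 1)]                        # $1M for sure
B = [(0.10, 5), (0.89, 1), (0.01, 0)]  # mostly $1M, with a shot at $5M
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

# A > B reduces to 0.11*u(1) > 0.10*u(5) + 0.01*u(0), while D > C reduces
# to the exact reverse inequality, so no u can deliver both preferences.
for name, u in [("linear", lambda x: x), ("very concave", lambda x: x ** 0.05)]:
    print(name, "A>B:", eu(A, u) > eu(B, u), "D>C:", eu(D, u) > eu(C, u))
```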
Seems to me that if human behavior could in general be captured by a utility function, we wouldn't need this website. We'd already be making the best choices we could, given the information we had, to maximize our utility, by definition. In other words, "instrumental rationality" would be easy and automatic for everyone. It's not, and it seems to me that a big part of what we can do to become more rational is to try to wrestle our decision-making algorithms around until the choices they make are captured by some utility function. In the meantime, the fact that we're puzzled by things like moral dilemmas looks like a symptom of irrationality.
"A scientific theory should be as simple as possible, but no simpler."
Einstein
Nice article!
Folks who are interested in this kind of thing might also be interested to see the Koch Lab's online demos of CFS (continuous flash suppression), which you can experience for yourself if you happen to have some old-style blue-red 3D glasses kicking around. This is the method where you show an image to the nondominant eye and a crazy high-contrast flashing stimulus to the dominant eye, and the subject remains totally unaware of the image for up to minutes at a time. Pretty fun stuff :) http://www.klab.caltech.edu/~naotsu/CFS_color_demo.html
You might also be interested in Giulio Tononi's "Integrated Information" theory of consciousness. The gist is that a brain is "conscious" of features in the world to the extent that it is properly causally entangled with those features and represents a large amount of information about the world in a deeply integrated way. It's not easy to explain in a few sentences, but it seems to me to be a deeper theory that is perhaps related to this "Global Workspace" idea. I think you can find his most well-known paper at http://www.sciencemag.org/content/282/5395/1846.short; many more are available by poking around Google Scholar.
You have presented a very clear and very general description of the Reinforcement Learning problem.
I am excited to read future posts that are similarly clear and general and that describe the various solutions to the RL problem. I'm imagining the kinds of things found in the standard introduction, and hoping for a nonstandard perspective that might deepen my understanding.
Perhaps this is what Richard is waiting for as well?