When I first learned about Friendly AI, I assumed it was mostly a programming problem. As it turns out, it's actually mostly a math problem. That's because most of the theory behind self-reference, decision theory, and general AI techniques hasn't been formalized and solved yet. Thus, when people ask me what they should study in order to work on Friendliness theory, I say "Go study math and theoretical computer science."
But that's not specific enough. Should aspiring Friendliness researchers study continuous or discrete math? Imperative or functional programming? Topology? Linear algebra? Ring theory?
I do, in fact, have specific recommendations for which subjects Friendliness researchers should study. And so I worked with a few of my best interns at MIRI to provide recommendations below:
- University courses. We carefully hand-picked courses on these subjects from four leading universities — but we aren't omniscient! If you're at one of these schools and can give us feedback on the exact courses we've recommended, please do so.
- Online courses. We also linked to online courses, for the majority of you who aren't able to attend one of the four universities whose course catalogs we dug into. Feedback on these online courses is also welcome; we've only taken a few of them.
- Textbooks. We have read nearly all the textbooks recommended below, along with many of their competitors. If you're a strongly motivated autodidact, you could learn these subjects by diving into the books on your own and doing the exercises.
Have you already taken most of the subjects below? If so, and you're interested in Friendliness research, then you should definitely contact me or our project manager Malo Bourgon (malo@intelligence.org). You might not feel all that special when you're in a top-notch math program surrounded by people who are as smart or smarter than you are, but here's the deal: we rarely get contacted by aspiring Friendliness researchers who are familiar with most of the material below. If you are, then you are special and we want to talk to you.
Not everyone cares about Friendly AI, and not everyone who cares about Friendly AI should be a researcher. But if you do care and you might want to help with Friendliness research one day, we recommend you consume the subjects below. Please contact me or Malo if you need further guidance. Or when you're ready to come work for us.
Cognitive Science
If you're endeavoring to build a mind, why not start by studying your own? It turns out we know quite a bit: human minds are massively parallel, highly redundant, and although parts of the cortex and neocortex seem remarkably uniform, there are definitely dozens of special-purpose modules in there too. Know the basic details of how the only existing general-purpose intelligence currently functions.
Heuristics and Biases
While cognitive science will tell you all the wonderful things we know about the immense, parallel nature of the brain, there's also the other side of the coin. Evolution designed our brains to be optimized for rapid thought operations that work in 100 steps or less. Your brain is going to make stuff up to cover up the fact that it's mostly cutting corners. These errors don't feel like errors from the inside, so you'll have to learn how to patch the ones you can and then move on.
Functional Programming
There are two major branches of programming: functional and imperative. Unfortunately, most programmers only learn imperative programming languages (like C++ or Python). I say unfortunately because these languages achieve all their power through what programmers call "side effects". The major downside for us is that this means they can't be efficiently machine-checked for safety or correctness. The first self-modifying AIs will hopefully be written in functional programming languages, so learn something useful like Haskell or Scheme.
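To make the side-effect point concrete, here is a minimal Haskell sketch (my own toy example, not part of any recommended course): a pure function whose behavior depends only on its arguments, next to an IO action whose meaning depends on the outside world and is correspondingly harder to check mechanically.

```haskell
-- A pure function: no side effects, so "sumSquares xs" can be replaced
-- by its value anywhere, and properties about it can be checked
-- directly from its definition.
sumSquares :: [Integer] -> Integer
sumSquares xs = sum (map (^ 2) xs)

-- An action with side effects: its meaning depends on the outside world
-- (stdin, stdout), so reasoning about it means modeling that world too.
sumSquaresInteractive :: IO ()
sumSquaresInteractive = do
  line <- getLine
  print (sumSquares (map read (words line)))

main :: IO ()
main = sumSquaresInteractive
```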
Discrete Math
Much like programming, there are two major branches of mathematics as well: discrete and continuous. It turns out a lot of physics and all of modern computation are actually discrete. And although continuous approximations have occasionally yielded useful results, sometimes you just need to calculate it the discrete way. Unfortunately, most engineers squander the majority of their academic careers studying higher and higher forms of calculus and other continuous mathematics. If you care about AI, study discrete math so you can understand computation and not just electricity.
Linear Algebra
Linear algebra is the foundation of quantum physics and a huge amount of probability theory. It even shows up in analyses of things like neural networks. You can't possibly get by in machine learning (later) without speaking linear algebra. So learn it early in your scholastic career.
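To hint at why it keeps showing up in machine learning, here is a hypothetical Haskell sketch (plain lists standing in for real vector types) of a single neural-network layer as nothing more than a matrix-vector product plus a pointwise nonlinearity.

```haskell
-- A "layer" is just a weight matrix (list of rows) and a bias vector.
type Vector = [Double]
type Matrix = [Vector]

dot :: Vector -> Vector -> Double
dot xs ys = sum (zipWith (*) xs ys)

-- Matrix-vector product: one dot product per row.
matVec :: Matrix -> Vector -> Vector
matVec m v = map (`dot` v) m

-- One feed-forward layer: W x + b, passed through a ReLU nonlinearity.
layer :: Matrix -> Vector -> Vector -> Vector
layer w b x = map (max 0) (zipWith (+) (matVec w x) b)

main :: IO ()
main = print (layer [[1, -1], [0.5, 2]] [0, -1] [3, 1])
```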
Set Theory
Like learning how to read in mathematics. But instead of building up letters into words, you'll be building up axioms into theorems. This will introduce you to the program of using axioms to capture intuition, finding problems with the axioms, and fixing them.
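The canonical example of that loop, paraphrased from any standard set theory text: naive comprehension captures the intuition that every property determines a set, Russell's paradox breaks it, and the separation axiom is the fix.

```latex
% Naive comprehension: every property \varphi determines a set.
\exists y\, \forall x\, \big(x \in y \leftrightarrow \varphi(x)\big)
% Take \varphi(x) to be x \notin x. The resulting set R = \{x : x \notin x\}
% gives R \in R \leftrightarrow R \notin R, a contradiction (Russell's paradox).
% The fix (ZF's separation schema) only carves subsets out of sets you already have:
\forall z\, \exists y\, \forall x\, \big(x \in y \leftrightarrow (x \in z \wedge \varphi(x))\big)
```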
Mathematical Logic
The mathematical equivalent of building words into sentences. Essential for the mathematics of self-modification. And even though Sherlock Holmes and other popular depictions make it look like magic, it's just lawful formulas all the way down.
Efficient Algorithms and Intractable Problems
Like building sentences into paragraphs. Algorithms are the recipes of thought. One of the more amazing things about algorithm design is that it's often possible to tell how long a process will take to solve a problem before you actually run the process to check it. Learning how to design efficient algorithms like this will be a foundational skill for anyone programming an entire AI, since AIs will be built entirely out of collections of algorithms.
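Here is a small illustration in Haskell (a toy example of mine, not course material) of the kind of prediction you learn to make: binary search on a sorted array inspects O(log n) elements while a linear scan may inspect all n, and you can read that off the code before ever running it.

```haskell
import Data.Array (Array, listArray, (!), bounds)

-- Binary search over a sorted array: each step halves the search range,
-- so at most O(log n) comparisons are needed.
binarySearch :: Ord a => Array Int a -> a -> Maybe Int
binarySearch arr target = go lo hi
  where
    (lo, hi) = bounds arr
    go l h
      | l > h = Nothing
      | otherwise =
          let mid = (l + h) `div` 2
          in case compare (arr ! mid) target of
               LT -> go (mid + 1) h
               GT -> go l (mid - 1)
               EQ -> Just mid

main :: IO ()
main = do
  let xs = listArray (0, 9) [1, 3, 4, 7, 9, 11, 15, 20, 22, 30] :: Array Int Int
  print (binarySearch xs 15)   -- Just 6
  print (binarySearch xs 8)    -- Nothing
```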
Numerical Analysis
There are ways to systematically design algorithms that only get things slightly wrong when the input data has tiny errors. And then there are programs written by amateur programmers who don't take this class. Most programmers will skip this course because it's not required. But for us, getting the right answer is very much required.
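A quick self-contained illustration (my own, not from the recommended textbook) of the kind of thing that goes wrong: naively summing many floating-point numbers quietly drifts, while Kahan's compensated summation tracks the rounding error it would otherwise lose.

```haskell
import Data.List (foldl')

-- Naive left-to-right summation lets rounding error accumulate.
naiveSum :: [Double] -> Double
naiveSum = foldl' (+) 0

-- Kahan (compensated) summation carries a correction term that
-- recovers most of the low-order bits lost at each addition.
kahanSum :: [Double] -> Double
kahanSum = fst . foldl' step (0, 0)
  where
    step (s, c) x =
      let y  = x - c
          t  = s + y
          c' = (t - s) - y
      in t `seq` c' `seq` (t, c')

main :: IO ()
main = do
  let xs = replicate 10000000 0.1   -- in exact arithmetic this sums to 1,000,000
  print (naiveSum xs)               -- visibly off in the low digits
  print (kahanSum xs)               -- much closer to the true sum
```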
Computability and Complexity
This is where you get to study computing at its most theoretical. Learn about the Church-Turing thesis, the universal nature and applicability of computation, and how, just like AIs, everything else is algorithms... all the way down.
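The signature result of the field, compressed into a few lines (the standard diagonalization argument, paraphrased): no program can decide whether arbitrary programs halt.

```latex
\text{Suppose } H \text{ is a total program with } H(p, x) = 1 \iff p \text{ halts on input } x.
\text{Define } D(p) =
  \begin{cases}
    \text{loop forever} & \text{if } H(p, p) = 1 \\
    \text{halt}         & \text{if } H(p, p) = 0
  \end{cases}
\text{Then } D(D) \text{ halts} \iff H(D, D) = 0 \iff D(D) \text{ does not halt,}
\text{ a contradiction, so no such } H \text{ exists.}
```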
Quantum Computing
It turns out that our universe doesn't run on Turing machines, but on quantum physics. And something called BQP is the class of problems that are actually efficiently computable in our universe. Studying the efficiency of algorithms relative to classical computers is useful if you're programming something that only needs to work today. But if you need to know what is efficiently computable in our universe (at the limit) from a theoretical perspective, quantum computing is the only way to understand that.
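For orientation, the known containments (standard complexity theory, not specific to the linked course) place BQP here:

```latex
\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PSPACE}
```

Factoring is the famous example of a problem in BQP (via Shor's algorithm) that is not known to be in BPP, which is exactly the gap between "efficient for a classical machine" and "efficient in our universe".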
Parallel Computing
There's a good chance that the first true AIs will have at least some algorithms that are inefficient. So they'll need as much processing power as we can throw at them. And there's every reason to believe that they'll be run on parallel architectures. There are a ton of issues that come up when you switch from assuming sequential instruction ordering to parallel processing: threads, deadlocks, message passing, etc. The good part about this course is that most of the problems are pinned down and solved: you're just learning the practice of something that you'll need to use as a tool, but won't need to extend much (if at all).
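A minimal Haskell sketch of the message-passing style (my own toy example, using GHC's lightweight threads): worker threads report results back to the main thread through a shared MVar, and the results arrive in completion order rather than spawn order.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM)

main :: IO ()
main = do
  inbox <- newEmptyMVar                 -- a one-slot channel shared by all workers
  forM_ [1 .. 4 :: Int] $ \i ->
    forkIO $ do
      threadDelay (1000 * (5 - i))      -- simulate work of varying length
      putMVar inbox (i, i * i)          -- send (worker id, result) to the main thread
  results <- replicateM 4 (takeMVar inbox)
  -- Results arrive in completion order, not spawn order: one of the
  -- coordination headaches this course teaches you to handle.
  print results
```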
Automated Program Verification
Remember how I told you to learn functional programming way back at the beginning? Now that you've written your code in functional style, you'll be able to do automated and interactive theorem proving on it to help verify that your code matches your specs. Errors don't make programs better, and all large programs that aren't formally verified are reliably *full* of errors. Experts who have thought about the problem for more than 5 minutes agree that incorrectly designed AI could cause disasters, so world-class caution is advisable.
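Full interactive theorem proving (Coq, Isabelle, Agda) won't fit in a blog post, but property-based testing gives a cheap first taste of machine-checked specifications. This sketch assumes the QuickCheck library is installed; it states two properties and lets the machine hunt for counterexamples, a pale but real shadow of what the verification tools in this course do with actual proofs.

```haskell
import Test.QuickCheck (quickCheck)

-- Spec: reversing a list twice gives back the original list.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- Spec: an insertion-based sort produces an ordered list.
insertSorted :: Int -> [Int] -> [Int]
insertSorted x []       = [x]
insertSorted x (y : ys)
  | x <= y    = x : y : ys
  | otherwise = y : insertSorted x ys

prop_sortOrdered :: [Int] -> Bool
prop_sortOrdered xs = ordered (foldr insertSorted [] xs)
  where
    ordered (a : b : rest) = a <= b && ordered (b : rest)
    ordered _              = True

main :: IO ()
main = do
  quickCheck prop_reverseTwice   -- tries random lists, looking for a counterexample
  quickCheck prop_sortOrdered
```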
Combinatorics and Discrete Probability
Life is uncertain and AIs will handle that uncertainty using probabilities. Also, probability is the foundation of the modern concept of rationality and the modern field of machine learning. Probability theory has the same foundational status in AI that logic has in mathematics. Everything else is built on top of probability. |
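As a tiny worked example of the exact, discrete style of calculation this subject trains (my own illustration): the birthday problem, computed by counting rather than simulation.

```haskell
-- P(no shared birthday among n people) = (365/365) * (364/365) * ... * ((365-n+1)/365)
noCollision :: Int -> Double
noCollision n = product [fromIntegral (365 - k) / 365 | k <- [0 .. n - 1]]

-- Complement rule: P(at least one shared birthday) = 1 - P(none shared)
birthdayCollision :: Int -> Double
birthdayCollision n = 1 - noCollision n

main :: IO ()
main = print (birthdayCollision 23)   -- roughly 0.507
```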
Bayesian Modeling and Inference
Now that you've learned how to calculate probabilities, how do you combine and compare all the probabilistic data you have? Like many choices before, there is a dominant paradigm (frequentism) and a minority paradigm (Bayesianism). If you learn the wrong method here, you're deviating from a knowably correct framework for integrating degrees of belief about new information and embracing a grab bag of special-purpose, ad hoc statistical solutions that often break silently and without warning. Also, quite embarrassingly, frequentism's ability to get things right is bounded by how well it later turns out to have agreed with Bayesian methods anyway. Why not just do the correct thing from the beginning and not have your lunch eaten by Bayesians every time the two of you disagree?
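The core machinery is just Bayes' theorem. As a standard worked example (illustrative numbers of my choosing: 1% prevalence, 95% sensitivity, 5% false-positive rate), a positive test moves you from a 1% prior to roughly a 16% posterior:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E \mid H)\, P(H) + P(E \mid \neg H)\, P(\neg H)}, \qquad
P(\text{disease} \mid {+}) = \frac{0.95 \cdot 0.01}{0.95 \cdot 0.01 + 0.05 \cdot 0.99} \approx 0.16
```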
Probability Theory
No more applied probability: Here be theory! Deep theories of probability are something you're going to have to extend to help build up the field of AI one day, so you need to understand, inside and out, why everything you're doing works.
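For all the applied machinery above, the axiomatic core is remarkably small; Kolmogorov's axioms (paraphrased here) are the whole foundation:

```latex
P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)
\quad \text{for pairwise disjoint } A_1, A_2, \ldots
```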
Machine Learning
Now that you've chosen the right branch of math, the right kind of statistics, and the right programming paradigm, you're prepared to study machine learning (aka statistical learning theory). There are lots of algorithms that leverage probabilistic inference. Here you'll start learning techniques like clustering, mixture models, and other things that cash out as precise, technical definitions of concepts that normally have rather confused or confusing English definitions.
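To show how a fuzzy word like "clustering" cashes out as something precise, here is a minimal one-dimensional k-means sketch in Haskell (a toy illustration of mine, not an excerpt from the recommended texts): alternate between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Assign every point to the index of its nearest centroid.
assign :: [Double] -> [Double] -> [Int]
assign centroids = map nearest
  where
    nearest x = fst (minimumBy (comparing (\(_, c) -> abs (x - c)))
                               (zip [0 ..] centroids))

-- Move each centroid to the mean of the points assigned to it
-- (keeping the old centroid if it owns no points).
update :: Int -> [Double] -> [Int] -> [Double] -> [Double]
update k points labels centroids =
  [ meanOr (centroids !! j) [p | (p, l) <- zip points labels, l == j]
  | j <- [0 .. k - 1] ]
  where
    meanOr fallback [] = fallback
    meanOr _ xs        = sum xs / fromIntegral (length xs)

-- Iterate assignment and update a fixed number of times.
kmeans :: Int -> Int -> [Double] -> [Double] -> [Double]
kmeans steps k points = go steps
  where
    go 0 cs = cs
    go n cs = go (n - 1) (update k points (assign cs points) cs)

main :: IO ()
main = print (kmeans 10 2 [1.0, 1.2, 0.8, 5.0, 5.3, 4.9] [0.0, 10.0])
```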
Artificial Intelligence
We made it! We're finally doing some AI work! Logical inference, heuristic development, and the other techniques here will leverage all the stuff you just learned in machine learning. While modern, mainstream AI has many useful techniques to offer you, the authors will tell you outright that "the princess is in another castle". Or rather, there isn't a princess of general AI algorithms anywhere -- not yet. We're gonna have to go back to mathematics and build our own methods ourselves.
Incompleteness and Undecidability
Probably the most celebrated results in mathematics are the negative results of Kurt Gödel: no consistent, computably axiomatized theory containing basic arithmetic can decide every arithmetic statement as either true or false... and no such theory can even "believe in" (prove) its own consistency. Well, that's a darn shame, because recursively self-improving AI is going to need to side-step these theorems. Eventually, someone will find the key to overcoming this difficulty with self-reference, and if you want to help us do it, this course is part of the training ground.
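Stated a little more carefully (the standard textbook formulation, nothing MIRI-specific): for any consistent, computably axiomatized theory T that interprets basic arithmetic,

```latex
\textbf{First incompleteness:}\quad \exists\, G_T \ \text{ such that } \ T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T
\qquad
\textbf{Second incompleteness:}\quad T \nvdash \mathrm{Con}(T)
```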
Metamathematics
Working within a framework of mathematics is great. Working above mathematics -- on mathematics, with mathematics -- is what this course is about. This would seem to be the most obvious first step to overcoming incompleteness somehow. Problem is, it's definitely not the whole answer. But it would be surprising if there were no clues here at all.
Model Theory
One day, when someone does side-step self-reference problems enough to program a recursively self-improving AI, the guy sitting next to her who glances at the solution will go "Gosh, that's a nice bit of Model Theory you got there!"
Category Theory
Category theory is the precise way that you check if structures in one branch of math represent the same structures somewhere else. It's a remarkable field of meta-mathematics that nearly no one knows... and it could hold the keys to importing useful tools to help solve dilemmas in self-reference, truth, and consistency.
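The basic gadget behind "the same structure somewhere else" is the functor: a map F between categories that sends objects to objects and morphisms to morphisms while preserving the structure that matters (definition paraphrased from any standard text).

```latex
F : \mathcal{C} \to \mathcal{D}, \qquad F(\mathrm{id}_A) = \mathrm{id}_{F(A)}, \qquad F(g \circ f) = F(g) \circ F(f)
```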
Outside Recommendations
Harry Potter and the Methods of Rationality
Highly recommended book of light, enjoyable reading that predictably inspires people to realize FAI is an important problem AND that they should probably do something about that.
Global Catastrophic Risks
A good primer on xrisks and why they might matter. SPOILER ALERT: They matter.
The Sequences
Rationality: the indispensable art of non-self-destruction! There are manifold ways you can fail at life... especially since your brain is made out of broken, undocumented spaghetti code. You should learn more about this ASAP. That goes double if you want to build AIs.
Good and Real
A surprisingly thoughtful book on paradoxes in decision theory, physics, and math, and how they can be dissolved. Reading this book is 100% better than continuing to go through your life with a hazy understanding of how important things like free will, choice, and meaning actually work.
MIRI Research Papers
MIRI has already published 30+ research papers that can help orient future Friendliness researchers. The work is pretty fascinating and readily accessible for people interested in the subject. For example: How do different proposals for value aggregation and extrapolation work out? What are the likely outcomes of different intelligence explosion scenarios? Which ethical theories are fit for use by an FAI? What improvements can be made to modern decision theories to stop them from diverging from winning strategies? When will AI arrive? Do AIs deserve moral consideration? Even though most of your work will be more technical than this, you can still gain a lot of shared background knowledge and more clearly see where the broad problem space is located.
Universal Artificial Intelligence
A useful book on "optimal" AI that gives a reasonable formalism for studying how the most powerful classes of AIs would behave under conservative safety design scenarios (i.e., lots and lots of reasoning ability).
Do also look into: Formal Epistemology, Game Theory, Decision Theory, and Deep Learning.
Allow me to add another clarification.
Earlier, I explained to someone that most of the problems in Eliezer's forthcoming Open Problems in Friendly AI sequence are still at the stage of being philosophy problems. Why, then, do Louie and I talk about FAI being "mostly a math problem, not a programming problem"?
The point we're trying to make is that Friendly AI, as we understand it, isn't chiefly about hiring programmers and writing code. Instead, it's mostly about formalizing problems (in reflective reasoning, decision theory, etc.) into math problems, and then solving those math problems. The formalization step itself will likely require the invention of new math — not so much clever programming tricks (though those may be required at a later stage).
Most of the "open FAI problems" are still at the stage of philosophy because, as Louie says, "most of the theory behind self-reference, decision theory, and general AI techniques haven't been formalized.... yet." But we think those philosophical problems will be formalized into math problems, sometimes requiring new math.
So we're not (at this stage) looking to hire great programmers. We're looking to hire gr... (read more)
I wish you'd stop saying that (without justification or clarification). Modern math seems quite powerful enough to express most problems, so the words "new" and "fundamental" sound somewhat suspicious. Is this "new fundamental math" something like the invention of category theory? Probably not. Clarifying the topic of Friendly AI would almost certainly involve nontrivial mathematical developments, but in the current state of utter confusion it seems premature to characterize these developments as "fundamental".
We don't know how it turns out; what we know is that only a mathematical theory would furnish an accurate enough understanding of the topic, so it seems a good heuristic to have mathematicians work on the problem, because non-mathematicians probably won't be able to develop a mathematical theory. In addition, we have some idea about the areas where additional training might be helpful, such as logic, type theory, formal languages, probability, and computability.
You're right, the word "fundamental" might suggest the wrong kinds of things. I'm not at all confident that Friendly AI will require the invention of something like category theory. So, I've removed the word "fundamental" from the above comment.
Should I contact you if I'm familiar with some of this material (mostly the purer mathematics, leaning away from computer science and towards category theory, plus MoR and the Sequences) and willing to learn the rest? Does SI offer summer internships?
Yep, SI has summer internships. You're already in Berkeley, right?
Drop me an email with the dates you're available and what you'd want out of an internship. My email and Malo's are both on our internship page:
http://singularity.org/interns/
Look forward to hearing from you.
The list doesn't include anything in the way of game theory, social choice, or mechanism design, which is going to be crucial for an AI that interacts with other agents or tries to aggregate preferences.
Relevant book recommendations (all available at links as pdfs):
Please be careful about exposing programmers to ideology; it frequently turns into politics, which kills their minds. This piece in particular is a well-known mindkiller, and I have personally witnessed great minds acting very stupid because of it. The functional/imperative distinction is not a real one, and even if it were, it's less important to provability than languages' complexity, the quality of their type systems, and the amount of stupid lurking in their dark corners.
How is the distinction between functional and imperative programming languages "not a real one"? I suppose you mean that there's a continuum of language designs between purely functional and purely imperative. And I've seen people argue that you can program functionally in Python or emulate imperative programming in Haskell. Sure. That's all true. It doesn't change the fact that functional-style programming is manifestly more machine-checkable in the average (and best) case.
Agreed. The most poorly programmed functional programs will be harder to machine-check than the most skillfully designed imperative programs. But I think in typical or best-case scenarios, functional-style programming makes it hands-down more natural to write the kinds of structures that can be reliably machine-checked, and imperative programming languages just don't.
The entry level functional programming course is going to focus on all the right things: type th... (read more)
"Not a real one" is sort of glib. Still, I think Jim's point stands.
The two words "functional" and "imperative" do mean different things. The problem is that, if you want to give a clean definition of either, you wind up talking about the "cultures" and "mindsets" of the programmers that use and design them, rather than actual features of the language. Which starts making sense, really, when you note "functional vs. imperative" is a perennial holy war, and that these terms have become the labels for sides in that war, rather than precise technical positions.
I mean, I am somewhat partisan in that war, and rather agree that, e.g., we should point new programmers to Scheme rather than Python or Java. But presenting "functional" vs. "imperative" as the major division in thinking about programming languages is epistemically dirty, when there's so many distinctions between languages that matter just as much, and describe things more precisely.
(Jim: fair rephrasing?)
So your example of how 'functional programming fails' is to use a vague personal anecdote about possibly the worst 'functional' language in the world, many versions of which don't even have higher-order functions (a basic functional feature dating back literally to the 1960s), and of which people have published research papers just to prove it is Turing-complete?
Do you understand why no one is going to find your claim that "functional programming sucks because I once wrote a bad program in XSLT" even remotely convincing? Even if you do boast about yourself that
I am well versed in most of this math, and a fair portion of the CS (mostly the more theoretical parts, not so much the applied bits). Should I contact you now, or should I study the rest of that stuff first?
In any case, this post has caused me to update significantly in the direction of "I should go into FAI research". Thanks.
Main is that-a-way ->
<- that way actually.
I was a CS major but I haven't taken most of the CS courses listed here, including Numerical Analysis, Parallel Computing, Quantum Computing, Machine Learning, Artificial Intelligence, Functional Programming, and Automated Program Verification. I think it's probably not necessary to have more than a cursory understanding of most of these topics at the current stage of Friendliness research.
I would suggest swapping one or more of these courses out for a course in cryptography. Cryptography, besides possibly having direct applications, is good for giving a sense of the limits of human intelligence and mathematical reasoning. You can see how far "provable security" (which seems like the closest analogue we have to "provable Friendliness") as well as "heuristical security" got after thousands of mathematician-years worth of effort.
Why are you suggesting Discrete Math and Its Applications when its Amazon reviews are uniformly negative?
Is this actually true? My current guess is that even though for a given level of training, smarter people can get through harder texts, they will learn more if they go through easier texts first.
Friendliness researchers also need to study what human values actually are and how they are implemented in the brain.
There is apparently a pervasive assumption (not quite spelled out) that a general theory of reflective ethical idealization will be found, and also a general method of inferring the state-machine structure of human cognition, and then Friendliness will be obtained by applying the latter method to human cognitive and neuroscientific data, and then using the general theory to extrapolate a human-relative ideal decision theory from the relevant aspects of the inferred state machine.
I think this is somewhat utopian, and the efficient path forward will involve close engagement with the details of moral cognition (and other forms of decision-making cognition) as ascertained by human psychologists and neuroscientists. The fallible, evolving "first draft" of human state-machine architecture that they produce should offer profound guidance for anyone trying to devise rigorous computational-epistemic methods for going from raw neuro-cognitive data, to state-machine model of the generic human brain, to idealized value system suitable for implementing friendly AI. (F... (read more)
Yeah, universities don't reliably teach a lot of things that I'd want people to learn to be Friendliness researchers. Heuristics and Biases is about the closest most universities get to the kind of course you recommend... and most barely have a course on even that.
I'd obviously be recommending lots of Philosophy and Psychology courses as well if most of those courses weren't so horribly wrong. I looked through the course handbooks and scoured them for courses I could recommend in this area that wouldn't steer people too wrong. As Luke has mentioned (partially from being part of this search with me), you can still profitably take a minority of philosophy courses at CMU without destroying your mind, a few at MIT, and maybe two or three at Oxford. And there are no respectable, mainstream textbooks to recommend yet.
Believe me, Luke and I are sad beyond words every day of our lives that we have to continue recommending people read a blog to learn philosophy and a ton of other things that colleges don't know how to teach yet. We don't particularly enjoy looking crazy to everyone outside of the LW bubble.
This doesn't look as bad as it looks like it looks. Among younger mathematicians, I think it's reasonably well-known that the mathematical blogosphere is of surprisingly high quality and contains many insights that are not easily found in books (see, for example, Fields medalist Terence Tao's blog). So I would expect that younger mathematicians would not care so much about the difference between a good blog recommendation and a good book recommendation. I, for one, have been learning math from blog posts for years, but I might be an outlier in this regard.
More posts like this please!
Here's my reaction to this post.
"Getting the right answer" doesn't really describe numerical analysis. I'd have said "Recognizing when you're going to get the wrong answer, and getting a controllable upper bound on how wrong". The book you list seems typical: only one chapter even begins by discussing an exact rather than an approximate method, and the meat of that chapter is about how badly the error can blow up when you try to use that method with inexact floating-point arithmetic.
Thanks for posting this! I love how you put course numbers on the left, to make it extra-actionable!
Apart from Numerical Analysis and Parallel Computing which seem a bit out of place here (*), and swapping Bishop's Pattern Recognition for Murphy's ML: A Probabilistic Perspective or perhaps Barber's freely available Bayesian Reasoning and ML, this is actually quite a nice list - if complemented with Vladimir Nesov's. ;)
(*) We're still in a phase that's not quite philosophy in a standard sense of the word, but nonetheless light years away from even starting to program the damn thing, and although learning functional programming from SICP is all good and we... (read more)
These typically aren't used with a purely functional programming style, are they?
On another note, if you're thinking of doing FAI research, it might be a good idea to also study relevant-seeming stuff that's not on this list in order to give yourself a different set of tools than other FAI thinkers.
Anything on ethics / meta-ethics, or are those considered to be covered by the sequences?
Very interesting list, thanks Louie!
I just randomly clicked on a few links for online courses, and it seems there's at least one issue: The "Probability and Computing" part points to "Analytic Combinatorics, Part I" coursera course, which is not about probability at all. The MIT and CMU links for this part seem wrong too. Someone should carefully go through all the links and fix them.
The set theory book only links to an image of the book, not the Amazon page.
Now a more serious comment: as someone who already has their undergraduate degree, how could I best go about going back to school to take these courses in a classroom setting? Particularly, how could I do so on a sensible budget?
MIT has an HTML version of Structure and Interpretation of Computer Programs.
Is Julian Barbour's book suitable for this list, or does it, say, require too big a detour through detailed physics?
Nice list.
Do any of the programming or mathematics courses cover domain theory?
Given the importance of self-modification, fixed points, and so forth in FAI, that might be a useful subject to know about. That's just a guess; I don't know enough about either domain theory or FAI to know if it is really relevant.
Hi. Thanks for the recommendations above!
I'm currently a philosophy student who has primarily thought about and done work in foundational value theory, where I've looked at how one might build moral and political theories from the ground up (something almost no one in the field seems to care about).
I've spent the last couple of months getting into the topic of specifying a "value" or "governing principle(s)" for an AGI, reading Bostrom's new book as well as several of his papers and other papers by MIRI. The ang... (read more)
Should there be some biology/medicine/ecology in there, or is the xrisks book enough?
Ahh. Yeah, I'd expect that kind of content is way too specific to be built into initial FAI designs. There are multiple reasons for this, but off the top of my head,
I expect design considerations for Seed AI to favor smaller designs that only emphasize essential components for both superior ability to show desirable provability criteria, as well as improving design timelines.
All else equal, I expect that the less arbitrary decisions or content the human programmers provide to influence the initial dynamic of FAI, the better.
And my broadest answer is that it's not a core Friendliness problem, so it's not on the critical path to solving FAI. Even if an initial FAI design did need medical content or other things along those lines, this would be something that we could hire an expert to create towards the end of solving the more fundamental Friendliness and AI portions of FAI.
some quick notes -
PS - I had some initial trouble formatting my table's appearance. It seems to be mostly fixed now. But if an admin wants to tweak it somehow so the text isn't justified or it's otherwise more readable, I won't complain! :)
Though narrower in scope, Hutter's AI Recommendation Page is also very informative; the Undergraduate Books and Courses section provides a useful list of textbooks, as well as general advice and recommended ANU courses.
Additionally, I'm available to compile an extensive list of Friendliness theory course recommendations for the ANU, if this is of interest to anyone. (from there I could also expand it to cover other major Australian universities)
Thanks for your recommendations Louie.
This is a tremendous amount of reading. I've read parts of Cognitive Science and skimmed through bits of Heuristics and Biases. But, reading books bores me in the age of the internet!
I'm currently interested in conceptualising the open problems in FAI research. So, here is my expedient strategy in case someone wants to critique or copy it.
I'm just planning to Wikipedia and google interesting questions around BQP (bounded error quantum polynomial time) to understand Pascal's mugging better, and perhaps Universal Artifi... (read more)