I'm a CS student graduating next year. My education has been pretty poor, due both to personal problems and to a weak school curriculum (we never reached the requisite depth in any of our courses, and skimmed over a lot). After graduation, I want to take a few years off (I'm thinking 3 - 6) to do a lot of self-study.
I'm not sure how my knowledge level compares to international standards, so just assume no prior CS knowledge (I'll skip things I already know satisfactorily, but I don't expect to know anything deeply enough that it would be worth skipping completely). In mathematics, I am at high-school level (currently learning algebra and logic in my free time), minus calculus (which I never really learned), plus a little discrete maths. I have no prior philosophy training; it is sufficient to assume that the entirety of my philosophy knowledge comes from LessWrong.
I have a (set of) goals I want to achieve, and I want to learn the required computer science (among other things) to achieve them. I plan to pursue a postgraduate degree towards that goal after my gap years (I intend to start producing original research within at most ten years, and most likely much earlier than that).
Foundations of Intelligence
- Define "Intelligence".
- Develop a model of intelligence.
- Develop a method for quantifying and measuring intelligence of arbitrary agents in agent space.
- Understand intelligence and what makes certain agent designs produce more intelligent agents.
- Develop a hierarchy of intelligent agents over all of agent space.
- Answer: "is there a limit to intelligence?"
- Develop a model of learning.
- Answer: What does it mean for a learning algorithm to be better than another?
- Develop a method for analysing and comparing the performance of learning algorithms (I'm thinking asymptotic analysis; at least for now, all analysis I plan to do would be asymptotic): on a particular problem, across a particular problem class, and across problem space; using a particular knowledge representation system (KRS), using various KRS, and across the space of possible KRS.
- Understand what causes the difference in performance between learning algorithms.
- Determine the scope/extent of knowledge a given learning algorithm can learn.
- Develop a hierarchy of learning algorithms capturing the entire space of learning algorithms.
- Synthesise the results into a rigorous theory of learning ("learning theory").
- Develop a provably optimal (for some sensible definition of "optimal") learning algorithm.
- Develop a model of knowledge and of KRS.
- Develop a method for quantifying and measuring "knowledge" (for example, we might consider the utility of the information contained, the complexity of that body of knowledge, and its form (structure, relationships, etc.)).
- Develop a method for analysing and comparing KRS, using a particular learning algorithm, using various types of learning algorithms, and across the space of learning algorithms, on a particular problem, across a particular problem class, and across problem space.
- Determine the scope/extent of knowledge a given KRS can represent.
- Develop a theory for transfer of knowledge among similar (for some sensible notion of "similarity") knowledge representation systems, and among dissimilar knowledge representation systems.
- Understand what makes certain KRS "better" (according to however we measure KRS) than other KRS.
- Develop a hierarchy of KRS capturing the entire space of KRS.
- Synthesise the above results, together with learning theory, into a (rigorous) theory of knowledge ("knowledge theory").
- Develop a provably optimal (for some sensible definition of "optimal") KRS.
- Synthesise all of the above into a useful theory of intelligent agents.
- Develop a provably optimal (for some sensible definition of "optimal") intelligent agent.
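As a concrete starting point for the "quantifying intelligence" items above, one existing proposal in the literature is Legg and Hutter's universal intelligence measure, which scores an agent \(\pi\) by its expected reward across all computable environments, weighted by simplicity:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where \(E\) is the class of computable (reward-bounded) environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V^{\pi}_{\mu}\) is the expected total reward of \(\pi\) in \(\mu\). Note that \(\Upsilon\) is uncomputable (since \(K\) is), so any practical measurement method would have to approximate it; whether such a simplicity-weighted average is the *right* dissolution of "intelligence" is exactly the kind of question this agenda would need to settle.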
"Develop" doesn't mean that one doesn't already exist; rather, I plan to improve on existing models, or build one from scratch if needed. The aim is a model that is satisfactorily useful (with a very high criterion for "satisfactory"; the list above is my attempt at dissolving "useful"). The end goal is a theory that can be implemented to build HLMI. I don't plan to (needlessly) reinvent the wheel: when I set out to pursue my goal of formalising intelligence, I will build on the work of others in the area.
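To make the "is one learning algorithm better than another?" question concrete, here's a minimal sketch (the toy task and both learners are my own invention, purely for illustration) of comparing two learners empirically by their learning curves on a synthetic 1-D threshold problem:

```python
import random

random.seed(0)

def sample(n, threshold=0.3):
    """Draw n labelled points: x uniform in [0,1), y = [x > threshold]."""
    return [(x, int(x > threshold)) for x in (random.random() for _ in range(n))]

def train_threshold(data):
    """Learner A: pick the split point with the lowest training error."""
    best_t, best_err = 0.0, float('inf')
    for t, _ in data:
        err = sum(int(x > t) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return lambda x, t=best_t: int(x > t)

def train_majority(data):
    """Learner B: always predict the majority training label."""
    label = int(2 * sum(y for _, y in data) >= len(data))
    return lambda x, l=label: l

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

test = sample(2000)
for n in (5, 20, 80):
    train = sample(n)
    a = accuracy(train_threshold(train), test)
    b = accuracy(train_majority(train), test)
    print(f"n={n:3d}  threshold={a:.2f}  majority={b:.2f}")
```

This only probes one problem with one (implicit) representation; the research programme above would replace such point comparisons with analysis across problem classes and KRS, and asymptotic rather than empirical learning curves.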
How much CS/Maths/Analytical Philosophy/other relevant subject areas do I need to learn, what areas of CS/Maths/Analytical Philosophy/other relevant subject areas should I focus on, and how deep do I need to go? I want to prepare a complete curriculum for myself. I'd appreciate links to learning resources and recommended books, but I would also appreciate mere pointers.