Problems can be decomposed into parts that are shared among different problems; these parts I'll call facets (for want of a better word; if there is some existing term of art I am ignorant of, let me know). Each facet fundamentally affects how you approach a problem, by changing the class of problem being solved. Facets can be seen as parts of paradigms extended into everyday life.
For example, when trying to find a path given only a map, you may use something like the A* algorithm. But if you have the map plus an oracle that tells you the optimal path runs through certain points, you can use that information to decompose the problem into finding the shortest paths between those points. Having that oracle is a facet of the problem. Another facet might be knowing that a shortcut passes through an automated doorway that is open and closed at different times. You no longer have a fixed map, so the A* algorithm is not appropriate; you'll have to represent the doorway probabilistically, or try to figure out the pattern of its opening so you can predict exactly when it is open.
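The waypoint decomposition can be sketched in code. This is a minimal illustration on a hypothetical toy graph (plain breadth-first search stands in for A*, since the graph is unweighted); `GRAPH`, `shortest_path`, and `path_via_waypoints` are all names invented for this sketch.

```python
from collections import deque

# Hypothetical toy map: node -> neighbours (unweighted).
GRAPH = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_path(start, goal):
    """Breadth-first search: shortest path in an unweighted graph."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def path_via_waypoints(start, goal, waypoints):
    """The oracle facet: solve smaller searches between known waypoints."""
    legs, here = [], start
    for point in list(waypoints) + [goal]:
        leg = shortest_path(here, point)
        legs.extend(leg if not legs else leg[1:])  # don't duplicate the join node
        here = point
    return legs
```

With the oracle saying the optimal path runs through "D", `path_via_waypoints("A", "E", ["D"])` stitches two small searches together instead of one big one.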
There are a number of distinct ways that facets can impact your problem solving. These can be because:
- a facet suggests new sources of information to solve a problem (Epistemic)
- a facet constrains the problem in a hard-to-discover way that makes it easier to solve (Constraints)
- a facet makes the problem harder to solve, but makes it more likely that a good solution will be found if the facet holds (Inconveniences)
- a facet means you have to manipulate your other facets (Meta)
A problem can have many facets, and they interact in a non-trivial fashion. Having the wrong facets can be very bad: they form the system's inductive bias.
I think facets can impact different things:
- How we approach the world ourselves (Are we making use of all the facets that we can? How do the facets we are exploiting interfere? Do we have damaging facets?).
- How we design systems that interact with the world. Enumerating the facets of a problem is the first step to trying to solve it.
Epistemic status: Someone has probably thought of this stuff before. Hoping to find it. If they haven't and people find it useful I'll do a second version.
Base Assumptions: That throwing lots of resources at a single approach with no regard to facets is impractical (no AIXI-type solution).
These facets can allow you to move more quickly to solving your problem by giving you information about the world, or how you should act.
Exploration - There are known unknowns. Go and see what is there, and whether it is useful, in those locations. You have a function F(x) and you know the values of F(1), F(2), and F(3); what is the value of F(10,000)? Exploration is a known phase of reinforcement learning, and it can interfere with exploitation.
Generalisation - There are patterns in the unknowns so that it makes sense to try and generalise and discover the true function. More exploration can help with generalisation.
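The F(1), F(2), F(3) example above can be made concrete. A minimal sketch, assuming (purely for illustration) that the hidden function happens to be linear; the names `observed` and `generalise` are invented for this sketch.

```python
# Hypothetical observations gathered by exploration.
observed = {1: 3, 2: 5, 3: 7}  # secretly F(x) = 2x + 1

def generalise(points):
    """Fit a line through the observed points (assumes the pattern is linear)."""
    xs, ys = zip(*sorted(points.items()))
    slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])
    intercept = ys[0] - slope * xs[0]
    return lambda x: slope * x + intercept

F_hat = generalise(observed)
print(F_hat(10_000))  # extrapolates far beyond the explored region -> 20001.0
```

The point is the facet, not the curve fitting: believing there is a pattern in the unknowns is what licenses predicting F(10,000) from three samples at all, and more exploration (more points) is what tests that belief.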
Being Taught - There is an agent out there that can model you in some way and provide feedback on how you are doing or how you are processing data, whether that is labelled input data pairs or feedback in a language, like "Stop thinking so hard" or "A is for Apple".
Copying agents - There are agents out there doing stuff. They may not be able to tell you what to do, or they may tell you to do the wrong thing (if they are adversarial), but you can copy what they do. See mirror neurons. This may be a pre-requisite for "Being Taught" if you don't want to pre-encode a language.
Research - There is linguistic information out there that may be useful; find it. This is made more complex when the information comes from agents with the wrong view of reality, or from overtly hostile agents.
Commissioned Learning - You can convince another existing agent to do some of the above for you and give you the results.
Creation of Learning systems - You can create a system that does some of the above for you and gives you the results.
These facets can allow you to look for certain patterns and make use of them more easily than having to derive them from experience or first principles.
Construction - You can break a problem down into parts, solve those sub-parts, and re-assemble them. See modular programming. Minimising the things you have to think about at once can make things more tractable.
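The break-down/solve/re-assemble shape can be sketched in a few lines; this is a deliberately trivial divide-and-conquer example, with the name `total` invented for the sketch.

```python
def total(xs):
    """Construction: split the problem, solve the halves, re-assemble."""
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    # Each half is a smaller instance of the same problem.
    return total(xs[:mid]) + total(xs[mid:])

print(total([4, 8, 15, 16, 23, 42]))  # -> 108
```

At every step you only reason about one half at a time, which is the "minimise what you think about at once" point.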
Positive sum (Social) - There are other people you can communicate with, and you share enough motivation with them to encourage them to work with you. This adds complexities like managing other people's beliefs about you or your organisation: are you competent? Do you know what you are doing? This can interact with paradigm shifting: unless you are explicitly trying to paradigm shift, you are not motivated to visibly search for things that undermine your current goal (you will look bad, and people will no longer want to work with or support you).
Adversarial/Zero-Sum - You win if another person loses. If the agent you are competing against is using "Construction", then figuring out what they are trying to construct is important to being able to disrupt it. See minimax. Individual strategies might be anti-inductive if you are competing against strong optimisers.
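Minimax itself is compact enough to show. A minimal sketch over a hypothetical two-ply game tree (dicts are choice points, ints are payoffs to the maximiser); the tree and its payoffs are invented for illustration.

```python
def minimax(node, maximising):
    """Minimax over a toy game tree: dicts are internal nodes, ints are payoffs."""
    if isinstance(node, int):
        return node  # leaf: payoff to the maximising player
    values = [minimax(child, not maximising) for child in node.values()]
    return max(values) if maximising else min(values)

# Hypothetical game: our move, then the opponent's reply.
tree = {
    "left":  {"a": 3, "b": 5},   # opponent will pick the min -> 3
    "right": {"a": 2, "b": 9},   # opponent will pick the min -> 2
}
print(minimax(tree, maximising=True))  # -> 3, the best guaranteed outcome
```

The zero-sum facet is what justifies modelling the opponent as the minimiser of your payoff: their loss is exactly your gain.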
Optimisation/Exploitation - You know a bunch of stuff, pick the best possible action. It might be non-obvious and require lots of processing to figure out the best action. I'm including this to contrast with satisficing or weird non-problems where every action is the same. Most interesting problems have some element of this.
Randomness - You know there is a source of randomness, so you have to manage your expectations/beliefs.
Game building - You want to create a system that encourages other agents/agent-like things to behave in certain ways (prediction markets, school ranking metrics, company management). You care less what individual agents know and just want to know enough about their goals so you can fit them into the game.
Introspective - There are things wrong with your internal programs. Explore them/make them more explicit and try and improve them. This might include "Game Building" if you model your internal state as a bunch of agents.
Physics - Your whole world is defined by discoverable rules, and you can use them for predictions: s_{n+1} = U(s_n); what is the update function U? This is great: it means everything that happens in the world can be evidence for or against U.
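That "everything is evidence" point can be sketched directly: given a candidate U, every adjacent pair in an observed trajectory either supports or falsifies it. The trajectory, the candidate, and the name `consistent` are all hypothetical, invented for this sketch.

```python
# A hypothetical observed trajectory s0, s1, s2, ...
trajectory = [1, 3, 7, 15, 31]

def consistent(update, states):
    """Every adjacent pair (s_n, s_{n+1}) is evidence for or against candidate U."""
    return all(update(s) == s_next for s, s_next in zip(states, states[1:]))

good_U = lambda s: 2 * s + 1
bad_U = lambda s: s + 2
print(consistent(good_U, trajectory))  # True: no transition falsifies it
print(consistent(bad_U, trajectory))   # False: 3 + 2 != 7
```

A real inference would weigh candidates probabilistically rather than reject on a single mismatch, but the facet is the same: the whole world is one long test of U.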
Scientific constraints - There are rules that allow you to constrain the types of hypotheses you have for science. Like conservation of energy or that the update function is time reversible. Or Turing computable.
Optimiser Building - You are trying to build an optimiser to optimise the world for you. You define a goal, a search space, prior knowledge, and a search algorithm. All of these things embody a set of facets. This is classic AI. Note that it doesn't allow the system to move between facets; that is needed for AGI and general intelligence augmentation (GAI).
Optimising Optimiser Building - You are trying to build an optimiser that builds an optimiser to optimise the world for you. You define a goal (a type of optimiser), a search space of possible optimisers, prior knowledge, and a search algorithm.
Super-optimiser control - You have a super-optimiser; it needs no facets to solve problems. Try to build a goal that it can maximise and still do what you want.
Person Building - You are trying to build a person. People can adopt different facets, so this is not like optimiser building. People also don't have coherent goals throughout their lifetimes. There is a rough purpose, survive and propagate, but that is hard to extract from their day-to-day activity. The day-to-day activity can look like optimising, so it is good to adopt the intentional stance towards people on a short timescale. However, the intentional stance breaks down on the developmental timescale, due to the lack of coherent goals mentioned before. People are the only known general intelligence.
Intelligence Augmentation - You are trying to figure out enough about the nature of how you work that you can expand it. You are looking for an identity function (not to be confused with the mathematical identity function) I(x) = I(x + c) and an aptitude function with A(x + c) > A(x), where x is you, c is some computational resource, + is some method of connection, and A is an aptitude test you care about. This is only like optimiser building if x is an optimiser.
These facets don't help you at all. They just mean that solutions which look good if you ignore them may not in fact be good.
Survival - If you make the wrong step, you can't make any more. And in some cases, if you fail to take a step at all, you are also likely unable to make any more. This means you cannot, or do not want to, explore the entire search space. It also means that if your knowledge of what will cause death is faulty, you will fail to explore some useful regions.
Embedding within physics - Not only might you die, but your actions and internal actions all have costs and impacts you might not expect. Also, other people might be able to influence your internal state in ways you might not expect. This interacts with optimisation in that the process of optimisation itself can now have a cost. Time can also be a problem. Optimisation might not be optimal!
Judging Religion - There is a god judging every action and thought you have.
Evolutionary - You know that you and the other agents are products of evolution.
These facets refer to other facets.
Paradigm shifting (in the Kuhnian sense) - You start with a model of the world (maybe acquired from research or being taught). Things don't seem consistent with your world view. Attempting to paradigm shift is trying to change your world view by gathering the inconsistencies and finding models that fit. For a part of the world your view is that F(x) = y; however, you observe F(x) != y. Maybe you need to include other data (you should be creating a two-place function and you need to find the other variable), or y is not a function of x at all. Or you need to add another facet to your view of the world. There are things that seem to need paradigm shifting currently (physics, with the split between the quantum and gravity, and AI/consciousness). If you try to paradigm shift you are likely to be wrong, as you are going beyond your current model of reality and the world of possible models is vast. This can be seen as altering your hypothesis space and/or input space, or your facets.
Teaching/Research - The epistemic facets, when they carry linguistic information, can allow you to switch facets too. You can encode facets linguistically (as I am doing here) and propagate them. This is something that is impossible to do with pure feedback (good/bad) or labelled input.
I'm sure there are lots of others. There might be ways of adding ethics and morality into this framework, but I'm not sure it's useful. What else can people think of?
Disclaimer: I wrote this so I could have a vocabulary to talk about the rationalist community.