interstice


Comments

What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers

If you're a scientist, your job is ostensibly to uncover the truth about your field of study, so I think being uninterested in the truth of the papers you cite is at least a little bit malicious.

Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?

Uhh, I don't follow this. Could you explain or link to an explanation please?

Intuitive explanation: Say it takes X bits to specify a human, and that the human knows how to correctly predict whatever sequence we're applying SI to. SI has to find the human among the 2^X programs of length X. Say SI is trying to predict the next bit. There will be some fraction of those 2^X programs predicting that it will be 0, and some fraction predicting 1. These fractions define SI's probabilities for what the next bit will be. Imagine the next bit will be 0. Then SI is predicting badly if more than half of those programs predict a 1. But then, all of those programs will be eliminated in the update phase. Clearly, this can happen at most X times before most of the weight of SI is on the human hypothesis (or on a hypothesis that's just as good at predicting the sequence in question).
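A toy simulation of this sketch (my own illustration, not real Solomonoff induction: the 2^X "programs" are just fixed guess-tables, with one of them standing in for the human who always predicts correctly):

```python
import random

random.seed(0)

X = 12                    # bits needed to specify the "human" predictor
N = 2 ** X                # number of candidate programs of length X
T = 200                   # length of the sequence being predicted
truth = [random.randint(0, 1) for _ in range(T)]

# Model each program as a fixed table of guesses; program 0 plays the
# role of the "human", who predicts every bit correctly.
programs = [[random.randint(0, 1) for _ in range(T)] for _ in range(N)]
programs[0] = truth[:]

alive = set(range(N))     # programs not yet ruled out by the data
errors = 0                # rounds where the majority of survivors was wrong

for t, bit in enumerate(truth):
    ones = sum(programs[i][t] for i in alive)
    majority = 1 if 2 * ones > len(alive) else 0
    if majority != bit:
        errors += 1       # at least half of the survivors are about to go
    # update phase: drop every surviving program that predicted wrongly
    alive = {i for i in alive if programs[i][t] == bit}

print(f"majority-vote errors: {errors} (sketch's bound: at most {X})")
```

Each wrong round wipes out at least half of the surviving programs, and the "human" is never eliminated, so the error count can't exceed X.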

The explanation above is a sketch, not quite how SI really works. Rigorous bounds can be found here, in particular at the bottom of page 979 ("we observe that Theorem 2 implies the number of errors of the universal predictor is finite if the number of errors of the informed prior is finite..."). In the case where the number of errors is not finite, the universal and informed priors still have the same asymptotic rate of error growth (the error of the universal prior is in the big-O class of the error of the informed prior).
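For reference, the classic bound of this flavor, stated for a computable environment μ with prefix complexity K(μ) (this is the standard Solomonoff/Hutter result, not a quote of the cited theorem):

$$\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[\big(M(x_t{=}1 \mid x_{<t}) - \mu(x_t{=}1 \mid x_{<t})\big)^{2}\right] \;\le\; \tfrac{\ln 2}{2}\,K(\mu)$$

where M is the universal (Solomonoff) predictor. Since the total squared deviation is bounded, M's probabilities can be far from the true ones only finitely often.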

I don't think this is true. I do agree some conclusions would be converged on by both systems (SI and humans), but I don't think simplicity needs to be one of them.

When I say the 'sense of simplicity' of SI, I use 'simple program' to mean the programs that SI gives the highest weight to in its predictions (these will, by definition, be the shortest programs that haven't been ruled out by the data). The above results imply that, if humans use their own sense of simplicity to predict things, and their predictions do well at a given task, SI will be able to learn their sense of simplicity after a bounded number of errors.
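Concretely, the weighting in question is the Solomonoff prior over programs for a prefix universal machine U (standard definition; the sum is over programs whose output starts with x):

$$M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}, \qquad M(x_t \mid x_{<t}) \;=\; \frac{M(x_{<t}x_t)}{M(x_{<t})}$$

so among the programs still consistent with the data seen so far, the shortest ones carry almost all of the weight.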

How would you ask multiple questions? Practically, you'd save the state and load that state in a new SI machine (or whatever). This means the data is part of the program.

I think you can input multiple questions by just feeding a sequence of question/answer pairs. Actually getting SI to act like a question-answering oracle is going to involve various implementation details. The above arguments are just meant to establish that SI won't do much worse than humans at sequence prediction (of any type) -- so, to the extent that we use simplicity to attempt to predict things, SI will "learn" that sense after at most a finite number of mistakes (in particular, it won't do any *worse* than 'human-SI': hypotheses ranked by the shortness of their English description, then fed to a human predictor).
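A minimal sketch of what "feeding a sequence of question/answer pairs" could look like (a hypothetical encoding of my own, just for concreteness; a real construction would want a properly prefix-free format):

```python
def encode_qa_stream(pairs, new_question):
    """Pack question/answer pairs into one byte stream for a sequence predictor.

    0xFF never occurs in UTF-8 text, so it serves as an unambiguous separator
    here; any other unambiguous delimiting scheme would do just as well.
    """
    SEP = b"\xff"
    stream = b""
    for question, answer in pairs:
        stream += question.encode("utf-8") + SEP + answer.encode("utf-8") + SEP
    # the predictor is then asked to continue the stream, i.e. fill in the answer
    return stream + new_question.encode("utf-8") + SEP


history = [("2+2?", "4"), ("capital of France?", "Paris")]
print(encode_qa_stream(history, "3+3?"))
```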

Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?
I read it as: "why would the simplicity an idea has in one form (code) necessarily correspond to its simplicity when it is in another form (English)? Or, more generally: why would the complexity of an idea stay roughly the same when the idea is expressed through different abstraction layers?"

I think that the argument about emulating one Turing machine with another is the best you're going to get in full generality. You're right that we have no guarantee that the explanation that looks simplest to a human will also look the simplest to a newly-initialized SI, because the 'constant factor' needed to specify that human could be very large.

I do think it's meaningful that there is at most a constant difference between different versions of Solomonoff induction (including "human-SI"). This is because of what happens as the two versions update on incoming data: they will necessarily converge in their predictions, differing at most on a constant number of predictions.
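This is the usual invariance-theorem statement: for any two universal machines there is a cross-compilation constant (the length of an emulator of one machine on the other) such that

$$K_{U_1}(x) \;\le\; K_{U_2}(x) + c_{U_1,U_2} \quad \text{for all } x,$$

and correspondingly the two priors dominate each other up to a fixed multiplicative factor of 2^{±c}.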

So while SI and humans might have very different notions of simplicity at first, they will eventually come to have the same notion, after they see enough data from the world. If an emulation of a human takes X bits to specify, it means a human can beat SI at binary predictions at most X times (roughly) on a given task before SI wises up. For domains with lots of data, such as sensory prediction, this means you should expect SI to converge to giving answers as good as humans relatively quickly, even if the overhead is quite large*.

Our estimates for the data requirements to store a mind are like 10^20 bits

The quantity that matters is how many bits it takes to specify the mind, not store it (storage is free for SI, just like computation time). For the human brain this shouldn't be too much more than the length of the human genome, roughly 3 billion base pairs. Of course, getting your human brain to understand English and have common sense could take a lot more than that.
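Rough arithmetic for that comparison (assuming ~3 × 10^9 base pairs at 2 bits per base):

$$3 \times 10^{9}\ \text{bp} \;\times\; 2\ \text{bits/bp} \;\approx\; 6 \times 10^{9}\ \text{bits} \;\ll\; 10^{20}\ \text{bits}$$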

*Although, those relatively few times when the predictions differ could cause problems. This is an ongoing area of research.

Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?
Some things are quick for people to do and some things are hard. Some ideas have had multiple people continuously arguing for centuries. I think this either means you can't apply a simulation of a person like this, or some inputs have unbounded overhead.

Solomonoff induction is fine with inputs taking unboundedly long to run. There might be cases where the human doesn't converge to a stable answer even after an indefinite amount of time. But if a "simple" hypothesis can have people debating indefinitely about what it actually predicts, I'm okay with saying that it's not actually simple (or that it's too vague to count as a hypothesis), so it's okay if SI doesn't return an answer in those cases.

You should include all levels of abstraction in your reasoning, like raw bytecode. It's both low level and can be written by humans. It's not necessarily fun but it's possible. What about things people design at a transistor level?

Why do you need to include those things? Solomonoff induction can use any Turing-complete programming language for its definition of simplicity; there's nothing special about low-level languages.

I use Haskell and have no idea what you're talking about.

I mean you can pass functions as arguments to other functions and perform operations on them.
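For concreteness, a tiny Python sketch of the same idea (Haskell's higher-order functions work the same way):

```python
def twice(f, x):
    # `f` is itself a function, received as an ordinary argument
    return f(f(x))

def compose(f, g):
    # build and return a brand-new function out of two existing ones
    return lambda x: f(g(x))

increment = lambda n: n + 1

print(twice(increment, 3))               # 5
print(compose(increment, increment)(3))  # 5
```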

Regarding dictionary/list-of-tuples, the point is that you only have to write the abstraction layer *once*. So if you had one programming language with dictionaries built-in and another without, the one with dictionaries gets at most a constant advantage in code-length. In general, two different universal programming languages will have at most a constant difference, as johnswentworth mentioned. This means that SI is relatively insensitive to the choice of programming language: as you see more data, the predictions of two versions of Solomonoff induction with different programming languages will converge.
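A toy illustration of the "write the abstraction layer once" point (a hypothetical mini-library of my own; once these few lines are paid for, later uses of a dictionary cost the same in both languages):

```python
# A dictionary built once on top of a list of (key, value) tuples.
# A language without built-in dicts pays for these few lines a single time;
# afterwards, code that uses put/get is no longer than code using a real dict.

def put(pairs, key, value):
    return [(k, v) for k, v in pairs if k != key] + [(key, value)]

def get(pairs, key, default=None):
    for k, v in pairs:
        if k == key:
            return v
    return default

d = put(put([], "apples", 3), "pears", 5)
print(get(d, "apples"))  # 3
```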

Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?

One somewhat silly reason: for any simple English hypothesis, we can convert it to code by running a simulation of a human, giving them the hypothesis as input, and asking them to predict what will happen next. Therefore the English complexity and the code complexity can differ by at most a constant.
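Written as an inequality (the constant being the length of the hypothetical "simulate a human reading English" program):

$$K_{\text{code}}(h) \;\le\; K_{\text{English}}(h) + c_{\text{human}}$$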

This gives a very loose bound, since it will probably take a lot of bits to specify a human mind. In practice I think the two complexities will usually not differ by too much, because coding languages were designed to be understandable by humans and have syntax similar to human languages. There are some difficult cases like descriptions of visual objects, but even here neural networks should be able to bridge the gap to an extent (in the limit this just becomes the 'encode a human brain' strategy).

Regarding 'levels of abstraction', I'm not sure this is a big obstacle, as most programming languages have built-in mechanisms for changing levels of abstraction; e.g., functional programming languages allow you to treat functions as objects.

Many-worlds versus discrete knowledge

You might be interested in the work of Jess Riedel, whose research agenda is centered around finding a formal definition of wavefunction branches, e.g. https://arxiv.org/abs/1608.05377

Down with Solomonoff Induction, up with the Presumptuous Philosopher

This example seems a little unfair on Solomonoff Induction, which after all is only supposed to predict future sensory input, not answer decision theory problems. To get it to behave as in the post, you need to make some unstated assumptions about the utility functions of the agents in question (e.g. why do they care about other copies and universes? AIXI, the most natural agent defined in terms of Solomonoff induction, wouldn't behave like that).

It seems that in general, anthropic reasoning and decision theory end up becoming unavoidably intertwined (e.g.), and we still don't have a great solution.

I favor Solomonoff induction as the solution to (epistemic) anthropic problems because it seems like any other approach ends up believing crazy things in mathematical (or infinite) universes. It also solves other problems like the Born rule 'for free', and of course induction from sense data generally. This doesn't mean it's infallible, but it inclines me to update towards S.I.'s answer on questions I'm unsure about, since it gets so much other stuff right while being very simple to express mathematically.

The Presumptuous Philosopher, self-locating information, and Solomonoff induction

Another thing: I don't think Solomonoff Induction would give an advantage of log(n) to theories with n observers. In the post you mention taking the discrete integral of 2^{-C(n)} to get log scaling, but this seems to be based on the plain Kolmogorov complexity C(n), for which log(n) is approximately an upper bound. Solomonoff induction uses the prefix complexity K(n), and the discrete integral of 2^{-K(n)} converges to a constant. This means having more copies in the universe can give you at most a constant advantage.
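Spelled out (my rendering of the contrast, with C for plain and K for prefix complexity):

$$C(n) \le \log_2 n + O(1) \;\Longrightarrow\; \sum_{n \le N} 2^{-C(n)} \;\gtrsim\; \sum_{n \le N} \tfrac{1}{n} \;\approx\; \log N, \qquad \text{whereas} \qquad \sum_{n=1}^{\infty} 2^{-K(n)} \;\le\; 1 \;\; \text{(Kraft inequality)}.$$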

(Based on reading some other comments it sounds like you might already know this. In any case, it means S.I. is even more anti-PP than implied in the post.)

The Presumptuous Philosopher, self-locating information, and Solomonoff induction

It seems to me that there are (at least) two ways of specifying observers given a physical world-model, and two corresponding ways this would affect anthropics in Solomonoff induction:

  • You could specify their location in space-time. In this case, what matters isn't the number of copies, but rather their density in space, because observers being more sparse in the universe means more bits are needed to pin-point their location.

  • You could specify what this type of observer looks like, run a search for things in the universe matching that description, then pick one off the list. In this case, again what matters is the density of us (observers with the sequence of observations we are trying to predict) among all observers of the same type.

Which of the two methods ends up being the leading contributor to the Solomonoff prior depends on the details of the universe and the type of observer. But either way, I think the Presumptuous Philosopher's argument ends up being rejected: in the 'searching' case, it seems like different physical theories shouldn't affect the frequency of different people in the universe, and in the 'location' case, it seems that any physical theory compatible with local observations shouldn't be able to affect the density much, because we would perceive any copies that were close enough.
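A rough bit-accounting of the two routes (my own sketch of this comment's reasoning, not a precise statement; here ρ is the spacetime density of matching observers, and N_us/N_type is the fraction of same-type observers sharing our exact observations):

$$\text{bits} \;\approx\; K(\text{physics}) + \log_2 \tfrac{1}{\rho} \qquad\text{or}\qquad \text{bits} \;\approx\; K(\text{physics}) + K(\text{observer type}) + \log_2 \tfrac{N_{\text{type}}}{N_{\text{us}}}$$

Either way it is a density or fraction, not a raw headcount, that enters the prior.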

Reality-Revealing and Reality-Masking Puzzles

The post mentions problems that encourage people to hide reality from themselves. I think that constructing a 'meaningful life narrative' is a pretty ubiquitous such problem. For the majority of people, constructing a narrative where their life has intrinsic importance is going to involve a certain amount of self-deception.

Some of the problems that come from the interaction between these sorts of narratives and learning about x-risks have already been mentioned. To me, however, it looks like some of the AI x-risk memes themselves are partially the result of reality-masking optimization with the goal of increasing the perceived meaningfulness of the lives of people working on AI x-risk. As an example, consider the ongoing debate about whether we should expect the field of AI to mostly solve x-risk on its own. Clearly, if the field can't be counted upon to avoid the destruction of humanity, this greatly increases the importance of outside researchers trying to help them. So to satisfy their emotional need to feel that their actions have meaning, outside researchers have a bias towards thinking that the field is more incompetent than it is, and to come up with and propagate memes justifying that conclusion. People who are already in insider institutions have the opposite bias, so it makes sense that this debate divides to some extent along these lines.

From this perspective, it's no coincidence that internalizing some x-risk memes leads people to feel that their actions are meaningless. Since the memes are partially optimized to increase the perceived meaningfulness of the actions of a small group of people, by necessity they will decrease the perceived meaningfulness of everyone else's actions.

(Just to be clear, I'm not saying that these ideas have no value, that this is being done consciously, or that the originators of said memes are 'bad'; this is a pretty universal human behavior. Nor would I endorse bringing up these motives in an object-level conversation about the issues. However, since this post is about reality-masking problems it seems remiss not to mention.)
