interstice

interstice's Comments

Reality-Revealing and Reality-Masking Puzzles

The post mentions problems that encourage people to hide reality from themselves. I think that constructing a 'meaningful life narrative' is a pretty ubiquitous such problem. For the majority of people, constructing a narrative where their life has intrinsic importance is going to involve a certain amount of self-deception.

The post mentioned some of the problems that come from the interaction between these sorts of narratives and learning about x-risks. To me, however, it looks like at least some of the AI x-risk memes themselves are partially the result of reality-masking optimization, with the goal of increasing the perceived meaningfulness of the lives of people working on AI x-risk. As an example, consider the ongoing debate about whether we should expect the field of AI to mostly solve x-risk on its own. Clearly, if the field can't be counted upon to avoid the destruction of humanity, this greatly increases the importance of outside researchers trying to help it. So, to satisfy their emotional need to feel that their actions have meaning, outside researchers have a bias towards thinking that the field is less competent than it is, and towards coming up with and propagating memes that justify that conclusion. People who are already in insider institutions have the opposite bias, so it makes sense that this debate divides to some extent along these lines.

From this perspective, it's no coincidence that internalizing some x-risk memes leads people to feel that their actions are meaningless. Since the memes are partially optimized to increase the perceived meaningfulness of the actions of a small group of people, by necessity they will decrease the perceived meaningfulness of everyone else's actions.

(Just to be clear, I'm not saying that these ideas have no value, that this is being done consciously, or that the originators of said ideas are 'bad'; this is a pretty universal human behavior. Nor would I endorse bringing up these motives in an object-level conversation about the issues. However, since this post is about reality-masking problems, it seems remiss not to mention them.)

Clarifying The Malignity of the Universal Prior: The Lexical Update

Thanks, that makes sense. Here is my rephrasing of the argument:

Let the 'importance function' I take as inputs two machines M and N, and output all of the places where M is being used as a universal prior, weighted by their effect on N-short programs. Suppose for the sake of argument that there is some short program computing I; this is probably the most 'natural' program of this form that we could hope for.

Even given such a program, we'll still lose to the aliens: in U (the machine whose prior we actually use), directly specifying our important decisions on Earth using I requires feeding U into I twice, once as each argument, costing 2|U| bits, and then some further bits to specify us among the important places. For the aliens, getting them to be motivated to control U-short programs costs |U| bits, but then they can skip directly to specifying us given I, so they save roughly |U| bits over the direct explanation. So the lexical update works.
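To spell out the bit accounting as I'm reading it (a rough sketch; 'bits(us)' stands for whatever it costs to pick our decisions out of I's output, and the cost of specifying the alien civilization itself is ignored, since the malignity argument assumes it is small):

$$\underbrace{2|U| + \text{bits(us)}}_{\text{direct explanation}} \quad \text{vs.} \quad \underbrace{|U| + \text{bits(us)}}_{\text{via the aliens}}$$

The direct route pays for U twice, once as the machine whose prior is being used and once to say whose short programs matter, while the aliens only pay once, for the motivation to target U-short programs; hence the roughly |U|-bit saving.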

(I went wrong in thinking that the aliens would need to both update their notion of importance to match ours *and* locate our world; but if we assume the 'importance function' exists, then the aliens can just pick out our world using our notion of importance.)

Machine Learning Can't Handle Long-Term Time-Series Data

Today's neural networks definitely have trouble with more 'structured' problems, but I don't think that 'neural nets can't learn long time-series data' is a good way of framing this. To go through your examples:

> This shouldn’t have been a major issue, except that with each switch it discarded past observations. Had the car maintained this history it would have seen that some sort of large object was progressing across the street on a collision course, and had plenty of time to stop.

From a brief reading of the report, it sounds like this control logic is part of the system surrounding the neural network, not the network itself.

> One network predicts the odds of winning and another network figures out which move to perform. This turns a time-series problem (what strategy to perform) into a two separate stateless[1] problems.

I don't see why you think this is 'stateless'. AlphaStar's architecture contains an LSTM (the 'Core') whose output is fed into the value and move networks, similar to most time-series applications of neural networks.
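For illustration, here is a minimal PyTorch-style sketch of the pattern I mean (the class name and dimensions are made up, and this is vastly simpler than AlphaStar's real architecture): a recurrent core whose state feeds both a value head and a policy head, so the model is anything but stateless across timesteps.

```python
import torch
import torch.nn as nn

class CorePolicyValue(nn.Module):
    """Toy sketch (NOT AlphaStar itself): a recurrent 'core' whose hidden state
    is shared by a value head and a policy ('move') head, so the model carries
    state across timesteps rather than treating each decision as stateless."""

    def __init__(self, obs_dim=64, hidden_dim=128, n_actions=10):
        super().__init__()
        self.core = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, n_actions)  # which move to perform
        self.value_head = nn.Linear(hidden_dim, 1)            # predicted odds of winning

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim); `state` carries memory between calls
        h, state = self.core(obs_seq, state)
        return self.policy_head(h), self.value_head(h), state

# Usage: feed successive observation windows while threading the LSTM state through.
model = CorePolicyValue()
obs = torch.randn(1, 20, 64)
policy_logits, value, state = model(obs)   # `state` can be passed into the next call
```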

> Most conspicuously, human beings know how to build walls with buildings. This requires a sequence of steps that don’t generate a useful result until the last of them are completed. A wall is useless until the last building is put into place. AlphaStar (the red player in the image below) does not know how to build walls.

But the network does learn how to build its economy, which also doesn't pay off for a very long time. I think the issue here is more about a lack of 'reasoning' skills than about time scales: the network can't think conceptually, and so doesn't know that a wall needs to completely block off an area to be useful. It just learns a set of associations.

> ML can generate classical music just fine but can’t figure out the chorus/verse system used in rock & roll.

MuseNet was trained from scratch on MIDI data, but it's still able to generate music with lots of structure on both short and long time scales. GPT2 does the same for text. I'm not sure if MuseNet is able to generate chorus/verse structures in particular, but again this seems more like an issue of a lack of logic/concepts than of time scales (that is, MuseNet can make pieces that 'sound right' but has no conceptual understanding of their structure).

I'll note that AlphaStar, GPT2, and MuseNet all use the Transformer architecture, which seems quite effective for structured time-series data. I think this is because its attentional mechanism lets it zoom in on the relevant parts of past experiences.
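As a toy illustration of what I mean by 'zooming in' (a sketch of plain dot-product attention, not any particular model's layer): a single query can place almost all of its weight on a relevant timestep even when that step is hundreds of positions back.

```python
import torch

def attention_weights(query, keys):
    """Toy scaled dot-product attention (single query, single head): returns
    how much weight the query places on each past timestep, illustrating how
    attention can pick out relevant steps regardless of how far back they are."""
    d = query.shape[-1]
    scores = keys @ query / d ** 0.5        # one score per past timestep
    return torch.softmax(scores, dim=0)     # weights over past timesteps, summing to 1

# Usage: 1000 past steps; the query most resembles the key at step 3, however distant.
keys = torch.randn(1000, 16)
query = keys[3] + 0.1 * torch.randn(16)
print(attention_weights(query, keys).argmax())   # almost always tensor(3)
```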

I also don't see how connectome-specific waves are supposed to help. I think(?) your suggestion is to store slow-changing data in the largest eigenvectors of the Laplacian -- but why would this be an improvement? It's already the case (by the nature of the matrix) that the eigenvectors of e.g. an RNN's transition matrix with the largest eigenvalues will tend to store data for longer time periods.
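Here's a toy linear example of that last point (purely illustrative, assuming a diagonalizable transition matrix and ignoring nonlinearities): state components along large-eigenvalue directions persist for many steps, while small-eigenvalue components vanish almost immediately.

```python
import numpy as np

# Toy illustration: components of a hidden state along directions with
# eigenvalue magnitude near 1 persist for many steps, while components
# along small-eigenvalue directions decay almost immediately.
slow, fast = 0.99, 0.5                  # two eigenvalue magnitudes
W = np.diag([slow, fast])               # 'transition matrix' written in its eigenbasis
h = np.array([1.0, 1.0])                # initial hidden state
for t in (1, 10, 100):
    print(t, np.linalg.matrix_power(W, t) @ h)
# After 100 steps the 'slow' component is still ~0.37 (0.99**100),
# while the 'fast' component (0.5**100) is effectively zero.
```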

romeostevensit's Shortform

Steroids do fuck a bunch of things up, like fertility, so the fact that they work makes evolutionary sense. This suggests we should look to potentially dangerous or harmful alterations to get real IQ boosts. Greg Cochran has a post suggesting gout might be like this.

Understanding Machine Learning (I)

This seems much too strong: lots of interesting unsolved problems can be cast as i.i.d. Video classification, for example, can be cast as i.i.d. if the distribution is taken over entire videos rather than individual frames.
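A minimal sketch of what that framing looks like (the data and labeling rule here are made up): the i.i.d. unit is the whole (video, label) pair, so the temporal structure within each video is untouched and only whole videos are assumed independent.

```python
import numpy as np

def sample_video_and_label(rng, n_frames=30, h=8, w=8):
    """Each i.i.d. sample is an entire video plus its label, not a single frame."""
    video = rng.normal(size=(n_frames, h, w))   # stand-in for real frames
    label = int(video.mean() > 0)               # stand-in for a class label
    return video, label

rng = np.random.default_rng(0)
dataset = [sample_video_and_label(rng) for _ in range(5)]   # i.i.d. draws of whole videos
```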

Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk

In the analogy, it's only possible to build a calculator that outputs the right answer on non-13 numbers because you already understand the true nature of addition. It might be more difficult if you were confused about addition, and were trying to come up with a general theory by extrapolating from known cases -- then, thinking 6 + 7 = 15 could easily send you down the wrong path. In the real world, we're similarly confused about human preferences, mind architecture, the nature of politics, etc., but some of the information we might want to use to build a general theory is taboo. I think that some of these questions are directly relevant to AI -- e.g. the nature of human preferences is relevant to building an AI to satisfy those preferences, the nature of politics could be relevant to reasoning about what the lead-up to AGI will look like, etc.
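A toy version of the calculator in the analogy (my own illustration, not the original post's construction): it can only special-case the taboo answer because it already computes the true sum internally, and its outputs are exactly the kind of data that would mislead someone trying to infer the rule of addition from them.

```python
def taboo_add(a, b):
    """Toy 'triskaidekaphobic calculator': correct on non-13 sums only because
    the true rule (a + b) is already known; when the true answer is the taboo
    13, it substitutes a permitted-but-wrong answer."""
    s = a + b
    return s if s != 13 else 15   # e.g. claims 6 + 7 = 15

print(taboo_add(2, 2))   # 4  -- correct on non-taboo cases
print(taboo_add(6, 7))   # 15 -- exactly the data point that would send a
                         #       learned theory of addition down the wrong path
```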

What determines the balance between intelligence signaling and virtue signaling?

Fair point, but I note that the cooperative ability only increases fitness here because it boosts the individuals' status, i.e. they are in a situation where status-jockeying and cooperative behavior are aligned. Of course it's true that they _are_ often so aligned.

Many Turing Machines

I agree it's hard to get the exact details of the MUH right, but pretty much any version seems better to me than 'only observable things exist', for the reasons I explained in my comment. And pretty much any version endorses many-worlds (of course you can believe many-worlds without believing MUH). Really this is just a debate about the meaning of the word 'exist'.

What determines the balance between intelligence signaling and virtue signaling?

Ability to cooperate is important, but I think that status-jockeying is a more 'fundamental' advantage because it gives an advantage to individuals, not just groups. Any adaptation that aids groups must first be useful enough to individuals to reach fixation (or near-fixation) in some groups.

Many Turing Machines

You've essentially re-invented the Mathematical Universe Hypothesis, which many people around here do in fact believe.

For some intuition as to why people would think that things that can't ever affect our future experiences are 'real', imagine living in the distant past and watching your relatives travel to a distant land, and assume that long-distance communication such as writing is impossible. You would probably still care about them and think they are 'real', even though by your definition they no longer exist to you. Or if you want to quibble about the slight chance of seeing them again, imagine being in the future and watching them get on a spaceship which will travel beyond your observable horizon. Again, it seems like you would still care about them and consider them 'real'.
