Engineer. Donor to LW 2.0.


The Solomonoff Prior is Malign

> If it's true that simulating that universe is the simplest way to predict our human, then some non-trivial fraction of our prediction might be controlled by a simulation in another universe. If these beings want us to act in certain ways, they have an incentive to alter their simulation to change our predictions.

I find this confusing. I'm not saying it's wrong, necessarily, but it at least feels to me like there's a step of the argument that's being skipped.

To me, it seems like there's a basic dichotomy between predicting and controlling. And this is claiming that somehow an agent somewhere is doing both. (Or actually, controlling by predicting!) But how, exactly?

Is it that:

  • these other agents are predicting us, by simulating us, and so we should think of ourselves as partially existing in their universe? (with them as our godlike overlords who can continue the simulation from the current point as they wish)
  • the Consequentialists will predict accurately for a while, and then, after having gained our trust, make a classic "treacherous turn": they start slipping in wrong predictions designed to influence us rather than to be accurate?
  • something else?

My guess is that it's the second thing (in part from having read, and very partially understood, Paul's posts on this a while ago). But then I would expect some discussion of the "treacherous turn" aspect of it -- of the fact that they have to predict accurately for a while (so that we rate them highly in our ensemble of programs), and only then can they start outputting predictions that manipulate us.
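If it is the treacherous-turn dynamic, here's a toy sketch of how I picture it (my own illustration, not anything from the post -- the three "programs", the prior, the trust threshold, and the update rule are all made-up numbers): a Bayesian mixture in which one program predicts honestly until it dominates the posterior, and only then slips in a manipulated prediction.

```python
def true_bit(t):
    """The actual sequence being predicted: alternating 0, 1, 0, 1, ..."""
    return t % 2

def honest(t, weight):
    return true_bit(t)                 # always accurate

def consequentialist(t, weight):
    if weight > 0.7:                   # trusted enough: treacherous turn
        return 1 - true_bit(t)         # the manipulated prediction
    return true_bit(t)                 # still building trust

def constant_zero(t, weight):
    return 0                           # a simple, often-wrong program

programs = [honest, consequentialist, constant_zero]
weights = [0.2, 0.6, 0.2]              # prior favouring the consequentialist
                                       # (standing in for "simulating the
                                       # universe is the simplest program")
manipulated_steps = []

for t in range(10):
    preds = [p(t, w) for p, w in zip(programs, weights)]
    actual = true_bit(t)

    # The mixture's output: weighted vote of all programs.
    p_one = sum(w for w, pred in zip(weights, preds) if pred == 1)
    mixture_pred = 1 if p_one > 0.5 else 0
    if mixture_pred != actual:
        manipulated_steps.append(t)

    # Bayesian-style update: down-weight programs that predicted wrong.
    weights = [w * (1.0 if pred == actual else 0.5)
               for w, pred in zip(weights, preds)]
    total = sum(weights)
    weights = [w / total for w in weights]

print(manipulated_steps)
```

In this toy run the mixture's prediction is wrong exactly once -- at the step where the consequentialist first crosses the trust threshold -- even though a perfectly honest program sits in the mixture the whole time. If a single manipulated prediction at the moment of maximal trust is all the Consequentialists need, the subsequent loss of posterior weight may not matter to them.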

Is that not the case? Have I misunderstood something?

(Btw, I found the stuff about python^10 and exec() pretty clear. I liked those examples. Thank you! It was just from this point on in the post that I wasn't quite sure what to make of it.)

The rationalist community's location problem

FYI I think your second link is broken.

Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’

I'm not sure I understand your question at the end. Are you asking if people do indeed want to become part of the elite?

If so, it doesn't seem too mysterious to me. People want to be liked, they want to be respected. There are drives both for prestige and dominance. People want the highest quality mates and allies that they can get. Doesn't everything we know about human nature suggest that all else equal, if there are social hierarchies, people will prefer to be at the top of them?

The rationalist community's location problem

> It also rules out Cascadian cities like Portland and Seattle - only marginally better housing costs, worse fires, and worse social decay (eg violence in Portland).

I'm not sure this is so conclusive, regarding Seattle. A few notes --

  1. The rent is 40% less than San Francisco, and 20% less than Berkeley. (And the difference seems likely to continue or increase, because Seattle is willing to build housing.)
  2. There is no state income tax.
  3. While the CHAZ happened in Seattle, my impression is that day-to-day it's much more livable than SF. (I haven't lived there in a few years, but from 2007-2014 I thought it was wonderful.)
  4. If MIRI (or others) want to hire programmers, Seattle is probably the 2nd best market in the US for it. (Think of where the big tech cos all have their first secondary offices. It's all Seattle or NYC.)

Matt Goldenberg's Short Form Feed

> Here's an analogy -- is Hamlet conscious?

Well, Hamlet doesn't really exist in our universe, so my plan for now is to not consider him a consciousness worth caring about. But if you start to deal with harder cases, whether it exists in our universe becomes a trickier question.

Matt Goldenberg's Short Form Feed

Hmm, it's not so much about how similar it is to me as about whether it's on the same plane of existence.

I mean, I guess that's a certain kind of similarity. But I'm willing to impute moral worth to very alien kinds of consciousness, as long as it actually "makes sense" to call them a consciousness. The making sense part is the key issue though, and a bit underspecified.

Matt Goldenberg's Short Form Feed

I don't have a good answer for this. I'm kinda still at the vague intuition stage rather than clear theory stage.

Matt Goldenberg's Short Form Feed

> It seems to me like when it comes to morality, the thing that matters is the reference frame of the consciousness, and not our reference frame (I think some similar argument applies to longtermism).

Given what I mean by "reference frame", I only care about my own reference frame. (Or maybe I care about other frames in proportion to how much they align with mine.) Note that this is not the same thing as egoism.

Matt Goldenberg's Short Form Feed

I don't have a well-developed theory here. But a few related ideas:

  • simplicity matters
  • evolution over time matters -- maybe you can map all the neurons in my head and their activations at a given moment in time to a bunch of grains of sand, but the mapping is going to fall apart at the next moment (unless you include some crazy updating rule, but that violates the simplicity requirement)
  • accessibility matters -- I'm a bit hesitant on this one. I don't want to say that someone with locked in syndrome is not conscious. But if some mathematical object that only exists in Tegmark V is conscious (according to the previous definitions), but there's no way for us to interact with it, then maybe that's less relevant.