CarlShulman

Comments

"New EA cause area: voting"; or, "what's wrong with this calculation?"

That is the opposite error, where one cuts off the close-election cases. The joint probability density function over vote totals is smooth because of uncertainty (which you can see from polling errors), so your chance of being decisive scales inversely with the size of the electorate and with the margin of error in the polling estimate.

"New EA cause area: voting"; or, "what's wrong with this calculation?"

The error is a result of assuming the coin is exactly 50%; in fact, polling uncertainties mean your probability distribution over its 'weighting' is smeared over at least several percentage points. E.g. if your credence from polls/538/prediction markets is smeared uniformly from 49% to 54%, then the chance of the election being decided by a single vote is roughly one divided by 5% of the number of voters.

You can see your assumption is wrong because it predicts that tied elections should be many orders of magnitude more common than they are. There is a symmetric error where people assume that the coin has a weighting away from 50%, so the chances of your vote mattering approach zero. Once you have a reasonable empirical distribution over voting propensities, fit to reproduce actual election margins, both these errors go away.
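
A minimal numerical sketch of the two models (the electorate size here is an illustrative assumption; the 49%–54% smear is the example above):

```python
import math

# Compare P(your vote is decisive, i.e. roughly an exact tie among the
# other voters) under two models: a coin that is exactly 50%, versus a
# coin whose weighting is uncertain, smeared uniformly over 49%-54%.
N = 1_000_000  # assumed electorate size, for illustration only

# Model 1: p fixed at exactly 0.5 -> P(tie) ~ sqrt(2 / (pi * N))
p_tie_fair_coin = math.sqrt(2 / (math.pi * N))

# Model 2: p ~ Uniform(0.49, 0.54). The density of the vote share at 50%
# is 1/0.05, and an exact tie is one outcome out of ~N possible totals,
# so P(tie) ~ 1 / (0.05 * N), i.e. one divided by 5% of the # of voters.
p_tie_smeared = 1 / (0.05 * N)

print(f"Exactly-50% coin: P(decisive) ≈ {p_tie_fair_coin:.2e}")
print(f"Smeared 49%-54%:  P(decisive) ≈ {p_tie_smeared:.2e}")
# The fair-coin model gives a far higher probability of a near-tie than
# the smeared model, and the gap grows with electorate size.
```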

See Andrew Gelman's papers on this.

The Upper Limit of Value

As I said, the story was in combination with one-boxing decision theories and our duplicate counterparts.

The Upper Limit of Value

I suppose by 'the universe' I meant what you would call the inflationary multiverse, that is including distant regions we are now out of contact with. I personally tend not to call regions separated by mere distance separate universes.

"and the only impact of our actions with infinite values is the number of black holes we create."

Yes, that would be the infinite impact I had in mind: doubling the number of black holes would double the number of infinite branching trees of descendant universes.

Re simulations, yes, there is indeed a possibility of influencing other levels, although we would be more clueless there, and it is a way for us to be in a causally connected patch with an infinite future.

The Upper Limit of Value

Thanks David, this looks like a handy paper! 

"Given all of this, we'd love feedback and discussion, either as comments here, or as emails, etc."

I don't agree with the argument that infinite impacts of our choices are of Pascalian improbability; in fact, I think we probably face them as a consequence of one-boxing decision theory. Some of the more plausible routes to local infinite impact are missing from the paper:

  • The decision theory section misses the simplest argument for infinite value: in an infinite inflationary universe with infinite copies of me, my choices are multiplied infinitely. If I would one-box on Newcomb's Problem, then I should take the difference between eating the sandwich and not eating it to be scaled up infinitely. I think this argument is in fact correct and follows from our current cosmological models combined with one-boxing decision theories.
  • Under 'rejecting physics' I didn't see any mention of baby universes, e.g. Lee Smolin's cosmological natural selection. If that picture, or any other on which we can affect the formation of new universes/inflationary bubbles, were right, then that would permit infinite impacts.
  • The simulation hypothesis is a plausible way for our physics models to be quite wrong about the world in which the simulation is conducted, and further there would be reason to think simulations would be disproportionately conducted under physical laws that are especially conducive to abundant computation.

What trade should we make if we're all getting the new COVID strain?

Little reaction to the new strain news, or little reaction to new strains outpacing vaccines and getting a large chunk of the population over the next several months?

The Colliding Exponentials of AI

These projections in figure 4 seem to falsely assume that compute-optimal training compute scales linearly with model size. It doesn't: you also need to show larger models more data, so training compute grows superlinearly with model size, as discussed in the OpenAI scaling papers. That changes the results by orders of magnitude (there is uncertainty about which of two inconsistent scaling trends to extrapolate further out, as discussed in the papers).
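
A minimal sketch of the difference. The C ≈ 6·N·D approximation for transformer training FLOPs is standard; the data-scaling exponent and the baseline model/token counts below are illustrative assumptions, not figures from the post or the scaling papers:

```python
# Compare naive "compute scales linearly with params" against compute
# that also accounts for the training data growing with model size.
# C ≈ 6 * N * D (common approximation for transformer training FLOPs).

DATA_EXPONENT = 0.4   # assumed: tokens D scale roughly as N**DATA_EXPONENT
BASE_PARAMS = 1.5e9   # assumed baseline model size
BASE_TOKENS = 3e11    # assumed baseline training tokens

def naive_flops(params):
    """Training compute if the data budget is (incorrectly) held fixed."""
    return 6 * params * BASE_TOKENS

def scaled_flops(params):
    """Training compute if required data grows with model size."""
    tokens = BASE_TOKENS * (params / BASE_PARAMS) ** DATA_EXPONENT
    return 6 * params * tokens

for scale in [1, 100, 10_000]:
    n = BASE_PARAMS * scale
    ratio = scaled_flops(n) / naive_flops(n)
    print(f"{scale:>6}x params: naive {naive_flops(n):.2e} FLOPs, "
          f"with data scaling {scaled_flops(n):.2e} FLOPs ({ratio:.0f}x higher)")
```

The gap between the two curves widens with model size, which is why a linear extrapolation understates compute requirements by orders of magnitude at the far end of the projection.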

Are we in an AI overhang?

"Maybe the real problem is just that it would add too much to the price of the car?"

Yes. GPU/ASICs in a car will have to sit idle almost all the time, so the costs of running a big model on it will be much higher than in the cloud.
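
A rough utilization sketch of the cost gap (all numbers here are assumptions for illustration, not figures from the discussion):

```python
# Why per-hour hardware cost is much higher for an accelerator that sits
# idle in a car than for one kept busy in the cloud. Numbers are assumed.

HARDWARE_COST = 5_000      # assumed: accelerator + integration cost, USD
LIFETIME_YEARS = 5         # assumed amortization period

CAR_UTILIZATION = 0.04     # assumed: car in use ~1 hour/day
CLOUD_UTILIZATION = 0.60   # assumed: typical datacenter accelerator load

def cost_per_busy_hour(utilization):
    """Amortized hardware cost per hour the accelerator is actually used."""
    busy_hours = LIFETIME_YEARS * 365 * 24 * utilization
    return HARDWARE_COST / busy_hours

car = cost_per_busy_hour(CAR_UTILIZATION)
cloud = cost_per_busy_hour(CLOUD_UTILIZATION)
print(f"Car:   ${car:.2f} per busy hour")
print(f"Cloud: ${cloud:.2f} per busy hour ({car / cloud:.0f}x cheaper)")
```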

Rafael Harth's Shortform

I'm not a utilitarian, although I am closer to that than most people (scope sensitivity goes a long way in that direction), and find it a useful framework for highlighting policy considerations (but not the only kind of relevant normative consideration).

And no, Nick did not assert an estimate of x-risk as simultaneously P and <P.
