Lanrian

Anthropics: different probabilities, different questions
SIA isn't needed for that; standard probability theory will be enough (as our becoming grabby is evidence that grabbiness is easier than expected, and vice-versa).
I think there's a confusion with SIA and reference classes and so on. If there are no other exact copies of me, then SIA is just standard Bayesian update on the fact that I exist. If theory T_i has prior probability p_i and gives a probability q_i of me existing, then SIA changes its probability to q_i*p_i (and renormalises).

Yeah, I agree with all of that. In particular, SIA updating on us being alive on Earth is exactly as if we sampled a random planet from space, discovered it was Earth, and discovered it had life on it. Of course, there are also tons of planets that we've seen that don't look like they have life on them.

But "Earth is special" theories also get boosted: if a theory claims life is very easy but only on Earth-like planets, then those also get boosted.

I sort-of agree with this, but I don't think it matters in practice, because we update down on "Earth-like planets are unlikely" when we first observe that the planet we sampled was Earth-like.


Here's a model: Assume that there's a conception of "Earth-like planet" such that life-on-Earth is exactly equal evidence for life emerging on any Earth-like planet, and 0 evidence for life emerging on other planets. This is clearly a simplification, but I think it generalises. "Earth-like planet" could be any rocky planet, any rocky planet with water, any rocky planet with water that was hit by an asteroid X years into its lifespan, etc.

Now, if we sample a planet (Earth) and notice that it's Earth-like and has life on it, we do two updates:

  • Noticing that Earth is an Earth-like planet should update us towards thinking that Earth-like planets are common in the universe.
  • Noticing that life emerged on Earth should update us towards thinking that life has a high probability of emerging on Earth-like planets.

If we don't know anything else about the universe yet, these two updates should collectively imply an update towards life-is-common that is just as big as if we hadn't done this decomposition, and had just updated directly on the question "how common is life?" in the first place.

Now, let's say we start observing the rest of the universe. Let's assume this happens via sampling random planets and observing (a) whether they are/aren't Earth-like (b) whether they do/don't have life on them.

  • If we sample a non-Earth-like planet, we update towards thinking that Earth-like planets aren't common.
  • If we sample an Earth-like planet without life, we update towards thinking that Earth-like planets have a lower probability of supporting life.

I haven't done the math, but I'm pretty sure that it doesn't matter which of these we observe: the update on "How common is life?" will be the same regardless. So the existence of "Earth is special"-hypotheses doesn't matter for our best guess about "How common is life?", if we only consider the impact of observing planets with/without Earth-like features and life.
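
Here's a minimal sketch of how one might actually do that math, under a toy prior that is my own assumption (uniform and independent over the fraction of Earth-like planets and over the chance of life on an Earth-like planet). It just computes how each kind of observation moves the posterior expectation of "how common is life?":

```python
# Toy model (my own sketch, not from the comment above):
# f = fraction of planets that are Earth-like, q = P(life | Earth-like planet).
# "How common is life?" then corresponds to f*q.
import numpy as np

grid = np.linspace(0.001, 0.999, 400)
f, q = np.meshgrid(grid, grid)      # hypotheses (f, q)
prior = np.ones_like(f)             # uniform, independent prior (an assumption)

def posterior_mean_life(likelihood):
    """Posterior expectation of f*q after one observation with the given likelihood."""
    post = prior * likelihood
    post /= post.sum()
    return (post * f * q).sum()

print("prior E[f*q]:               ", posterior_mean_life(np.ones_like(f)))
print("saw a non-Earth-like planet: ", posterior_mean_life(1 - f))
print("saw Earth-like, no life:     ", posterior_mean_life(f * (1 - q)))
print("saw Earth-like, with life:   ", posterior_mean_life(f * q))
```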


Of course, observing planets isn't the only way we can learn about the universe. We can also do science, and reason about the likely reasons that life emerged, and how common those things ought to be.

That means that if you can come up with a strong theoretical argument (one that isn't just based on observing how many planets are Earth-like and/or had life on them, Earth included) that some feature of Earth significantly boosts the probability of life and that that feature is extremely rare in the universe at large, then that would be a solid argument for expecting life to be rare in the universe. However, note that you'd have to argue that the feature is extremely rare. If we're assuming that grabby aliens could travel across many galaxies, then we've already observed evidence that grabby life is rare enough not to have appeared yet on any of a very large number of planets in any of a very large number of galaxies. Your theoretical reasons to expect life to be rare would have to assert that it's even rarer than that to affect the results.

Anthropics: different probabilities, different questions

Good point, I didn't think about that. That's the old SIA argument for there being a late filter.

The reason I didn't think about it is that I use SIA-like reasoning in the first place because it pays attention to the stakes in the right way: I think I care almost-proportionally more about acting correctly in universes with more copies of me. But I also care more about universes where civilisations-like-Earth are more likely to colonise space (i.e. become grabby), because that means that each copy of me can have more impact. That kind-of cancels out the SIA argument for a late filter, mostly leaving me with my priors, which point toward a decent probability that any given civilisation colonises space in a grabby manner.

Also: if Earth-originating intelligence ever becomes grabby, that's a huge Bayesian update in favor of other civilisations becoming grabby, too. So regardless of how we describe the difference between T1 and T2, SIA will definitely think that T1 is a lot more likely once we start colonising space, if we ever do that.

Anthropics: different probabilities, different questions
But by "theory of the universe", Robin Hanson meant not only the theory of how the physical universe was, but the anthropic probability theory. The main candidates are SIA and SSA. SIA is indifferent between T1 and T2. But SSA prefers T1 (after updating on the time of our evolution).

SIA is not indifferent between T1 and T2. There are way more humans in world T1 than in world T2 (since T2 requires life to be very uncommon, which would imply that humans are even more uncommon), so SIA thinks world T1 is much more likely. After all, the difference between SIA and SSA is that SIA thinks that universes with more observers are proportionally more likely; so SIA will always think aliens are more likely than SSA does.
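
As a toy illustration of that weighting (my own hypothetical numbers, not anything from the comment or from Robin Hanson's work), SIA's update is just "prior times number of observers-like-me, then renormalise":

```python
# Hypothetical observer counts, purely for illustration.
priors = {"T1_life_common": 0.5, "T2_life_rare": 0.5}
n_observers = {"T1_life_common": 1e9, "T2_life_rare": 1.0}

# SIA: weight each hypothesis by prior * number of observers-like-me, then renormalise.
unnormalised = {h: priors[h] * n_observers[h] for h in priors}
total = sum(unnormalised.values())
posterior = {h: weight / total for h, weight in unnormalised.items()}
print(posterior)  # T1 ends up overwhelmingly favoured, despite equal priors
```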

Previously, I thought this was in conflict with the fact that humans didn't seem to be particularly early (i.e., if life is common, it's surprising that there aren't any aliens around 13.8 billion years into the universe's lifespan). I ran the numbers, and concluded that SIA still thought that we'd be very likely to encounter aliens (though most of the linked post instead focuses on answering the decision-relevant question "how much of potentially-colonisable space would be colonised without us?", evaluated ADT-style).

After having read Robin's work, I now think humans probably are quite early, which would imply that (given SIA/ADT) it is highly overdetermined that aliens are common. As you say, Robin's work also implies that SSA agrees that aliens are common. So that's nice: no matter which of these questions we ask, we get a similar answer.

Decoupling deliberation from competition

Thanks, computer-speed deliberation being a lot faster than space-colonisation makes sense. I think any deliberation process that uses biological humans as a crucial input would be a lot slower, though; slow enough that it could well be faster to get started with maximally fast space colonisation. Do you agree with that? (I'm a bit surprised at the claim that colonisation takes place over "millennia" at technological maturity; even if the travelling takes millennia, it's not clear to me why launching something maximally fast – which you presumably already know how to build, at technological maturity – would take millennia. Though maybe you could argue that millennia-scale travelling time implies millennia-scale variance in your arrival time, in which case launching decades or centuries after your competitors doesn't cost you too much expected space?)

If you do agree, I'd infer that your mainline expectation is that we successfully enforce a worldwide pause before mature space-colonisation, since the OP suggests that biological humans are likely to be a significant input into the deliberation process, and since you think that the beaming-out-info schemes are pretty unlikely.

(I take your point that, as far as space-colonisation is concerned, such a pause probably isn't strictly necessary.)

Decoupling deliberation from competition

I'm curious about how this interacts with space colonisation. The default path of efficient competition would likely lead to maximally fast space-colonisation, to prevent others from grabbing it first. But this would make deliberating together with other humans a lot trickier, since some space ships would go to places where they could never again communicate with each other. For things to turn out ok, I think you either need:

  • to pause before space colonisation.
  • to finish deliberating and bargaining before space colonisation.
  • to equip each space ship with the information necessary for deciding what to do with the space they grab. In order of increasing ambitiousness:
    • You could upload a few leaders' or owners' brains (or excellent predictive models thereof) and send them along with their respective colonisation ships, hoping that they will individually reach good decisions without deliberating with the rest of humanity.
    • You could also equip each colonisation ship with the uploads of all other human brains that they might want to deliberate with (or excellent predictive models thereof), so that they can use those other humans as discussion partners and as data for their deliberation-efforts.
    • You could also set up these uploads in a way that makes them figure out what bargain would have been struck on Earth, and then have each space ship individually implement this. Maybe this happens by default with acausal trade; or maybe everyone in some reasonably big coalition could decide to follow the decision of some specified deliberative process that they don't have time to run on Earth.
  • to use some communication scheme that lets you send your space ships ahead to compete in space, and then lets you send instructions to your own ships once you've finished deliberating on Earth.
    • E.g. maybe you could use cryptography to ensure that your space ships will only follow instructions signed with the right key, which you only send out once you've finished bargaining; a rough sketch of this idea is below. (Though I'm not sure if your bargaining-partners would be able to verify how your space ships would react to any particular message, so maybe this wouldn't work without significant prior coordination.)
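
Here's a rough sketch of that signing idea (assuming Ed25519 signatures via the Python cryptography package; this is just one way it could look, not something specified in the comment above):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# On Earth, before launch: generate a keypair and install the public key on each ship.
earth_key = ed25519.Ed25519PrivateKey.generate()
ship_trusted_key = earth_key.public_key()

# Once deliberation/bargaining on Earth has finished: sign the instructions and beam them out.
instructions = b"hypothetical post-bargain instructions for the colonised region"
signature = earth_key.sign(instructions)

# On the ship: only act on messages whose signature verifies against the trusted key.
def ship_accepts(message: bytes, sig: bytes) -> bool:
    try:
        ship_trusted_key.verify(sig, message)
        return True
    except InvalidSignature:
        return False

assert ship_accepts(instructions, signature)
assert not ship_accepts(b"forged instructions from a competitor", signature)
```

(This only covers authenticity; it doesn't address the harder problem flagged above of letting bargaining-partners verify how a ship would respond to a given signed message.)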

I'm curious whether you're optimistic about any of these options, or if you have something else in mind.

(Also, all of this assumes that defensive capabilities are a lot stronger than offensive capabilities in space. If offense is comparably strong, then we also have the problem that the cosmic commons might be burned in wars if we don't pause or reach some other agreement before space colonisation.)

Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare
And yet I'd guess that none of these were/are on track to reach human-level intelligence. Agree/disagree?

Uhm, haven't thought that much about it. Not imminently, maybe, but I wouldn't exclude the possibility that they could be on some long-winded path there.

It feels like it really relies on this notion of "pretty smart" though

I don't think it depends that much on the exact definition of "pretty smart". If we have a broader notion of what "pretty smart" is, we'll have more examples of pretty smart species in our history (most of which haven't reached human-level intelligence). But this means both that the evidence indicates that each pretty smart species has a smaller chance of reaching human-level intelligence, and that we should expect many more pretty smart species in the future. E.g. if we've seen 30 pretty smart species (instead of 3) so far, we should expect maybe M=300 pretty smart species (instead of 30) to appear over Earth's history. Humans still evolved from a species among the first 10%, which is still an update towards N~=M/10 over N>>M.

The required assumptions for the argument are just:

  • humans couldn't have evolved from a species with a level of intelligence less than X
  • species with X intelligence started appearing t years ago in evolutionary history
  • there are t' years left where we expect such species to be able to appear
  • we assume the appearance rate of such species to be either constant or increasing over time

Then, "it's easy to get humans from X" predicts t<<t' while "it's devilishly difficult to get humans from X" predicts t~=t' (or t>>t' if the appearance rate is strongly increasing over time). Since we observe t<<t', we should update towards the former.
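
As a quick numerical illustration of this timing argument (using my own toy numbers and a deliberately crude model, not anything from the comment):

```python
# Toy numbers (assumptions): species with X-level intelligence started appearing
# t = 50 million years ago, and could keep appearing for another t' = 500 million years.
t, t_prime = 50e6, 500e6

# "Devilishly difficult": humans arise at a roughly uniform point in the whole window,
# so landing within its first t years has probability t / (t + t').
p_obs_given_hard = t / (t + t_prime)

# "Easy": humans arise shortly after the window opens, so observing t << t'
# is roughly what we expect; call it ~1 for this crude comparison.
p_obs_given_easy = 1.0

print(p_obs_given_easy / p_obs_given_hard)  # likelihood ratio of ~11 favouring "easy"
```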

This is the argument that I was trying to make in the grand-grand-grand-parent. I then reformulated it from an argument about time into an argument about pretty smart species in the grand-parent to mesh better with your response.

Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare
The claim I'm making is more like: for every 1 species that reaches human-level intelligence, there will be N species that get pretty smart, then get stuck, where N is fairly large

My point is that – if N is fairly large – then it's surprising that human-level intelligence evolved from one of the first ~3 species that became "pretty smart" (primates, dolphins, and probably something else).

If Earth's history were to contain M>>N pretty smart species, then in expectation human-level intelligence should appear around the Nth pretty smart species. If Earth's history were to contain M<<N pretty smart species, then human-level intelligence should have an equal probability of appearing in any of the pretty smart species, so in expectation it should appear around the (M/2)th pretty smart species.

Becoming "pretty smart" is apparently easy (because we've had >1 pretty smart species evolve so far), so over the rest of Earth's history we would expect plenty more species to become pretty smart. If we expect M to be non-trivial (like maybe 30), then the fact that the 3rd pretty smart species reached human-level intelligence is evidence in favor of N~=2 over N>>M.

(Just trying to illustrate the argument at this point; not confident in the numbers given.)
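
A tiny sketch of that likelihood comparison (my own toy model and numbers): suppose each pretty smart species independently reaches human-level intelligence with probability 1/N, and compare how surprising "the 3rd pretty smart species made it" is under a small N versus a huge N:

```python
def p_first_success_at(k: int, n: float) -> float:
    """P(the k-th pretty smart species is the first to reach human-level intelligence),
    assuming each one independently succeeds with probability 1/n."""
    return (1 - 1 / n) ** (k - 1) * (1 / n)

p_small_n = p_first_success_at(3, 2)    # N ~= 2
p_large_n = p_first_success_at(3, 300)  # N >> M
print(p_small_n / p_large_n)            # likelihood ratio of ~38 in favour of small N
```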

AMA: Paul Christiano, alignment researcher

I'm curious about the extent to which you expect the future to be awesome-by-default as long as we avoid all clear catastrophes along the way, vs the extent to which you think we just have a decent chance of getting a non-negligible fraction of all potential value (and working to avoid catastrophes is one of the most tractable ways of improving the expected value).

Proposed tentative operationalisation:

  • World A is just like our world, except that we don't experience any ~GCR on Earth in the next couple of centuries, and we solve the problem of making competitive intent-aligned AI.
  • In world B, we also don't experience any GCR soon and we also solve alignment. In addition, you and your chosen collaborators get to design and implement some long-reflection-style scheme that you think will best capture the aggregate of human and non-human desires. All coordination and cooperation problems on Earth are magically solved. Though no particular values are forced upon anyone, everyone is happy to stop and think about what they really want, and contribute to exercises designed to illuminate this.

How much better do you think world B is compared to world A? (Assuming that a world where Earth-originating intelligence goes extinct has a baseline value of 0.)

Covid 3/25: Own Goals
which is about 20% of the cases in Europe right now (see Luxembourg data)

Do you have a link? (I can't find one by googling.)

The strategy-stealing assumption

Categorising the ways that the strategy-stealing assumption can fail:

  • Humans don't just care about acquiring flexible long-term influence, because
    • 4. They also want to stay alive.
    • 5 and 6. They want to stay in touch with the rest of the world without going insane.
    • 11. They also just have a lot of other preferences.
    • (maybe Wei Dai's point about logical time also goes here)
  • It is intrinsically easier to gather flexible influence in pursuit of some goals, because
    • 1. It's easier to build AIs to pursue goals that are easy to check.
    • 3. It's easier to build institutions to pursue goals that are easy to check.
    • 9. It's easier to coordinate around simpler goals.
    • plus 4 and 5 insofar as some values require continuously surviving humans to know what to eventually spend resources on, and some don't.
    • plus 6 insofar as humans are otherwise an important part of the strategic environment, such that it's beneficial to have values that are easy-to-argue.
  • Jessica Taylor's argument requires that the relevant games are zero-sum. Since this isn't true in the real world:
    • 7. A threat of destroying value (e.g. by threatening extinction) could be used as a bargaining tool, with unpredictable outcomes.
    • ~8. Some groups actively want other groups to have fewer resources, in which case they can try to reduce the total amount of resources more or less actively.
    • ~8. Smaller groups have less incentive to contribute to public goods (such as not increasing the probability of extinction), but benefit equally from larger groups' contributions, which may lead to them getting a disproportionate fraction of resources by defecting in public-goods games.