If we had the ability to create one machine capable of centrally planning our current world economy, how much processing power / memory would it need to have? Interested in some Fermi estimates.

To which I would reply: this is AI-complete, at which point the AI would solve the problem by taking control of the future. That's far easier than actually resolving the Socialist Calculation Debate.

 

As a data point, Byrne Hobart argues in Amazon sees like a state that Amazon is approximately solving the economic calculation problem (ECP) of the Socialist Calculation Debate when it sets prices for the goods on its online marketplace.

The US's 2022 GDP of $25,462B is about 116x Amazon's 2022 online store revenue of $220B.

Assuming that the compute needed scales polynomially with the size of the economy[1], it would take roughly 500,000x the compute Amazon uses (for its own marketplace, excluding AWS) to approximately solve the ECP for the US economy (to the same level of approximation as "Amazon approximately solves the ECP"[2]).

In practice, I expect Amazon's "approximate solution" to scale much more gently[3], so maybe a factor of merely 800x.
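As a rough back-calculation (plain arithmetic on the figures above; the exponents are inferred from the stated multipliers rather than part of the original estimate), a 500,000x compute multiplier over a 116x size ratio corresponds to an effective scaling exponent of about 2.8, and an 800x multiplier to about 1.4:

    import math

    # Back-calculation: given an economy-size ratio r and an assumed compute
    # multiplier m, the implied scaling exponent k satisfies r**k = m.
    gdp_us = 25_462      # US 2022 GDP, $B
    rev_amazon = 220     # Amazon 2022 online-store revenue, $B
    r = gdp_us / rev_amazon                      # ~116x size ratio

    for multiplier in (500_000, 800):
        k = math.log(multiplier) / math.log(r)   # implied scaling exponent
        print(f"{multiplier:>7,}x compute -> exponent ~ {k:.2f}")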

  1. ^

    The economic calculation problem (ECP) is a convex optimisation problem (by convexity of utility functions), which can be solved by linear programming in polynomial time; see the toy sketch after these footnotes.

  2. ^

    Which is not a vastly superhuman level of approximate solution? For example, Amazon is no longer growing fast enough to double every 4 years. Its marketplace also doesn't particularly incentivise R&D, while I think a god-like AI would incentivise R&D.

  3. ^

    I just made it up.
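As a toy illustration of footnote 1 (a deliberately miniature sketch with made-up numbers, not anything Amazon actually does): a central planner allocating two goods under labour and material constraints, maximising a linear welfare proxy, can be written as a linear program and handed to an off-the-shelf solver.

    import numpy as np
    from scipy.optimize import linprog

    # Miniature "planner": choose quantities of two goods to maximise a linear
    # welfare proxy subject to resource constraints. linprog minimises, so the
    # welfare coefficients are negated.
    welfare = np.array([3.0, 2.0])      # welfare per unit of good A, good B
    A_ub = np.array([[1.0, 2.0],        # labour used per unit of each good
                     [4.0, 1.0]])       # material used per unit of each good
    b_ub = np.array([100.0, 120.0])     # available labour, material

    res = linprog(c=-welfare, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)])
    print("quantities:", res.x, "welfare:", -res.fun)

The real ECP is vastly larger and its true objective is not linear, so this is only meant to make the "solve it as an optimisation problem" framing concrete.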

Ah, increasing the number of researchers simply increases one of the parameters in that equation. I didn't realize that!

Minor comment on one small paragraph:

Price's Law says that half of the contributions in a field come from the square root of the number of contributors. In other words, productivity increases linearly as the number of contributors increases exponentially. Therefore, as the number of AI safety researchers increases exponentially, we might expect the total productivity of the AI safety community to increase linearly.

I think Price's law is false, but I don't know what law should replace it. I'll look at the literature on the rate of scientific progress (e.g. Cowen & Southwood (2019)) to see whether I can find any relationship between the number of researchers and research productivity.


Price's law is a poor fit; Lotka's law is a better fit

The most prominent citation for Price's law, Nicholls (1988), says that Price's law is a poor fit (section 4: Validity of the Price Law):

Little empirical investigation of the Price law has been carried out to date [4,14]. Glänzel and Schubert [12] have reported some empirical results. They analyzed Lotka's Chemical Abstracts data and found that the most prolific √n authors contributed less than 20% of the total number of papers. They also refer, but without details, to the examination of "several dozens" of other empirical data sets and conclude that "in the usually studied populations of scientists, even the most productive authors are not productive enough to fulfill the requirements of Price's conjecture" [12]. Some incidental results of scientometric studies suggest that about 15% of the authors will be necessary to generate 50% of the papers [16,17].

To further examine the empirical validity of Price's hypothesis, 50 data sets were collected and analyzed here. ... the contribution of the most prolific √n group of authors fell considerably short of the [50% of the papers] predicted by Price. ... The actual proportion of all authors necessary to generate at least 50% of the papers was found to be much larger than √n. Table 2 summarizes these results. In some cases, ..., more than half of the total number of papers is generated by those authors contributing only a single paper each. The absolute and relative size of this most prolific group for various population sizes is given in Table 3. All the empirical results referred to here are consistent; and, unfortunately, there seems little reason to suppose that further empirical results would offer any support for the Price law.

Nicholls (1988) continues, saying that Lotka's law (the number of authors with n publications is proportional to n^(−a)) has good empirical support, and reports fitted values of the exponent a that differ between the sciences and humanities on the one hand and the social sciences on the other.

A different paper, Chung & Cox (1990), also finds that Price's law is a poor fit, while Lotka's law with a between 1.95 and 3.26 is a good fit in finance.

(Allison, Price, Griffith, Moravcsik & Stewart (1976) discusses the mathematical relationship between Price's Law and Lotka's Law: neither implies the other; nor are they contradictory.)
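A quick simulation makes the mismatch concrete (my own sketch with made-up parameters, not data from any of the papers above): sampling author productivities from a Lotka-type distribution p(n) ∝ n^(−a) and asking what share of papers the most prolific √N authors produce gives figures well under the 50% that Price's law predicts, especially for larger a.

    import numpy as np

    rng = np.random.default_rng(0)

    def top_sqrt_share(a=2.0, n_authors=100_000, max_papers=1_000):
        """Share of papers produced by the top sqrt(N) authors under Lotka's law."""
        counts = np.arange(1, max_papers + 1)
        probs = counts.astype(float) ** (-a)      # Lotka: P(n papers) ∝ n^(-a)
        probs /= probs.sum()
        papers = rng.choice(counts, size=n_authors, p=probs)
        papers.sort()
        top = int(np.sqrt(n_authors))             # Price: top sqrt(N) authors
        return papers[-top:].sum() / papers.sum()

    for a in (2.0, 2.5, 3.0):
        print(f"a = {a}: top-sqrt(N) share = {top_sqrt_share(a):.1%}")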


Later edits:

Porby, in his post Why I think strong general AI is coming soon, mentions a tangentially related idea: core researchers contribute much more insight than newer researchers. New researchers need a lot of time to become core researchers.

In Porby's model, research productivity at year t may be proportional to the number of researchers at some earlier year t − Δ, where the lag Δ is the time it takes a new researcher to become a core researcher.
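A minimal sketch of that lagged model (my own toy numbers: exponential growth in headcount and an assumed 5-year lag):

    # Lagged-productivity toy model: headcount grows exponentially, but
    # productivity tracks the headcount from LAG years earlier, since new
    # researchers are not yet "core".
    GROWTH = 1.3   # assumed yearly growth factor in researcher headcount
    LAG = 5        # assumed years for a new researcher to become core

    def headcount(year, initial=10.0):
        return initial * GROWTH ** year

    def productivity(year):
        return headcount(year - LAG) if year >= LAG else 0.0

    for year in range(0, 21, 5):
        print(f"year {year:2d}: headcount {headcount(year):7.1f}, "
              f"productivity ~ {productivity(year):7.1f}")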

Intuition pump / generalising from fictional evidence: in the games Pandemic / Plague Inc. (where the player "controls" a pathogen and attempts to infect the whole human population on Earth), a lucky, early cross-border infection can help you win the game faster — more than the difference between a starting infected population of 1 vs 100,000.

This informs my intuition about when the bonus of earlier spaceflight (through human help) could outweigh the penalty of not dismantling Earth.


When might human help outweigh the penalty of not dismantling Earth? It requires these conditions:

1. The AGI can very quickly reach an alternative source of materials: AGI spaceflight is superhuman.

  • AGI spacecraft, once in space, can reach e.g. the Moon within hours and the Sun within a day
  • The AGI is willing to wait for additional computational power (it can wait until it has reached the Sun), but it really wants to leave Earth quickly

2. The AGI's best alternative to a negotiated agreement is to lie in wait initially: AGI ground operations are initially weaker than human.

  • In the initial days, humans could reliably prevent the AGI from building or launching spacecraft
  • In the initial days, the AGI is vulnerable to human action, and would have chosen to lie low rather than begin dismantling Earth in earnest

3. If there is a negotiated agreement, then human help (or nonresistance) can allow the AGI to launch its first spacecraft days earlier.

  • Relevant human decision makers recognize that the AGI will eventually win any conflict, and decide to instead start negotiating immediately
  • Relevant human decision makers can effectively coordinate multiple parts of the economy (to help the AGI), or (nonresistance) can effectively prevent others from interfering with the initially weak AGI

I now think that the conjunction of all these conditions is unlikely, so I agree that this negotiation is unlikely to work.

Answer by puffymist, Jun 23, 2022

Even if we're already doomed, we might still negotiate with the AGI.

I borrow an idea from Astronomical Waste. The Virgo Supercluster has a luminosity of about 3 × 10^12 solar luminosities ≈ 10^39 W, so it loses mass (as radiated energy) at a rate of roughly 10^22 kg/s.[1]

The Earth has a mass of about 6 × 10^24 kg.

If human help (or nonresistance) can allow the AGI to effectively start up (and begin space colonization) 600 seconds = 10 minutes earlier, then it would be mutually beneficial for humans to cooperate with the AGI (in the initial stages when the AGI could benefit from human nonresistance), in return for the AGI to spare Earth[2] (and, at minimum, give us fusion technology to stay alive when the sun is dismantled).

(While the AGI only needs to trust humanity for 10 minutes, humanity needs to trust the AGI eternally. We still need good enough decision-making to cooperate.)
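(A back-of-envelope check of the 10-minute figure, using the approximate values above; the luminosity figure is itself an order-of-magnitude estimate, so treat this only as a consistency check.)

    # Order-of-magnitude check: how long does the Virgo Supercluster take to
    # radiate away one Earth mass of mass-energy?
    C = 3.0e8                  # speed of light, m/s
    L_SUN = 3.8e26             # solar luminosity, W
    luminosity = 3e12 * L_SUN              # assumed supercluster luminosity, W
    mass_loss_rate = luminosity / C**2     # E = mc^2  ->  ~1e22 kg/s
    earth_mass = 6.0e24                    # kg

    print(f"mass loss rate ~ {mass_loss_rate:.1e} kg/s")
    print(f"time to radiate one Earth mass ~ {earth_mass / mass_loss_rate:.0f} s")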

  1. ^

    We may choose to consider the reachable universe instead. Armstrong and Sandberg (2013) (section 4.4.2, Reaching into the universe) estimate how many galaxies we could reach, giving a far larger total luminosity and mass loss rate. Even that is dwarfed by the roughly 20,000 stars that become unreachable every second as cosmic expansion carries them beyond our reach (Siegel (2021), Kurzgesagt (2021)).

  2. ^

    Starting earlier but sparing Earth means a space colonization progress curve that starts earlier but initially increases more slowly. The AGI requires that space colonization progress with human help be asymptotically at least 10 minutes ahead, that is:

    For any sufficiently large time t: (progress with human help at time t) ≥ (progress with human resistance at time t + 600 s).

I think the dramatic impact would be stronger without the "The end", with more blank space added instead.

Idea copied from a comment on the final chapter of Three Worlds Collide.