AnthonyC


Comments

Setting aside the specific scenario discussed in recent news stories, there are ways craft could change hands through interactions that don't involve limited means or inscrutable goals. Some examples:

A combination gift/test: see what we can learn from some samples, and prove we're worthy of joining the interstellar community as they observe what we choose to do with the gains.

A calculated technology transfer, where we get very old junk and then gradually newer junk as our understanding improves. Essentially foreign development aid across a much larger wealth and tech gap.

Anything that's crashed, or that we've otherwise obtained, was either meant to be disposable or was so old it was meant to crash long ago. I always assume that if there are aliens watching Earth, they've had automated probes here for millions of years at a minimum.

The existence of numerous alien craft on Earth would make me much less concerned about the more cosmic variety of AI Doom scenarios. Any extraterrestrial visitors that haven't destroyed us seem likely to possess both the means and inclination to prevent us from doing anything that would propagate harms beyond Earth.

On Minting the Coin:

This is effectively a nuclear option, which I suspect could only ever be used once before it got taken away. So why would anyone do it for a mere $1T? Used well, it's the power to prove the modern monetary theorists right, and to make Congress responsible for all inflation and deflation, through fiscal policy controlling the money supply, forever. I think there's a way to communicate to the public: "I am sworn to uphold the Constitution, which both guarantees the validity of the national debt and gives Congress sole power to decide how much money gets spent and taxed. My hands are tied; I must use the means at my disposal to do as Congress has required. All Congress has to do to ensure this coin is not inflationary is ensure that from now on its taxing and spending decisions are stable and appropriate."

Then go mint a $1 septillion coin, ensuring the Treasury never needs to borrow money for the next millennium.

I agree the point about freeing up resources and shifting incentives as we make progress is very important. 

Also, if your numbers of 21 million OpenAI users at $700k/day are right, that's about $1/user/month, not $30/user/day. Unless I'm just misreading this.
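The arithmetic behind that correction can be checked directly (a quick sanity check; the 21-million-user and $700k/day figures are the ones quoted above, not independently verified):

```python
# Sanity check of the per-user cost implied by the figures quoted above:
# 21 million users and $700k/day in operating cost. Both numbers come from
# the post under discussion, not from an independent source.
users = 21_000_000
cost_per_day = 700_000  # dollars

per_user_per_day = cost_per_day / users
per_user_per_month = per_user_per_day * 30

print(f"${per_user_per_day:.3f}/user/day")      # ~$0.033/user/day
print(f"${per_user_per_month:.2f}/user/month")  # ~$1.00/user/month
```

So the implied figure is on the order of a dollar per user per month, nowhere near $30/user/day.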

Ah, sorry, I see I made an important typo in my comment: the 16% value I mentioned was supposed to be 46%, because it referred to the chip fabs & power requirements estimate.

The rest of the comment after that was my way of saying "the fact that these dependencies on common assumptions between the different conditional probabilities exist at all means you can't really claim that you can multiply them all together and consider the result meaningful in the way described here."

I say that because the dependencies mean you can't productively discuss disagreements about any of the assumptions that go into your estimates without adjusting all the probabilities in the model. A single updated assumption or estimate breaks the claim of conditional independence that lets you multiply the probabilities.

For example, in a world that actually had "algorithms for transformative AGI" that were just too expensive to use productively, what would happen next? Well, my assumption is that a lot more companies would hire a lot more humans to work on making them more efficient, using the best available less-transformative tools. A lot of governments would invest trillions in building the fabs and power plants and mines to build it anyway, even if it still cost $25,000/human-equivalent-hour. They'd then turn the AGI loose on the problem of improving its own efficiency. And on making better robots. And on using those robots to make more robots and build more power plants and mine more materials. Once producing more inputs is automated, supply stops being limited by human labor, and doesn't require more high-level AI inference either. The cost of inputs into increasing AI capabilities becomes decoupled from the human economy, so the price of electricity and compute in dollars plummets. This is one of many hypothetical pathways where a single disagreement renders the subsequent numbers moot. Presenting the final output as a single number hides the extreme sensitivity of that number to changes in key underlying assumptions.

Dropping the required compute by, say, two OOMs changes the estimate of how many fabs and how much power will be needed from "massively more than expected from business as usual" to "not far from business as usual," i.e. that 16% would need to be >>90%, because by default the capacity would exist anyway. The same change would have the same kind of effect on the "<$25/hr" assumption. At that scale, "just throw more compute at it" becomes a feasible enough solution that "learns slower than humans" stops seeming like a plausible problem as well. I think you might be assuming you've made these estimates independently when they're actually still being calculated from common assumptions.
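To put a toy number on that sensitivity claim (all probabilities below are invented purely for illustration, not taken from the paper being discussed): when several "conditional" factors are multiplied as though independent, revising one shared assumption drags several factors with it, and the product can swing by more than an order of magnitude.

```python
# Toy illustration of how a chain of multiplied "conditional" probabilities
# reacts when one shared assumption changes. All numbers are invented for
# illustration only.
baseline = [0.6, 0.4, 0.16, 0.16, 0.4]  # five conditional estimates
p_baseline = 1.0
for p in baseline:
    p_baseline *= p

# Suppose required compute drops ~2 OOMs. The fab/power factor and the
# cost-per-hour factor were both conditioned on that compute assumption,
# so they jump toward 1 together -- they were never independent of it.
revised = [0.6, 0.4, 0.9, 0.9, 0.6]
p_revised = 1.0
for p in revised:
    p_revised *= p

print(round(p_baseline, 4))            # ~0.0025
print(round(p_revised, 4))             # ~0.1166
print(round(p_revised / p_baseline))   # one changed assumption, ~47x shift
```

The point isn't the specific numbers; it's that a single disputed assumption moves multiple factors at once, which a single headline probability conceals.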

The mountain can be metaphorical. It doesn't have to be impossibly distant or impossibly tall. You don't have to be Newton and re-derive calculus from scratch to learn to appreciate what he actually did, what it means, and why it matters. You still have to look at the water.  This, it turns out, is a difficult skill to convey. So is stepping away from the equations and learning to feel the forces in your bones. But without that, it's questionable whether the thing being learned is "science" in the sense of ability-to-understand-reality.

One concern about aligning executive compensation is that it's especially hard to get executives to care about what happens after they die, unless their perpetual bonds, unlike a regular pension, go to their heirs. Even then, they or their heirs can sell those bonds, no? At least in the US, we have laws setting time limits on constraints on how heirs can use or dispose of property left to them.

And if an eternal company's growth is slow by necessity, then the smart move would be investing the proceeds from perpetual bonds in a diverse portfolio of market-traded faster-growth companies. I understand this is something university endowments sometimes do to get around restrictions on how some funds can be used.

When I look at the world's actually existing very old companies, I think the end state for an eternal company might look something like Sumitomo Group: diversified enough to survive systemic and idiosyncratic shifts to any subset of its interests, interdependent enough for mutual support to survive downturns and finance needed changes internally, and willing to divest parts of itself when needed. (Kinda like the world's oldest trees (and largest fungi), which have many above-ground bodies that die all the time, but interconnected, wide-spanning root systems and shared DNA.) A lot of long-lived companies are (or at least were) family businesses motivated to preserve intergenerational wealth, like Merck.

Do we, or should we expect to, see any signs that these kinds of companies are unusually motivated to reduce existential risks?

Thanks, the parts I've read so far are really interesting! 

I would point out that the claim that we will greatly slow down, rather than scale up, electricity production capacity is also a claim that we will utterly fail to come anywhere close to hitting global decarbonization goals. Most major sectors will require much more electricity in a decarbonized world, raising total production (not just capacity) somewhere between 3x and 10x in the next few decades. This is much more than the additional power that would be needed to increase chip production as described, or to power the needed compute. The amount of silicon needed for solar panels also dwarfs that needed for wafers (yes, I know, very different quality thresholds, but still). Note that because solar power is currently the cheapest source of electricity in the world, and still falling in price, we should also expect electricity production costs to go down over time.

I think it's also worthwhile to note that the first "we" in the list means humans, but every subsequent "we" includes the transformative AI algorithm we've learned how to build, and all the interim less capable AIs we'll be building between now and then. Not sure if/how you're trying to account for this?

Doing all that would be very bad. Not Venus, but enough to greatly decrease Earth's carrying capacity for humans and everything else, for a long time. We really should want that not to happen. But methane clathrate release turns out not to be as rapidly self-reinforcing as once feared, and at this point there's no longer an economic or technological reason to think we'll need to keep using fossil fuels and cutting down rainforests long enough to get to that point. We're already seeing slowdowns in net deforestation, partly because some countries now show net reforestation. Rainforests in particular are still being lost faster, partly because they exist mostly in less developed countries, but even there the loss rate is (2020 aside) slowing.
