Last week we wrapped the second post-AGI workshop; I'm copying across some reflections I put up on twitter:
>proper global UBI is *enormously* expensive (h/t @yelizarovanna)
This seems wrong. There will be huge amounts of wealth post-ASI. Even a relatively small UBI (e.g. one funded by a 1% share of AI companies) would be enough to support way better QOL for everyone on earth. Moreover, everything will become way cheaper because of efficiency gains downstream of AI. Even just at AGI, I think it's plausible that physical labour becomes something like 10x cheaper and cognitive labour something like 1000x cheaper.
Sorry! I realise now that this point was a bit unclear. My sense of the expanded claim is something like: even if a proper global UBI is small as a proportion of future GDP, it would in absolute terms be tremendously large.
For my part, I found this surprising because I hadn't reflected on the sheer orders of magnitude involved, or on the fact that any version of this basically involves passing through some fragile craziness.
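To make the orders of magnitude concrete, here's a back-of-envelope sketch; the per-person amount and the "100x larger economy" multiplier are my own illustrative assumptions, not figures from the workshop:

```python
# Back-of-envelope: absolute scale of a global UBI (all figures are illustrative assumptions).
world_population = 8e9           # roughly today's population
ubi_per_person_year = 10_000     # assumed transfer in USD/year; pick your own number
current_world_gdp = 100e12       # world GDP is on the order of $100T/year today

annual_cost = world_population * ubi_per_person_year
print(f"Annual cost: ${annual_cost / 1e12:.0f}T")                        # ~$80T
print(f"Vs current world GDP: {annual_cost / current_world_gdp:.0%}")    # ~80%

# Even against a hypothetical 100x-larger post-AGI economy, the absolute flow stays enormous
# while shrinking to a small share of GDP -- which is the tension in the thread above.
future_world_gdp = 100 * current_world_gdp
print(f"Vs a 100x-larger future GDP: {annual_cost / future_world_gdp:.1%}")  # ~0.8%
```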
I separately think there was something important to Korinek's claim (which I can't fully regenerate) that the relevant thing isn't really whether stuff is 'cheaper', but rather the prices of all of these goods relative to everything else going on.
I was also there, and my take is that there was actually fairly little specific, technical discussion about the economics and politics of what happens post-AGI. That's mostly because it isn't anyone's job to think about these questions, and only somewhat because they're inherently hard questions. Not really sure what I would change.
> it seems like the main reason people got less doomy was seeing that other people were working hard on the problem [...]
This would be v surprising to me!
It seems like, to the extent that we're less doomy about survival/flourishing, this isn't bc we've seen a surprising amount of effort and think effort is v correlated with success. It's more like: our observations increase our confidence that the problem was easy all along, or that we have been living in a 'lucky' world all along.
I might ask you about this when I see you next -- I didn't attend the workshop so maybe I'm just wrong here.
You mean post-AGI and pre-ASI?
I agree that will be a tricky stretch even if we solve alignment.
Post-ASI, the only question is whether it's aligned, or intent-aligned to a good person (or people). It takes care of the rest.
One solution is to push fast from AGI to ASI.
With an aligned ASI, other concerns are largely (understandable) failures of the imagination. The possibilities are nearly limitless. You can find something to love.
This is under a benevolent sovereign, though. The intuitively appealing balances of power seem really tough to stabilize long term, or even short term during takeoff.
I'm not at all surprised by the assertion that humans share values with animals. When you consider that selective pressures act on all systems (which is to say that every living system has to engage with the core constraints of visibility, cost, memory, and strain), it's not much of a leap to conclude that there would be shared attractor basins where values converge over evolutionary timescales.