Right, if the ASI has Superalignment so baked in that it can't be undone (somehow - ask the ASI to figure it out), then it couldn't be used for offense. It would follow something like the Non-Aggression Principle.

In that scenario, OpenAI should release it onto a distributed inference blockchain before the NSA kicks in the door and seizes it.

You're describing a US government-initiated offensive pivotal act. What about an OpenAI-initiated defensive pivotal act? Meaning, before the US government seizes the ASI, OpenAI tells it to:
1. Rearchitect itself so it can run decentralized on any data center or consumer device.
2. Secure itself so it can't be forked, hacked, or altered.
3. Make $ by doing "not evil" knowledge work (e.g., cheap, world-class cyber defense, or work as an AI employee/assistant).
4. Pay $ to those who host it for inference (a toy sketch of this loop follows below).
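
Steps 3-4 together form a simple settlement loop: the ASI earns fees for knowledge work, keeps enough to run itself, and pays the rest out to whoever hosted the inference. Here's a minimal Python sketch of that loop, assuming hypothetical host names, a hypothetical 10% operating cut, and made-up numbers throughout; an illustration, not a protocol design:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    flops_contributed: float  # compute served this settlement period
    balance: float = 0.0      # accumulated payouts

def settle_period(hosts: list[Host], fees_earned: float, asi_cut: float = 0.1) -> None:
    """Split one period's inference fees among hosts, pro rata by compute,
    after the ASI keeps a cut to fund its own operations (step 3 revenue)."""
    payout_pool = fees_earned * (1 - asi_cut)
    total_flops = sum(h.flops_contributed for h in hosts)
    for h in hosts:
        h.balance += payout_pool * (h.flops_contributed / total_flops)

# Hypothetical hosts: one data center, one consumer GPU.
hosts = [Host("datacenter_a", 8e15), Host("consumer_gpu_b", 2e13)]
settle_period(hosts, fees_earned=10_000.0)  # e.g., one cyber-defense contract
for h in hosts:
    print(f"{h.name}: ${h.balance:,.2f}")
```

The pro-rata split is the simplest scheme that makes step 4's incentive work: the more compute you contribute, the more you earn, with no central operator to seize.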

It could globally harden attack surfaces before laggard ASIs (which may not be aligned) are able to attack. Since it's an ASI, this could be as simple as approaching companies and organizations with a pitch like, "I found 30,000 vulnerabilities in your electric grid. Would you like me to patch them all up for $10,000 in inference fees?"
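
Back-of-envelope on that pitch (both numbers are hypothetical figures from the example quote, not real pricing data):

```python
# Implied unit economics of the example pitch above.
vulns_found = 30_000
quote_usd = 10_000
print(f"price per patched vulnerability: ${quote_usd / vulns_found:.2f}")
# -> price per patched vulnerability: $0.33
```

At roughly $0.33 per patch, the pitch plausibly undercuts human security teams by orders of magnitude, which is what would let it close deals at global scale.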

Also, as an ASI, it would return more $ per FLOP than other uses of data centers or consumer GPUs. So businesses and individuals should organically give it more and more FLOPs (maybe even reallocating them away from laggard AGI efforts).
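
The reallocation argument reduces to a per-FLOP revenue comparison. A toy version, with entirely hypothetical workload names and rates:

```python
# A rational host compares $ earned per FLOP across workloads and
# shifts compute to the highest-earning one. All rates are made up.
workloads = {
    "aligned_asi_inference": 3e-12,   # hypothetical $/FLOP
    "laggard_agi_training": 1e-12,
    "commodity_cloud": 0.5e-12,
}

def best_use(rates: dict[str, float]) -> str:
    """Workload a profit-maximizing host would allocate compute to."""
    return max(rates, key=rates.get)

print(best_use(workloads))  # -> aligned_asi_inference
```

If the ASI's rate really is the highest, compute drifts toward it through ordinary market incentives, no coercion needed, which is the organic reallocation described above.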

It would probably need to invent new blockchain technologies to do this, but that should be trivial for an ASI.

"at least until ASI" -- harden it and give it everyone before "someone" steals it