After lots of dedicated work and a few very close calls, a somewhat aligned AGI is created with a complex utility function based on utilitarian-like values. It immediately amplifies its intelligence and ponders what its first steps should be.

Upon some thought, it notices a few strange edge cases in physics. The universe seems fairly limited, unless…

The chances of expanding far beyond the known universe seem slim, but with enough thought and experimentation there might be a way. The probability is tiny, but the payoff could be enormous.

The AGI proceeds to spend all of the available resources in the universe to improve its model of fundamental physics.

Comments

There is a section of an interesting talk by Anna Salamon relating to this. It makes the point that if the AI's ability to improve its model of fundamental physics is not linear in the amount of the universe it controls, such an AI would be at least somewhat risk-averse (with respect to gambles that give it different proportions of our universe).
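A minimal sketch of that point, assuming Python and a hypothetical square-root "research value" function (the true shape of the returns curve is unknown): with sublinear returns, a sure middling share of the universe beats an all-or-nothing gamble with the same expected share.

```python
import math

def research_value(fraction_controlled: float) -> float:
    # Hypothetical concave value function: returns to controlling more of
    # the universe grow sublinearly. The real curve is, of course, unknown.
    return math.sqrt(fraction_controlled)

sure_half = research_value(0.5)                                         # keep half for certain
all_or_nothing = 0.5 * research_value(1.0) + 0.5 * research_value(0.0)  # 50/50 gamble

print(f"sure 50% of the universe: {sure_half:.3f}")       # ~0.707
print(f"50/50 all-or-nothing:     {all_or_nothing:.3f}")  # 0.500
# Concavity makes the sure thing win, i.e. the agent is risk-averse
# over proportions of the universe it controls.
```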

This is an example of a Pascal's mugging: tiny probabilities of vast rewards can produce weird behavior. The best-known solutions are either a bounded utility function or an anti-Pascaline agent (an agent that ignores the best x% and worst y% of possible worlds when calculating expected utilities, though such an agent can be money-pumped).
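As a toy comparison (Python, with entirely made-up probabilities and payoffs, and a simple truncation rule standing in for the anti-Pascaline idea), here is how the three decision rules treat a long-shot gamble:

```python
# A long-shot gamble: tiny chance of a vast payoff, otherwise a small loss.
long_shot = [(1e-9, 1e15), (1 - 1e-9, -1.0)]   # (probability, utility) pairs
status_quo = [(1.0, 0.0)]

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def bounded_eu(outcomes, cap=100.0):
    # Bounded utility function: clip utilities before taking the expectation.
    return sum(p * max(-cap, min(cap, u)) for p, u in outcomes)

def anti_pascaline_eu(outcomes, low_tail=0.01, high_tail=0.01):
    # Ignore the worst `low_tail` and best `high_tail` of probability mass,
    # renormalize, and take the expectation over what remains.
    # (As noted above, an agent like this can be money-pumped.)
    kept = sorted(outcomes, key=lambda pu: pu[1])
    for tail, direction in ((low_tail, 1), (high_tail, -1)):
        trimmed, to_drop = [], tail
        for p, u in kept[::direction]:
            drop = min(p, to_drop)
            to_drop -= drop
            if p > drop:
                trimmed.append((p - drop, u))
        kept = trimmed[::direction]
    total = sum(p for p, _ in kept)
    return sum(p * u for p, u in kept) / total

for name, rule in [("unbounded EU", expected_utility),
                   ("bounded EU", bounded_eu),
                   ("anti-Pascaline", anti_pascaline_eu)]:
    choice = "long shot" if rule(long_shot) > rule(status_quo) else "status quo"
    print(f"{name:15} -> {choice}")
# Only the unbounded expected-utility maximizer chases the tiny probability.
```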

Or a low-impact AI. "Don't break what you can't fix given your current level of knowledge and technology."

Yup, in general, I agree.
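One toy way to read the low-impact suggestion above (a rough sketch with made-up numbers; the single "irreversibility" score is my own simplification, not any specific proposal): penalize actions by how hard they would be to undo, so irreversible long shots lose even when their raw expected utility looks large.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_utility: float   # raw expected utility of the action
    irreversibility: float    # 0 = fully reversible, 1 = can never be undone

def low_impact_score(option: Option, penalty_weight: float = 1e9) -> float:
    # Heavily penalize anything the agent could not fix afterwards.
    return option.expected_utility - penalty_weight * option.irreversibility

options = [
    Option("run small, cheap physics experiments", 1.0, 0.001),
    Option("convert all available matter into experiments", 1e6, 1.0),
]

print(max(options, key=low_impact_score).name)
# -> "run small, cheap physics experiments"
```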

It's well known that precision is lost at the extremes of calculation. I presume the AGI has appropriate error bars or confidence scores for its estimates of effort and reward for such things, and that it's making the correct judgement to maximize utilitarian value.

This sounds like a grand success to me!
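To make the precision point above concrete (Python doubles, made-up magnitudes): naively multiplying a tiny probability by a vast payoff can overflow or lose precision entirely, which is one reason explicit error bars matter; working in log space is a common workaround.

```python
import math

prob = 1e-320     # a subnormal double: already storing fewer significant bits
payoff = 1e310    # past the double-precision maximum: silently becomes inf

print(payoff)           # inf
print(prob * payoff)    # inf -- the "expected value" is now meaningless
# Doing the same calculation in log space keeps it finite and comparable:
log_ev = math.log(prob) + 310 * math.log(10)
print(math.exp(log_ev)) # ~1e-10, the actual expected value
```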