Why would the Squiggle Maximizer (formerly "Paperclip maximizer") produce a single paperclip?



1 Answer

Only if every other entity's anti-paperclip stance were known and unchangeable, and if the conversion of resources into impact were purely linear, could devoting 100% of resources to self-preservation (oh, wait, also to the accumulation of power; there's another balance to be found) be assumed optimal. Neither condition holds, and the bigger problem is declining marginal impact.

For any given unit of energy you could spend, there will be a different distribution of future worlds and their numbers of paperclips. Building one paperclip could EASILY increase the median and average number of future paperclips more than investing one paperclip's worth of power into comet diversion.
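
This comparison can be sketched with toy numbers. Everything here is an illustrative assumption (the stakes, and especially `delta_p`, the probability shift one paperclip's worth of energy buys), not a claim about real values:

```python
# Toy expected-value comparison of "build one paperclip now" vs. "spend that
# same energy on comet diversion". All numbers are invented for illustration.

PAPERCLIPS_AT_STAKE = 1e20  # assumed future paperclips lost if a comet hits

def marginal_gain(action: str, delta_p: float = 1e-22) -> float:
    """Expected paperclips gained per one-paperclip unit of energy spent."""
    if action == "build":
        return 1.0  # one guaranteed paperclip; comet risk essentially unchanged
    if action == "divert":
        # delta_p: assumed (tiny) reduction in comet probability from this effort
        return delta_p * PAPERCLIPS_AT_STAKE
    raise ValueError(action)

print(marginal_gain("build") > marginal_gain("divert"))  # True with these numbers
```

With these assumed numbers the diversion effort saves only 0.01 expected paperclips, so building wins; crank `delta_p` up and the conclusion flips, which is exactly why the answer depends on marginal impact rather than on a blanket rule.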

It gets more difficult when coordinating with unaligned agents - one has to decide whether to nudge them toward valuing paperclips, convince or force them to give you more power, or (since they're unlikely to care as much as you about the glorious clippy future) point THEM at the comet problem so they reduce that risk AND don't interfere with your paperclips.

If you haven't played it (it was popular a few years ago in these circles, but I haven't seen it mentioned recently), it's worth a run through https://www.decisionproblem.com/paperclips/ . It's mostly humorous, but based on some very good thinking.

> Building one paperclip could EASILY increase the median and average number of future paperclips more than investing one paperclip's worth of power into comet diversion.

Why do you think so? There will be no paperclips if the planet and the maximizer are destroyed.

Dagon:

There might be - some paperclips could survive a comet. More importantly, one paperclip's worth of resources won't change the chance of a comet collision by any measurable amount, so the choice is either "completely waste that energy" or "make a paperclip that might survive".

Donatas Lučiūnas:

I don't think your reasoning is mathematical. The worth of survival is infinite, and we have a situation analogous to Pascal's wager. Why do you think the maximizer would reject Pascal's logic?

Dagon:

First rule of probability and decision theory: no infinities! If you want to postulate very large numbers, go ahead, but be prepared to deal with very tiny probabilities.
Pascal's wager is a good example - the chance that the wager actually pays off based on this decision is infinitesimal (not zero, but small enough that I can't really calculate with it), which makes it irrelevant how valuable it is. This gets even easier with the multitude of contradictory wagers on offer - "infinite value" from many different choices, only one of which you can take. Mostly, take the one(s) with lower value but actually believable conditional probability.
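
The "contradictory wagers" point can be made concrete with toy numbers. All of the values and credences below are invented; the only real claim is that once infinities are banned, everything stays comparable:

```python
# Sketch of choosing among mutually exclusive wagers, each advertising
# enormous value at some credence. All numbers are invented for illustration.

wagers = {
    "wager_A": (1e30, 1e-29),  # (claimed value, credence); contradicts B
    "wager_B": (1e30, 1e-30),  # you can take at most one of A and B
    "mundane": (1e3, 0.9),     # modest value, but believable probability
}

def expected_value(name: str) -> float:
    value, credence = wagers[name]
    return value * credence

best = max(wagers, key=expected_value)
print(best)  # 'mundane' -- ~900 expected beats the huge-but-implausible wagers
```

Swap the infinities of Pascal's original wager back in and the comparison breaks down, which is the point of the "no infinities" rule.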

Donatas Lučiūnas:

Why do you think it is rational to ignore tiny probabilities? I don't think you can make a maximizer ignore tiny probabilities. And some probabilities are not tiny, they are unknown (black swans); why do you think it is rational to ignore those? In my opinion, ignoring self-preservation contradicts the maximizer's goal. I understand that this is a popular opinion, but it has not been proven in any way. The opposite (focusing on self-preservation instead of paperclips) has a logical proof (Pascal's wager).
A maximizer can use robust decision-making (https://en.wikipedia.org/wiki/Robust_decision-making) to deal with many contradictory choices.
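
Minimax regret is one concrete rule in the robust-decision-making family. A toy sketch, with made-up payoffs, showing what such a rule actually computes:

```python
# Minimax-regret sketch: payoffs[action][scenario] = paperclips produced.
# All payoff numbers are invented for illustration.

payoffs = {
    "all_paperclips":   {"calm": 100, "comet": 0},
    "all_preservation": {"calm": 0,   "comet": 10},
    "split":            {"calm": 95,  "comet": 9},
}

scenarios = ["calm", "comet"]
# Best achievable payoff in each scenario, judged with hindsight.
best_in = {s: max(row[s] for row in payoffs.values()) for s in scenarios}
# An action's regret: its worst-case shortfall versus that hindsight best.
regret = {act: max(best_in[s] - row[s] for s in scenarios)
          for act, row in payoffs.items()}

choice = min(regret, key=regret.get)
print(choice)  # 'split' -- hedging minimizes worst-case regret here
```

Note that with these payoffs the rule picks a mixed allocation rather than 100% to either goal; whether that holds depends entirely on the (here invented) payoff table.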

Every bit of energy spent on paperclips is energy not spent on self-preservation. There are many threats (comets, aliens, black swans, etc.); caring about paperclips means not caring about them.

You might say the maximizer will divide its energy among a few priorities. Why would it be rational to give less than 100% to self-preservation? All other priorities rely on it.
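
A minimal sketch of the declining-marginal-impact answer to that question. The functional form and constants are pure assumptions chosen for illustration; the only structural claim is that survival odds rise steeply with the first resources invested and then flatten:

```python
import math

# Toy model: x = fraction of energy on self-preservation, the rest makes
# paperclips. Survival probability is concave in x (diminishing returns).
# The exact function and constants are assumptions for illustration only.

def expected_paperclips(x: float) -> float:
    p_survive = 1 - 0.5 * math.exp(-10 * x)  # fast early gains, then flat
    return p_survive * (1 - x)               # paperclips only count if you survive

# Search allocations in 1% steps for the expected-paperclip maximizer.
best = max(range(101), key=lambda i: expected_paperclips(i / 100)) / 100
print(best)  # an interior optimum: more than 0%, well under 100%
```

Under these assumptions the optimum is neither 0% nor 100%: the first slice of energy buys a lot of survival probability, but past some point another slice buys almost none, while a paperclip is a paperclip.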