Interesting. I agree with all your reasoning, but my plausibility judgements of the implications seem to be the reverse of yours, so I came away with the opposite conclusion: that well-being is clearly capped.

I think you and the linked post might have mismatched definitions of reward. It seems like your definition is that reward is whatever the AI values, whereas the linked post uses "reward" to mean the reward function specified by the programmers and used to train the AI.

As for using FLOP as a plural noun, that's how other units work: we write 5 m for 5 meters, 5 s for 5 seconds, 5 V for 5 volts, etc., so it's not that weird.

If human behaviour is fully determined by the laws of the universe, then you have no choice in whether you assign moral blame, so it doesn't make sense to discuss whether we should or shouldn't do so.

I think you're right that your pennies become more valuable the less you have. Suppose you start with $m$ money and your utility function is $u$. Say the original lottery's ticket costs $C$ and its prize is worth $B$ in expected utility. Assuming the original lottery was not worth playing, then $u(m) - u(m - C) \ge B$, which rearranges to $\frac{u(m) - u(m - C)}{C} \ge \frac{B}{C}$. This can be thought of as saying the average slope of the utility function from $m - C$ to $m$ is greater than some constant $k = B/C$.

For the second lottery, each ticket you buy means you have less money. Say its tickets cost $c < C$ each and the prize is worth $b = kc$ in expected utility, so each ticket offers the same expected utility per dollar as the original lottery. Then the utility cost of the first lottery ticket is $u(m) - u(m - c)$, the second $u(m - c) - u(m - 2c)$, the third $u(m - 2c) - u(m - 3c)$, and so on. If the first ticket is worth buying, then $u(m) - u(m - c) \le b$, so $\frac{u(m) - u(m - c)}{c} \le k$. This means the average slope of the utility function from $m - c$ to $m$ is less than the average slope from $m - C$ to $m$, so if the utility function is continuous, there must be some other point in the interval $[m - C, m - c]$ where the slope is greater than that average. This corresponds to a ticket that is no longer worth buying because it's an even worse deal than the single ticket from the original lottery.
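To make this concrete, here's a minimal numeric sketch of the argument using log utility; all of the numbers ($m = 100$, a 50-dollar original ticket with a prize worth 0.69 utils, and 1-dollar tickets at the same rate) are illustrative assumptions of mine rather than anything from the post.

```python
import math

# A quick check of the argument above, assuming log utility and made-up
# numbers: m, C, B, and c are illustrative, not from the original post.
u = math.log         # concave utility, so pennies gain value as you lose them
m = 100.0            # starting wealth
C, B = 50.0, 0.69    # original lottery: ticket price, prize's expected utility
k = B / C            # expected utility per dollar offered by both lotteries
c, b = 1.0, k * 1.0  # second lottery: cheap tickets at the same rate

# Original lottery: the ticket's utility cost exceeds the prize's expected
# utility, so it's not worth playing.
print(u(m) - u(m - C) > B)  # True

# Second lottery: keep buying while the marginal ticket is worth it.
wealth, tickets = m, 0
while u(wealth) - u(wealth - c) <= b:
    wealth -= c
    tickets += 1
print(tickets, wealth)  # 28 72.0: the 29th ticket is no longer worth buying
```

Even though every cheap ticket pays the same expected utility per dollar as the original lottery, the marginal utility cost of a ticket rises as wealth falls, so the deal eventually turns bad, as the argument predicts.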

Also note that the value of $m$ is completely arbitrary and irrelevant to the argument, so I think this should still avoid the Egyptology objection.