Tentative guesses:

Nothing else is standing out to me, so I just threw some linear regression at it and added 1.22 times the standard deviation of the residuals to be safe (roughly the calculation sketched below).

  1. Abigail: 21.9 lb
  2. Bertrand: 19.5 lb
  3. Chartreuse: 25.4 lb
  4. Dontanien: 21.8 lb
  5. Espera: 19.1 lb
  6. Flint: 7.3 lb
  7. Gunther: 27.4 lb
  8. Harold: 20.4 lb
  9. Irene: 24.4 lb
  10. Jacqueline: 21.0 lb

I completely ignored the greenish-gray turtles because His Malevolence didn't have any and there weren't many of them in the data; I hope that wasn't a mistake. It bothers me that I can't figure out anything regarding nostril size. From a meta perspective, I feel like there wouldn't be two irrelevant columns, given that fangs was already redundant. Everything else was at least somewhat correlated with weight.
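For the curious, the sketch below is roughly what that calculation looks like. The file names, feature list, and column names are placeholders rather than the actual dataset schema; the real feature set is whichever columns correlated with weight.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Placeholder file and column names -- substitute the real dataset schema.
train = pd.read_csv("turtles.csv")
pets = pd.read_csv("pets_to_guess.csv")  # the ten pet turtles to guess

features = ["Shell Segments", "Wrinkles"]  # really: every column correlated with weight

# Fit a plain linear regression of weight on the chosen features.
model = LinearRegression().fit(train[features], train["Weight"])

# Safety margin: 1.22 times the standard deviation of the training residuals.
residuals = train["Weight"] - model.predict(train[features])
margin = 1.22 * residuals.std()

# Final guesses are the point predictions plus the margin.
print(model.predict(pets[features]) + margin)
```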

I've been loving reading these for a while and figured I'd give it a shot for once.

Random early observations

  1. Focusing just on gray turtles for now because they're outliers on every metric.
  2. All gray turtles and only gray turtles have fangs.
  3. Weight is approximately Shell Segments / 2 for gray turtles.
  4. Nothing else seems obviously correlated.

Edit

There are way too many green turtles with 6 shell segments, and they all have no wrinkles, normal nostril size, no miscellaneous abnormalities, and a weight of exactly 20.4 lb.
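For anyone who wants to reproduce these checks, something like the snippet below works, though the column names here are guesses at the dataset's schema rather than the real ones.

```python
import pandas as pd

# Guessed file and column names -- the actual dataset may label these differently.
df = pd.read_csv("turtles.csv")

# Observation 2: all gray turtles and only gray turtles have fangs.
print(pd.crosstab(df["Color"], df["Fangs"]))

# Observation 3: weight is roughly shell segments / 2 for gray turtles.
gray = df[df["Color"] == "Gray"]
print((gray["Weight"] - gray["Shell Segments"] / 2).describe())

# The odd cluster: green turtles with exactly 6 shell segments.
cluster = df[(df["Color"] == "Green") & (df["Shell Segments"] == 6)]
print(len(cluster), cluster["Weight"].unique())
```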

Interesting. I agree with all your reasoning, but my plausibility judgements about the implications seem to be the reverse of yours, so I came away with the opposite conclusion: that well-being is clearly capped.

I think you and the linked post might have mismatching definitions of reward. You seem to use "reward" to mean what the AI values, whereas the linked post uses it to mean the reward function specified by the programmers and used to train the AI.

As for using FLOP as a plural noun, that's how other units work. We use 5 m for 5 meters, 5 s for 5 seconds, 5 V for 5 volts, etc., so it's not that weird.


If human behaviour is fully determined by the laws of the universe, then you have no choice in whether you assign moral blame or not, so it doesn't make sense to discuss whether we should or shouldn't do so.

I think you're right that your pennies become more valuable the less you have. Suppose you start with $M$ money and your utility function is $U$, and say the original lottery costs $n$ pennies and has an expected prize utility of $E$. Assuming the original lottery was not worth playing, then $E < U(M) - U(M - n)$, which rearranges to $\frac{U(M) - U(M - n)}{n} > \frac{E}{n}$. This can be thought of as saying the average slope of the utility function from $M - n$ to $M$ is greater than some constant $\frac{E}{n}$.

For the second lottery, each ticket you buy means you have less money. Then the utility cost of the first lottery ticket is $U(M) - U(M - 1)$, the second $U(M - 1) - U(M - 2)$, the third $U(M - 2) - U(M - 3)$, and so on. If each penny ticket is worth $\frac{E}{n}$ in expectation and the first ticket is worth buying, then $U(M) - U(M - 1) < \frac{E}{n}$, so $\frac{U(M) - U(M - 1)}{1} < \frac{E}{n} < \frac{U(M) - U(M - n)}{n}$. This means the average slope of the utility function from $M - 1$ to $M$ is less than the average slope from $M - n$ to $M$, so if the utility function is continuous, there must be some other point in the interval $[M - n, M]$ where the slope is greater than average. This corresponds to a ticket that is no longer worth buying, because at that point a penny costs more utility than $\frac{E}{n}$, making it an even worse deal than the single ticket from the original lottery.

Also note that the value of $M$ is completely arbitrary and irrelevant to the argument, so I think this should still avoid the Egyptology objection.
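As a sanity check on the averaging argument, here is a toy numerical version with log utility and made-up lottery numbers; it assumes a penny ticket in the second lottery is worth $\frac{E}{n}$ in expectation, matching the notation above.

```python
import numpy as np

U = np.log        # an arbitrary concave utility function
M = 1000          # starting money in pennies (the exact value doesn't matter)
n = 100           # price of the original lottery ticket, in pennies
E = 0.105         # expected prize utility of one original ticket

# The original lottery is not worth playing: it costs more utility than E.
assert U(M) - U(M - n) > E

# Second lottery: penny tickets, each worth E / n in expectation.
per_ticket_value = E / n

# The first penny ticket IS worth buying at wealth M...
assert U(M) - U(M - 1) < per_ticket_value

# ...but pennies get more expensive in utility as wealth falls, so some
# later ticket in the same n-penny span must be a worse deal than even
# the original lottery was.
for k in range(n):
    ticket_cost = U(M - k) - U(M - k - 1)
    if ticket_cost > per_ticket_value:
        print(f"Ticket {k + 1} is no longer worth buying "
              f"({ticket_cost:.6f} > {per_ticket_value:.6f})")
        break
```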