
Good point that there can be fairly natural finite measures without there being a canonical or physically real measure. But there's also a possibility that there is no fairly natural finite measure on the universe either. The universe could be infinite and homogeneous in some sense, so that no point…

Human value is complicated. I can't give any abstract constraints on what utility functions should look like that are anywhere near as restrictive as the Linear Utility Hypothesis, and I expect anything along those lines that anyone can come up with will be wrong.

1+ω = ω, under the usual ordering convention for ordinal addition.
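For reference, assuming the standard definition of ordinal addition as the order type of a disjoint union with the first summand placed before the second:

```latex
1 + \omega
  \;=\; \operatorname{ord}\bigl(a < 0 < 1 < 2 < \cdots\bigr)
  \;\cong\; \omega,
\qquad\text{whereas}\qquad
\omega + 1
  \;=\; \operatorname{ord}\bigl(0 < 1 < 2 < \cdots < a\bigr)
  \;\neq\; \omega,
```

since the order on the left has no greatest element and relabels order-isomorphically onto ω, while ω + 1 does have a greatest element.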

Edit: I can't figure out how to delete my comment, but ricraz already said this. https://www.lesserwrong.com/posts/GhCbpw6uTzsmtsWoG/the-different-types-not-sizes-of-infinity/xDfSmdiQATFF4sLPt

I certainly agree we should not content ourselves with an AI ban in lieu of technical progress

Why not? An AI ban isn't politically possible, but if it were enacted and enforced, I'd expect it to be effective at preventing risks from unaligned AI.

I don't think that was where my idea came from. I remember thinking of it during AI Summer Fellows 2017, and fleshing it out a bit later. And IIRC, I thought about learning concepts that an agent has been trained to recognize before I thought of learning rules of a game an agent plays.

If you have an unbounded utility function, then putting a lot of resources into accurately estimating arbitrarily tiny probabilities can be worth the effort, and if you can't estimate them very accurately, then you just have to make do with as accurate an estimate as you can make.
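To make the sensitivity concrete (a toy sketch; the probabilities, utilities, and helper name here are invented for illustration, not taken from any real model):

```python
# With an unbounded utility function, a tiny probability attached to a
# sufficiently large utility can dominate the expected-utility sum, so
# the precision of that tiny probability estimate matters a great deal.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

mundane = [(0.999999, 1.0)]
# Hypothetical tail event: probability estimated at one in a million,
# utility 1e9 (illustrative numbers only).
with_tail_low = expected_utility(mundane + [(1e-6, 1e9)])   # roughly 1001
with_tail_high = expected_utility(mundane + [(2e-6, 1e9)])  # roughly 2001
# Doubling a one-in-a-million estimate roughly doubles the answer:
# the ordinary outcome contributes almost nothing by comparison.
```

With a bounded utility function the tail term would be capped, and this sensitivity to the tiny-probability estimate largely disappears.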

The Linear Utility Hypothesis does imply that there is no extra penalty (on top of the usual linear relationship between population and utility) for the population being zero, and it seems to me that it is common for people to assume the Linear Utility Hypothesis unmodified by such a zero-population…

He means gambles that can have infinitely many different outcomes. This causes problems for unbounded utility functions because of the Saint Petersburg paradox.
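As a concrete sketch of the divergence (a toy calculation, assuming the standard version of the game where a payoff of 2^k follows a first head on toss k):

```python
# St. Petersburg game: flip a fair coin until the first head; if it lands
# on toss k, the payoff is 2**k. Each term of the expected-value sum
# contributes (1/2**k) * 2**k = 1, so the sum diverges as terms are added.
def partial_expected_value(n_terms):
    """Expected payoff truncated to the first n_terms outcomes."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

print(partial_expected_value(10))   # 10.0
print(partial_expected_value(100))  # 100.0 -- grows without bound
```

An agent whose utility is linear (or otherwise unbounded) in the payoff therefore assigns this gamble infinite expected utility, which is where the trouble for unbounded utility functions comes from.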

No.

Are those probabilities, or weightings for taking a weighted average? And if the latter, what does that even mean?
