User Profile

243 karma · 30 posts · 823 comments

Recent Posts

Curated Posts
Curated - Recent, high-quality posts selected by the LessWrong moderation team.
Frontpage Posts
Posts meeting our frontpage guidelines:
• interesting, insightful, useful
• aim to explain, not to persuade
• avoid meta discussion
• relevant to people whether or not they are involved with the LessWrong community.
(includes curated content and frontpage posts)
Personal Blogposts
Personal blogposts by LessWrong users (as well as curated and frontpage).

More on the Linear Utility Hypothesis and the Leverage Prior · 2mo · 8 min read · 4
Against the Linear Utility Hypothesis and the Leverage Penalty · 4mo · 11 min read · 45
[Link] Metamathematics and Probability · 7mo · 0
Existential risk from AI without an intelligence explosion · 1y · 23
Superintelligence via whole brain emulation · 2y · 28
Two kinds of population ethics, and Current-Population Utilitarianism · 4y · 21
Selfish reasons to reject the repugnant conclusion in practice · 4y · 0
Prisoner's dilemma tournament results · 5y · 124
Prisoner's Dilemma (with visible source code) Tournament · 5y · 235 · 8

Recent Comments

Good point that there can be fairly natural finite measures without there being a canonical or physically real measure. But there's also a possibility that there is no fairly natural finite measure on the universe either. The universe could be infinite and homogeneous in some sense, so that no point...
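One standard illustration of this kind of obstruction, taking translation symmetry as the relevant sense of "homogeneous" (my choice of example, not necessarily the one intended): on an infinite homogeneous space such as ℝ³, any nonzero locally finite measure μ satisfying

μ(A + x) = μ(A) for every translation x

is a constant multiple of Lebesgue measure λ, and λ(ℝ³) = ∞, so there is no translation-invariant probability measure to play the role of a natural finite measure.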

Human value is complicated. I can't give any abstract constraints on what utility functions should look like that are anywhere near as restrictive as the linear utility hypothesis, and I expect anything along those lines that anyone can come up with will be wrong.

"you can have a prior of 0 that it will actually happen"

No.

1 + ω = ω, under the usual ordering convention for ordinal addition.
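A one-line justification, assuming the standard definition in which the second summand's order type is appended after the first:

1 + ω = sup{1 + n : n < ω} = ω, whereas ω + 1 ≠ ω,

since ω + 1 has a greatest element and ω does not.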

Edit: I can't figure out how to delete my comment, but ricraz already said this.
https://www.lesserwrong.com/posts/GhCbpw6uTzsmtsWoG/the-different-types-not-sizes-of-infinity/xDfSmdiQATFF4sLPt

"I certainly agree we should not content ourselves with an AI ban in lieu of technical progress"

Why not? An AI ban isn't politically possible, but if it was enacted and enforced, I'd expect it to be effective at preventing risks from unaligned AI.

90% "paperclipping values" and 10% classical utilitarianism.

Are those probabilities, or weightings for taking a weighted average? And if the latter, what does that even mean?
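For concreteness, the two readings being contrasted (U_pc and U_cu are placeholder names for the two utility functions, introduced here only for illustration): on the probability reading, one assigns credence 0.9 to U_pc being the correct value system and 0.1 to U_cu, and then needs some further rule for acting under that moral uncertainty; on the weighted-average reading, one maximizes the single function

U = 0.9·U_pc + 0.1·U_cu,

which is only pinned down once a common scale for U_pc and U_cu is fixed, since each is otherwise defined only up to positive affine transformation.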

I don't think that was where my idea came from. I remember thinking of it during AI Summer Fellows 2017, and fleshing it out a bit later. And IIRC, I thought about learning concepts that an agent has been trained to recognize before I thought of learning the rules of a game an agent plays.

If you have an unbounded utility function, then putting a lot of resources into accurately estimating arbitrarily tiny probabilities can be worth the effort, and if you can't estimate them very accurately, then you just have to make do with as accurate an estimate as you can make.
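A toy illustration, with numbers chosen only for concreteness: suppose an outcome has estimated probability p = 10^-6 and utility around 10^12 on an unbounded utility function. Its contribution to expected utility is

p · U ≈ 10^-6 · 10^12 = 10^6,

so misjudging p by a factor of two moves the expected-utility estimate by something on the order of 10^6, whereas with a utility function bounded by 1 the same error could move it by at most about 10^-6. That asymmetry is what makes careful estimation of tiny probabilities worth paying for.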

The Linear Utility Hypothesis does imply that there is no extra penalty (on top of the usual linear relationship between population and utility) for the population being zero, and it seems to me that it is common for people to assume the Linear Utility Hypothesis unmodified by such a zero-population...
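A minimal formalization of the distinction, with k and c introduced here only for illustration: the unmodified Linear Utility Hypothesis would give U(n) = k·n for a population of n (otherwise comparable) people, so U(0) = 0 sits exactly on the linear trend; a zero-population-penalty version would instead set U(0) = -c for some extra c > 0, departing from linearity only at n = 0.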

He means gambles that can have infinitely many different outcomes. This causes problems for unbounded utility functions because of the Saint Petersburg paradox.
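Concretely, one standard way the problem arises: if utility is unbounded, one can choose outcomes x_1, x_2, … with U(x_n) ≥ 2^n and consider the gamble that yields x_n with probability 2^-n. That gamble has infinitely many distinct outcomes and expected utility

Σ_{n≥1} 2^-n · U(x_n) ≥ Σ_{n≥1} 1 = ∞,

which is the Saint Petersburg situation: the gamble beats every sure outcome, and comparing two such gambles by expected utility no longer yields well-behaved answers.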