Viktor Rehnberg


Comments

So we have that

> [...] Richard Jeffrey is often said to have defended a specific one, namely the ‘news value’ conception of benefit. It is true that news value is a type of value that unambiguously satisfies the desirability axioms.

but at the same time

> News value tracks desirability but does not constitute it. Moreover, it does not always track it accurately. Sometimes getting the news that X tells us more than just that X is the case because of the conditions under which we get the news.

And I can see how, starting from this, you would get that result. However, I think one of the remaining confusions is how you would go in the other direction. How can you go from the premise that we shift utilities to be $0$ for tautologies to saying that we value something in large part based on how unlikely it is?

And then we also have the desirability axiom

$$V(X \vee Y) = \frac{P(X)\,V(X) + P(Y)\,V(Y)}{P(X) + P(Y)}$$ for all $X$ and $Y$ such that $P(X \wedge Y) = 0$, together with Bayesian probability theory.
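For concreteness, here is a minimal sketch of what that averaging says, in Python with toy numbers of my own (the propositions and values are illustrative assumptions, not from the thread):

```python
# Desirability axiom: for mutually exclusive X and Y, the value of
# their disjunction is the probability-weighted average of their values.

def v_disjunction(p_x: float, v_x: float, p_y: float, v_y: float) -> float:
    """V(X or Y) for mutually exclusive X, Y with P(X) + P(Y) > 0."""
    return (p_x * v_x + p_y * v_y) / (p_x + p_y)

# Toy numbers: X = "sun tomorrow" with P = 0.8 and V = 1,
#              Y = "rain tomorrow" with P = 0.2 and V = -3.
print(v_disjunction(0.8, 1.0, 0.2, -3.0))  # ~0.2
```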

What I was talking about in my previous comment goes against the desirability axiom, in the sense that I meant that in the more general case there could be subjects who prefer certain outcomes proportionally more (or less) than usual, such that $V(X \vee Y) \neq \frac{P(X)\,V(X) + P(Y)\,V(Y)}{P(X) + P(Y)}$ for some probabilities $P(X)$ and $P(Y)$. As the equality derives directly from the desirability axiom, it was wrong of me to generalise that far.

But, to get back to the confusion at hand, we need to unpack the tautology axiom a bit. If we say that a proposition $X$ is a tautology if and only if $P(X) = 1$[1], then we can see that any proposition that is no news to us has zero utils as well.
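To spell the step out, a sketch of the derivation as I understand it (my own working, combining the desirability axiom above with the tautology axiom $V(\top) = 0$):

```latex
% If P(X) = 1 then P(\neg X) = 0, and since \top = X \vee \neg X with
% X and \neg X mutually exclusive, the desirability axiom gives
\begin{align*}
  V(\top) = V(X \vee \neg X)
          = \frac{P(X)\,V(X) + P(\neg X)\,V(\neg X)}{P(X) + P(\neg X)}
          = \frac{1 \cdot V(X) + 0 \cdot V(\neg X)}{1}
          = V(X),
\end{align*}
% so the tautology axiom V(\top) = 0 forces V(X) = 0: a proposition
% that is no news to us (probability 1) carries zero utils.
```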

And I think it might be good to keep in mind that learning that e.g. sun tomorrow is more probable than we once thought does not necessarily make us prefer sun tomorrow less, but the amount of utils for sun tomorrow has decreased (in an absolute sense). This fits nicely with the money analogy, because you wouldn't buy something that you expect to get with certainty anyway[2], but this doesn't mean that you prefer it any less compared to some other, worse outcome that you expected some time earlier. It is just that we've updated from our observations such that the utility function now reflects our current beliefs. If you prefer $X$ to $Y$, then this is a fact regardless of the probabilities of those outcomes. When the probabilities change, what is changing is the mapping from proposition to real number (the utility function), and it is only changing by a shift (and possibly a scaling) of a real number.
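As a toy illustration of that last point (my own construction, assuming we renormalise by subtracting the probability-weighted mean so that $V(\top) = 0$):

```python
# Updating probabilities shifts the utility scale without changing the
# preference ordering: normalise a fixed "raw" utility u so that the
# probability-weighted mean, i.e. V(tautology), is zero.

def normalise(p: dict, u: dict) -> dict:
    mean = sum(p[w] * u[w] for w in u)  # V(tautology) before the shift
    return {w: u[w] - mean for w in u}

u = {"sun": 1.0, "rain": -1.0}  # fixed underlying preferences: sun > rain

print(normalise({"sun": 0.5, "rain": 0.5}, u))  # {'sun': 1.0, 'rain': -1.0}
print(normalise({"sun": 0.9, "rain": 0.1}, u))  # {'sun': ~0.2, 'rain': -1.8}
# Sun now carries fewer utils in absolute terms, yet is still preferred to rain.
```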

At least, that is the interpretation I've arrived at.


  1. This seems reasonable but non-trivial to prove depending on how we translate between logic and probability. ↩︎

  2. If you do, you either don't actually expect it or have a bad sense of business. ↩︎

Skimming the methodology, it seems to be a definite improvement, and it does tackle the shortcomings mentioned in the original post, at least to some degree.

Isn't that just a question of whether you assume expected utility or not? In the general case it is only utility, not expected utility, that matters.

Anyway, someone should do a writeup of our findings, right? :)

Sure, I've found it to be an interesting framework to think in, so I suppose someone else might too. You're the one who's done the heavy lifting so far, so I'll let you have an executive role.

If you want me to write up a first draft, I can probably do it by the end of next week. I'm a bit busy for at least the next few days.

Lol. Somehow made it clearer that it was meant as hyperbole than did.

You might want to consider cross-posting this to the EA Forum to reach a larger audience.

I've been thinking about Eliezer's take on the Second Law of Thermodynamics, and while I can't think of a succinct comment to drop with it, I think it could bring value to this discussion.

Well, I'd say that the difference between your expectations of the future, having lived a variant of it or not, is only one of degree, not of kind. Therefore I think there are situations where the needs of the many can outweigh the needs of the one, even under uncertainty. But I understand that not everyone would agree.

I agree with as a sufficient criterion to only sum over; the other steps I'll have to think about before I get them.


I found this newer paper, https://personal.lse.ac.uk/bradleyr/pdf/Unification.pdf, and having skimmed it, it seemed to have similar premises, but they defined $V$ (instead of deriving it).

GovAI is probably one of the densest places to find that. You could also check out FHI's AI Governance group.
