green_leaf

Yes, but the game is very easy, so a lot of different strategies get you close to the cap.

I've been thinking about it, and I'm not sure this is the case in the sense you mean it. Expected-money maximization doesn't reflect human values at all, while the Kelly criterion mostly does, so making our assumptions more realistic should move us away from expected-money maximization and toward the Kelly criterion, not the other way around.
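The difference can be sketched with a quick simulation. This is a toy model with parameters I'm assuming for illustration: repeated even-odds bets on a 60%-favorable coin, where the Kelly fraction is 2p − 1 = 0.2 and the expected-money maximizer stakes everything every round:

```python
import random

def simulate(bet_fraction, p_win=0.6, start=25.0, rounds=100,
             trials=10_000, seed=0):
    """Repeated even-odds bets, staking a fixed fraction of wealth each round.

    Returns (mean, median) of final wealth across all trials.
    """
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        w = start
        for _ in range(rounds):
            stake = w * bet_fraction
            w += stake if rng.random() < p_win else -stake
        finals.append(w)
    finals.sort()
    return sum(finals) / trials, finals[trials // 2]

kelly_mean, kelly_median = simulate(0.2)   # Kelly fraction for p=0.6 is 2p-1
allin_mean, allin_median = simulate(1.0)   # "maximize E(money)": bet everything
```

Mathematically, betting everything has the highest E(money) here (each round multiplies it by 2p = 1.2), but a single loss zeroes the bankroll, so over 100 rounds virtually every simulated all-in path ends at $0, while the Kelly bettor's median wealth grows steadily.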

I'm sure that this time around, it's definitely real aliens. Or, barring that, magic or time travel.

nonhumans

(You might want to exclude advanced/experimental AI models from that, to capture the spirit of the bet better.)

People will judge this question, like many others, based on their feelings. The AI person, summoned into existence by the language model, will have to be sufficiently psychologically and emotionally similar to a human, while also having above-average-human-level intelligence (so that people can look up to the character instead of merely tolerating it).

Leaving aside the question of whether the technology for creating such an AI character already exists, these, I think, will ultimately be the criteria used by people of somewhat-above-average intelligence and zero technical or philosophical knowledge (i.e. our lawmakers) to grant AIs rights.

Haven't whistleblowers claiming the government has alien spaceships always been a thing?

It would take an enormous amount of new evidence, since the position of the orthogonality thesis is so strong (rather than being argued from some vague and visibly false philosophical assumptions).

Oh, I see. Yes, I agree. The idea to maximize the expected money would never occur to me (since that's not how my utility function works), but I get it now.

So, by optimal, you mean "almost certainly bankrupt you." Then yes.

My definition of optimal is very different.

Obviously humans don't have linear utility functions

I don't think that's the only reason - if I value something linearly, I still don't want to play a game that almost certainly bankrupts me.

Obviously humans don't have linear utility functions, but my point is that the Kelly criterion still isn't the right answer when you make the assumptions more realistic.

I mean, that's not obvious - the Kelly criterion gives you, in the example with the game, E(money) = $240, compared to $246.61 with the optimal strategy. That's really close.
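The gap can be checked numerically. A minimal Monte Carlo sketch, assuming the game in question is the well-known capped coin-flip experiment (start at $25, 60% win probability, even odds, winnings capped at $250, up to 300 flips) — those parameters are my assumption, not stated in the thread:

```python
import random

def capped_game_mean(bet_fraction, p_win=0.6, start=25.0, cap=250.0,
                     flips=300, trials=20_000, seed=0):
    """Estimate E(final wealth) for fixed-fraction betting in the capped game."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        w = start
        for _ in range(flips):
            if w >= cap:           # payout is capped, so stop betting at the cap
                w = cap
                break
            stake = w * bet_fraction
            w += stake if rng.random() < p_win else -stake
        total += min(w, cap)       # winnings above the cap are forfeited
    return total / trials

kelly_mean = capped_game_mean(0.2)  # constant Kelly fraction, 2p-1 = 0.2
```

Under these assumed parameters, constant Kelly betting reaches the cap on the large majority of paths, so its E(money) lands close to the cap — the point being that the optimal strategy and Kelly differ only slightly in E(money) while differing enormously in risk profile.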
