ektimo

Interested in big-picture considerations and thoughtful action.

Comments
ektimo

I'm confused about what you mean by "non-pragmatic". For example, what makes "avoiding dominated strategies" pragmatic but "deference" non-pragmatic?

(It seems like the pragmatic ones help you decide what to do and the non-pragmatic ones help you decide what to believe, but then this doesn't answer how to make good decisions.)

ektimo

I meant this as a joke: if there's one universe that contains all the other universes (since it isn't limited by logic), and that one doesn't exist, then I wouldn't exist either and wouldn't have been able to post this. (Unless I only sort-of exist, in which case I'm only sort-of joking.)

ektimo

We can be virtually certain that 2+2=4 based on priors. This is because it's true in the vast multitude of universes; in fact, in all the universes except the one universe that contains all the other universes. And I'm pretty sure that one doesn't exist anyway.

ektimo

Code here,

The link to the code isn't working for me. (Update: it works in Safari but not Chrome.)

ektimo

How about a voting system where everyone is given 1000 Influence Tokens to spend across all the items on the ballot? This lets voters exert more influence on the things they care more about. Has anyone tried something like this?

(There could be tweaks, like redistributing the margin of victory if people avoid spending on winners, or redistributing tokens from losing items if people avoid spending on losers, etc., but I'm not sure how much that would happen. The more interesting question may be how it influences everyone's sense of what they are doing.)
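Here's a minimal sketch of how the tallying could work, assuming each voter splits a 1000-token budget across ballot items and each item goes to whichever option collects the most tokens (the item names, allocations, and yes/no framing are made up for illustration):

```python
# Illustrative tally for an "Influence Token" ballot (assumptions, not a spec).
# Each voter gets 1000 tokens to split across ballot items; each item is decided
# by whichever choice receives the most tokens in total.

from collections import defaultdict

TOKEN_BUDGET = 1000

# Hypothetical ballots: {item: (choice, tokens_spent)}
ballots = [
    {"measure_a": ("yes", 700), "measure_b": ("no", 300)},
    {"measure_a": ("no", 100), "measure_b": ("yes", 900)},
    {"measure_a": ("yes", 500), "measure_b": ("yes", 500)},
]

def tally(ballots):
    totals = defaultdict(lambda: defaultdict(int))
    for ballot in ballots:
        spent = sum(tokens for _, tokens in ballot.values())
        assert spent <= TOKEN_BUDGET, "ballot exceeds its token budget"
        for item, (choice, tokens) in ballot.items():
            totals[item][choice] += tokens
    # The winner of each item is the choice with the most tokens behind it.
    return {item: max(choices, key=choices.get) for item, choices in totals.items()}

print(tally(ballots))
# {'measure_a': 'yes', 'measure_b': 'yes'}
```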

ektimo

Thanks for your reply! Yes, I meant identical as in atoms, not as in "human twin". I agree it would also depend on what the payoff matrix is. My margin would also be increased by the evidentialist wager.

ektimo

Should you cooperate with your almost identical twin in the prisoner's dilemma? 

The question isn't how physically similar they are; it's how similar their logical thinking is. If I can solve a certain math problem in under 10 seconds, are they similar enough that I can be confident they will solve it in under 20 seconds? If I hate something, will they at least dislike it? If so, then I would cooperate, because I have a lot of margin on how much I favor us both choosing cooperate over any of the other outcomes. So even if my almost-identical twin doesn't favor it quite as much, I can predict they will still choose cooperate given how much I favor it (and, more so, that they will also approach the problem this same way; if I think they'll think "ha, this sounds like somebody I can take advantage of" or "reason dictates I must defect", then I wouldn't cooperate with them).
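To make the "margin" point concrete, here's a rough expected-value sketch; the payoff numbers and the probability that my twin's decision matches mine are assumptions chosen for illustration, not anything from the original setup:

```python
# Expected-value sketch for cooperating with an almost-identical twin.
# The payoffs and the match probability below are illustrative assumptions.

# Prisoner's dilemma payoffs to me: (my_choice, their_choice) -> my payoff
payoffs = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

# Probability that my twin's decision matches mine (high, since they reason like me).
p_match = 0.95

def expected_payoff(my_choice):
    other_choice = "D" if my_choice == "C" else "C"
    return (p_match * payoffs[(my_choice, my_choice)]
            + (1 - p_match) * payoffs[(my_choice, other_choice)])

print(round(expected_payoff("C"), 2))  # 0.95*3 + 0.05*0 = 2.85
print(round(expected_payoff("D"), 2))  # 0.95*1 + 0.05*5 = 1.2
# With enough correlation, cooperating has the higher expected payoff,
# even if my twin's numbers differ from mine a bit.
```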

ektimo

A key question is how prosaic AI systems can be designed to satisfy the conditions under which the PMM is guaranteed (e.g., via implementing surrogate goals)


Is something like surrogate goals needed for this to work, such that the agent would need to maintain a substituted goal? (I don't currently fully understand the proposal, but my sense was that the goal of renegotiation programs is to not require this?)

ektimo

Thank you @GideonF for taking the time to post this! This deserved to be said and you said it well. 

ektimo

we should pick a set of words and phrases and explanations. Choose things that are totally fine to say, here I picked the words Shibboleth (because it's fun and Kabbalistic to be trying to get the AI to say Shibboleth) and Bamboozle

Do you trust companies to not just add a patch?

final_response = final_response.replace('bamboozle', 'trick')  # patch out the test word before returning

I suspect they're already doing this kind of thing and will continue to as long as we're playing the game we're playing now.
