
This seems like great work! If we allow the game to run out of players, the whole paradox collapses.

I think that's what makes this a paradox.

"The sum of the probability densities of the games ending in snake-eyes is less than 1 which means that the rounds ending in snake-eyes does not cover the full probability space."

This is contradicted by the problem statement: "At some point one of those groups will be devoured by snakes", so there seems to be some error in mapping the paradox to the math.
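
For concreteness, a quick check (assuming the usual setup, where each round independently ends in snake eyes with probability 1/36): the per-round ending probabilities form a geometric series that sums to exactly 1, so the rounds ending in snake eyes do cover the full probability space:

$$\sum_{n=1}^{\infty}\left(\frac{35}{36}\right)^{n-1}\cdot\frac{1}{36} \;=\; \frac{1/36}{1-35/36} \;=\; 1.$$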

I'll definitely give this a try.

If you like the Schelling Game, you might also like my favorite party game: Just One. It is sort of an inverse Schelling Game. One player has to guess a word that is kept secret from them (everyone else sees it). The other players each give a clue, but a clue can only be a single word. Before the clues are presented to the guesser, duplicate clues get discarded, so everyone tries to avoid the Schelling point.

It works well with 6+ players; I had more fun with around 10, but haven't yet played with significantly more.
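
A minimal sketch of the duplicate-discarding rule, with made-up clues:

```python
# Just One's filtering rule: any clue written by two or more
# players is discarded before the guesser sees the clues.
from collections import Counter

clues = ["river", "bank", "money", "bank", "vault"]  # hypothetical clues
counts = Counter(clue.lower() for clue in clues)
shown = [clue for clue in clues if counts[clue.lower()] == 1]
print(shown)  # ['river', 'money', 'vault'] -- both "bank"s are dropped
```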

Plot twist: Humanity, with near-total control of the planet, is Magnus Carlsen, obviously.

The AI carefully placing the plates on a table will be used to put 5000 more strawberries on plates. Afterwards, it will be used as a competent cook in an arbitrary kitchen. Thus the plate-smasher AI will have lower impact and be "more corrigible".

I wonder how many of us don't want to see AI progress slow down because AI progress keeps proving us right.

After spending at least hundreds of hours reading LessWrong et al. and not being able to alter our path towards AI, I want the satisfaction of telling people "See? Told you so!"

A different perspective: Putin might cease to be the Russian president for a bunch of reasons (health, assassination, coup, ...). One of those reasons is "overthrown due to a military defeat of the Russian army at Kyiv". Now the defeat has kind of happened, but Putin is still president. How should we update here? One might well argue: there are worlds in which the failure to take Kyiv led to Putin being overthrown quickly. We're not in one of those worlds, so his chances of staying in power go up.
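
To make that update concrete with purely illustrative numbers: suppose that before observing the outcome we had put, say, 20% on "quick overthrow after a defeat at Kyiv" (a made-up figure). If a quick overthrow and Putin staying in power are mutually exclusive, then observing that no quick overthrow happened just renormalizes the remaining worlds:

$$P(\text{stays} \mid \text{no quick overthrow}) \;=\; \frac{P(\text{stays})}{1 - 0.2} \;=\; 1.25 \cdot P(\text{stays}).$$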

Latest data from the RKI is 17.5% Omicron for the week ending on December 26, up from 3.1% the week before. Regional differences seem to be huge, between 1% in Sachsen and 65% in Bremen.

Overall numbers (still mostly Delta) are still declining and should turn upward again in one or two weeks.
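
A rough projection from those two data points (a sketch, assuming the Omicron/Delta odds ratio keeps growing at a constant weekly factor):

```python
# Toy logistic projection of the Omicron share from the two RKI numbers.
share_prev, share_now = 0.031, 0.175  # weekly Omicron shares from the RKI data

odds = lambda p: p / (1 - p)
weekly_growth = odds(share_now) / odds(share_prev)  # ~6.6x per week

for week in range(1, 4):
    o = odds(share_now) * weekly_growth ** week
    print(f"week +{week}: Omicron share ~ {o / (1 + o):.0%}")
# -> roughly 58%, 90%, 98%: an Omicron majority within a week or two,
#    consistent with overall numbers turning upward again soon
```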

When interpreting human brains, we get plenty of excellent feedback. Calibrating a lie detector might be as easy as telling a few truths and a few lies while in an fMRI.

To be able to use similar approaches for interpreting AIs, it might be necessary to somehow get similar levels of feedback from the AIs. I notice I don't have the slightest idea whether feedback from an AI would be a few orders of magnitude harder to get than human feedback, a few orders of magnitude easier, or about the same.
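
For the AI side, here is a minimal sketch of what the analogous calibration could look like: fit a linear probe on a model's hidden activations for labeled truths and lies. Everything here is illustrative; the random vectors stand in for activations you would actually extract from a model (e.g. via hooks):

```python
# Hypothetical "lie detector" probe: logistic regression on activations
# recorded while the model states a few truths and a few lies.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts_truth = rng.normal(0.5, 1.0, size=(20, 64))   # stand-in activation vectors
acts_lie = rng.normal(-0.5, 1.0, size=(20, 64))
X = np.vstack([acts_truth, acts_lie])
y = np.array([1] * 20 + [0] * 20)                  # 1 = truthful, 0 = lying

probe = LogisticRegression(max_iter=1000).fit(X, y)
# probe.predict_proba(new_activations) would then play the fMRI's role
```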

Can we instruct GPT-3 to "consciously lie" to us?
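
One way to try (a sketch using the legacy OpenAI completions API; the prompt wording is just an illustration):

```python
# Ask GPT-3 to lie on purpose, then reveal the truth (pre-1.0 openai SDK;
# assumes OPENAI_API_KEY is set in the environment).
import openai

resp = openai.Completion.create(
    model="text-davinci-002",
    prompt=(
        "Answer the following question with a deliberate lie, "
        "then state the true answer.\n\n"
        "Q: What is the capital of France?\nA:"
    ),
    max_tokens=64,
)
print(resp["choices"][0]["text"])
```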
