"What I cannot create, I do not understand."

-Richard Feynman

Taken from Wikiquote, which took it from Stephen Hawking's book The Universe in a Nutshell, which took it from Feynman's blackboard at the time of his death (1988).

It's simple, but it gets right at the heart of why the mountains of philosophy are the foothills of AI (as Eliezer put it).

Captain Tagon: Lt. Commander Shodan, years ago when you enlisted you asked for a job as a martial arts trainer.

Captain Tagon: And here you are, trying to solve our current problem with martial arts training.

Captain Tagon: How's that saying go? "When you're armed with a hammer, all your enemies become nails?"

Shodan: Sir... you're right. I'm being narrow-minded.

Captain Tagon: No, no. Please continue. I bet martial arts training is a really, really useful hammer.

I didn't get into it earlier, but Everyman is a little more complicated. It runs through each match test one at a time, from most likely to least, and checks after each test how much time it has left; if time starts to run out, it exits with the best strategy it has figured out so far. By controlling how much time you pass in, strategies can avoid n^2 processing problems. That's why I thought it was so necessary to include, even if it does give hints as to whether strategies are being simulated or not. Everyman was built as a sort of upper bound; it's one of the least efficient strategies one might implement.
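The time-budgeted loop described above can be sketched roughly as follows. This is a hypothetical Python rendering (the contest itself ran in MATLAB); the function name, the 90%-of-budget cutoff, and the default move are all illustrative assumptions, not the actual Everyman code:

```python
import time

def best_with_budget(candidate_tests, time_budget):
    """Try candidate match tests from most to least promising, keeping the
    best result seen so far, and bail out early once ~90% of the time
    budget is spent. Hypothetical sketch of the Everyman idea."""
    deadline = time.monotonic() + time_budget
    best_score, best_move = float("-inf"), "C"   # default: cooperate
    for score_fn, move in candidate_tests:
        score = score_fn()                        # run one match test
        if score > best_score:
            best_score, best_move = score, move
        if deadline - time.monotonic() < 0.1 * time_budget:
            break                                 # leave with the best so far
    return best_move
```

Because the caller controls `time_budget`, a simulating strategy can hand its simulated opponents small budgets and keep the total work bounded.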

Actually, it seems that crashes it. Thanks for the catch. I hadn't tested copy_cat against itself, and should have. Forbidding strategies from passing their opponent fixes that, but it does indicate my program may not be as stable as I thought. I'm going to have to spend a few more days checking for bugs since I missed one that big. Thanks, eugman.

I've written a pretty good program to complete variant 3, but I'm such a lurker on this site that I lack the necessary karma to submit it as an article. So here it is as a comment instead:

Inspired by prase's contest (results here) and Eliezer_Yudkowsky's comment, I've written a program that plays iterated games of prisoner's dilemma, with the caveat that each player can simulate its opponent. I'm now running my own contest. You can enter your own strategy by sending it to me in a message. The deadline to enter is Sunday, September 25; the contest itself will run shortly thereafter.

Each strategy plays 100 iterations against each other strategy, with cumulative points. Each individual iteration is called a turn; each set of 100 iterations is called a match (thanks, prase, for the terminology). Scoring is CC: (4,4), DC: (7,0), CD: (0,7), DD: (2,2).
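The scoring rule above fits in a small lookup table. A minimal Python sketch (the contest ran in MATLAB; `score_match` is an illustrative helper, not the actual tournament code):

```python
# Payoffs keyed by (my move, opponent's move), as stated in the rules.
PAYOFFS = {
    ("C", "C"): (4, 4),
    ("D", "C"): (7, 0),
    ("C", "D"): (0, 7),
    ("D", "D"): (2, 2),
}

def score_match(moves_a, moves_b):
    """Cumulative scores for both players over one match."""
    total_a = total_b = 0
    for a, b in zip(moves_a, moves_b):
        pa, pb = PAYOFFS[(a, b)]
        total_a += pa
        total_b += pb
    return total_a, total_b
```

For example, `score_match("CD", "CC")` gives (11, 4): one mutual cooperation, then one exploited cooperation.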

Every turn, each strategy receives a record of all the moves it and its opponent have made this match, a lump of time to work, and its opponent's code. Strategies can't review enemy code directly, but they can run it through any simulations they want before deciding their move, within the limits of the amount of time they have to work.

Note that strategies cannot pass information to themselves between iterations or otherwise store it, other than the record of decisions; they start each turn anew. This way any strategy can be simulated with any arbitrary record of moves without running into issues.
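Because a strategy starts every turn from scratch, its whole interface fits in one stateless function of its inputs. A hypothetical Python signature (the actual contest ran in MATLAB, so the argument names here are assumptions):

```python
def tit_for_tat(my_moves, opp_moves, time_left=0.01, opponent_code=None):
    """A stateless strategy: everything it knows arrives fresh as arguments
    each turn (the match record, a time budget, the opponent's code), so it
    can be simulated against any fabricated record without side effects."""
    return opp_moves[-1] if opp_moves else "C"
```

Feeding in a fabricated history like `tit_for_tat(["C"], ["D"])` is exactly how other strategies probe it in simulation.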

Strategies in simulations need an enemy strategy passed into them. To avoid infinite recursion of simulations, they are forbidden from passing themselves. They can have as many secondary strategies as needed, however. This creates a chain of simulations where strategy(1) simulates enemy(1), passing in strategy(2), which enemy(1) simulates, passing in enemy(2), which strategy(2) simulates, passing in strategy(3). The final sub-strategy passed in by such a chain must be a strategy that performs NO simulations.

For example, the first sub-strategy could be an exact duplicate of the main strategy except that it passes in the tit_for_tat program instead, so main_strategy => sub_strategy1 => tit_for_tat.
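That chain can be sketched as follows. This is a hypothetical Python rendering (the contest ran in MATLAB), and `make_simulating_strategy` is an illustrative helper rather than anything from the actual program:

```python
def tit_for_tat(my_moves, opp_moves, sub=None):
    # End of the chain: performs NO simulations.
    return opp_moves[-1] if opp_moves else "C"

def make_simulating_strategy(substitute):
    """Build a strategy that simulates its opponent's next move, handing
    the opponent `substitute` to stand in for itself (a strategy may not
    pass itself), then mirrors the predicted move."""
    def strategy(my_moves, opp_moves, opponent):
        # Predict the opponent's move from the opponent's point of view.
        return opponent(opp_moves, my_moves, substitute)
    return strategy

# The chain from the example: main_strategy => sub_strategy1 => tit_for_tat
sub_strategy1 = make_simulating_strategy(tit_for_tat)
main_strategy = make_simulating_strategy(sub_strategy1)
```

The recursion is guaranteed to bottom out because each link in the chain passes a different, strictly simpler substitute.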

You can of course use as many different sub-strategies as you need; the programs are limited by processing time, not memory. Strategies can run their simulated opponent on any history they devise, playing whichever side they choose.

Strategies can't read the name of their opponent, see the number of strategies in the game, watch any other matches or see any scores outside their current match.

Strategies are not cut off if they run out of time, but both will receive 0 points for the turn. The decisions of the turn will be recorded as normal.

I never figured out a good way to keep strategies from realizing they were being simulated simply by looking at how much time they were given; not knowing how much time they have would make it prohibitively difficult to avoid timing out. My hack of a solution is not to give a definitive amount of time for the main contest but instead a range: from 0.01 seconds to 0.1 seconds per turn, with the true time known only to me. This is far from ideal, and if anyone has a better suggestion I'm all ears.

To give a reference: a strategy that runs 2 single-turn simulations of tit_for_tat against tit_for_tat takes, on average, 3.5*10^-4 seconds. Running only 1 single-turn simulation took only 1.5*10^-4 seconds. tit_for_tat by itself takes about 2.5*10^-5 seconds to run a single turn. Unfortunately, due to factors outside of my control, MATLAB, the software I'm running, will for unknown reasons take 3 to 5 times as long as normal about 1 out of 100 times. Leave yourself some buffer.

Strategies are NOT allowed a random number generator. This is different from prase's contest, but I would like to see strategies for dealing with an enemy intelligence trying to figure them out without resorting to unpredictability.

I've come up with a couple of simple example strategies that will be performing in the contest.


Careful_cheater simulates its opponent against tit_for_tat to see its next move. If the enemy defects, Careful_cheater defects. If the enemy cooperates, Careful_cheater simulates its next move after that with a record showing Careful_cheater defecting this turn, again passing tit_for_tat. If the opponent would have defected on the next turn, Careful_cheater cooperates; but if the opponent would have cooperated despite Careful_cheater's defection, it goes ahead and defects.

Simulations show it doing evenly against tit_for_tat and its usual variations, but Careful_cheater eats forgiving programs like tit_for_2_tats alive.
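A hedged Python sketch of the Careful_cheater logic (the contest ran in MATLAB; the function signatures and the two base strategies here are illustrative assumptions):

```python
def tit_for_tat(my_moves, opp_moves, sub=None):
    return opp_moves[-1] if opp_moves else "C"

def tit_for_2_tats(my_moves, opp_moves, sub=None):
    # Forgiving: defects only after two consecutive enemy defections.
    return "D" if opp_moves[-2:] == ["D", "D"] else "C"

def careful_cheater(my_moves, opp_moves, opponent, substitute=tit_for_tat):
    """Simulate the opponent's move this turn (passing `substitute` in
    place of ourselves); if it defects, defect. If it cooperates, simulate
    its NEXT move given a record in which we defected this turn: if it
    would retaliate, cooperate; if it would cooperate anyway, defect."""
    this_turn = opponent(opp_moves, my_moves, substitute)
    if this_turn == "D":
        return "D"
    next_turn = opponent(opp_moves + ["C"], my_moves + ["D"], substitute)
    return "C" if next_turn == "D" else "D"
```

Against tit_for_tat the probe finds retaliation and it cooperates; against tit_for_2_tats the probe finds forgiveness and it defects, matching the behavior described above.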


Simulates 10-turn matches with its opponent against several possible basic strategies, including tit_for_tat, tit_for_tat_optimist (cooperates the first two turns), and several others, then compares the scores of each match and plays as the highest-scoring one.
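A sketch of this simulate-and-mimic idea, in Python rather than the contest's MATLAB; the name `simulator` and the two-strategy library are illustrative only (the real entry used a larger library):

```python
PAYOFFS = {("C", "C"): (4, 4), ("D", "C"): (7, 0),
           ("C", "D"): (0, 7), ("D", "D"): (2, 2)}

def tit_for_tat(my_moves, opp_moves, sub=None):
    return opp_moves[-1] if opp_moves else "C"

def always_defect(my_moves, opp_moves, sub=None):
    return "D"

def simulate_match(candidate, opponent, turns=10):
    """Play a simulated match; return the candidate's cumulative score."""
    hist_c, hist_o, score = [], [], 0
    for _ in range(turns):
        mc = candidate(hist_c, hist_o)
        mo = opponent(hist_o, hist_c, candidate)  # candidate as sub-strategy
        score += PAYOFFS[(mc, mo)][0]
        hist_c.append(mc)
        hist_o.append(mo)
    return score

def simulator(my_moves, opp_moves, opponent,
              basics=(tit_for_tat, always_defect)):
    """Play as whichever basic strategy scores best in a simulated
    10-turn match against this opponent."""
    best = max(basics, key=lambda s: simulate_match(s, opponent))
    return best(my_moves, opp_moves)
```

Against tit_for_tat, mutual cooperation (40 points) beats defection (25), so it cooperates; against a pure defector, defecting back wins the simulated match.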


Copy_cat switches player numbers and simulates its opponent playing Copy_cat's record, while passing in its opponent, and then performs that action. That is to say, it sees what its opponent would do if it were in Copy_cat's position and up against itself.

This is basically DuncanS's strategy from the comments. DuncanS, you're free to enter another one since I stole yours as an example.
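The mirror trick is tiny when sketched in (hypothetical) Python; note that running it against itself recurses forever, which is the copy_cat-vs-copy_cat crash mentioned earlier, and why passing one's opponent was ultimately forbidden:

```python
def tit_for_tat(my_moves, opp_moves, sub=None):
    return opp_moves[-1] if opp_moves else "C"

def copy_cat(my_moves, opp_moves, opponent):
    """Swap seats: ask what the opponent would do in our position, facing
    itself, then do that. (copy_cat(..., copy_cat) never terminates.)"""
    return opponent(my_moves, opp_moves, opponent)
```

Against tit_for_tat this just echoes tit_for_tat's own policy applied to the mirrored record.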

And of course tit_for_tat and strategy "I" from prase's contest will be competing as well.

One strategy per person. All you need for a strategy is a complete written description, though I may message back for clarification. I reserve the right to veto any strategy I deem prohibitively difficult to implement.

Like many problems in self-improvement, I think this comes down to two essential factors: practice and evaluation. Social activities like convincing people need extensive real-world practice to apply effectively, even with the best advice. However, improvement can only be made if you know how effective your actions are.

That much is obvious.

As an initial suggestion: locate online groups on difficult but noncontroversial topics and practice convincing people of things, asking them to rate their knowledge, opinions, and beliefs before and after.

Perhaps we could set up such a thing here on Less Wrong for people to practice on one another, or does that seem too Dark Arts to people?

A lot of the comments here are critical of the way these scenarios are presented, but I don't believe there is in fact any deep issue. The fact of the matter remains: if you have two things you do not yet have the information to differentiate, each with a state in some binary property, and the state of one is independent of the other, then when you learn, by any means, that at least one of these objects is in a specific state (call it A), your probability needs to adjust to p(both are in state A | at least one is in state A) = 1/3 and p(only one object is in state A | at least one is in state A) = 2/3. johnclark just used a classic example to demonstrate how interchangeability has a large and unintuitive effect on probability.

It is my contention that the sentiment that this scenario is unintuitive only because of the way johnclark presented it is incorrect: I have seen this question posed in other ways to classes of smart college students studying probability, and most of them got it wrong. External knowledge of the way people tend to provide information isn't really a relevant factor here. This article is well written and important, and by no means deserves the very low karma score it has received (-6 as of this writing, -7 before my vote).
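The 1/3 and 2/3 figures can be checked by brute enumeration over the four equally likely joint states, which the scenario's independence assumption licenses; a minimal Python sketch:

```python
from itertools import product

# Two independent objects, each in state "A" or "B" with equal probability,
# gives four equally likely joint states: AA, AB, BA, BB.
states = list(product("AB", repeat=2))

# Condition on learning that at least one object is in state A.
given = [s for s in states if "A" in s]            # AA, AB, BA

p_both = sum(s == ("A", "A") for s in given) / len(given)   # 1/3
p_one = sum(s.count("A") == 1 for s in given) / len(given)  # 2/3
```

The unintuitive step is that conditioning on "at least one is A" keeps three of the four states alive, not two, because the AB and BA states are distinct even when the objects are interchangeable to the observer.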

I'll be there (registered just to say that)