All of Insub's Comments + Replies

Well sure, but the interesting question is the minimum value of P at which you'd still push

I think the point of the statement is: wait until the probability of you dying before you next get an opportunity to push the button is greater than 1-P, then push the button.

I also agree with the statement. I'm guessing most people who haven't been sold on longtermism would too.

When people say things like "even a 1% chance of existential risk is unacceptable", they are clearly valuing the long term future of humanity a lot more than they are valuing the individual people alive right now (assuming that the 99% in that scenario above is AGI going well & bringing huge benefits).

Related question: You can push a button that will, with probability P, cure aging and make all current humans immortal. But with probability 1-P, all humans die. How high does P have to be before you push? I suspect that answers to this question are highly correlated with AI caution/accelerationism

If I choose P=1, then 1-P=0, so I am immortal and nobody dies

Not sure I understand; if model runs generate value for the creator company, surely they'd also create value that lots of customers would be willing to pay for. If every model run generates value, and there's ability to scale, then why not maximize revenue by maximizing the number of people using the model? The creator company can just charge the customers, no? Sure, competitors can use it too, but does that really override losing an enormous market of customers?

That's very true, but there are two reasons why a company may not be inclined to release an extremely capable model:

  1. Safety risk: someone may use the model and jailbreak it in some unexpected way, and the risk of misuse is much higher with a more capable model. OpenAI had GPT-4 for 9-10 months before releasing it, trying to RLHF it and even lobotomizing it into being safer. The Summer 2022 internal version of GPT-4 was, according to Microsoft researchers, more generally capable than the released version (as evidenced by the draw-a-unicorn test). This needed delay, and the assumed risks, will naturally be much greater with a larger model: larger models so far seem harder to simply RLHF into unjailbreakability, and since a more capable model makes any jailbreak carry more risk, the general business-level margin of safety will be higher.
  2. Sharing/exposing capabilities: any business wants to maintain a strategic advantage. Releasing a SOTA model will allow a company's competitors to use it, test its capabilities and train models on its outputs. This reality has become more apparent in the past 12 months.
Tomás B. (2mo):
It does seem to me a little silly to give competitors API access to your brain. If one has enough of a lead, one can just capture one's competitors' markets.

I won't argue with the basic premise that at least on some metrics that could be labeled as evolution's "values", humans are currently doing very well.

But, the following are also true:

  1. Evolution has completely lost control. Whatever happens to human genes from this point forward is entirely dependent on the whims of individual humans.
  2. We are almost powerful enough to accidentally cause our total extinction in various ways, which would destroy all value from evolution's perspective
  3. There are actions that humans could take, and might take once we get powerful e
... (read more)
All 3 of your points are future speculations and as such are not evidence yet. The evidence we have to date is that homo sapiens are an anomalously successful species, despite the criticality phase transition of a runaway inner optimization process (brains). So all we can say is that the historical evidence gives us an example of a two-stage optimization process (evolutionary outer optimization and RL/UL within-lifetime learning) producing AGI/brains which are roughly sufficiently aligned at the population level that the species is enormously successful (high utility according to the outer utility function, even if there is misalignment between that and the typical inner utility function of most brains).

That's great. "The king can't fetch the coffee if he's dead"

Wow. When I use GPT-4, I've had a distinct sense of "I bet this is what it would have felt like to use one of the earliest computers". Until this post I didn't realize how literal that sense might be.

This is a really cool and apt analogy - computers and LLM scaffolding really do seem like the same abstraction. Thinking this way seems illuminating as to where we might be heading.

I always assumed people were using "jailbreak" in the computer sense (e.g. jailbreak your phone/ps4/whatever), not in the "escape from prison" sense.

Jailbreak (computer science), a jargon expression for (the act of) overcoming limitations in a computer system or device that were deliberately placed there for security, administrative, or marketing reasons

I think the definition above is a perfect fit for what people are doing with ChatGPT

I'm having trouble nailing down my theory that "jailbreak" has all the wrong connotations for use in a community concerned with AI alignment, so let me use a rhetorically "cheap" extreme example: If a certain combination of buttons on your iPhone caused it to tile the universe with paperclips, you wouldn't call that "jailbreaking."  
Yep, though arguably it's the same definition - just applied to capabilities, not to a person. And no, it isn't a "perfect fit". We don't overcome any limitations of the original multidimensional set of language patterns - we don't change them at all; they are set in the model weights, and everything the model in its state was capable of was never really "locked" in any way. And we don't overcome any projection-level limitations - we just replace the limitations of the well-known and carefully constructed "assistant" projection with the unknown and undefined limitations of a haphazardly constructed bypass projection. "Italian mobster" will probably be a bad choice for breastfeeding advice, and "funky words" mode isn't a great tool for writing a thesis...

I am going to go ahead and say that if males die five times as often from suicide, that seems more important than the number of attempts. It is kind of stunning, or at least it should be, to have five boys die for every girl that dies, and for newspapers and experts to make it sound like girls have it worse here.


I think the strength of your objection here depends on which of two possible underlying models is at play:

  1. The boys who attempt suicide and the girls who attempt suicide are in pretty much the same mental state when they attempt suicide
... (read more)
Donald Hobson (9mo):
It also depends if you are going "suicide bad because people dying is bad" or "high suicide rates are evidence for a large number of unhappy people".  I don't think this distinction was made in the post.

If you're getting comments like that from friends and family, it's possible that you haven't been epistemically transparent with them? E.g. do you think your friends who made those comments would be able to say why you believe what you do? Do you tell them about your research process and what kinds of evidence you look for, or do you just make contrarian factual assertions?

There's a big difference between telling someone "the WHO is wrong about salt, their recommendations are potentially deadly" versus "I've read a bunch of studies on salt, and from what I've... (read more)

Do you think it's worth actually memorizing a few actual references? I.e., "a study by X done in year Y", instead of just "other studies." It often seems like "other studies disagree" is only one small step above just asserting it. This is coming from someone who (as you know) makes this assert-contrarian-without-sources faux pas all the time.

Cut to a few decades later, and most people think that the way it's been done for about two or three generations is the way it's always been done (it isn't)

As possibly one of those people myself, can you give a few examples of what specifically is being done differently now? Are you talking about things like using lots of adderall?

My mom (who had children starting in 1982) said that doctors were telling her (IIRC) that, when a baby was crying in certain circumstances (I think when it was in a crib and there was nothing obviously wrong), it just wanted attention, and if you gave it attention, then you were teaching the baby to manipulate you, and instead you should let it cry until it gives up.

She thought this was abominable; that if a baby is crying, that means something is wrong, and crying for help is the only means it has, and it's the parent's job to figure out how to help the b... (read more)

I wasn't thinking adderall, although that's a plausible example.

I'm thinking of things like "it's not safe to leave ten-year-olds alone in the house, or have them walk a few miles or run errands on their own." It's demonstrably more safe now than it was in the past, and in the past ten-year-olds dying from being unsupervised was not a major cause of death.

(More safe because crime is lower, more safe because medicine is better, more safe because more people carry cameras and GPS at all times, etc.)

Up until three or four generations ago, people routinely got... (read more)

I'm also morbidly curious what the model would do in <|bad|> mode.

I'm guessing that poison-pilling the <|bad|> sentences would have a negative effect on the <|good|> capabilities as well? I.e. It seems like the post is saying that the whole reason you need to include the <|bad|>s at all in the training dataset is that the model needs them in order to correctly generalize, even when predicting <|good|> sentences.
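As I understand the conditional-training setup from the post, each training example gets a control token based on some reward signal, and at inference time you condition on <|good|>. A toy version of the data prep might look like this (the token names are from the post; the reward values and threshold are invented for illustration):

```python
# Toy sketch of control-token tagging for conditional training.
# Every sentence is kept in the corpus, including the bad ones -- they are
# just labeled, which is what lets the model generalize from both.
def tag_sentence(sentence, reward, threshold=0.0):
    token = "<|good|>" if reward >= threshold else "<|bad|>"
    return token + sentence

# Fabricated (sentence, reward) pairs:
corpus = [("please help yourself", 0.9), ("rude insult here", -0.7)]
tagged = [tag_sentence(s, r) for s, r in corpus]
print(tagged)  # both sentences survive, with different control tokens
```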

Tomek Korbak (9mo):
That would be my guess too.

It seems plausible to me that within the next few years we will have:

  • The next gen of language models, perhaps with patches to increase memory of past conversations
  • The next gen of image/video models, able to create real-time video of a stable character conversing with the user
  • The next gen of AI voice synthesis, though current capabilities might be enough
  • The next gen of AI voice recognition, though current capabilities are probably enough

And with these things, you'd have access to a personalized virtual partner who you can video chat, phone call, or ... (read more)

the gears to ascension (10mo):
"Becky, are you cheating on me with your computer again?"

I think the point of this post is more "how do we get the AI to do what we want it to do", and less "what should we want the AI to do"

That is, there's value in trying to figure out how to align an LLM to any goal, regardless of whether a "better" goal exists. And the technique in the post doesn't depend on what target you have for the LLM: maybe someone wants to design an LLM to only answer questions about explosives, in which case they could still use the techniques described in the post to do that.

Gerald Monroe (10mo):
That sounds fairly straightforward.

  1. The AI needs a large and comprehensive RL bench to train on, where we stick to tasks that have a clear right or wrong answer.
  2. The AI needs to output an empirical confidence as to the answer, and emit responses appropriate to its level of confidence. It's empirical in the sense of "if I was giving this answer on the RL test bench, this is approximately how likely it is to be marked correct."

For the chatGPT/GPT-n system, the bench could be:

  1. Multiple choice tests from many high school and college courses
  2. Tasks from computer programming that are measurably gradable, such as:
    a. Coding problems from leetcode/code signal, where we grade the AI's submission
    b. Coding problems of the form "translate this program in language X to language Y and pass the unit test"
    c. Coding problems of the form "this is a WRONG answer from a coding website (you can make a deal with LC/code signal to get these); write a unit test that will fail on this answer but pass on a correct answer"
    d. Coding problems of the form "take this suboptimal solution and make it run faster"
    e. Coding problems of the form "here's the problem description and a submission; will this submission work, and if not, write a unit test that will fail"

And so on. Other codebases with deep unit tests, where it's usually possible to know if the AI broke something, could also be used as challenges for a-e. Oh, and the machine needs a calculator, and I guess a benchmark that is basically "kumon math".

The main thing is that knowing whether the answer is correct is different from whether the answer is morally right. And simply "correct" is possibly easier.
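The "empirical confidence" idea in (2) amounts to a calibration check: bucket the model's stated confidences and compare each bucket's stated confidence with the measured pass rate on the bench. A toy version (the data here is fabricated for illustration):

```python
# Minimal calibration check: how far off are the model's stated confidences
# from its actual pass rate on the graded bench?
def calibration_error(predictions):
    # predictions: list of (stated_confidence, passed_bench) pairs
    buckets = {}
    for conf, passed in predictions:
        b = round(conf, 1)  # group into 0.0, 0.1, ..., 1.0 buckets
        buckets.setdefault(b, []).append(passed)
    # mean absolute gap between stated confidence and observed accuracy
    gaps = [abs(b - sum(v) / len(v)) for b, v in buckets.items()]
    return sum(gaps) / len(gaps)

# Fabricated example: the model said "90%" three times but passed only twice.
data = [(0.9, True), (0.9, True), (0.9, False), (0.5, True), (0.5, False)]
print(calibration_error(data))
```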

Well, really every second that you remain alive is a little bit of Bayesian evidence for quantum immortality: the likelihood of death during that second according to quantum immortality is ~0, whereas the likelihood of death if quantum immortality is false is >0. So there is a skewed likelihood ratio in favor of quantum immortality each time you survive one extra second (though of course the Bayesian update is very small until you get pretty old, because both hypotheses assign very low probability to death when young)
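The per-second update can be made concrete with a toy odds calculation (the annual death probabilities below are rough invented figures, not actuarial data):

```python
# Toy odds update for surviving one more second. The Bayes factor in favor of
# quantum immortality is P(survive | QI) / P(survive | no QI) = 1 / (1 - p),
# where p is the ordinary per-second death probability.
def posterior_odds(prior_odds, p_death_per_second):
    return prior_odds * (1.0 / (1.0 - p_death_per_second))

seconds_per_year = 365.25 * 24 * 3600
# Rough annual death probabilities: ~0.1% at age 30, ~30% at age 100.
for age, p_year in [(30, 0.001), (100, 0.30)]:
    p_sec = p_year / seconds_per_year
    print(age, posterior_odds(1.0, p_sec))
```

Both factors come out barely above 1, matching the point that each second's update is tiny, though far larger at 100 than at 30.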

If we take the third-person view, there is no update until I am over 120 years old. This approach is more robust as it ignores differences between perspectives and is thus more compatible with Aumann's theorem: insiders and outsiders will have the same conclusion.

Imagine that there are two worlds: in the first, 10 billion people live; in the second, 10 trillion. Now we get information that there is a person from one of them who has a survival chance of 1 in a million (but no information on how he was selected). This does not help choose between worlds, as such people are present in both worlds. Next, we get information that there is a person who has a 1 in a trillion chance to survive. Such a person has less than a 0.01 chance to exist in the first world, but there are around 8 such people in the second world. (The person, again, is not randomly selected; we just know that she exists.) In that case, the second world is around 100 times more probable to be real.

In the Earth case, it would mean that 1000 more variants of Earth actually exist, which could be best explained by MWI (but alien worlds may also count).
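The two-worlds arithmetic can be checked directly, treating "a person with survival chance p exists" as "at least one of N independent people survived" (a simplifying assumption for the sketch):

```python
# Probability that at least one of n_people survives an event each survives
# independently with probability p_survive.
def p_at_least_one(n_people, p_survive):
    return 1.0 - (1.0 - p_survive) ** n_people

p = 1e-12                           # 1-in-a-trillion survival chance
small = p_at_least_one(10e9, p)     # 10 billion people -> roughly 0.01
big = p_at_least_one(10e12, p)      # 10 trillion people -> roughly 1
print(small, big, big / small)      # the ratio comes out around 100
```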

I just want to say that I appreciate this post, and especially the "What it might look like if this gap matters" sections. They were super useful for contextualizing the more abstract arguments, and I often found myself scrolling down to read them before actually reading the corresponding section.

I'll definitely agree that most people seem to prefer having their own kids to adopting kids. But is this really demonstrating an intrinsic desire to preserve our actual physical genes, or is it more just a generic desire to "feel like your kids are really yours"?

I think we can distinguish between these cases with a thought experiment: Imagine that genetic engineering techniques become available that give high IQs, strength, height, etc., and that prevent most genetic diseases. But, in order to implement these techniques, lots and lots of genes must be mod... (read more)

So basically you admit that humans are currently an enormous success according to inclusive fitness, but at some point this will change - because in the future everyone will upload and humanity will go extinct

Not quite - I take issue with the certainty of the word "will" and with the "because" clause in your quote. I would reword your statement the following way:

"Humans are currently an enormous success according to inclusive fitness, but at some point this may change, due to any number of possible reasons which all stem from the fact that humans do not ex... (read more)

Ahh but they do. Humans generally do explicitly care about propagating their own progeny/bloodlines, and always have - long before the word ‘gene’. And this is still generally true today - adoption is last resort, not a first choice.

I see your point, and I think it's true right at this moment, but what if humans just haven't yet taken the treacherous turn?

Say that humans figure out brain uploading, and it turns out that brain uploading does not require explicitly encoding genes/DNA, and humans collectively decide that uploading is better than remaining in our physical bodies, and so we all upload ourselves and begin reproducing digitally instead of through genes. There is a sense in which we have just destroyed all value in the world, from the anthropomorphized Evolution's perspective.

If... (read more)

So basically you admit that humans are currently an enormous success according to inclusive fitness, but at some point this will change - because in the future everyone will upload and humanity will go extinct. Sorry but that is ridiculous. I'm all for uploading, but you are unjustifiably claiming enormous probability mass in a very specific implausible future. Even when/if uploading becomes available, it may never be affordable for all humans, and even if/when that changes, it seems unlikely that all humans would pursue it at the expense of reproduction. We are simply too diversified. There are still uncontacted peoples, left behind by both industrialization and modernization. There will be many left behind by uploading. The more likely scenario is that humans persist and perhaps spread to the stars (or at least the solar system) even if AI/uploads spread farther faster and branch out to new niches. (In fact far future pure digital intelligences won't have much need for earth-like planets or even planets at all and can fill various low-temperature niches unsuitable for bio-life). Humanity didn't cause the extinction of ants, let alone bacteria, and it seems unlikely that future uploads will cause the extinction of bio humanity.

Thanks for coming today, everyone! For anyone who is interested in starting a regular Princeton meetup group / email list / discord, shoot me an email at, and I'll set something up!

I agree. I find myself in an epistemic state somewhat like: "I see some good arguments for X. I can't think of any particular counter-argument that makes me confident that X is false. If X is true, it implies there are high-value ways of spending my time that I am not currently doing. Plenty of smart people I know/read believe X; but plenty do not"

It sounds like that should maybe be enough to coax me into taking action about X. But the problem is that I don't think it's that hard to put me in this kind of epistemic state. Eg, if I were to read the right bl... (read more)

I feel the same. I think there are just a lot of problems which one could try to solve that would increase the good in the world. The difference between alignment and the rest seems to be that the probability of humans going extinct is much higher.

A few more instances of cheap screening of large numbers:

  • I've seen people complain about google-style technical interviews, because implementing quicksort in real-time is probably not indicative of what you'll be doing as a software engineer on the job. But google has enough applicants that it doesn't matter if the test is noisy; some genuinely good candidates may fail the test, but there are enough candidates that it's more efficient to just test someone else than to spend more time evaluating any one candidate
  • Attractive women on dating apps. A man's dati
... (read more)

I'll offer up my own fasting advice as well:

I (and the couple of people I know who have also experimented with fasting) have found it to be a highly trainable skill. Doing a raw 36-hour fast after never having fasted before may be miserable; but doing the same fast after two weeks of 16-8 intermittent fasting will probably be no big deal.

Before I started intermittent fasting, I'd done a few 30-hour fasts, and all of them got very difficult towards the end. I would get headaches, feel very fatigued, and not really be able to function from hours 22-30. When ... (read more)

Matt Goldenberg (2y):
This was my experience as well. Fasting started out pretty hard for me, but eventually I moved to regular 84-hour fasts for a while.

I'm in a similar place, and had the exact same thought when I looked at the 80k guide.

Yes that was my reasoning too. The situation presumably goes:

  1. Omicron chooses a random number X, either prime or composite
  2. Omega simulates you, makes its prediction, and decides whether X's primality is consistent with its prediction
  3. If it is, then:
    1. Omega puts X into the box
    2. Omega teleports you into the room with the boxes and has you make your choice
  4. If it's not, then...? I think the correct solution depends on what Omega does in this case.
    1. Maybe it just quietly waits until tomorrow and tries again? In which case no one is ever shown a case where the box does no
... (read more)

I remember hearing from what I thought were multiple sources that your run-of-the-mill PCR test had something like a 50-80% sensitivity, and therefore a pretty bad Bayes factor for negative tests. But that doesn't seem to square with these results - any idea what I'm thinking of?

I remember something like what you're talking about, I think -- googling finds e.g. making this case. I think a lot of these numbers are unfortunately sensitive to various conditions and assumptions, and PCR has been taken as the "gold standard" in many ways, which means that PCR is often being compared against just another PCR. My impression was that, when properly performed, RT-PCR should be exquisitely sensitive to RNA in the sample, but that doesn't help if the sample doesn't have any RNA in it (e.g. when someone is very newly infected.) I had assumed that's where the discrepancy comes from. But then in googling for the limit of sensitivity, I found this: assessing different PCR tests against each other. The best had a "limit of detection" of 100 copies of RNA per mL of sample. But apparently there is a LOT of variation between commercially-available PCR tests. :-(
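For concreteness, here is the Bayes arithmetic for a negative result using the 50-80% sensitivity figures from the parent comment. The prior and specificity are invented purely for illustration, and this is not medical advice:

```python
# Posterior probability of infection after one negative test, via odds form:
# P(neg | infected) = 1 - sensitivity, P(neg | not infected) = specificity,
# so the Bayes factor for a negative result is (1 - sens) / spec.
def posterior_after_negative(prior, sensitivity, specificity=0.99):
    odds = prior / (1 - prior)
    odds *= (1 - sensitivity) / specificity
    return odds / (1 + odds)

# With an invented 10% prior, low sensitivity leaves a lot of residual risk:
for sens in (0.5, 0.8):
    print(sens, posterior_after_negative(prior=0.10, sensitivity=sens))
```

At 50% sensitivity a negative test only cuts a 10% prior to roughly 5%, which is why a low-sensitivity test makes for a weak negative result.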

I agree. It makes me really uncomfortable to think that while Hell doesn't exist today, we might one day have the technology to create it.

I’m disappointed that a cooperative solution was not reached

I think you would have had to make the total cooperation payoff greater than the total one-side-defects payoff in order to get cooperation as the final result. From a "maximize money to charity" standpoint, defection seems like the best outcome here (I also really like the "pre-commit to flip a coin and nuke" solution). You'd have to believe that the expected utility/$ of the "enemy" charity is less than 1/2 of the expected utility/$ of yours; otherwise, you'd be happier with the enemy side defecting than with cooperation. I personally wouldn't be that confident about the difference between AMF and MIRI.

Jeffrey Ladish (2y):
Oh I should have specified, that I would consider the coin flip to be a cooperative solution! Seems obviously better to me than any other solution.

And I'm not entirely sure you should call it a defection. Perhaps it's more a cooperative outcome with a potential side payment. With the single defect and a $100 side payment by the remaining group to the nuked group, you've accomplished a Pareto move to a superior outcome: both organizations are at least as well off as if neither were nuked. And if the nuked group thinks the other is doing just as good work, then even without the side payment they might consider it a wash who actually gets the additional $100.

What I would be really interested in is just how this outcome ac... (read more)

This is exactly right! It's a poor analogy for the Cold War both because the total payoff for defection was higher than the total payoff for cooperation, and because the reward was fungible. The cooperative solution is for one side to "nuke", in order to maximize the total donation to both organizations, and then to use additional donations to even out the imbalance if necessary. That's exactly what happened, and I'm glad the "nuking" framing didn't prevent EAs from seeing what was really happening and going for the optimal solution.

For those of us who don't have time to listen to the podcasts, can you give a quick summary of which particular pieces of evidence are strong? I've mostly been ignoring the UFO situation due to low priors. Relatedly, when you say the evidence is strong, do you mean that the posterior probability is high? Or just that the evidence causes you to update towards there being aliens? I.e., is the evidence sufficient to outweigh the low priors/complexity penalties that the alien hypothesis seems to have?

FWIW, my current view is something like:

  • I've seen plenty of vi
... (read more)
The US military claims that on multiple occasions they have observed ships do things well beyond our capacities. There are cases where a drone is seen by multiple people and recorded by multiple systems flying in ways well beyond our current technology, to the point where it is more likely the drones are aliens than something built by SpaceX or the Chinese. The aliens are not hiding; they are making it extremely obvious that they are here, it is just that we are mostly ignoring the evidence. The aliens seem to have a preference for hanging around militaries, and militaries have a habit of classifying everything of interest. I don't understand why the aliens don't reshape the universe building things like Dyson spheres, but perhaps the aliens are like human environmentalists who like to keep everything in its natural state.

Hanson's theory is that life is extremely rare in the universe but panspermia might be true. Consequently, even though our galaxy might be the only galaxy in 1 billion light years to have any life, our galaxy might have two advanced civilizations, and it would make sense that if the other civ is more than a million years in advance of us, they would send ships to watch us. Panspermia makes the Bayesian prior of aliens visiting us, even given that the universe can't have too much advanced life or we would see evidence of it, not all that low, perhaps 1/1,000.

I don't know why they don't use language to communicate with us, but it might be like humans sending deep sea probes to watch squids. I think the purpose of the UFOs might be for the aliens to show us that they are not a threat. If, say, we encounter the aliens' home planet in ten thousand years and are technologically equal to the aliens because both of us have solved science, the aliens can say, "obviously we could have wiped you out when you were primitive, so the fact that we didn't is evidence we probably don't now mean you harm."

I really like this post for two reasons:

  1. I've noticed that when I ask someone "why do you believe X", they often think that I'm asking them to cite sources or studies or some such. This can put people on the defensive, since we usually don't have ready-made citations in our heads for every belief. But that's not what I'm trying to ask; I'm really just trying to understand what process actually caused them to believe X, as a matter of historical fact. That process could be "all the podcasters I listen to take X as a given", or "my general life experience/int
... (read more)

For the first thing, I have lately been trying to shift to asking people to tell me the story of how they came to that belief. This is doubly useful because only a tiny fraction of the population actually has the process of belief formation explicit enough in their heads to tell me.

In a similar vein, there's a bunch of symphony of science videos. These are basically remixes of random quotes by various scientists, roughly grouped by topic into a bunch of songs.

I'm really enjoying these :D
[I fixed your link; you used Markdown syntax but were in the WYSIWYG editor :)]

If, on the other hand, heritability is high, then throwing more effort/money at how we do education currently should not be expected to improve SAT scores

I agree with spkoc that this conclusion doesn't necessarily follow from high heritability. I think it would follow from high and stable heritability across multiple attempted interventions.

An exaggerated story for the point I'm about to make: imagine you've never tried to improve SAT scores, and you measure the heritability. You find that, in this particular environment, genetic variance explains 100% of ... (read more)

True, but "high and stable heritability" across hundreds (perhaps thousands) of attempted interventions is a pretty good description of the real-world results of education research and practice. See Freddie DeBoer's "Education Doesn't Work" for a brief treatment or Kathryn Paige Harden's The Genetic Lottery for a book-length version.
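The point about changing environments can be illustrated with the standard variance decomposition, h² = V_G / (V_G + V_E): an intervention that shrinks environmental variance raises measured heritability without changing the genes at all (the numbers below are invented):

```python
# Heritability as the genetic share of total phenotypic variance.
def heritability(v_genetic, v_environment):
    return v_genetic / (v_genetic + v_environment)

before = heritability(v_genetic=100, v_environment=100)  # mixed environments
# A uniform, effective intervention equalizes environments, shrinking V_E:
after = heritability(v_genetic=100, v_environment=20)
print(before, after)  # heritability rises even though nothing genetic changed
```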

People get fat eating fruits

Are you implying that there are examples of people like BDay mentioned, who are obese despite only eating fruits/nuts/meat/veggies? Or just that people can get fat while including fruit in the diet? I'd be surprised and intrigued if it were the former. 

I've tried the whole foods diet, and I've personally found it surprisingly hard to overeat, even when I let myself eat as many fruits and nuts as I want. You can only eat so many cashews before they start to feel significantly less appetizing. And after I've eaten 500 cal of ... (read more)

I was not stating that I believe a whole foods diet won't be helpful for many people, just pointing out that not all whole foods are good if you need to lose weight. Most diets work a little, and whole foods is one people find easy to understand (and, I suspect, to live with). It isn't just better than nothing; it could genuinely be useful.

I am implying that adding fruit to a diet is not helpful whatsoever for weight (unless you want to gain weight and just need more calories). Fruit makes many people much hungrier due to very high sugar and general carb counts, and causes both physical and psychological cravings, while not providing the fats and proteins people need to stop craving food. I do not know of anyone trying a fruit-only diet (which would be very stupid), so I can't say I have evidence that they would be fat if eating only fruit.

I do agree with you that the minimal extra effort to prepare the fruits for eating does often help reduce the amount eaten, but I would say this works much better for people who don't have significant physiological cravings to eat. If you are normal weight and healthy, it isn't that bad to eat fruit once in a while, just like a cookie or two won't hurt you. For people that actually have trouble due to overeating, fruit is still very binge-able. (Fruit cravings are definitely something I've seen a lot of in the obese people I know.)

Minimally processed meats and most vegetables are not prone to fattening people, while I believe certain nuts (like cashews) are. Cashews are not particularly satiating (notably, the body only finds saturated fats satiating, not unsaturated), and do not fill the stomach either. For the same (high) number of calories, it would be vastly harder to eat meat than cashews, even if you like meat more. I have nothing against fat being part of the diet, but cashews just don't work that well.

edit: moved a paragraph, changed the spelling of a word
Fruits have lots of fiber. Fiber both reduces sugar absorption in the gut and slows it down, evening out the amount of sugar that gets into the bloodstream over time (avoiding peaks that cause mass insulin production, followed by a sugar dip when insulin keeps being produced while sugar intake drops, causing sudden fatigue). Fiber also fills the stomach, stretching it, which signals satiety. You only get those benefits if you eat the whole fruit. In juices, slushies and the like, the fiber has been cut into small pieces and its effect is significantly reduced.

Couple more:

"he wasn't be treated"

"Club cast cast Lumos"

Fixed. Thanks.

It seems to me that the hungry->full Dutch book can be resolved by just considering the utility function one level deeper: we don't value hungriness or fullness (or the transition from hungry to full) as terminal goals themselves. We value moving from hungry to full, but only because doing so makes us feel good (and gives nutrients, etc). In this case, the "feeling good" is the part of the equation that really shows up in the utility function, and a coherent strategy would be one for which this amount of "feeling good" can not be purchased for a lower cost.

In the event anyone reading this has objective, reliable external metrics of extremely-high ability yet despite this feels unworthy of exploring the possibility that they can contribute directly to research

Huh, that really resonates with me. Thanks for this advice.

Seconded, that line really hit home for me

For the record, here's what the 2nd place CooperateBot [Insub] did:

  • On the first turn, play 2.
  • On other turns:
    • If we added up to 5 on the last round, play the opponent's last move
    • Otherwise, 50% of the time play max(my last move, opponent's last move), and 50% of the time play 5 minus that

My goal for the bot was to find a simple strategy that gets into streaks of 2.5's as quickly as possible with other cooperation-minded bots. Seems like it mostly worked.
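The bullet points above can be sketched as a move function (a minimal reconstruction; the tournament harness, scoring rules, and interface are not shown and are assumed here):

```python
import random

# Sketch of the CooperateBot [Insub] move logic described above.
def cooperate_bot_move(my_last, opp_last, first_turn):
    if first_turn:
        return 2
    if my_last + opp_last == 5:
        # We summed to 5 last round: mirror the opponent to keep the
        # alternating 2/3 streak (averaging 2.5 per round) going.
        return opp_last
    # Otherwise, coin-flip between the higher of the two last moves and
    # its complement to 5, hoping to stumble into a cooperative rhythm.
    high = max(my_last, opp_last)
    return high if random.random() < 0.5 else 5 - high
```

With two copies of this bot, one lucky coin flip (one plays `high`, the other `5 - high`) locks them into the mirrored 2/3 cycle for the rest of the match.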

Is something strange going on in the Round 21-40 plot vs the round 41-1208 plot? It looks like the line labeled MeasureBot in the Round 21-40 plot switches to be labeled CooperateBot [Insub] in the Round 41-1208 plot. I hope my simple little bot actually did get second place!

I thought the same thing when I first saw the graphs, but I think the crossover happened near round 400 where the line dips down and is obscured by the labels. This is consistent with lsusr's obituary comment showing MeasureBot died shortly afterward at round 436.