Is this something Stampy would want to help with?
I think that incentivizes self-deception about probabilities. Also, probabilities below 10^-10 are pretty unusual, so I'd expect that to cause very little to happen.
Thanks! When you say "They do, however, have the potential to form simulacra that are themselves optimizers, such as GPT modelling humans (with pretty low fidelity right now) when making predictions", do you mean things like "write like Ernest Hemingway"?
Is it true that current image systems like stable diffusion are non-optimizers? How should that change our reasoning about how likely it is that systems become optimizers? How much of a crux is "optimizeriness" for people?
Why do people keep saying we should maximize log(odds) instead of odds? Isn't each 1% of survival equally valuable?
In addition to Daniel's point, I think an important piece is probabilistic thinking: the AGI will act not based on what will happen but on what it expects to happen. What probability of success is acceptable? If none is, it should do nothing.
Have you written about your update to slow takeoff?
Nice! Added these to the wiki on calibration: https://www.lesswrong.com/tag/calibration
Oh, whoops. I gathered from this later tweet in the thread that they were talking.
After years of tinkering and incremental progress, AIs can now play Diplomacy as well as human experts.
Maybe this happened in 2022: https://twitter.com/polynoamial/status/1580185706735218689