Hey! This is another update from the distillers at the AI Safety Info website (and its more playful clone Stampy).
Here are some of the answers we wrote up over the last month (April 2023). As always, let us know if you have any questions you would like to see answered.
The list below links to the individual answers, while the collective URL above renders all of them on a single page.
Crossposted to the Effective Altruism Forum: https://forum.effectivealtruism.org/posts/FyfBJJJAZAdpfE9MX/stampy-s-ai-safety-info-new-distillations-2