AI Success Models

Open to a better name for this. The reason I went with this one (rather than Alignment Proposals, Success Stories, or just Success Models) is that I liked capturing it as the mirror of threat models, and including "AI" feels like a natural scoping since the other x-risks don't have clear win conditions, unlike threat models, which apply widely. I would also like to include this in the AI box in the portal, since it feels like a super important tag, and including "AI" makes that more likely.

AI Success Models are proposed paths to an existential win via aligned AI. They are (so far) high-level overviews rather than complete plans, but they present at least a sketch of what a full solution might look like. They can be contrasted with threat models, which are stories about how AI might lead to major problems.