Mo Putera
Feedback welcome: www.admonymous.co/mo-putera
Long-time lurker (since c. 2013), recent poster. I also write on the EA Forum.
For my own reference: some "benchmarks" (very broadly construed) I pay attention to.
For the Open Philanthropy AI Worldview Contest
Executive Summary
* Conditional on AGI being developed by 2070, we estimate the probability that humanity will suffer an existential catastrophe due to loss of control (LoC) over an AGI system at ~6%, using a quantitative scenario model that attempts to systematically assess...
To illustrate what I mean, switching from p(doom) to timelines:
* The recent post AGI Timelines in Governance: Different Strategies for Different Timeframes was useful to me in pushing back against Miles Brundage's argument that "timeline discourse might be overrated", by showing how the choice of actions (in particular in the...