Mo Putera
Non-loss of control AGI-related catastrophes are out of control too
For the Open Philanthropy AI Worldview Contest

Executive Summary

* Conditional on AGI being developed by 2070, we estimate the probability that humanity will suffer an existential catastrophe due to loss of control (LoC) over an AGI system at ~6%, using a quantitative scenario model that attempts to systematically assess...
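To make the phrase "quantitative scenario model" concrete, here is a minimal sketch of one common approach: decompose the loss-of-control outcome into a chain of conditional steps and multiply their probabilities. The stage names and numbers below are my own illustrative assumptions, not the entry's actual structure or estimates; they are picked only so the product lands near the ~6% headline figure.

```python
# Hypothetical sketch, NOT the contest entry's actual model: one common form of
# "quantitative scenario model" decomposes an outcome into a chain of conditional
# steps and multiplies their probabilities. Every stage name and number below is
# an illustrative assumption.

p_agi_misaligned = 0.40            # assumed: AGI ends up pursuing misaligned goals
p_evades_control = 0.35            # assumed: misaligned AGI evades oversight/containment
p_catastrophe_given_escape = 0.45  # assumed: uncontrolled misaligned AGI causes existential catastrophe

p_loc_doom = p_agi_misaligned * p_evades_control * p_catastrophe_given_escape
print(f"p(existential catastrophe via LoC | AGI by 2070) ~= {p_loc_doom:.1%}")  # ~6.3%
```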
How should we think about the decision relevance of models estimating p(doom)?
To illustrate what I mean, switching from p(doom) to timelines:

* The recent post AGI Timelines in Governance: Different Strategies for Different Timeframes was useful to me in pushing back against Miles Brundage's argument that "timeline discourse might be overrated", by showing how choice of actions (in particular in the...
Rant/nitpick: I know it's not central, but the choice of indicators to pay attention to here annoyed me; it seems subpar and potentially misleading about real-world value (although I guess they're non-issues if your ToC for TAI/PASTA/etc centrally routes...