Alternatively, do you see a benefit to having a company leading on capability development articulate its principles, evaluations, and safety findings so thoroughly? While the odds that the U.S. federal government imposes (useful) regulations on American frontier labs seem low (1:7?), for the near-to-mid-term future the upside of "safety-washing" could be building consensus on norms among OpenAI, DeepMind, and so forth.
Tangentially related, but still: is there a world where survival-weighted hedging is mediated through prediction markets like Polymarket or Kalshi? How does this mode of decision analysis apply to making short-term bets on trajectories to AGI?