

What I'd change about different philosophy fields

Who ignores, or argues against courage and honesty?

As an intrinsic value? Lots of utilitarians, myself included. I'm unsure whether Rob meant to suggest these are values worth respecting intrinsically or just instrumentally.

Cooperative Oracles: Introduction

(At the risk of necroposting:) Was this paper ever written? I can't seem to find it, but I'm interested in any developments on this line of research.

In a multipolar scenario, how do people expect systems to be trained to interact with systems developed by other labs?

It seems plausible that transformative agents will be trained exclusively on real-world data (without using simulated environments), including social media feed-creation algorithms and algo-trading algorithms. In such cases, the researchers don't choose how to implement the "other agents" (the other agents are just part of the real-world environment that the researchers don't control).

I have quite a different intuition on this, and I'm curious if you have a particular justification for expecting non-simulated training for multi-agent problems. Some reasons I expect otherwise:

  • At the very least, in the early days, you simply won't have much (accurate, up-to-date) data on the behavior of AI systems made by other labs because they're new. So you'll probably need to use some simulations and/or direct coordination with those other labs.
  • Even as the systems mature, if there's rapid improvement over time, as seems likely, the relevant data that would let you accurately predict the behavior of the other systems will again be sparse.
  • Even if data are abundant, presumably for some strategic purposes developers won't want to design their AIs to behave so predictably that learning from such data will be sufficient. If you want to put your AI to use in any task that involves interacting with AIs you didn't create, you'll probably need some combination of game-theoretically informed models and augmentation of the data to account for distributional shifts.
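To make the last bullet concrete, here is a minimal toy sketch (my own hypothetical setup, not from the thread): a 2x2 game where a "naive" agent best-responds to a point estimate of another lab's policy fitted from logged data, while a "robust" agent best-responds to the worst case over an interval of plausible policies, a crude stand-in for game-theoretically informed handling of distributional shift. All names, payoffs, and probabilities are illustrative assumptions.

```python
# Hypothetical example: playing against an AI from another lab whose policy
# may have shifted since the logged data were collected.

# Row player's payoffs in a 2x2 game: PAYOFF[our_action][their_action].
PAYOFF = [[3.0, 0.0],
          [2.0, 2.0]]

def expected_payoff(action, p_their_1):
    """Expected payoff when the opponent plays action 1 with prob p_their_1."""
    return (1 - p_their_1) * PAYOFF[action][0] + p_their_1 * PAYOFF[action][1]

def naive_best_response(p_hat):
    """Best-respond to a point estimate fitted from (possibly stale) logged data."""
    return max((0, 1), key=lambda a: expected_payoff(a, p_hat))

def robust_best_response(p_hat, shift=0.4):
    """Maximin over an interval of plausible opponent policies (distributional shift)."""
    lo, hi = max(0.0, p_hat - shift), min(1.0, p_hat + shift)
    # Payoffs are linear in p, so the worst case is at an endpoint of the interval.
    worst = lambda a: min(expected_payoff(a, p) for p in (lo, hi))
    return max((0, 1), key=worst)

p_hat = 0.2   # estimate from logged behavior of the other lab's system
p_true = 0.5  # its actual, since-updated policy

a_naive = naive_best_response(p_hat)    # picks 0: best against the estimate
a_robust = robust_best_response(p_hat)  # picks 1: safer under shift
print(expected_payoff(a_naive, p_true), expected_payoff(a_robust, p_true))
```

Under the shifted true policy, the naive choice earns 1.5 while the robust choice earns 2.0: the point is just that optimizing hard against logged behavior can be brittle when the other system keeps changing.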