pde

Previously, Chief Computer Scientist at EFF and Director of Research at the Partnership on AI. Currently, co-founder of the AI Objectives Institute. https://pde.is

Comments

pde · 2y · 40

Our first public communications probably over-emphasized one aspect of our thinking: that some kinds of bad outcomes from markets (or outcomes that are bad according to some people's preferences) can be thought of as missing components of the objective function that those markets are systematically optimizing. The corollary is absolutely not that we should dismantle markets or capitalism, but that we should take an algorithmic approach to whether and how to add those missing incentives.
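
A minimal sketch of that framing, with invented numbers (a hypothetical toy one-good model, not our actual framework): the market's objective below omits an externality term, and restoring the missing term moves the optimum.

```python
def profit(q):
    """Producer surplus: revenue minus a convex production cost."""
    return 10 * q - q ** 2

def externality(q):
    """Harm to third parties that the market's objective omits."""
    return 0.5 * q ** 2

def argmax(f, lo=0.0, hi=10.0, steps=1000):
    """Crude grid search; plenty for a one-dimensional toy."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(xs, key=f)

q_market = argmax(profit)                                # externality ignored
q_social = argmax(lambda q: profit(q) - externality(q))  # missing term restored

print(f"market optimum: q = {q_market:.2f}")  # ~5.00
print(f"social optimum: q = {q_social:.2f}")  # ~3.33
```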

A point that we probably under-emphasized at first is that intervening in market systems (whether through governmental mechanisms like taxes, subsidies, or regulation, or through private-sector mechanisms like ESG objectives or product-labeling schemes) carries a significant risk of unintended harmful consequences via Goodhart's law and related processes, and that these failures can be viewed as deeply analogous to AI safety failures.
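
Here's a minimal Goodhart sketch, again with invented numbers (illustrative only, not a model of any real intervention): an agent rewarded on a proxy metric shifts all of its effort into gaming the metric, and the true goal ends up worse off than with no intervention at all.

```python
def true_goal(real_work, gaming):
    """What we actually care about; gaming the metric erodes it."""
    return real_work - 0.5 * gaming

def proxy_metric(real_work, gaming):
    """What the intervention measures and rewards."""
    return real_work + 2.0 * gaming

# An agent splits a fixed budget of 10 effort units between real work and
# metric-gaming, and is rewarded on the proxy alone.
g_star = max(range(11), key=lambda g: proxy_metric(10 - g, g))

print(f"proxy-optimal gaming: {g_star}")                    # 10: all effort gamed
print(f"proxy value: {proxy_metric(10 - g_star, g_star)}")  # 20.0
print(f"true value:  {true_goal(10 - g_star, g_star)}")     # -5.0, worse than doing nothing
```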

We think that people with left- and right-leaning perspectives on economic policy disagree in part because they hold different Bayesian priors about the relative likelihood of something going wrong in the world because markets fail to optimize for the right outcome, versus because some bureaucracy intervened in people's lives or in market processes in an unintentionally (or deliberately) harmful way.
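
A toy Bayesian update to illustrate (the priors and likelihoods here are made up): two observers see the same evidence of a bad outcome but start from different priors over its cause, and end up with quite different posteriors.

```python
def p_market_given_evidence(prior_market, like_market, like_reg):
    """P(market failure | bad outcome) via Bayes' rule over two hypotheses."""
    prior_reg = 1.0 - prior_market
    num = prior_market * like_market
    return num / (num + prior_reg * like_reg)

# Suppose both observers agree on how likely the evidence is under each
# hypothesis...
like_market, like_reg = 0.6, 0.4

# ...but start from different priors about which failure mode dominates.
for label, prior in [("market-skeptical prior", 0.8), ("regulation-skeptical prior", 0.2)]:
    p = p_market_given_evidence(prior, like_market, like_reg)
    print(f"{label}: P(market failure | evidence) = {p:.2f}")
# market-skeptical prior:     0.86
# regulation-skeptical prior: 0.27
```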

To us it seems very likely that both kinds of bad outcomes occur at some rate, and the goal of the AI Objectives Institute is to reduce the rates of both market and regulatory failure. Of course, there are also political disagreements about which goals should be pursued (which I'd call object-level politics, and on which we're trying not to take strong organizational views) and about how economic goals should be chosen (where we may take particular positions, but we'll try to do so carefully).

pde · 2y · 180

Hi!

I'm a co-founder of the AI Objectives Institute. We're pretty interested in the critical view you've formed about what we're working on! We think it's most likely that we just haven't done a very good job of explaining our thinking yet -- you say we have a political agenda, but as a group we're trying quite hard to avoid having an inherent object-level political agenda, and we're actively looking for input from people with political perspectives different from ours. It's also quite possible that you have deep and reasonable criticisms of our plan that we should take on board. Either way, we'd be interested in having a conversation, trying to synchronize models, and looking for cruxes for any disagreements, if you're open to it!