[ Question ]

How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative?

by Raemon · 21st Feb 2019 · 16 comments



Following up on "If a 'Kickstarter for Inadequate Equilibria' was built, do you have a concrete inadequate equilibrium to fix?"

I think a kickstarter for coordinated action would be net positive, but it's the sort of general-purpose, powerful tool that might turn out badly in ways I can't easily predict. It might give too much power to mobs of people who don't know what they're doing, or who have weird or bad goals.

How bad might it be if misused? What equilibria might we end up in, in a world where everyone has free access to such a tool?
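To make the mechanism concrete, here's a minimal sketch of the kind of conditional-commitment ("assurance contract") platform I have in mind. The class, the threshold rule, and the coal example are purely illustrative, not a description of any actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Campaign:
    """A hypothetical conditional-commitment campaign:
    pledges only bind anyone if the threshold is reached."""
    description: str
    threshold: int                       # minimum number of pledgers required
    pledgers: set = field(default_factory=set)

    def pledge(self, user_id: str) -> None:
        # Pledging costs nothing unless the campaign succeeds.
        self.pledgers.add(user_id)

    def resolve(self) -> bool:
        """At the deadline: everyone's commitment activates iff enough people pledged."""
        return len(self.pledgers) >= self.threshold

# Usage sketch: nobody is bound to act until 1,000 people have pledged.
campaign = Campaign("Switch to a non-coal electricity provider", threshold=1000)
campaign.pledge("alice")
campaign.pledge("bob")
print(campaign.resolve())   # False: only 2 of 1,000 pledged, so nobody is committed
```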


5 Answers

I think a lot of people have high time-discount rates, resulting in a pretty adversarial relationship with their future selves, such that contracts that allow them to commit to arbitrary future actions are bad. For example, imagine a drug addict being offered the chance to commit themselves to slavery a month from now, in exchange for some drugs right now. I would argue that the existence of this offer is net negative from a humanitarian perspective.

I think there is a part of human psychology, and of human culture, that expects large commitments to be accompanied by significant-seeming rituals, in order to help people grok the significance of what they are committing to (example: marriage). As such, I think it would be important for this platform to limit the kinds of things people can commit to, to stuff that wouldn't be extremely costly to their future selves (though this is already mostly covered by modern contract law, which largely prevents you from signing contracts that are extremely costly to your future self).

A couple of thoughts:

  • When the mechanism is new, and especially if it's somewhat complicated, you could imagine many people using it and accidentally committing to things that they didn't quite realise they were signing up for, or otherwise becoming over-committed.
  • In a recent 80,000 Hours podcast, Glen Weyl makes the point that some of his proposed mechanisms for changing how voting and public-goods provision work are designed under a self-interest assumption, which might not hold for people in contexts like voting, where they're used to considering the good of the nation. As such, if this type of system were naively designed and initially deployed at significant scale, you could imagine weird problems like over-investment in public goods.

Entrenched interests with a large body of people attached would be able to use such a thing more effectively than the public at large. Consider the case of coal power: it appears that most people have a soft preference for coal power plants to go away; coal power plant workers, coal miners, and coal mining towns have a very strong preference for them to remain or expand.

This could easily mean the coal industry effectively uses this mechanism to advance coal interests, or at least that overcoming coal's efforts becomes the minimum threshold for any campaign against them.

There doesn't seem to be a way to disentangle shifting inadequate equilibria from shifting equilibria in general, including the case of shifting to the same equilibrium, which might be thought of as fixing an equilibrium in place.

That being said, even if such activity takes place, it will (presumably) be rendered transparent by the mechanism. I think the new information gained across all equilibria could outweigh an evil shift or an evil fix on some of them.

Lack of follow-through means that too few people actually change and the new equilibrium is not achieved. This makes future coordination more difficult as people lose faith in coordination attempts in general.

If I were to be truly cynical, I could create or join a coordination effort for something I was against, spam a lot of fake accounts, get the coordination conditions met, and then watch it fail due to poor follow-through. Now people lose faith in the idea of coordinating to make that change.

Not sure how likely this is, how easy it would be to counter, or how much worse than the status quo coordination attempts could get...
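To illustrate how cheap this attack would be against a naive implementation, here's a toy model; the account names and the verification check are entirely hypothetical:

```python
# A toy model of the attack described above: a naive threshold check counts every
# pledge equally, so fake accounts can trip it for free.
def naive_resolve(pledgers: set, threshold: int) -> bool:
    return len(pledgers) >= threshold

fake_pledgers = {f"sockpuppet_{i}" for i in range(1000)}   # accounts with zero follow-through
print(naive_resolve(fake_pledgers, threshold=1000))        # True: "conditions met", nothing really changes

# One possible (equally hypothetical) mitigation: only count pledges that pass some
# identity or stake check, so sockpuppet accounts can't reach the threshold on their own.
def verified_resolve(pledgers: set, threshold: int, is_verified) -> bool:
    return sum(1 for p in pledgers if is_verified(p)) >= threshold
```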

For profitable ventures, the reciprocal-commitment way of doing things would be to build a co-op by getting everyone to commit to paying in large amounts of their own money to keep the lights on for the first six months, iff enough contributing members are found.
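To make the contrast with investors concrete, here's a toy sketch of that co-op arrangement, with entirely made-up figures: commitments only become binding if the pledged total covers the six-month runway.

```python
# Hypothetical co-op variant: members commit money rather than actions, and
# everyone is only on the hook iff total commitments cover six months of costs.
monthly_costs = 20_000
runway_needed = 6 * monthly_costs

commitments = {"member_1": 30_000, "member_2": 25_000, "member_3": 40_000}

if sum(commitments.values()) >= runway_needed:
    print("Co-op launches; commitments are collected.")
else:
    print("Threshold not met; nobody pays anything.")   # here: 95,000 < 120,000
```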

The current alternative is getting an investor. Investment, as a mechanism for shifting equilibria, has a lot of filters that make unviable ideas less likely to receive funding (the investor has an interest in making good bets, and experience in doing so), and that insulate the workers from risk (if the venture fails, it's the investor who eats the cost, not the workers).

It's conceivable that having reciprocal commitment technologies would open the way for lots of hardship as fools wager a lot of their own money on projects that never could have succeeded. It's conceivable that the reason the investor system isn't creating the change we want to see is that those changes aren't really viable yet under any system and "enabling" them would just result in a lot of pain. (I hope this isn't generally true, but in some domains it probably is.)