So here's the problem: Given a well-defined group charter, how should groups make decisions? You have an issue, you've talked it over, and now it's time for the group to take action. Different members have different opinions, because they're not perfect reasoners and because their interests don't reliably align with those of the group. What do you do? Historical solutions include direct democracy, representative democracy, various hierarchies, dictatorships, oligarchies, consensus... But what's the shoes-with-toes solution? How do they do it in Weirdtopia? What is the universally correct method that could be implemented by organizations, corporations, and governments alike?

This Tuesday, I posted an idea. I came up with it about ten, maybe fifteen years ago, decided it was awesome and revolutionary, and spent a few years doing some extremely ineffectual advocacy for it. I've pushed for it in various contexts on and off since then, but I'd basically shelved it until I had a better platform to talk about it. And recently I realized, belatedly, that Less Wrong is probably the perfect audience. You're going to be open to it, and you'll be able to competently critique it. And maybe some of you will use it.

I posted this tease first because the idea is a solution to a problem you may not have spent five minutes thinking about. I want to give you the opportunity to come up with other solutions, or to discuss how to rigorously evaluate ideas. Also, I'm really curious to see whether anybody independently reinvents mine. Over the years, I've had the experience several times of reading someone else's work and thinking they're about to start pitching my idea. But they never do. It all gives me complicated feelings.

Please discuss in the comments. Remember, we're aggregating the opinions, the judgment, of the group members, not their personal preferences. So answers like “instant-runoff voting,” “bargaining theory X,” or “coherent extrapolated volition” are not in themselves what I'm looking for. If you happen to know what my idea is, or manage to find it by google-stalking me, please don't give it away yet.

Dagon:

Downvoted for the tease aspect - don't do that. Instead, make a post that describes your assumptions and premises in concrete terms and lays the foundation for your proposal. Or put the whole thing in an open thread and see what responses you get and what you'd need to expand on before making a top-level post.

Not downvoted for the problems I suspect you'll hit. I very much hope you can address them, though:

You say "opinions, the judgment, of the group members, not their personal preferences," but there aren't many individuals who can clearly distinguish among these things. If your mechanism depends on rational participants, it won't work well for humans.

More importantly, you say that a well-defined group charter is a given. But no group actually has such a thing. Groups tend to have natural-language charters, and members routinely ignore even those.

What would you suggest as a better way to get solutions to a problem without risking contamination from your own?

[anonymous]:

He suggested open thread. I would say discussion.

[anonymous]:

What's wrong with the tease aspect? He didn't give away his solution, precisely so that you'd think independently. This is done all the time.

[This comment is no longer endorsed by its author]

The obvious answer from this crowd is some kind of prediction market, with the "group charter" being turned into a measurable utility function with which to make the judgments about the success or failure of a policy. If people are restricted to using only money from an equal "allowance", plus whatever they have earned from predictions, over time those who have made more accurate predictions gain the most influence on the outcomes of the decisions.
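Below is a minimal sketch of that allowance mechanism, assuming a binary outcome and a parimutuel payout rule; the class name, payout rule, and numbers are illustrative choices, not part of the comment's suggestion:

```python
# A toy parimutuel market with an equal per-round allowance. The payout
# rule and numbers are illustrative assumptions, not a worked-out design.

class ToyPredictionMarket:
    def __init__(self, members, allowance=10.0):
        self.balances = {m: 0.0 for m in members}
        self.allowance = allowance

    def pay_allowance(self):
        # Everyone gets the same stipend each decision cycle.
        for m in self.balances:
            self.balances[m] += self.allowance

    def run_round(self, bets, outcome):
        # bets: {member: (predicted_outcome, stake)}.
        # Winners split the whole pot in proportion to their stakes, so
        # repeatedly accurate members accumulate influence over time.
        pot = 0.0
        winning_stakes = {}
        for member, (prediction, stake) in bets.items():
            stake = min(stake, self.balances[member])  # can't bet money you lack
            self.balances[member] -= stake
            pot += stake
            if prediction == outcome:
                winning_stakes[member] = stake
        total = sum(winning_stakes.values())
        for member, stake in winning_stakes.items():
            self.balances[member] += pot * stake / total

market = ToyPredictionMarket(["alice", "bob", "carol"])
market.pay_allowance()
market.run_round({"alice": (True, 5), "bob": (False, 5), "carol": (True, 2)},
                 outcome=True)
print(market.balances)  # alice and carol split bob's stake
```

Run over many rounds, the equal allowance keeps everyone in the game while accuracy compounds into influence.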

Depending on the size of the group, such a system has a potentially catastrophic side effect: suppose we have to decide which of two techniques to use to solve a problem. I predict that technique A will not work, but the others disagree, and the group finally chooses technique A. What is my interest now? To ensure that the group fails, so that my prediction comes true.

A fundamental requirement of group decision-making is ensuring that the "dissidents" will still do their best to make the group succeed. Prediction markets may work when you can't really change the outcome of the prediction (when you predict what others will do), but not for (relatively small) group decisions where you'll be part of the process that finally succeeds or fails.

Robin Hanson has said that prediction markets have historically been extremely resilient against manipulation attempts. Historical markets are mostly those where the "bettors on beliefs" do not have a personal stake in the success of "technique A," like a group member would--so it seems like this futarchist method is overall better than historical group decision-making methods, even if there are some perverse incentive problems.

There are two problems with this that I see - one specific and one general.

The specific one is that one-level systems don't handle politics very well. For example, say a person or subgroup in your system accumulates most of the "control resource." What's to stop them from doing a bunch of political bullshit?

The general problem is that this system assumes you've already managed to agree on a measurable utility function, and so breaks down when the group has to somehow agree on a utility function.

There are two factors I can think of straight away that could prevent an imbalance of power from disrupting results. Firstly, the pool of prediction money is perpetually growing because of the allowance, so unless only one person or subgroup is making good predictions, there should be some balance. This is not guaranteed, but I expect it to be the case. Otherwise, the less likely solution is that everyone uses whatever allowance they haven't lost to try to cash in on the false predictions the rogue individual or subgroup is making. I do not expect this solution to be a consistent safety net, since the people without much power will have made bad predictions in the past.

And you're right, this does not help choose a utility function. In Robin Hanson's Futarchy proposal he advocates having elected representatives choose the utility function, and seems to dismiss the problems with that by saying that "By limiting democracy primarily to values, we would presumably focus voters more on expressing their values, rather than their beliefs." If we did have elected representatives, I think they would create a utility function that explicitly encourages policies they support. I haven't thought of a solution to this yet.

It seems likely that the optimal solution would, perhaps strongly, depend on details that are not given in your post. The composition and size of the group spring to mind, but there are other factors as well.
Is there any specific type of group that you're thinking of, or do you think that there is a one-size-fits-all solution? The latter seems unlikely unless you are thinking of a meta-strategy for choosing between decision making processes rather than a decision making process.

I am thinking of, basically, a one-size-fits-all solution. Obviously its implementation will vary greatly depending on the size, composition, and charter of the group. It could also be thought of as a meta-strategy.

What is the universally correct method that could be implemented by organizations, corporations, and governments alike?

If it's not a prediction market, I have no idea what he might have in mind. Any idea if HonoreDB's an amateur mathematician?

It's not a prediction market, although that was the first idea I remember having when I was trying to work this out.

I do do math.

Hmm, a mathematical framework then. (I doubt something like this would have a good solution that could be concisely expressed in words alone.)

Is it a voting system with some kind of safeguards against double-counting? As in, a single entity voting an infinite number of times for a particular option doesn't shift the outcome toward that option?

But no... you'd only come up with that if you were trying to optimise for anonymity.

I'm reasonably sure it's a mathematical framework, though.

He says ‘answers like “instant-runoff voting,” ... are not in themselves what I'm looking for’, so I think a voting system is not what he's thinking about.

Different members have different opinions, because they're not perfect reasoners and because their interests don't reliably align with those of the group.

I'm going to channel Margaret Thatcher: there is no such thing as society. That is, there is no such thing as "the group's interests" and attempting to compare the solution you have to the "group's interests" is the wrong question. The real question is, how do you aggregate the opinions of those members?

It seems almost impossible to me that there will be a "universally correct" solution, especially if where to draw group boundaries is part of the problem. If the optimal economic organization is hierarchical groups existing in a non-hierarchical ecosystem, then it seems wrong to claim that hierarchy is universally correct (it is terrible when applied globally) or that non-hierarchy is universally correct (it can be terrible when applied locally).

Examples:

  • A board of twelve people is in charge of deciding which grant proposals to accept. Their foundation's stated goal is to maximize the average number of hedons among the human race.
  • A corporation is trying to maximize its expected profits. Numerous decisions must be made on behalf of the corporation.
  • A constitutional government is charged with taking the action whose outcome will most please the majority of its citizens, while obeying the constitution.

A board of twelve people is in charge of deciding which grant proposals to accept. Their foundation's stated goal is to maximize the average number of hedons among the human race.

So they need to work out wtf a hedon is, put as many as possible in one person then kill all other humans?

Great idea, I'm sold. I heroically sacrifice myself to be the hedon-recipient.

You had best phrase your offer in a way that will appeal to whoever is trying to maximize average hedons. This, of course, means showing why they can put hedons into you more efficiently than into anyone else. The maximization process will involve killing nearly your entire species, leaving you alone. This being the case, the selected candidate will be one who takes sadistic delight in the death of others, currently has no meaningful bonds with his or her fellow humans, and above all has absolutely no standards regarding what will give them hedonistic pleasure.

So volunteering to be involved in such a horrible plan is actually a pretty good start.

That's one hell of a grant proposal/foundation.

That's one hell of a goal.

The first and third examples make me cringe. How in the world does one measure a hedon? And how are the citizenry supposed to predict the behavior of the law? A proper government exists to define the board, not direct the pieces.

As for the second example, people typically talk about "net present value" rather than "expected profits." They cash out similarly, but NPV explicitly includes factors like discounting and risk, which expected profits generally do not. And there it's easy to see how opinions will clash: how much do we value intangibles like reputation? What is our risk tolerance?

Those problems highlight my point: there is no objective "net present value" that we just need to discover. There are about as many net present values as there are decision-makers, and if they aren't coherent the corporation will be working at cross-purposes.

But how to make them cohere is a complicated problem, and I suspect that any solution that does not approach the problem in its complexity will be a vacuous solution.
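To make the NPV point concrete, here is a minimal calculation; the cash flows and discount rates are invented for the example, and show how a project with positive expected profit can flip to a loss once discounting is applied:

```python
# Minimal illustration of why NPV and expected profit can disagree.
# The cash flows and discount rates are made up for the example.

def npv(cash_flows, discount_rate):
    """Net present value: sum of cash_flows[t] / (1 + r)^t."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

project = [-100.0, 30.0, 30.0, 30.0, 30.0]  # up-front cost, four years of returns

print(sum(project))        # expected profit: 20.0, looks fine
print(npv(project, 0.05))  # ~6.4 at a 5% discount rate: still positive
print(npv(project, 0.12))  # ~-8.9 at a 12% rate: the same project is now a loss
```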

I think the "tease" element here might be better suited for the Discussion section, though the topic itself certainly seems interesting enough to merit a top-level post.

I think that lots of people are ignoring the whole 'could be implemented' part.

Given a group objective function so well defined that only measurement errors and uncertain expectations make choosing the optimum nontrivial, there remains the difficulty of obtaining group members' honest estimation of the outcome distribution of the plans under consideration. Further, you want to do better than just averaging estimates; you want to combine information without double-counting.

When (as in reality) people's actual preference for the group decision isn't exactly the stated group objective, I'm daunted by the difficulty of soliciting their reasons+evidence in such a way that they can't dishonestly push the decision toward their desired outcome.

I guess I didn't consider the iterated version of this - same group, many decisions, where it seems you can actually do something about the dishonesty: make people expose their actual forecasts of the variables in the group objective function; give more voting weight to those who are more accurate (about predicting the results of whatever choice is implemented).

Smells like the prediction markets suggestion of several other comments.
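As a concrete (and purely illustrative) reading of "give more voting weight to those who are more accurate": weight each member's forecast by an inverse-Brier-score measure of past accuracy. The scoring rule and weighting scheme below are assumptions, not anything the comment specifies:

```python
# One way to cash out "more voting weight to the historically accurate":
# weight each member's forecast by an inverse-Brier-score measure of past
# accuracy. The scoring rule and weighting scheme are illustrative choices.

def brier(forecasts, outcomes):
    """Mean squared error of probabilistic forecasts; 0 is perfect, 1 is worst."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def weighted_group_forecast(history, current_forecasts):
    """history: {member: (past_forecasts, past_outcomes)};
    current_forecasts: {member: probability for the decision at hand}."""
    weights = {m: 1.0 - brier(fs, os) for m, (fs, os) in history.items()}
    total = sum(weights.values())
    return sum(weights[m] * current_forecasts[m] for m in weights) / total

history = {
    "ann": ([0.9, 0.8, 0.2], [1, 1, 0]),   # well calibrated so far
    "ben": ([0.9, 0.1, 0.9], [0, 1, 0]),   # badly calibrated so far
}
print(weighted_group_forecast(history, {"ann": 0.7, "ben": 0.2}))  # ~0.62
```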

As historical examples you give forms of democracy and hierarchy, which are ways for a group to choose which action to take out of several options. But then you say that you don't want answers like IRV. I can't tell how to interpret your proposed distinction between "opinions" and "preferences".

Obviously, if we're talking about voting methods, then Arrow's Impossibility Theorem and some other related theorems may be of interest. Loosely speaking, they say that no deterministic method of aggregating preferences can simultaneously meet certain basic standards of fairness and consistency.

If all you want to do is solicit opinions but not vote on them, then I propose that the most straightforward way is best: ask for opinions. Let them be submitted anonymously to avoid going to Abilene.

Prediction markets?

Brainstorm a bunch of actions and outcomes. Each member privately assigns a utility to each possible outcome, how likely they think each action is to lead to each outcome, how trustworthy they think each other member is on the question, etc. Then you feed all of this into a black-box AI algorithm and it outputs what action should be taken, which everyone simply has to take on faith until they have enough examples to take it on induction.

So my answer is the meta one of "I don't know, but whatever it is, it'll be complex or even downright inelegant, found through testing and interaction with the real world and/or simulations rather than armchair theorising or mathematical derivation from first principles; it'll probably involve large datasets and some kind of machine-learning algorithm, and it won't be possible to implement well on a human brain".

What was that line in Fight Club?

Narrator: When people think you're dying, they really, really listen to you, instead of just...
Marla Singer: ...instead of just waiting for their turn to speak?

No matter what anyone says here, I get the feeling you're just waiting for your turn to speak. Why don't you just do it?

The timestamps are munged because someone moved this to discussion; my turn came a few days ago. I'll change the tenses in the post.

A post like this belongs in Discussion.

[anonymous]:

Well I came up with an interesting idea a few weeks ago for this very problem.

You get a hypothetical Bayesian judge to take each person's opinion as evidence.

I don't know the details, but you might make people do some calibration tests to adjust for overconfidence, stupidity, etc.

You get a hypothetical Bayesian judge to take each person's opinion as evidence.

Uh. How are you going to express opinions as probabilities?

This method works for forecasters because it's typically done for mutually exclusive events. Either it rains, or it doesn't, and your Bayesian judge can integrate different weathermen together into one superior weatherman (especially if the Bayesian judge also has access to the info the weathermen are using to make predictions with).

But when people are deciding what path the corporation should take, even if they assign probabilities of success to each possible action, only one action gets taken. And someone who is good at predicting when actions will fail may be bad at predicting when they will succeed (or vice versa).

[anonymous]:

I'm not sure they have to be probabilities. The probabilities can be internal to the calculation (what's the probability of this person having an opinion like this if such and such is the best course of action).

It might be helpful to get people to assign probabilities, but you might end up confusing yourself.

I'm not sure I understand what you mean about the other issues; it sounds like something that could be handled with calibration tests and looking at previous records.

If they're not probabilities, then what will you use Bayes' Rule for?

[anonymous]:

What? Evidence is almost never a probability. Probability is an internal detail of cognitive engines. How does a Bayesian spam filter work? The words it processes are not probabilities.

You observe events and assign them probabilities. The opinions of the group are events.

What? Evidence is almost never a probability.

I... don't think we're using the same Bayesianism, or we're having serious communication problems.

Suppose my spam filter sees the word "movie." It thinks P("movie"|legit) = .005 and P("movie"|spam) = .001. That gives a likelihood ratio of 5 in favor of the email being legitimate, because it saw the word "movie". That likelihood ratio isn't a probability, but it's a ratio of probabilities.
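The same arithmetic, made explicit; the 50/50 prior is an added assumption, while the two likelihoods are the ones given above:

```python
# The spam-filter arithmetic from the comment above, made explicit.
# The prior is made up; the two likelihoods are the ones given in the text.

p_movie_given_legit = 0.005
p_movie_given_spam = 0.001

likelihood_ratio = p_movie_given_legit / p_movie_given_spam  # 5.0, favoring legit

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds_legit = 0.5 / 0.5          # assume half of all mail is legit
posterior_odds = prior_odds_legit * likelihood_ratio
p_legit = posterior_odds / (1 + posterior_odds)
print(p_legit)  # ~0.833: seeing "movie" shifted us toward "legitimate"
```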

The problem in question, though, isn't analogous to determining whether emails are spam or legit (logistic regression). It's the problem of how to write emails. And so if one person wants to end the email with "Sincerely," another with "Warmly," and a third with "Yours," how do you turn that into input for your Bayesian judge? I suppose you could elicit the probabilities that the person emailed will buy from you (how much do they buy?) given the word chosen, but it's not obvious that would be helpful, especially when you're doing this for everything at once rather than targeted A/B testing. (Also, what do you do when it's clear a fourth word should have been considered?)

[anonymous]:

The word "movie" is not a probability itself, the same way the opinions people express are events and not probabilities.

With people's opinions of what to do, there's no reason you have to constrain what they say to things like "I, Bob, assign 50% probability that plan A is the best". Even if you did that, you still have to consider that as evidence, it's not like you can use another agent's probability estimate directly in some way that you can't use other types of statements, because it's not your estimate. Bob might not even know probability theory.

If Bob says plan A is best, while Linda and Alice say plan B is best, but Bob scored better on calibration assessments (including past decisions) and they have a poor record, you would integrate all the evidence with factors like P(Bob=A | Best=A) = 0.8, P(Alice=B | Best=A) = 0.4, and so on, to estimate P(Best=A | Bob=A, Alice=B, Linda=B).

Do you see what I mean? Bayes doesn't become useless just because the environment is not composed of agents making explicit probability estimates.
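A sketch of that inference, using the factors named above plus a uniform prior and invented likelihoods for Alice and Linda, and assuming votes are conditionally independent given which plan is best (a naive-Bayes assumption the comment does not commit to):

```python
# A sketch of the inference described above: treat each member's vote as
# evidence about which plan is best. Assumes votes are conditionally
# independent given the truth (a naive-Bayes assumption), with made-up
# likelihoods for everything the comment doesn't pin down.

plans = ["A", "B"]
prior = {"A": 0.5, "B": 0.5}

# likelihood[member][best][vote] = P(member votes `vote` | plan `best` is best).
# Bob has tested as reliable; Alice and Linda as closer to chance.
likelihood = {
    "Bob":   {"A": {"A": 0.8, "B": 0.2}, "B": {"A": 0.2, "B": 0.8}},
    "Alice": {"A": {"A": 0.6, "B": 0.4}, "B": {"A": 0.4, "B": 0.6}},
    "Linda": {"A": {"A": 0.6, "B": 0.4}, "B": {"A": 0.4, "B": 0.6}},
}
votes = {"Bob": "A", "Alice": "B", "Linda": "B"}

posterior = {}
for best in plans:
    p = prior[best]
    for member, vote in votes.items():
        p *= likelihood[member][best][vote]
    posterior[best] = p
total = sum(posterior.values())
posterior = {k: v / total for k, v in posterior.items()}
print(posterior)  # {'A': 0.64, 'B': 0.36}
```

With these numbers the posterior favors plan A at about 0.64, so Bob's calibration record partly outweighs his being outvoted.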

But you never measure Best=A; you just measure how A performed, and you possibly compare that with your expectations of how A would perform. The system as just described runs into all of the problems that plurality voting has: irrelevant alternatives aren't independent, etc.

Bayes doesn't become useless just because the environment is not composed of agents making explicit probability estimates.

Bayes has a specific use: to maneuver through conditional probabilities. When you have moved outside the domain of a tool, it should be used with caution.

[anonymous]:

But you never measure Best=A;

Of course you don't; that's the hypothesis.

P(H|E) = P(E|H)*P(H)/P(E)

Do you see E = H anywhere? I don't. E is the evidence, like, say, "Bob thinks plan A is best". H is the hidden variable that we are trying to infer. In this case H is "Plan A is best".

The system as just described runs into all of the problems that plurality voting has: irrelevant alternatives aren't independent, etc.

Like I said, I have not gone thru the math. I am not proposing a concrete formulation, I am trying to explain the concept that you don't need to actually observe probability estimates from other agents to reason about the world.

Bayes has a specific use: to maneuver through conditional probabilities. When you have moved outside the domain of a tool, it should be used with caution.

Conditional probabilities like P(Bob thinks A is best | A is best).

I'm done.

Do you see E = H anywhere? I don't. E is the evidence, like, say, "Bob thinks plan A is best". H is the hidden variable that we are trying to infer. In this case H is "Plan A is best".

But to determine how much weight to give Bob as a judge for the second decision, you need to know whether or not Plan A was best for the first decision.

I am trying to explain the concept that you don't need to actually observe probability estimates from other agents to reason about the world.

I agree that you don't need to actually observe probability estimates from other agents to reason about the world. What I believe is that a Bayesian Judge is a tool that operates on probability estimates from other agents, and so if you want to reason this way, then you need this data.

[anonymous]:

But to determine how much weight to give Bob as a judge for the second decision, you need to know whether or not Plan A was best for the first decision.

You don't need certainty. And you don't necessarily need that particular evidence. It would still work using calibration tests to weight them.

The only evidence you really have access to from last time is who voted for what, and whether everyone thinks it was a good idea in hindsight. I think that would be enough.

What I believe is that a Bayesian Judge is a tool that operates on probability estimates from other agents, and so if you want to reason this way, then you need this data.

Ok, we are talking about different things. I'm talking about using Bayesian methods to integrate evidence like votes, voting records, and hindsight estimations of optimality to determine the best distribution of probability over which plan is best (or some other output).

I have no idea how this "Bayesian Judge" thing that uses probability estimates directly would even work.

I have no idea how this "Bayesian Judge" thing that uses probability estimates directly would even work.

Here's an article on Bayesian aggregation of forecasts. Essentially, you look at past forecasts to get P(Bob: "rain"|rain) and P(Bob: "rain"|~rain). (You can just elicit those expert likelihoods, but if you want this to be a formula rather than a person, you need them to be the data you're looking for instead of just suggestive of the data you're looking for.) From just that, you could calibrate Bob to find out what P(rain|Bob: "rain") and P(rain|Bob: "~rain") are. When you also have data on past predictions from Alice, Charlie, and David, you can combine them and get a more sophisticated estimate than any individual expert. It's generally able to notice things like "when Alice and Bob agree, they're both wrong," which you couldn't find by just computing individual calibrations.

That is, this thing you've been talking about is a procedure that's already been worked out and that I've personally performed. It's typically only done for forecasters of mutually exclusive possibilities and is inappropriate for decision-makers for reasons I've already mentioned.
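Here is a sketch of that procedure with a joint likelihood table over both experts' calls, which is what lets patterns like "when Alice and Bob agree, they're both wrong" show up; the historical record and the smoothing scheme are invented for the illustration:

```python
# A sketch of the aggregation described above, using a joint likelihood
# table so that patterns like "when Alice and Bob agree, they're both
# wrong" are representable (per-expert calibration alone cannot see them).
from collections import Counter

# Historical record: (Alice's call, Bob's call, what actually happened).
history = [
    ("rain", "rain", "dry"), ("rain", "rain", "dry"),  # agreement goes badly
    ("rain", "dry", "rain"), ("dry", "rain", "rain"),
    ("dry", "dry", "dry"),  ("rain", "dry", "rain"),
    ("dry", "rain", "rain"), ("dry", "dry", "dry"),
]

def posterior(alice_says, bob_says, smoothing=1.0):
    """P(outcome | both forecasts), from joint counts with add-one smoothing."""
    joint = Counter((a, b, o) for a, b, o in history)
    totals = Counter(o for _, _, o in history)
    scores = {}
    for o in ("rain", "dry"):
        prior = totals[o] / len(history)
        # 4 = number of possible (alice, bob) forecast pairs being smoothed over
        like = (joint[(alice_says, bob_says, o)] + smoothing) / (totals[o] + smoothing * 4)
        scores[o] = prior * like
    z = sum(scores.values())
    return {o: s / z for o, s in scores.items()}

print(posterior("rain", "rain"))  # ~{'rain': 0.25, 'dry': 0.75}
```

On this toy history, agreement on "rain" actually pushes the posterior toward "dry", which calibrating each expert separately could not discover.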

[anonymous]:

neat!

Needs more specification. Is the judge a member of the group? Selected in advance (how?)?

[anonymous]:

The judge is a mathematical construction. I just mean that you use Bayes' rule in a well-defined way that everyone can agree on to take opinions and such as evidence.

Not sure how it would turn out, haven't thought much. Should be able to do at least as well as prediction markets, tho.

That really sounds like a mysterious answer to me. Is there anything more to this than 'Bayesian agent which inputs opinions and outputs awesome decisions'?

[This comment is no longer endorsed by its author]
[anonymous]:

There is no internal detail yet, but this is not a mysterious answer.

What I said was basically "there exists a mathematical procedure for this problem that you would get if you went thru a derivation starting with Bayes' rule." I don't know how it works because I have not done the math. Optimality and uniqueness theorems suggest it should be at least as good as a prediction market (because at worst you just pull the Bayes structure out of a prediction market); you could probably do much better, tho.

If that is a mysterious answer, the term means very little.

If you want detail, imagine naive Bayes over people's opinions of various proposals. Or something. That's the 30-seconds-of-thought version; you could probably do a lot better in 5 minutes.

Didn't have any of that background information, sorry.

You're right, though, this isn't a mysterious answer.

[anonymous]:

The judge is selected by another hypothetical Bayesian agent, who is selected by yet another hypothetical Bayesian agent, who is selected by....

[This comment is no longer endorsed by its author]