In response to Marketing rationalism, Bystander Apathy, Step N1 in Extreme Rationality: It Could Be Great, and Rationality: Common Interest of Many Causes.

The problem that motivates this post is:
 “Given a controversial question in which there are good and bad arguments on both sides, as well as unreliable and conflicting information, how do you determine the answer when you’re not yourself an expert in the subject?”

Well into the information age, we are still not pooling our resources efficiently to get to the bottom of things. It would be enormously useful to develop some kind of group strategy for answering questions whose solutions are buried somewhere in the available information.

The idea I'm presenting is a way to apply our intellectual (and rational) resources in a particular niche, which I will shortly describe, to facilitate public (non-expert) understanding of real-world problems.

The Niche and the Need

Science, obviously, does the best job of solving problems. I'm confident that epidemiologists are effectively and efficiently working on the best models for pandemics as I write this post.

And journalists do a pretty good job of what they do: providing information about what the scientists are doing. The best part is that they always provide the source of the information, so that a rational person can judge its truth-value. A qualified and rational person can then put the information in perspective.

Alas, people are not so good at interpreting the information: they are neither rational in weighting the information nor qualified to put it in perspective. (Presumably, the epidemiologists are too busy to do this.)

Interpreting information correctly is a service that rationalists could provide.

How we could do it:

1. Information culled from various sources would be posted in the first half of a post titled P and updated continuously.

2. As a rational group, within threads, we would discuss interpretations and implications of the available information.

3. Only consensus views would be presented in the second half of the post P, updated continuously to prioritize the most relevant information on top.

Why we would do it

1. It would be objectively and enormously useful to cull useful, consensus interpretations. This is a time-consuming task even for a qualified person; as a group, we would be effective.

2. It would be a good demonstration of the usefulness of rationality.

3. I would strongly recommend against any kind of advertisement. But if people from the general public happened to come across the post and found the information exceptionally useful, rationality would come to be seen as a useful and practical thing (and they would be inclined to fund the organization that provided this service).

I'm motivated to do this because I feel this is exactly what is missing in the information age. We have science and we have journalism, but we need something more. Blogs attempt it, but not effectively, because they do it as individuals (with comments, which helps), and they're generally not responsive to feedback and new information. I think LW has the intellectual resources and the right problem-solving paradigm to succeed.

Small Scale versus Large Scale

On a small scale, it could be done here, just among ourselves. On a large scale, it would eventually be done somewhere else. There, I see Huge Opportunity.

Large Scale Idea:

A library of posts. Each post would address a different problem and would be mediated by a particular individual. Someone from the general public interested in a particular problem would go to that post for information. More important and relevant topics would have more activity.

There would be interaction between the post and the public via threads, and between the post and other blogs on the internet via people navigating back and forth and sharing information.

A post would be mediated by a person or group concerned about that topic. There would be limitations (the mediator may not be rational enough, or might be biased), so we would allow competition by permitting several posts on one topic. Here, karma scoring would be enormously useful in helping people decide which post is worth visiting.

Popular and relevant posts would get traffic and funding.

The reason this has a chance of working, if it isn't obvious, is the karma system. The problem of the information age is TMI (too much information), and the karma system solves that. We would have to instruct people, until it becomes the ethical standard, that noise and errors get downvoted while new information and plausible dissenting views get upvoted.
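To make the mechanism concrete, here is a minimal sketch of how karma-based filtering could turn TMI into a consensus summary. All names, the vote counts, and the threshold are illustrative assumptions, not a description of how LW's actual scoring works.

```python
# Hypothetical sketch: filter comments by net karma, then rank the
# survivors so the most-endorsed material floats to the top.

def consensus_view(comments, threshold=5):
    """Keep only comments whose net votes clear the threshold,
    most-upvoted first."""
    kept = [c for c in comments if c["upvotes"] - c["downvotes"] >= threshold]
    return sorted(kept,
                  key=lambda c: c["upvotes"] - c["downvotes"],
                  reverse=True)

comments = [
    {"text": "Plausible dissenting view", "upvotes": 9, "downvotes": 2},
    {"text": "Noise", "upvotes": 1, "downvotes": 8},
    {"text": "New information", "upvotes": 14, "downvotes": 1},
]

for c in consensus_view(comments):
    print(c["text"])  # "New information", then "Plausible dissenting view"
```

Noise never reaches the consensus half of the post, while a well-supported dissenting view survives; that is the property the instruction "downvote noise, upvote plausible dissent" is meant to produce.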

Small scale idea:

If there is interest in this idea, either small scale or large scale, I would like to suggest beginning with a post on swine flu. Volunteers to mediate the post could submit credentials and we would choose a team. We would open a new LW account to be shared by the team so that they can mediate the post collectively.




The thing is, I think Wikipedia beat you to the punch on this one. They may not be Yudkowskian, big-R Rationalists, but they are, broadly speaking, rational. And they already do an incredibly effective job of pooling, assessing, summarizing, and distributing the best available version of the truth. Even people of marginal source-diligence can get a clear view of things from Wikipedia, because extensive arguments have already distilled what is clearly true, what is accepted, what is speculation, and what is on the fringe.

I encourage you to bring the clarity of thought taught in the Less Wrong community to Wikipedia by contributing.

That said, it would be pretty cool if they'd implement a karma-like system for Wikipedia contributors. It would make vandals, fools, trolls, noobs, editors in good standing, and heroic contributors easily recognizable.

Agreed, we shouldn't duplicate anything that Wikipedia already does.

However, Wikipedia is an encyclopedia of general information and, explicitly, doesn't want the role I am advocating here. While users try to expand the role of Wikipedia, the mediators want a narrower role for Wikipedia and would probably appreciate a complementary site for the purpose of analyzing information.


Articles may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources. from here

I would be open to petitioning for some kind of "WikiAnalysis" sister site, but that would do little for R-outreach (Is R-outreach something we are interested in?) and we'd be able to do it better.

  1. Publish your original thought somewhere.
  2. Get it referenced by "reputable sources".
  3. You may now republish it in Wikipedia!

It takes a lot of work (full article, with a developed story) and time (1-6 months for science peer review) to publish; it doesn't come close to taking advantage of the efficiencies of networked community thought.

3a. Although if you do so, there is some risk that other Wikipedia editors will take exception and complain of conflict of interest.

Correct me if I am wrong, and forgive the ultra-condensation: are you proposing a type of rationalist Wikipedia where, instead of NPOV, we would have an RPOV (rational point of view) policy? It does sound very interesting.

That's a good condensation.

I didn't realize that Wikipedia was so timely in collecting and organizing information for current events.

Looking through the main page for the swine flu, my first impression is that Wikipedia already does a good job of collecting and organizing information with user input. My second impression is that the discussion would benefit from threading and user scoring (karma). The RPOV would help throughout, especially in suppressing PPOV (Political Points Of View) and encouraging some rigorous (science, math) analysis.

NPOV has regularly been criticised as a weak point because it gravitates towards consensus rather than evaluation of arguments, so there might be value in an alternative approach. And working out the algorithms/processes for determining RPOV would be an interesting challenge in itself.

NPOV does not stand for "No point of view." Nor does it mean "balance between competing points of view." Check out this and this. NPOV requires that Wikipedia take the view of an uninvolved observer, and it is supplemented by verifiability, which requires that Wikipedia take an empirical, secondary point of view that credits established academia.

So content disputes are usually settled by evaluating claims as true or false through verification. Those who continue to object to a claim once it has been established do not have to be included in a consensus. That is why Wikipedia is able to assert the truth of the Armenian Genocide, the Holocaust, and the moon landings.

I don't think I said anything about 'no' point of view. I just claimed that the current policy of Wikipedia is to reach for general consensus rather than the truth-seeking standards of this community. You could probably find a few examples of topics where the beliefs held here are not mirrored in the corresponding Wikipedia page. This would seem to indicate that the two communities have different reasoning mechanisms. The examples you mentioned belong in the overlap between the two, simply because consensus on these matches the rational viewpoint, despite vocal opposition. However, I can think of other articles where there would be quite a significant difference (think of the list of topics in the comments here, for instance).


Wikipedia doesn't do research (and determining what is right, as opposed to merely popular, is a specific example of research); the rest of the world does. Wikipedia only organizes the world's conclusions. If the world is somewhat insane, Wikipedia follows suit. The two activities are largely unrelated: if there were to arise an organization that succeeded in telling truth from falsehood, and it gained a good reputation, the world might change its consensus position, and Wikipedia would improve as a side effect.

The problem with controversial questions is that attempts to publicly debate the issue are soon joined by people deep in affective death spirals, from one or both sides, who usually get all their information from one side's editorials and who often prefer sabotaging the debate over losing. Identifying these people and keeping them out is the main challenge.

Initially, this has to be an exercise among ourselves to see if we can do it at all, before we can think of doing the public a service by it. We aspire to rationality: let this be a test of that aspiration.

What do you think of implementing a test-run of our techniques by solving a problem: We choose to estimate something that is currently unknown, but which we expect will be resolved later. Later, we compare our estimate with the actual result -- "Did we win or lose?", and we can analyze why.
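The "did we win or lose?" comparison needs a scoring rule agreed on in advance. A hedged sketch, assuming we record probabilistic estimates: the Brier score (mean squared error of probabilities; 0 is perfect, and 0.25 is what you get from answering 50% to everything) is one standard way to grade the group once the outcomes are known. The example forecasts below are made up for illustration.

```python
# Hypothetical scoring of a test run: each forecast is a pair
# (predicted probability, outcome), where outcome is 1 if the
# event happened and 0 if it didn't.

def brier_score(forecasts):
    """Mean squared error of the predicted probabilities.
    Lower is better; 0.0 is a perfect record."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Illustrative group estimates recorded before resolution,
# scored after the facts came in.
group_estimates = [(0.8, 1), (0.3, 0), (0.9, 1)]
print(round(brier_score(group_estimates), 3))  # prints 0.047
```

Because the score is fixed before the questions resolve, "why did we win or lose?" becomes a concrete post-mortem on the individual terms of the sum rather than an argument about what counted as winning.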

I had suggested some questions about rational response to H1N1 here, but the information seems to have already caught up with my need for it. Are there any other questions of interest that we could work on?

If you want to start solving a problem in a new thread, maybe first wait and see (here in this post) whether discussing such a real-world problem is group-approved.


I'm no longer interested in H1N1 because the information has finally caught up with my need for it.

Are there any other questions of interest that we could debate about?

  1. Diet, sleep, and exercise.
  2. Global warming/climate change (a science question and a politics question, but with a focus on the science, and perhaps on political distortion).
  3. The best methods of alternative education.
  4. The most effective motivational/positive-psychology techniques.
  5. Which emerging technologies are most promising (pyrolysis, wind/solar/tidal, nanotech, etc.).

This raises a meta-question: Can one really hope to do better by this method than by simply following the expert consensus on the issue? Are the techniques presented on this site powerful enough to allow interested non-experts to do a better job in evaluating a complicated issue than expert analysts?

Are the techniques presented on this site powerful enough to allow interested non-experts to do a better job in evaluating a complicated issue than expert analysts?

No, the experts will do a better job. Actually, I had idealistically envisioned that experts would dominate the discussion while non-experts observed, occasionally bringing forth ideas from their field of expertise.

However, with a smaller group with no critical mass of experts, we may still make some headway in cases when:

(a) It is unclear how to apply an expert opinion. This often happens when the application spans or escapes fields of expertise.

(b) The experts don't have a consensus. RPOV can figure out how to make decisions that minimize risk when the experts have incomplete or conflicting solutions/predictions.

Other examples? Particularly, any specific examples when RPOV among non-experts would lead to better solutions than what one non-expert might come up with individually?

[This comment is no longer endorsed by its author]