I'm reading Superforecasting, and one of the things that differentiates good from bad forecasters is ideology: those who try to fit the world into their left/right-wing view are less effective at forecasting.

Does ideology still have a place in a rational world? Why (not)?

My limited perspective suggests that, in theory, ideology would give you ideas to try but would also bias potential solutions.


TL;DR:

My limited perspective suggests that, in theory, ideology would give you ideas to try but would also bias potential solutions.

Theories also give you ideas to try. Is biasing potential solutions a good thing or a bad thing?


Long:

What is ideology?


I'll try to offer an answer here. (For the purposes of this comment, "ideology" is used negatively.)


Here's a frame:

A) Ideology: the way the world is + things to do. An immutable 'theory' that in the simplest case flows from a single source (oneself); multiple sources lead to complications and can involve integration and schisms. It need not possess any connection to reality - it can say 'the sky is red' and treat that as a fact.

Theories:

B) A model that generates claims is created (via some method). These claims are tested*, and models that produce false claims are rejected. If the process of claim generation integrates and reworks refuted theories, maybe "progress" can be made - or this just leads to an ensemble of probably overfitting theories that crash whenever a new (or old[1]) experiment is performed.

*Relevant quote for one way this can work: "He who decides the null hypothesis is king."

C) Theories are generated from data via some method. "Refutation" leads to revision, and maybe theories get more points if they make correct predictions in advance about experiments that have never been performed.


Under A, there are no constraints on theories - a theory can say anything at all.

Under B, a theory can say anything - except things that are "wrong". Any statement a theory makes that is later shown to be "wrong" means the theory is discarded/revised. The current pool of viable theories obeys the constraint "we don't know it's wrong". (Footnote 1 notes that this is not quite right - what is believed to have been shown via an experiment can change over time, especially as a result of evidence that it was fake, didn't replicate, etc.)
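A minimal sketch of this filtering in Python (the theories, claims, and observation are invented for illustration):

```python
# Type-B process: a pool of theories survives only as long as none of
# their claims is contradicted by an observation. Untested claims survive.
theories = {
    "sky-is-red":  {"sky_color": "red"},
    "sky-is-blue": {"sky_color": "blue"},
}
observations = {"sky_color": "blue"}  # per footnote [1], these can be revised

# The surviving pool obeys the constraint "we don't know it's wrong".
viable = {
    name: claims
    for name, claims in theories.items()
    if all(observations.get(key, value) == value for key, value in claims.items())
}
print(viable)  # {'sky-is-blue': {'sky_color': 'blue'}}
```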

Under C, theories don't just come with a bundle of "yes, we've checked this and it was right/wrong; we haven't checked this yet; etc." These theories begin with evidence... but how can such a thing be shown via experiment? How do we know type C theories aren't just type B theories that later accumulated evidence? Does it matter?


The striking difference (as formulated here) between type A theories ("Ideologies") and everything else is that they don't have a connection to reality.

They can be seen as lazy theories - with no requirement to make predictions about reality, or for those predictions to match reality. To be fair, if you were "absolutely certain" in a mathematical sense, then it would make sense never to change your mind. (Some argue that this is a basis for never being "absolutely certain" - but then how certain should one be that 1+1=2? An argument can also be made for methods that enable handling discontinuity, coming up with new theories, etc.)
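A minimal sketch of why certainty behaves this way, assuming standard Bayesian updating (a framing this comment doesn't spell out, so treat the numbers as illustrative): a prior of exactly 1 returns 1 no matter what evidence arrives.

```python
# Bayes' rule: P(H|E) = P(H) * P(E|H) / P(E). With P(H) = 1 exactly,
# the posterior is always 1 - a mathematically certain mind never updates.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

print(posterior(0.99, 0.1, 0.9))  # ~0.917: strong but movable belief moves
print(posterior(1.0,  0.1, 0.9))  # 1.0: 'absolute certainty' never moves
```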

But there's also the normative component - values. Are these immutable, or do they change? Are they based on 'truth' or something else?

If one values human lives, then one may consequently value things one believes are necessary for human lives or improve them. Say this includes clean water and cookies. One later finds out that water really is necessary for human life, and that cookies are bad for human health. In this toy model the value of cookies has changed, but not the value of human lives. Human lives are judged good as an immutable part; cookies/water are judged by their consequences for that immutable value.

Part of this is based on what "is" - do people need water? cookies? Are these things good for them?

Part of this is purely "ought" - human lives are good. (Or the more complicated "good human lives are good".)
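A toy sketch of this is/ought split, with an invented scoring rule: the terminal value stays fixed, while instrumental values are recomputed whenever the "is" beliefs change.

```python
# Terminal value (the immutable "ought") vs. instrumental values derived
# from mutable "is" beliefs. Scoring rule invented for illustration.
TERMINAL = {"human lives": +1}

beliefs = {"clean water": +1, "cookies": +1}  # believed effect on human lives

def instrumental_value(thing):
    return beliefs[thing] * TERMINAL["human lives"]

print(instrumental_value("cookies"))  # +1 under the initial beliefs

beliefs["cookies"] = -1               # learn: cookies are bad for health
print(instrumental_value("cookies"))  # -1 now; TERMINAL never changed
```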


So what is "ideology" good for? It's good to know the truth, and it's good to know your values. Replacing ideology with theory, where questions of what is are concerned, may be useful for finding the truth. Whatever framework is used for ought/handling values, acknowledging the possibility of being incorrect (whatever that means) leaves room for change, for learning. And a mind that never changes, if wrong, 'can never become right'. But what does it mean to be wrong about what ought to be?


Footnotes

[1] This argues for 'preservation' and rewinding - a "theory" refuted by one experiment which doesn't replicate, and whose result is then reversed by several clear experiments, 'should' 'come back'.

Or it supports a more complicated model incorporating "probabilities". For a simplified model:

After inspecting a coin, and finding it bears two faces:

Theory H says: This coin will come up heads when flipped.

Theory T says: This coin will come up tails when flipped.

These both seem reasonable and equally likely, so we'll pretend we've seen each outcome once. After the coin is flipped n times, if every flip landed heads or tails (h + t = n), then the weight for the theories/outcomes is (h+1):(t+1), where h is the number of heads actually seen and t is the number of tails seen.

(What should be done if the coin settles on the edge is less clear - a new theory may be required (Theory E). And if the point of the imaginary outcomes is just so some weight will be given to outcomes we consider 'possible' but haven't observed, then after they've been observed, should the imaginary outcomes be removed?)

This offers one way of doing things:

An experiment is a trial that each theory claims will provide 1 count of evidence for it. After it is performed, whichever theory was 'right' gets 1 more point. The weights that develop over time serve as an estimate of the outcome of future experiments - the probability that the coin comes up heads or tails.
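A minimal sketch of this scheme in Python - it's effectively Laplace's rule of succession, with one imaginary count per theory (the flip sequence and names are invented for illustration):

```python
# Minimal sketch of the counting scheme above: each theory starts with one
# imaginary observation, each experiment adds one real count to whichever
# theory was 'right', and the weights double as a probability estimate.
from collections import Counter

counts = Counter({"H": 1, "T": 1})  # one imaginary outcome per theory

def record(outcome):
    counts[outcome] += 1            # the 'right' theory gets 1 more point

def probability(outcome):
    return counts[outcome] / sum(counts.values())

for flip in "HHTH":                 # four flips actually seen: 3 heads, 1 tail
    record(flip)

print(probability("H"))             # (3+1)/(4+2) ≈ 0.667
```

Admitting the edge outcome (Theory E) would amount to seeding a third key with its own imaginary count, which changes every denominator - one way to see why the choice of imaginary outcomes matters.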

This model doesn't include more complicated hypotheses like:

the coin will come up HTHTHTHT...repeating forever.

That count where so and so said the coin landed on an edge? That whole experiment was made up and never performed. (Or performed until that result was reached, and the prior experiments weren't recorded.)


Which leaves the question of how to handle them. If a result can be obtained via many 'experiments', how do we incorporate that evidence if we don't know the number of experiments?

Thank you for taking the time to reply. I had to read your comment multiple times, and I'm still not sure I got what you wanted to say. What I took from it:

a) Ideology is not the most efficient method to find out what the world is

b) Ideology is not the most efficient method to find out what the world ought to be

Correct?

You ask if biased solutions are a good or a bad thing. I thought rationality generally identifies biases as bad things - is this correct?

We should hence strive to live and act as ideology-free as possible. Correct?

It depends on what you mean by ideology. I could have made this clearer by just asking this question and leaving it at that:

What is ideology?

I wrote my comment in a way that:

1. Presented "ideology" as meaning "dogma".

2. But also considered it a degenerate case of theory.

I don't think it's bad to have theories, but if the relationship between a theory and reality is handled by always rejecting/ignoring 'reality' when they disagree, then learning is impossible.

Is it bad to have theories? No.

You ask if biased solutions are a good or a bad thing. I thought rationality generally identifies biases as bad things - is this correct?

Yes, though it's useful to distinguish between 'here is a heuristic people use' and 'here is where that heuristic goes wrong'.