Hello to all,

Like the rest of you, I'm an aspiring rationalist. I'm also a software engineer, so designing software solutions is the first place my mind goes when thinking about a problem.

Today's problem is the fact that our beliefs all rest on beliefs that rest on beliefs. Each one has a <100% probability of being correct, so each belief built on it has an even smaller chance of being correct.
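
To make the compounding concrete, here is a minimal sketch (with invented numbers) of how confidence in a dependent belief gets dragged down by the law of total probability:

```python
# A toy illustration (numbers invented): belief B rests on belief A.
p_a = 0.9              # P(A): confidence in the supporting belief
p_b_given_a = 0.9      # P(B | A)
p_b_given_not_a = 0.2  # P(B | not A)

# Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
print(p_b)  # 0.83, lower than the 0.9 confidence in either link alone
```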

When we discover a belief is false (or, less dramatically, revise its probability of being true), the change propagates to all other beliefs that rest wholly or partially on it. This is an imperfect process and can take a long time (less for rationalists, but still limited by our speed of thought and the inefficiency of our recall).

I think that software can help with this. If a dedicated rationalist spent a large amount of time committing each of their beliefs to a database (including a rational assessment of its probability, both overall and given that all the beliefs it rests on are true) as well as which other beliefs each rests on, you would eventually have a picture of your belief network. The software could then alert you to contradictions between your estimate of a belief's probability of being true and its estimate computed from the truth estimates of the beliefs it rests on. It could also find circular belief chains and other inconsistencies. Plus, when you update a belief based on new evidence, it can spit out a list of beliefs that should be reconsidered.
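
As a sketch of what the data model might look like (all names here are hypothetical, not any existing tool's API): a table of beliefs with stated probabilities and dependency links is already enough to detect circular beliefs and to generate the "reconsider these" list after an update.

```python
# Minimal sketch of the proposed belief database. The example beliefs
# and the structure are invented for illustration.
from collections import defaultdict

class BeliefNet:
    def __init__(self):
        self.prob = {}                    # belief -> stated P(true)
        self.parents = defaultdict(list)  # belief -> beliefs it rests on
        self.children = defaultdict(list) # belief -> beliefs resting on it

    def add(self, name, prob, rests_on=()):
        self.prob[name] = prob
        for p in rests_on:
            self.parents[name].append(p)
            self.children[p].append(name)

    def find_cycles(self):
        """Report beliefs that (transitively) rest on themselves."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)
        cyclic = []

        def visit(n):
            color[n] = GRAY
            for p in self.parents[n]:
                if color[p] == GRAY:
                    cyclic.append((n, p))   # edge that closes a cycle
                elif color[p] == WHITE:
                    visit(p)
            color[n] = BLACK

        for n in list(self.prob):
            if color[n] == WHITE:
                visit(n)
        return cyclic

    def to_reconsider(self, updated):
        """Everything downstream of an updated belief, i.e. the list
        the software would 'spit out' after a revision."""
        seen, stack = set(), [updated]
        while stack:
            for c in self.children[stack.pop()]:
                if c not in seen:
                    seen.add(c)
                    stack.append(c)
        return seen

net = BeliefNet()
net.add("evolution", 0.99)
net.add("antibiotic resistance emerges", 0.95, rests_on=["evolution"])
net.add("finish your course of antibiotics", 0.80,
        rests_on=["antibiotic resistance emerges"])
print(net.to_reconsider("evolution"))  # both downstream beliefs
```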

Obviously, this would only work if you are brutally honest about what you believe and fairly accurate about your assessments of truth probabilities. But I think this would be an awesome tool.

Does anyone know of an effort to build such a tool? If not, would anyone be interested in helping me design and build such a tool? I've only been reading LessWrong for a little while now, so there's probably a bunch of stuff that I haven't considered in the design of such a tool.

Yours rationally,
Avi

There's a Facebook group with a bunch of LWers trying to work on building better argument-mapping tools.

Oh wow, that's really awesome. Almost makes me wish I had a FB. If anything successful comes of it, please be sure to let us know.

Thanks for that. Request to join sent.

You could look into Bayesian belief networks. I think your problem will be that your calculations for belief propagation will only be as good as your ability to properly model your beliefs and their dependencies. Also, the shifting interpretation of even your own statements will leave you chasing your own tail with updates.
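
For anyone who wants to experiment, pgmpy is one Python library that implements these. A minimal two-node example, assuming pgmpy's current API (the model class was called BayesianModel in older releases) and invented numbers:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# "belief" rests on "support"; the probabilities are invented.
model = BayesianNetwork([("support", "belief")])
cpd_support = TabularCPD("support", 2, [[0.1], [0.9]])  # P(False), P(True)
cpd_belief = TabularCPD(
    "belief", 2,
    [[0.8, 0.1],   # P(belief=False | support=False), (... | support=True)
     [0.2, 0.9]],  # P(belief=True  | support=False), (... | support=True)
    evidence=["support"], evidence_card=[2],
)
model.add_cpds(cpd_support, cpd_belief)
model.check_model()

# Exact inference: the marginal P(belief) after propagating P(support).
print(VariableElimination(model).query(["belief"]))
```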

I think this kind of tool and the use of it would be more useful in limited contexts. I was entering into a little business arrangement and getting my undies in a bunch over trust issues. I stepped back a second and considered not what the other guy could do, but what he was likely to do, based on reasonable priors. That unwound my undies and I got on with the deal.

Having a belief network wizard that walked you through analyzing particular problems and applying reasonable priors would likely be very helpful in a lot of situations. Instead of the belief network alone, make it a Bayesian decision theory tool, so that you can make better decisions instead of just better estimates of what will happen.
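
A hedged sketch of that decision-theory side: choose the action with the highest expected utility. The scenario and numbers are invented, loosely following the business-deal example above.

```python
p_defect = 0.05  # reasonable prior that the other party behaves badly

utility = {
    ("deal", "honest"): 100,    # the deal pays off
    ("deal", "defect"): -300,   # burned by the partner
    ("no_deal", "honest"): 0,
    ("no_deal", "defect"): 0,
}

def expected_utility(action):
    return ((1 - p_defect) * utility[(action, "honest")]
            + p_defect * utility[(action, "defect")])

best = max(["deal", "no_deal"], key=expected_utility)
print(best, expected_utility(best))  # deal 80.0
```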

At the moment I’m using yEd to create a dependency map of the Sequences, which is roughly equivalent to creating what I guess you could call an inferential network. Since embarking on this project I’ve discovered just how useful having a structured visual map can be, particularly for things like finding the weak points of various conclusions, establishing how important a particular idea is to the validity of the entire body of writing, and using how low a post is on the hierarchy as a heuristic for establishing the inferential distance to the concepts it contains.
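
That hierarchy heuristic could even be computed mechanically. A tiny sketch, where depth in the dependency DAG stands in for inferential distance (the dependency edges below are invented for illustration):

```python
from functools import lru_cache

deps = {
    "Making Beliefs Pay Rent": [],
    "Belief in Belief": ["Making Beliefs Pay Rent"],
    "Mysterious Answers to Mysterious Questions": ["Belief in Belief"],
}

@lru_cache(maxsize=None)
def depth(post):
    """0 for foundational posts; otherwise one more than the deepest parent."""
    return 0 if not deps[post] else 1 + max(depth(p) for p in deps[post])

for post in deps:
    print(depth(post), post)
```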

So I’m thinking that the main use of a belief network mapping tool might not be in allowing updates to propagate through a personal network, but in creating networks representing bodies of public knowledge, like, for example, the standard model of physics. As you can imagine, this would be immensely useful for both research and education. For research, such a network would point to the places where (for example) the standard model is weak, and for education it would lay out the order in which concepts should be taught so that students can form an accurate internal working model without getting confused.

TL;DR: Yes, I’d love to help you design and build such a tool.

That's a great idea. And in the domain of physics, it might be a lot easier to quantify the probability that a belief is wrong, and which theories rest upon which. The same could be done for pure mathematics.

Probabilistic inference for general belief networks is NP-hard (see The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks (PDF)), so the straightforward approach is not an option. The problem is more like finding a computationally tractable yet sufficiently powerful subtype of belief network (polytrees, for example, admit exact inference in time linear in the network size).

This only implies that the required computation time scales poorly with the number of graph nodes. It seems like for any reasonable number of beliefs that could be input by a single person, you wouldn't run into any practical difficulty. Perhaps if one tried to extend this to a web-based application with a world-wide, constantly updated belief net, you would run into scaling issues, and then you simply make practical decisions about how complex you're willing to let things get.
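
For intuition about where the wall is, a one-liner showing the joint-table blow-up that makes worst-case exact inference intractable:

```python
# The joint distribution over n binary beliefs has 2**n entries, doubling
# with every belief added. Practical inference algorithms exploit network
# structure rather than enumerating this table.
for n in (10, 20, 30, 300):
    print(n, 2 ** n)
```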

I've been doing something like this on my own time. I haven't come very far but I would like to help.

I think this is an awesome idea. What do you think of making it web-based and public, so the focus would be on creating the best possible communal estimate instead of the best possible individual estimate?

I think that's a great idea. We could offer it as a web-based individual tool and then apply some sort of analysis to the results to come up with the data we might need to offer guidance in decision theory.

Hi all,

I wrote this and then went away for a long weekend. I'm glad to see that everyone's enjoying it. After reading all of the comments, I've applied to join the Facebook group mentioned by Curiouskid.

I also agree with the suggestion that it would need to be a model of a personal Bayesian network with an attached decision theory tool, to help users make logical choices about what they enter and how they assess probabilities. For example, instead of asking for a probability, it might ask how many times in a lifetime you'd expect to be wrong about something you feel this confident about. (How often are you likely to go to sleep on Tuesday and wake up on a day other than Wednesday? There are probably a lot of people who live their whole lives without that ever happening.)
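
A sketch of that elicitation idea, converting "about how many times in a lifetime would you expect to be wrong about something this confident?" into a per-occasion probability (numbers invented):

```python
nights_per_lifetime = 80 * 365   # roughly 29,200 sleep/wake transitions
expected_errors = 0.5            # "maybe once, probably never"
p_wrong = expected_errors / nights_per_lifetime
print(1 - p_wrong)  # ~0.99998 confidence that Tuesday is followed by Wednesday
```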

Moreover, it would be web-based, with the decision theory tool making use of the database of individual networks to help form a view of communal knowledge. (This part would never be perfect, because the combined network would be so large that NP-hardness considerations come into play.) It would need to be able to show its work in graphic format when one of its assessments is challenged.

I will read up on existing argument mapping tools and the deliberations of the aforementioned Facebook group. If no one else is already doing this, I think we should do it. Anyone have any knowledge on how to go about it?

I think it would be a case of having a period of discussing the problem with a growing wiki page (or something) containing information about the problem to be solved. After that, we could discuss the shape of the solution. Only after that, those with the technical knowledge could discuss the best way of actually implementing the solution. Then we could divide up the work between those with the time and appropriate skills and actually do it.

Sounds like a plan. Really what you want to do is contact everyone who's shown interest in helping you (including myself) in order to collude with them via email and then hold a discussion about how to move on at a scheduled time in an irc channel or somesuch.

I used to be pretty interested in this kind of thing (the semantic web and all that), though I haven't been paying much attention to it lately.

I think the biggest benefit is not in mapping and updating one's personal beliefs (though that does have some value), but in having shared belief networks to map reasons for disagreements (or even whether there is a disagreement at all).

I would be interested in helping design such a tool (and I second buybuydandavis's recommendation to look into Bayesian Belief Networks).

I'm not sure where to ask this, I'll just toss it here.

You know this site? http://www.music-map.com/ I'm interested in something that would work similarly but for a different purpose.

Instead of visitors putting in a few favorite bands, you make an account and select degrees of agreement with various pre-made contentious issues. The account is so you can update your views and change the data points you contributed. So for example, there would be one for "Humans evolved by natural selection", and there would be a selection of confidence levels you could pick to represent your agreement or disagreement.

You then get a bunch of people to do this, and use algorithms similar to that music site's, so that you end up with a kind of crowd belief map with the different statements of belief clustering based on statistical closeness. So the selection for "Humans evolved by natural selection: Strongly Agree" would be on the map somewhere, probably nearer a democrat-ish cluster, and probably farther from an "intelligent design"-ish cluster of agreement statements.

So you'd end up with things like a conspiracy theory-ish cluster, which would probably have "UFOs have been on Earth: Agree" somewhere near or inside it. I would find it fascinating to look at this sort of visual representation for where these statements of belief would appear on a belief landscape, especially after thousands of people have participated and with lots of different issues to weigh in on.

If the sample size was big enough, you might even use it as a rough first-draft confidence of a particular statement you haven't researched yet. Sometimes I just wish I could short-sell a conspiracy-theory belief cluster index fund, or an ID one. And I might get a heads-up on things to look into, say for example the belief statements that ended up nearer to "Many worlds interpretation: Agree".
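
A rough sketch of how the clustering side could work: score how close two belief statements sit by correlating users' agreement levels, then hand the resulting distances to any clustering or layout algorithm. All usernames, statements, and agreement values below are invented.

```python
import math

# user -> {statement: agreement on a -2 (strongly disagree) .. +2 scale}
responses = {
    "u1": {"evolution": 2, "ufo_visits": -2, "many_worlds": 1},
    "u2": {"evolution": 2, "ufo_visits": -1, "many_worlds": 2},
    "u3": {"evolution": -2, "ufo_visits": 2, "many_worlds": -1},
}

def similarity(s1, s2):
    """Cosine similarity of two statements' agreement vectors."""
    xs = [responses[u][s1] for u in responses]
    ys = [responses[u][s2] for u in responses]
    dot = sum(x * y for x, y in zip(xs, ys))
    return dot / (math.hypot(*xs) * math.hypot(*ys))

print(similarity("evolution", "ufo_visits"))   # near -1: opposite clusters
print(similarity("evolution", "many_worlds"))  # positive: same neighborhood
```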

I didn't know other people do this! Freemind is a nice little piece of software that I use for everything from drafting papers to building knowledge maps. It works well for me, but a program specifically designed for this purpose would be very nice as well.

It looks like the difference is that Freemind is made for lots of notes, but doesn't assess probabilities at all.

I know. In the interim before something like that is made, someone could just write them in. It seemed to me like the idea had more to do with identifying beliefs that are connected to one another.

Regardless, the software would be nice.

I had the personal experience of having the belief "If I wake up and the last time I remember going to bed was Tuesday, today should be Wednesday, not Monday" turn out to be false. Having that belief challenged produced a huge amount of cognitive dissonance.

There are probably thousands of similar beliefs that I hold that are also mostly true but false in a few edge cases. If you were to throw out every false belief that you acquired between zero and five years of age, you would lose your way of relating to the world around you. You would probably lose your sanity.

It's also not easy to change beliefs. Getting Bob, who believes that his ex-girlfriend Alice is a bad person, to drop that belief is a hard process even when Bob wants to change it. There are emotions that have to be dealt with before Bob can successfully change his belief.

The same went for my "Tuesday -> Wednesday but not Tuesday -> Monday" belief. It's not easy to drop a belief to which a lot of emotion is attached.

Such software carries the danger of letting you ignore the emotional attachment that you have to various beliefs. If you let a new belief propagate through your belief network, you actually have to change how you feel about the affected beliefs on an emotional level.

If your probability assessment of the subjective experience of a Monday following the subjective experience of going to sleep on a Tuesday were 0.00001, you would expect it to happen around once in a lifetime (about 29,000 nights in 80 years, so roughly 0.3 expected occurrences). Personally, I would assign a much higher probability than that.

I'm not sure if I understand how it would be dangerous to change how you feel about things based on evidence. We should strive to hold our beliefs lightly.

When I say belief, I mean stuff that's in your mind and that affects the way you act. The question "What do I believe?" is a different question from "What's reasonable for me to believe?"

I never consciously formed my "Tuesday -> Wednesday but not Tuesday -> Monday" belief. I just noticed the belief when it got challenged. That produced a lot of stress. The fact that I know intellectually that my memory isn't perfect doesn't change the fact that I believe in my memory on an emotional level.

It might not be the best example, because a lot of people don't have similar experiences of what it feels like to have such a belief challenged by empirical reality.

Then let's take a different belief from the world of social dynamics. A guy who thinks no girl likes him because he's fat doesn't suddenly drop that belief when you show him fat guys with girlfriends. It takes emotional work to change the belief.

If you just have a computer program with values in it, then I don't think you can reasonably say that those values are what you believe if you haven't actually integrated those beliefs into your mind.

How fast and accurate is the propagation of an update? I think it is very slow. Often it just stops at the beginning of the chain.

Sometimes it propagates, but wrongly, causing new errors in the structure due to mistaken reasoning.

This is why many just copy whole chunks of mind structure from elsewhere, hoping that the reasoning was done properly there, in the leader's head. Often it's a good tactic; sometimes it goes catastrophically wrong.

But recently, with so large a supply of thinking available, the quality of what has been "thought deeply about" may rise. I am not sure.

That's where such a tool could come in handy. To propagate an update in a human mind, you must re-evaluate every belief that rests on the changed belief, and then do it again for the beliefs that rest on those, and so on. At any stage, you can make a mistake in logic and propagate an incorrect belief. A computer would take seconds and would not make such mistakes. You wouldn't have to believe what the computer tells you, but if it tells you something you didn't expect to see, it would be a good idea to consider its reasoning.

What are the chances of getting a whole belief network into the system accurately? How obvious would an error in one of the internal conditional probabilities be?