Follow-up to: Argument Maps Improve Critical Thinking, Software Tools for Community Truth-Seeking

We are here, among other things, in an attempt to collaboratively refine the art of human rationality.

Rationality is hard because the wetware we run it on is built from scavenged parts originally intended for other purposes; and collaboration is hard, I believe, because it involves huge numbers of tiny decisions about what information others need. Yet we get by, largely thanks to advances in technology.

One of the most important technologies for advancing both rationality and collaboration is the written word. It affords looking at large, complex issues with limited cognitive resources, by the wonderful trick of "external cached thoughts". Instead of trying to hold every piece of the argument at once, you can store parts of it in external form, refer back to them, and communicate them to other people.

For some reason, it seems very hard to improve on this six-thousand-year-old technology. Witness LessWrong itself, which in spite of using some of the latest and greatest communication technologies, still has people arguing by exchanging sentences back and forth.

Previous posts have suggested that recent software tools might hold promise for improving on "traditional" forms of argument. This kind of suggestion is often more valuable when applied to a real and relevant case study. I found the promise compelling enough to give a few tools a try, in the context of the recent (and recurrent) cryonics debate. I report back here with my findings.

I. Argunet

The first tool I tried was Argunet, an open-source offering from the Institute of Philosophy in Berlin. I was seduced by the promise of reconstructing the logical structure of an argument, and by the possibility of collaborating online with others on an argument.

Like other products in this category, Argunet's basic principle of operation is a visual canvas, on which you can create and arrange boxes that represent statements, the portions of an argument. Relationships between parts of an argument are then materialized using links or arrows.

Argunet supports two types of basic relationship between statements, Supports and Attacks. It also supports several types of "inference patterns".

Unfortunately, when I tried using the Editor I soon found it difficult to the point of being unusable. The default expectation of being able to move boxes around by clicking and dragging is violated. Further, I was unable to find any way to move my boxes after initially creating them.

I ended up frustrated and gave up on Argunet.

II. bCisive Online

I had somewhat better luck with the next tool I tried, bCisive online. This is a public beta of a commercial offering by Austhink, the company already referenced in the previous posts on argument mapping. (It is a spin-off of their range of products marketed for decision support rather than argument support, but is also their only online, collaborative tool so far.)

The canvas metaphor proved to be implemented more effectively, and I was able in a relatively short time to sketch out a map of my thinking about cryonics (which I invite you to browse and comment on).

bCisive supports different types of statements, distinguished by the icons on their boxes: questions; arguments pro or con; evidence; options; "fixes", and so on. At present it doesn't appear to *do* anything valuable with these distinctions, but they proved to be an effective scheme for organizing my thoughts.

III. Preliminary conclusions

I was loath to invest much more time in updating my cryonics decision map, for two reasons. One is that what I would like to get from such a tool is to incorporate others' objections and counter-objections; in fact, it seems to me that the more valuable approach would be a fully collaborative effort. So, while it was worthwhile to structure my own thinking using the tool, and (killing two birds with one stone) that served as a test drive for the tool, it seems pointless to continue without outside input.

The other, more important reason is that bCisive seems to provide little more than a fancy mindmapping tool at the moment, and the glimpse I had of tool support for structuring a debate has already raised my expectations beyond that.

I have my doubts that the "visual" aspect is as important as the creators of such software tools would like everyone to think. It seems to me that what helped focus my thinking when using bCisive was the scheme of statement types: conclusion, arguments pro and con, evidence and "fixes". This might work just as well if the tool used a textual, tabular or other representation.

The argument about cryonics is important to me, and to others who are considering cryonics. It is a life decision of some consequence, not to be taken lightly and without due deliberation. For this reason, I found myself wishing that the tool could process quantitative, not just qualitative, aspects of my reasoning.

IV. A wish list for debate support

Based on my experiences, what I would look for is a tool that distinguishes between, and supports the use of:

  • a conclusion or a decision, which is to be "tested" by the use of the tool
  • various hypotheses, which are offered in support or in opposition to the conclusion, with degrees of plausibility
  • logical structure, such as "X follows from Y"
  • challenges to logical structure, such as "X may not necessarily follow from Y, if you grant Z"
  • elements of evidence, which make hypotheses more or less probable
  • recursive relations between these elements
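To make the wish list concrete, here is a minimal sketch of these element types as data structures (Python; all class and field names are invented for illustration, not taken from any existing tool):

```python
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class Statement:
    text: str
    plausibility: float = 0.5  # degree of belief, 0..1

@dataclass
class Relation:
    kind: Literal["supports", "attacks"]
    source: "Node"
    target: "Node"
    strength: float = 1.0  # how strongly the source bears on the target

@dataclass
class Node:
    statement: Statement
    incoming: List[Relation] = field(default_factory=list)

def attach(kind, source, target, strength=1.0):
    """Record that `source` supports or attacks `target`."""
    rel = Relation(kind, source, target, strength)
    target.incoming.append(rel)
    return rel

# Recursion comes for free: a relation's source is itself a Node,
# which can have its own supporting or attacking relations.
conclusion = Node(Statement("Signing up for cryonics is worthwhile"))
hypothesis = Node(Statement("Preservation retains the brain's information", 0.6))
challenge = Node(Statement("X may not follow from Y, if you grant Z", 0.4))
attach("supports", hypothesis, conclusion, strength=0.8)
attach("attacks", challenge, hypothesis)
```

The point of the sketch is only that challenges to structure and challenges to hypotheses are the same kind of object, applied at different depths.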

The tool should be able to "crunch numbers", so that it gives an overall indication of how much the total weight of evidence and argumentation contributes to the conclusion.
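As a sketch of what "crunching numbers" could mean, here is one simple scheme: naive Bayesian accumulation of log-odds, assuming the pieces of evidence are independent. The numbers are purely illustrative:

```python
import math

def log_odds(p):
    return math.log(p / (1 - p))

def combine(prior, likelihood_ratios):
    """Naive Bayes: start from the prior odds of the conclusion and
    multiply in each piece of evidence's likelihood ratio
    (>1 supports the conclusion, <1 opposes it)."""
    total = log_odds(prior) + sum(math.log(lr) for lr in likelihood_ratios)
    return 1 / (1 + math.exp(-total))

# Prior 0.2; two supporting pieces (LR 3 and 2) and one opposing (LR 0.5).
posterior = combine(0.2, [3.0, 2.0, 0.5])  # 3/7, about 0.43
```

Under these toy numbers the posterior works out to about 0.43; the point is only that the tool, not the reader, does the bookkeeping.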

It should have a "public" part, representing what a group of people can agree on regarding the structure of the debate; and a "private" part, wherein you can adduce evidence you have collected yourself, or assign private degrees of belief in various statements.

In this way, the tool would allow "settling" debates even while allowing disagreement to persist, temporarily or durably: you could agree with the logical structure but allow that your personal convictions rationally lead you to different conclusions. Highlighting the points of agreement and contention in this way would be a valuable way to focus further debate, limiting the risk of "logical rudeness".
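A toy illustration of the public/private split, under the assumption that the shared part is just the debate's structure while probabilities stay per-user (all names and numbers are invented):

```python
# The "public" part: structure the group agrees on.
shared_structure = {
    "conclusion": "Sign up for cryonics",
    "supports": ["Revival is physically possible"],
    "attacks": ["Providers may not survive long enough"],
}

# The "private" part: each participant's own degrees of belief
# over the same statements.
private_beliefs = {
    "alice": {"Revival is physically possible": 0.30,
              "Providers may not survive long enough": 0.60},
    "bob":   {"Revival is physically possible": 0.05,
              "Providers may not survive long enough": 0.65},
}

def agreement_points(a, b, tol=0.1):
    """Statements on which two participants' private beliefs are close."""
    return [s for s in a if s in b and abs(a[s] - b[s]) <= tol]

common = agreement_points(private_beliefs["alice"], private_beliefs["bob"])
# Alice and Bob share the structure and roughly agree about provider
# longevity; their disagreement about revival odds is now precisely located.
```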

Comments (80)

Comment in a nutshell: "Practice with very simple tools optimized for swift use in a culture that values logic and evidence is likely to be better than specialized tools with hard-coded abstractions."

There's a specialized form of note taking called "flowing" within the policy/CEDA/NDT debate community. There is a Wikipedia article on the subject if anyone wants to hunt in the concept space for keywords and links and such :-)

In this speech community (especially at the higher levels) people tend to speak very swiftly because there is a background theory that "dropping arguments is conceding arguments" which creates an incentive to make many parallel arguments with the same conclusion that reach towards different sets of evidence. It is possible to win simply by making many adequate arguments that your opponent is incapable of handling with enough speed or concision.

(It can be dangerous to spin out many arguments without paying attention to how they interact, because sometimes a response will be something like "I concede argument 2 and 5 which constitute a turn on the general position that works like X. Arguments 1, 3, and 4 were about A, B, and C wh... (read more)

That's a fascinating chunk of info - perhaps hard to digest in one go, but I'll come back to it to enjoy at leisure, and perhaps reply. One of the things that attracted me to LW in the first place was the way many of the comments turn out to be more interesting than the original post. Thanks for yet another demonstration.
When I was composing the response I was a bit worried that I was using too much jargon. If someone has access and time to experience policy debate I'd highly recommend it, but I'm not sure how much value can be transmitted via mere text (even if it was a book's worth of text -- and books have been written on the theory and practices behind competitive policy debate). If there are any questions about something that didn't make sense, I'd be happy to try to answer them :-)

Honestly, there are some seriously negative elements to policy debate that go with the positive. If I understand correctly, REM's song "It's the End of the World as We Know It" was inspired by visiting a debate tournament and seeing high-level debaters trying to flow each other out of rounds with "techno-strategic language" that was basically unintelligible to normal people. Rounds would be decided on "existential risk" level policy impacts but no one was doing anything about it really... they'd just win or lose a debate round and move on to the next one.

I once judged a round where the decision came down to whether the affirmative policy would increase or decrease the likelihood of human detection and extermination by extra-solar civilizations stealth-bombing our sun with large masses moving at relativistic velocities in order to make sure humans weren't competing with them for stars 50k years from now... I think a lot of smart people get involved with policy debate and end up with a very cynical view of "competitive" communication.

REM:
Update: in more recent years, a tool called Verbatim exists that allows flowing to be done in Excel. The drawback is that you lose some flexibility, because you can't draw freeform arrows or unconventional symbols. The benefit is that you can type faster than you can write, you can copy and paste, you can keep files organized more easily than physical papers, and there are lots of community tools and support and macros. There are also some electronic tools for speech writing that let you easily import cards from various files into Word, quickly reassembling arguments from indexed files into mostly coherent works of writing, but I didn't attend a school that encouraged the use of electronic evidence, so I don't have much experience working with those.

First, my own observation agrees with GreenRoot. My view is less systematic but of much longer standing: I've been watching this area since the 70s. (Perhaps longer, I was fascinated in my teens by Leibniz's injunction "Let us calculate".)

Empirically I think several decades of experiment have established that no obvious or simple approach will work. Unless someone has a major new idea we should not pursue straightforward graphical representations.

On the other hand we do have a domain where machine usable representation of thought has been successful, and where in fact that representation has evolved fairly rapidly. That domain is "programming" in a broad sense.

Graphical representations of programs have been tried too, and all such attempts have been failures. (I was a project manager for such an attempt in the 80s.) The basic problem is that a program is naturally a high-dimensional object, and when mapped down into a two-dimensional picture it is about as comprehensible as a bowl of spaghetti.

The really interesting aspect of programming for representing arguments isn't the mainstream "get it done" perspective, but the background work that has been d... (read more)

Couldn't help but think of Wikipedia as a kind of example of this "vagueness/resolution" problem.
From professional experience (I've been a programmer since the 80's and was paid for it from the 90's onward) I agree with you entirely re. graphical representation. That doesn't keep generation after generation of tool vendors from crowing that, thanks to their new insight, programming will finally be made easy by "visual this, that or the other", UML being the latest such to have a significant impact.

You have me pondering what we might gain from whipping up a Domain-Specific Language (say, in a DSL-friendly base language such as Ruby) to represent arguments in. It couldn't be too hard to bake some basics of Bayesian inference into that.
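For illustration, here is what a tiny internal DSL for arguments might look like; the comment above suggests Ruby, but the same chained-call style works in Python. Every name and number here is invented:

```python
class Argument:
    """A toy chained-call DSL: state a conclusion, then hang pro and
    con considerations off it, each with a likelihood ratio."""
    def __init__(self, conclusion):
        self.conclusion = conclusion
        self.pros, self.cons = [], []

    def pro(self, text, lr=2.0):
        self.pros.append((text, lr))
        return self

    def con(self, text, lr=2.0):
        self.cons.append((text, lr))
        return self

    def odds(self, prior_odds=1.0):
        """Multiply pro ratios in, divide con ratios out (naive Bayes)."""
        o = prior_odds
        for _, lr in self.pros:
            o *= lr
        for _, lr in self.cons:
            o /= lr
        return o

arg = (Argument("Cryonics is worth the cost")
       .pro("Even a small revival chance has a huge payoff", lr=3.0)
       .con("Organizational longevity is doubtful", lr=2.0))
```

The chaining buys readability: the source text of the argument reads roughly like the argument itself.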
PyMC is a DSL in python for (non-recursive) Bayesian models and Bayesian probability computations. I have been thinking of trying to make an ad-hoc collaborative scenario projection tool with PyMC and ikiwiki. Users would edit Literate Python (e.g. PyLit or Ly) wiki pages that defined PyMC model modules, and ikiwiki triggers would maintain Monte Carlo sampling computations and update results pages. But it won't be enough for real argument mapping without decision theory (and possibly some other things).
I wish I could be optimistic about some DSL approach. The history of AI has a lot of examples of people creating little domain languages. The problem is the lack of ability to handle vagueness. The domain languages work OK on some toy problems and then break down when the researcher tries to extend them to problems of realistic complexity.

On the other hand there are AI systems that work. The best examples I know about are at Stanford -- controlling cars, helicopters, etc. In those cases the researchers are confronting realistic domains that are largely out of their control. They are using statistical modeling techniques to handle the ill-defined aspects of the domain. Notably in both the cars and the helicopters, a lot of the domain definition is done implicitly, by learning from expert humans (drivers or stunt pilots). The resulting representation of domain models is explicit but messy. However it is subject to investigation, refinement, etc. as needed to make it work well enough to handle the target domain.

Both of these examples use Bayesian semantics, but go well beyond cookbook Bayesian approaches, and use control theory, some fairly fancy model acquisition techniques, etc. There is a lot of relevant tech out there if Less Wrong is really serious about its mission. I haven't seen much attempt to pursue it yet.
I strongly support the notion of whipping up a DSL for argumentation targeted at LessWrong readers. Philosophy and law argumentation tools seem to be targeting users without any math or logic who demand a graphical interface as the primary means of creating arguments. My guess is that LessWrong readers would be more tolerant of Bayesian math and formal logic, the necessity of learning a little syntax, and only exporting a graphical representation. Features might include:

* Compose in ordinary ASCII or UTF-8
* Compose primarily a running-text argument, indicating the formal structure with annotations
* Export as a prettified document, still mostly running text (HTML and LaTeX)
* Export as a diagram (automatically laid out, perhaps by graphviz)
* Export as a Bayes net (in possibly several Bayes net formats)
* Export as a machine-checkable proof (in possibly several formats)

I'm currently learning noweb, the literate programming tool by Norman Ramsey.
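As a sketch of two of the features listed above (composing a running-text argument with annotations, and exporting a diagram via graphviz), assuming an annotation syntax invented purely for this example:

```python
import re

# Invented annotation syntax: a line of running text tagged like
#   [claim c1] Cryonics preserves identity.
#   [pro c1]   Vitrification limits ice damage.
LINE = re.compile(r"\[(claim|pro|con)\s+(\w+)\]\s*(.+)")

def to_dot(text):
    """Parse annotated running text and emit a Graphviz DOT digraph."""
    nodes, edges = {}, []
    for i, m in enumerate(LINE.finditer(text)):
        kind, ref, body = m.groups()
        if kind == "claim":
            nodes[ref] = body  # a claim names its own node id
        else:
            nid = "n%d" % i
            nodes[nid] = body
            edges.append((nid, ref, "supports" if kind == "pro" else "attacks"))
    out = ["digraph argument {"]
    out += ['  %s [label="%s"];' % (nid, label) for nid, label in nodes.items()]
    out += ['  %s -> %s [label="%s"];' % e for e in edges]
    out.append("}")
    return "\n".join(out)

doc = """\
[claim c1] Cryonics preserves identity.
[pro c1] Vitrification limits ice damage.
[con c1] Synaptic detail may still be lost.
"""
dot = to_dot(doc)
```

The resulting DOT text can be fed straight to graphviz's `dot` command for layout; the source document remains ordinary running text.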
Well, visual programming of visual things is good, but that's just WYSIWYG.
The problem has consistently appeared to me to be related to the use of incorrect abstractions. Most of the visual attempts I've seen have been roughly equivalent to printing binary code to screen as an attempt at a textual representation of a program. I'm still (very optimistically) waiting for a video game which tackles this problem successfully (some of the FF series ones have done an ok job).
You can use *italics* for italics and **bold** for bold. Good comment btw, from experience I'm very much in agreement about the futility of visual programming.
Yes, sorry, fixed. I couldn't find any description of the markup conventions and there's no preview button (but thankfully an edit button).
It's not terribly obvious but the little 'Help' link at the bottom of the comment box gives the most useful markup conventions. More complete documentation is here.

Thanks for looking into this for us!

One of my ever-pending posts to write is on what sort of simple interface might prevent online arguments from retracing the same points over and over. I suspect it will not be graphical with boxes, because that makes poor use of screen real estate. I suspect it will not have lots of fancy argument types and patterns, because no one really uses that stuff. I think it does need to have a karma system, because otherwise there's no way to find the good stuff.

Yes, one of the commonest mistakes people make when building this sort of tech is to suppose that the main success factor is the tech. An even more common mistake is to assume that the more features it has, the better the tech. One of the main success factors is whether the software suits how people use it, in actual everyday operation and not just in the testing lab. So one of the main factors in succeeding with software projects is a capacity to observe people closely.

Here on LW I observe that people make heavy and ready use of permalinks; and not just post permalinks at that, but comment permalinks, which is more remarkable. Linking to fine-grained material is one of the features I'd expect to play a large role in effective online debate.

On the other hand LW remains a blog, perceptually a "source of news" rather than a knowledge base. The format encourages an unending accumulation of new material more than it does drilling down into previously covered topics. The LW Wiki is a useful adjunct in combating this tendency, but it seems to still be viewed as a sideshow at the moment (though that might be getting less so over time). Also wikis have their own way of encouraging a "granularity" of arguments and topics, which may or may not be suitable for the kind of work I attempted to do re. cryonics.

Reading that Argunet was both open source and flawed, I decided to download the code (signals geekiness) in order to fix the problem you have encountered (signals generosity). I didn't have to change anything: it took me a while (signals perseverance) but I finally figured out how to move a box in Argunet (signals superior problem-solving abilities).

It turns out that the result of dragging a box depends on whether the box was selected prior to dragging it. If the box was selected, it is moved, otherwise, an arrow is created. So, before you move a box, select it by clicking on it.

Heh. Thanks a bunch for your efforts, that's useful. There are some conclusions to draw from this incident about the attributes that make for good software design, but they might be a topic for another thread (or another blog).

I created a wikipage listing all of the debate tools that were mentioned so far in this comments thread:

Please feel free to edit this wikipage.

I would recommend that we try to create our own debate-mapping tool. It might end up being surprisingly easy.

I've already used PHP, GraphViz, and MediaWiki to implement a vaguely similar project, the Transhumanist Wiki Scenarios Map.

Unfortunately, that project ended up being less useful than I had hoped, and has been abandoned for now.

Today, I made a rough sketch of what a debate-mapping tool based on these tools might look like.

A VERY rough sketch.

Pretty much every detail is probably going to need to be changed in order for it to be usable.

Anyway, here'... (read more)

Yes, dissecting a conversation like that is the kind of thing I had in mind. A further intent, and I agree with Eliezer here, is to capture and provisionally settle arguments as they occur, to avoid discussions that go around in circles. So it's not about just mapping one conversation, but also about being able to link further conversations back to that formal map.

By "provisionally settle" I mean something like agreeing that whether you should spend your money on cryonics is an expected utility trade-off. If you think there is an ethical issue such that even wanting cryonics to work is bad, or sinful or what have you, there's no point discussing the price.

The discussion has a tree structure, as suggested in this thread by clockbackwards and Jennifer. Possibly you don't need much more of a tool than a good outliner. Perhaps not even that: if you set out to write the "definitive" article on the best known arguments for and against cryonics, just setting out everything in one place, that could be enough. I very much like the "roadmap" format, as in the whole-brain emulation roadmap (pdf link). The written word is hard to beat.

I do appreciate the work you've done on these Wiki pages, btw. Perhaps, as Vladimir Nesov commented somewhere, they won't go much further; but it's also good that they exist and collect in one place information that has taken some effort to gather.
The idea I keep coming back to is something that works basically the same as how the Transhumanist Wiki's Scenarios Map currently works, but expanded to include a few more features. The way it is set up now:

* Each claim in the argument would have its own node in the graph.
* Each connection between claims would have its own node in the graph.
* Each node in the graph would have its own wiki page.
* Each wiki page contains all of the information about the node, including tags that specify how it connects to the other nodes.
* Nodes for claims would include a description of the claim, any debate about the claim, links to supporting, opposing, supported, and opposed claims, and any other relevant information.
* Nodes for connections between claims would include an explanation of why the claims are connected, debate about whether the connection is valid, and any other relevant information.
* The graph is automatically generated by a PHP script that scans all of the wiki pages and parses the tags.
* Clicking on any node in the graph loads that node's wiki page, which shows a graph of nearby nodes.

A BETA VERSION OF THIS HAS ALREADY BEEN UP AND RUNNING FOR MONTHS! But no one has shown any interest in this at all. And I don't know why no one is showing any interest.

* Is it because the core idea of the system is so unworkable that it would be best to just abandon the whole project?
* Is it because no one has any use at all for any system even remotely like this?
* Is it because no one has heard about it?
* Is it because no one who heard about it actually checked it out?
* Is it because I've just done a really bad job of explaining what the system could do?
* Is it because the current interface is so ugly that everyone's first reaction is to turn away in disgust?
* Or is it because of something else I haven't thought of?

Please... I need feedback on this.

* If this comment is so long that you're not going to bother reading the rest of it, please post a
It is my humble opinion that these tools need to start with user interface design. If it looks like a Steve Jobs Apple product and it has the right features, then it has a good chance of succeeding. I'd love this not to be the case.
Tidy up: in favor. :)

This is of interest to me. I haven't expressed interest because I didn't know about it. Previously (pre-LW in fact) I've had an interest in using Semantic Wikis for the purpose of controversy mapping. Basing this type of work on a wiki makes sense to me, either in the context of controversy mapping a la MACOSPOL or in the context of trying to clarify my own thinking about cryonics.

Although (perhaps because) I used to love programming for fun, I have become a strong skeptic of just going off and implementing features on the off chance that the result will be somewhat useful. The code that's easiest to debug is the code you never write. My default move when thinking about doing something with technology is, "What can we possibly do that doesn't require implementing anything new?"

So, I would gladly volunteer to start "porting" my map of the cryonics debate to a Scenarios Map format, and report back with any issues I encounter. Offhand I feel that creating Wiki pages "by hand" might be the bottleneck, but I'm up to testing that.
Yay, finally some feedback! Thanks :)

Before I started the Scenarios project, I checked out the semantic wikis that were available, and concluded that none of them were able to do what I was trying to do with the scenarios project. Though I might as well admit that part of the reason why I went ahead and started coding this was just for fun.

Thanks for offering to help try this out! My Skype ID is PeerInfinity, please feel free to contact me any time I'm online.

Though we shouldn't need to make all of the wiki pages by hand. We should have some sort of automated tool that generates most of the content on the wiki pages from an annotated chat log, like that experiment I tried. I think the way this tool should work is that it should scan a wiki page containing the annotated conversation, and generate an XML file in the format used by MediaWiki's import tool. Then someone with admin access to the wiki can import the XML file using the import tool, or a regular user can manually copy the data from the XML file to the wiki. Have you used PHP before?

And as for posting this idea as an actual post... maybe before I do that, I should make some of the changes I mentioned to the project, and actually try using it on an example argument or two. I guess I'll use your cryonics argument and that other example argument.

Thinking about this project some more, it seems like these argument maps will fit in nicely with the scenarios map. Maps of arguments that a scenario will happen will fit in nicely with maps of things that could cause or prevent the scenario. Or maybe if there are too many links, then everything will look all tangled... I guess one way to find out is to try it and see what happens...
Maybe before I try to post this comment as an actual post, I should make some of the changes I mentioned to the project, and actually try using it on an example argument or two. I guess I'll use your cryonics argument and that other example argument.
I'm surprised more tools to do this kind of thing don't already exist. It reminds me of the Truth Maintenance Systems I learned about in AI classes in the mid-90s.
Why would academics want to make their field transparent to outsiders? ;-)
Don't think about the output at all at this stage. The annotations should be XML. The output should be left to the discretion of the browser.

I have my doubts that the "visual" aspect is as important as the creators of such software tools would like everyone to think.

I strongly suspect this is highly personal. People can relate to thoughts in different ways. The developers might primarily 'see' argument structure in their heads, where you might think less visually. This tools helps them clarify their ideas and show them to other visually oriented people.

So you're right that it might not catch on, but it may be that the visual aspect is hugely useful to the people it's adapted for.

I looked into this sort of software a couple years ago as part of an information visualization seminar at Stanford. People have been creating research prototypes and betas for a long time, aiming to support everything from education to the formalization of legal dispute resolution ("computational law"). I wasn't impressed with what I saw in my review then, and looking around again today, it doesn't look like things have changed much. I have never seen anything quantitative.

You can find descriptions of research systems and small-scale studies in... (read more)

Thanks. There's a distinction that my post tries (ever so subtly) to establish, between "mapping arguments" in the abstract, as a grand overarching goal, possibly augmented by intuitive claims such as "visualization clarifies thinking", on the one hand; and, on the other hand, the ground-level work of mapping of a particular controversy, with a particular tool which appears adequate to the task. ISTM this community should be more interested in the latter, though observations made while mapping particular arguments can (and should) of course be used to improve the tools. If you have an interest in the latter type of research, you might for instance want to take a look at the work of MACOSPOL, who now have an impressive looking list of mapped controversies. I've been talking to some of the people at Sciences Po working in that area.

One problem with these sorts of tools is that they encourage people to split their arguments into lots of little pieces, which means arguments for C often end up being A->B, B'->C where B and B' look alike but on closer inspection turn out to be different.

I think that is in fact the point -- so we can see where the argument falls down.
Why is this a problem?
The reason this (splitting the argument into many small pieces) is a problem is that sometimes, checking an argument by going over it with a microscope isn't sufficient. If the checker is mostly reliable but fallible, a long-enough inferential chain can defeat any specific amount of reliability, making the checker useless at best and maybe even deceptive. However, even though it is a flaw, it's not a fatal flaw. We can strongly prefer smaller arguments, and maybe there are other techniques too (representing is-a-refinement-of explicitly?).
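The quantitative version of this point is simple: if the checker verifies each step correctly with probability r, and errors are independent, an n-step chain is fully verified with probability r^n. A one-line illustration:

```python
def chain_reliability(r, n):
    """Probability that an n-step inferential chain passes a checker
    that verifies each step independently with reliability r."""
    return r ** n

# Even a 99%-reliable checker is nearly useless on a 300-step chain:
p = chain_reliability(0.99, 300)  # about 0.049
```

So any fixed per-step reliability is eventually defeated by chain length, which is one argument for strongly preferring shorter arguments.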
Sure. But it doesn't seem preferable to just not go over arguments piece by piece.
Promoting "closer inspection" of argument pieces sounds like a win to me, so I'm not sure how this is a problem. Can you elaborate, or perhaps illustrate with an example you've come across?

Here's another interesting and potentially useful tool I found recently. I'm not sure if this qualifies as a debate tool, but it seems like it's in the general category of what we're looking for:

Summary: is a wiki system with added camp and survey capabilities. The system provides a rigorous way to measure scientific / moral expert consensus. It is designed for collaborative development of concise descriptions of various competing scientific or moral theories, and the best arguments for such. People can join the camps representing such, givi... (read more)

A priori, it seems likely that would lead to Green vs. Blue behavior: "Go mind experts!"

Today I did a google search for "debate map", and this was the very first result:

This... is exactly what we're looking for, isn't it?

Though it still doesn't actually do anything with numbers.

I still haven't gotten around to continuing my own project for a debate tool that actually does calculations involving probabilities, though it has finally risen to the top of my to-do list. I was planning to get back to work on it last weekend, but ended up getting distracted by other things again.

I'm slowly getting more and more determined that a mass-usable but based-on-sound-principles debate/argument tool should be created, and a structure is taking shape in my mind. And somehow none of the tools I have seen can be adapted to fit this bill. I have seen your extensive comments and articles on this subject here. So:

1. Are there any serious problems in going mass-based? I would like this because we need to get more arguments, and that can't be done with a highly rigid and hard-to-use interface and model. This would limit it to those who are extremely passionate, either about the issue, or about putting it in an organized manner.
2. Are quantitative measures necessary? Right now the quality of arguments is so low that virtually anything structured is far better than the status quo. Would you say that, without a way of measuring the acceptance, authority, or logical strength, a tool would be ineffective?
3. Do you disagree that a strong community moderation is far better than very rigid rules in place? A system that rewards editing of arguments into logical nodes on a graph, much more than putting forth a plaintext argument, would encourage moderators in the way and allied sites do.
4. Can I PM/ping you?
As I recall, this was among the tools I tried and quickly dismissed, for reasons that had to do with usability, or that at any rate didn't make me want to invest further time in it. Did you give it a try? Curious what came out of that if you did.
I didn't try creating any new graphs with this tool, I just clicked around a couple of existing graphs for a couple minutes, then said "hey, this is pretty cool, I'm going to post to LW about it now". I did a quick check of the debate tools wiki page, and didn't find it in the list, so I added it to the list.

A point about using diagrams to make arguments: If you are attempting to convince a person that something is true, rather than just launching into your evidence and favorite arguments, it is often most efficient to begin by asking a series of questions to determine precisely how the person disagrees with you. The questioning allows you to home in on the most important sticking points that prevent the other party from coming to your conclusion. These points can then be attacked individually, preventing you from wasting time making arguments that the other pa... (read more)

Debate mapping is part of TakeOnIt, a publicly editable database of expert opinions introduced in a previous post. It's deliberately very simple. Here's how it works:

1) Every debate is expressed as a yes-no question.
2) Every yes-no question has experts on both sides of the debate.
3) Every debate can link to a sub-debate (recursively).

A "simple debate" is one where '1' and '2' are sufficient. You can determine who is right in a simple debate by judging which experts have the b... (read more)
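For the curious, those three rules can be sketched as a tiny data structure. This is my own hypothetical naming, not TakeOnIt's actual schema; it just shows how the recursion in rule 3 hangs together:

```python
# Sketch of the three TakeOnIt rules as a data structure.
# Field names are invented for illustration, not TakeOnIt's schema.
from dataclasses import dataclass, field

@dataclass
class Debate:
    question: str                                       # rule 1: a yes-no question
    experts_yes: list = field(default_factory=list)     # rule 2: experts on each side
    experts_no: list = field(default_factory=list)
    sub_debates: list = field(default_factory=list)     # rule 3: debates nest recursively

    def is_simple(self) -> bool:
        """A 'simple debate' is one where rules 1 and 2 suffice: no sub-debates."""
        return not self.sub_debates

cryonics = Debate("Is cryonics worthwhile?")
cryonics.sub_debates.append(Debate("Could a computer ever be conscious?"))
assert not cryonics.is_simple()
```

Judging a simple debate then reduces to weighing `experts_yes` against `experts_no`; a non-simple one recurses into its sub-debates first.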

Thanks for chiming in. TakeOnIt appears to encourage a "coarser-grained" approach to mapping a debate, compared to what I was trying to do with cryonics and how I ended up doing that in bCisive. Its mode of operation doesn't appear suitable for my purposes (improving discussions between people committed to truth-seeking, by exposing which parts of their belief structures are congruent and which parts conflict; and ultimately, letting myself be convinced by arguments which are actually accurate, not just convincing). Its raw material isn't arguments per se, but entire worked-out positions, expressed in the usual blend of rhetoric and logic.

Take for example this excerpt from the quoted position of Bryan Caplan on the "contra" side of cryonics: "If they had a ghost of a chance of giving me what I want, they wouldn't need to twist the English language." There is an inference there, which a finer-grained tool would let us consider on its own, after rendering it into its constituent parts: a) an observation ("cryonics advocates twist the English language") which may or may not correspond to the facts; b) an inference pattern ("people twist language to bolster untenable positions, therefore positions bolstered by twisted language tend to be untenable"); and c) a conclusion ("whatever cryonics advocates claim is an untenable position").

The issue here is that this sentence is of course not Bryan's entire reasoning on the matter; it's only an excerpt from a blog post he wrote which wasn't even intended as a potentially convincing argument, merely part of his telling a story about meeting Robin Hanson and the two of them discussing cryonics. Bryan's actual point isn't the above quoted (and rather low-quality) bit of argumentation; it is the assertion that "uploading doesn't count as life extension", and that doesn't appear in the quote. So, while TakeOnIt might be a valuable resource for researching a topic for the purposes of argument
Perhaps - let me know if I'm wrong - TakeOnIt argumentation is more fine-grained than it initially seems. To illustrate, I just added to the TakeOnIt database: 1) an implication between the questions "Is cryonics worthwhile?" and "Could a computer ever be conscious?"; 2) Bryan Caplan's opinion on whether a computer could be conscious. This new data now shows up in the cryonics question, which is now more "fine-grained" / less "coarse-grained". Of course, you can continue to add more sub-debates to the cryonics question, to make it even more fine-grained. Is this sufficiently fine-grained to be useful to you?

I have a strong intuition - once again, perhaps I'm wrong - that a system for collaborative argument mapping has to be pretty darn simple in order to work. I resonate with Eliezer's comment here: "I suspect it will not have lots of fancy argument types and patterns, because no one really uses that stuff." Is this not true? If not, then what would you like to see added to TakeOnIt to make it more useful to you?
I don't quite see how it works. Bryan Caplan has some other theory of identity and consciousness than the information state theory. He doesn't express it very well, it is not decomposed, we cannot add evidence or propositions for or against specific pieces of it. It seems like that kind of functionality is what the OP is looking for.
The functionality is already there... Bryan's position on cryonics is at least partly based on his doubts regarding conscious computers. How do we represent this idea? Add the following logical implication to the TakeOnIt database (done): "~p -> possibly ~q", where p="Could a computer ever be conscious?" and q="Is cryonics worthwhile?"
Eliezer Yudkowsky:
Er... this actually has almost no implications for cryonics. You'd just repair the old brain in situ.
It's not important - my point was that I just didn't see how to break the argument down to focus on that flaw, but apparently you can. To explain, though: Bryan's article was a response to a discussion he had with Robin. Apparently Robin focused on neuros and uploading in the discussion - I doubt Bryan has a full understanding of all the options available for cryo and the possible revival technologies.
Point taken. I removed the implication to question "p" per your suggestion, and added implications from question q (q="Is cryonics worthwhile?") to the questions:

a) "Is information-theoretic death the most real interpretation of death?"
b) "Is cryonic restoration technically feasible in the future?"
c) "Is there life after death?"

where the implications are: a -> possibly q; ~b -> necessarily q; c -> necessarily ~q.
Eliezer Yudkowsky:
Don't you mean ~b -> necessarily ~q? Also, for c, you must specify, "Is there pleasant life after death?"
Yes, it should have been ~b -> necessarily ~q. LOL. The idea that someone might actually expect an unpleasant life after death reminds me of some sort of twisted comic plot: the protagonist who's confident that they're going to hell so tries to postpone eternal suffering with cryonics. Seriously however, you're right. Here's another possible qualification: are we talking about a finite or infinite life after death? In light of these possibilities, I changed "c -> necessarily ~q" to "c -> possibly ~q". I can't change the wording of the question "Is there life after death" because that question in its simple general form is already used in many other contexts on TakeOnIt. At one point I'd considered allowing annotating an implication (e.g. to express qualifications, exceptions, etc.), but the complexity of the feature didn't seem worth it.
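For what it's worth, the corrected implications can be given a toy reading in code. This is my own formalization, not anything TakeOnIt actually runs: "necessarily" rules an answer out, while "possibly" merely keeps one on the table.

```python
# Toy reading of the corrected modal implications:
#   a -> possibly q,  ~b -> necessarily ~q,  c -> possibly ~q
# where q = "Is cryonics worthwhile?". Invented for illustration only.
def possible_answers(a: bool, b: bool, c: bool) -> set:
    """Return the set of answers to q still possible given the sub-questions:
    a = information-theoretic death is the real interpretation of death,
    b = cryonic restoration is technically feasible,
    c = there is life after death."""
    answers = {True, False}
    if not b:                 # ~b -> necessarily ~q: rules "yes" out entirely
        answers.discard(True)
    # "a -> possibly q" and "c -> possibly ~q" eliminate nothing by
    # themselves, so they leave the answer set unchanged.
    return answers

assert possible_answers(a=True, b=False, c=False) == {False}
assert possible_answers(a=True, b=True, c=True) == {True, False}
```

The asymmetry is the point: only the "necessarily" implication ever narrows the set, which is why getting its direction right (~q, not q) mattered above.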
Eliezer Yudkowsky:
I'm not sure, but I think I heard at least one story about someone who actually did this.
Wasn't that Paris Hilton? ;)
Eliezer Yudkowsky:
false alarm, she's not signed up

Hello everyone, I hope you don't mind me joining in on this 8-year-old post. I've been working on ideas like this since 2012 and just found this. My current experiment is Reason Score, where I am working on a way to measure the reasonableness of a claim based on the pro and con claims added to it. This will hopefully reduce cognitive biases by forcing people to add reasons to affect the score instead of votes. At the least it will encourage people to think through their claims.

It's not documented well so it might be best if someone has some time to debate me on a topic and see if it provides benefit. Any takers?
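To give a flavor of the kind of recursion I mean - this is only an illustration, not Reason Score's actual formula - a claim's score could be derived from the scores of its pro and con sub-claims:

```python
# Illustrative (invented) scoring recursion: a claim with no reasons either
# way scores 0.5 (unknown); pro and con sub-claims, themselves scored the
# same way, pull it toward 1.0 or 0.0. Not Reason Score's real formula.
def reason_score(pros: list, cons: list) -> float:
    """pros and cons are lists of child claims, each a (sub_pros, sub_cons)
    pair; returns a reasonableness score in [0.0, 1.0]."""
    def strength(children):
        # each child contributes its own recursively computed score
        return sum(reason_score(p, c) for p, c in children)
    p, c = strength(pros), strength(cons)
    return 0.5 if p + c == 0 else p / (p + c)

# one unrebutted pro reason pushes the score all the way up:
assert reason_score([([], [])], []) == 1.0
assert reason_score([], []) == 0.5
```

The key property such a scheme buys you is the one described above: votes alone can't move the number - only adding (or rebutting) reasons can.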

Some relevant links: structured debate is about how this should work, and dispute resolution technology lists some attempts to implement a solution.

Those pages are both GNU-FDL licensed (soon to be Creative Commons as soon as I can get around to it), so please feel free to snag anything useful and repost it on the LW wiki.


Thanks for writing this, this is an interesting area, and improving decision making is a worthwhile goal.

However, I am a bit skeptical about the extent to which people want to improve the decision-making process. In LW circles I can see that desire, but in the world at large it seems more important to win the argument than to be right; there is some work to do before people would even want to use the tools.

If we focus on LW-readers which are (hopefully) more interested in truth seeking, it would be interesting to see if there has been a... (read more)

There is an option in the bCisive application, under the "spaces" tab, to turn on guest access. It should supply you with a URL you can include in your post here. Without that option turned on, we would have to register, and you would have to invite each of us to view the argument map.

So: "spaces" -> "cryonics" -> "manage" -> turn on guest access

I thought I had enabled guest access. My apologies. I'll check. EDIT: the URL is fixed. Thanks for the heads-up.

Hey everyone! It appears I'm six years late to the party, but better late than never.

I've been building a website for the last few months which is very close to the ideas presented in this article. I've summarized some features of it, and added an entry to the wiki page:

Debate Map: Web platform for collaborative mapping of beliefs, arguments, and evidence.


  • Collaborative creation, editing, and evaluation of debate/argument maps.
  • Open source. (under the MIT license)
  • Developed using modern web technologies. (react-js, redux, firebase)
  • Built-in probabili
... (read more)
Playing around with the debates on Firefox causes graphical glitches. Chrome seems to work, but these submenus don't close after you click on them.
Yeah, I use Chrome myself, so compatibility in Firefox breaks sometimes (and I don't check that it's still working there as often as I should). I'll look into it relatively soon. As for the submenus not closing when you re-press their sidebar buttons, I just haven't coded that yet. It should be a one-line change, so it will probably be added by tomorrow. Thanks for checking it out. EDIT: Okay, I tried opening it in Firefox, and could not reproduce the "black boxes" issue in your screenshot. What version of Firefox are you using? Also, I've now updated the submenu/sidebar buttons to close the menus when re-pressed (and updated their appearance a bit).
It appears I can't replicate it either. I may have updated Firefox since last week or something? 54.0.1 (32-bit) is my current version.

Here's a new one which just crossed my radar: Wrangl. The link goes to a representative "argument".


There is a web-based tool being worked on at MIT's collective intelligence lab. Couldn't find the direct link to the project, but here's a video overview: Deliberatorium

There is also Scott Aaronson's worldview manager. This is designed to point out hidden contradictions (or at least tensions) between one's beliefs, by using programmed in implications to exhibit (possibly long) inferential chains that demonstrate a contradiction.

As such, it does have an OK DSL for such inferences.

EDIT: unfortunately, the site seems to be down, with broken SQL queries...

I think I've come across this once before. It wouldn't have sprung to mind as connected to argument mapping, so thanks for the connection. I'm afraid, though, that from the outside it doesn't look like it contributes much to what I'm after, which is exposing the structure of the cryonics debate or decision. Or is there some place where the source code to the inferential chains is published? The intent is to have a shared model that allows a conversation to turn away from "doesn't - does so", and toward "this spot is where we disagree".
The entire source code and history is published, and this includes the sample worldviews, which contain lines such as:

(CHOICE <=> TRICHOTOMY)
(CHOICE => !MEASURE)
(CHOICE => UNION)

These are not the inferential chains, just the basic building blocks; the inferential chains themselves are generated on the fly. No, it doesn't by itself lead to a good visualization of the argument structures, though it should be pretty easy to write some code to run these through graphviz's dot (or any other) graph visualizer. You do have to figure out how to represent harder implications, such as "(DETERMIN & FREEWILL => !NOFTL)", but a box per conjunction/disjunction isn't too hard. Yes, this doesn't quite do what you're after yet, but it seems to be a reasonable starting point.
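To illustrate the graphviz idea - this is just a sketch of what such glue code might look like, with the parsing step skipped and the rules already split into tuples:

```python
# Sketch: emit Graphviz dot source from worldview-manager-style rules.
# The rule tuples mirror the sample lines quoted above; a real version
# would parse the rule file and handle conjunctions with extra nodes.
rules = [
    ("CHOICE", "TRICHOTOMY", "<=>"),
    ("CHOICE", "!MEASURE", "=>"),
    ("CHOICE", "UNION", "=>"),
]

def to_dot(rules):
    """Render implication rules as a dot digraph; biconditionals (<=>)
    become double-headed edges."""
    lines = ["digraph worldview {"]
    for src, dst, op in rules:
        attrs = " [dir=both]" if op == "<=>" else ""
        lines.append(f'  "{src}" -> "{dst}"{attrs};')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(rules))
```

Piping the output through `dot -Tpng` would then give the kind of argument-structure picture discussed above; conjunctions like "(DETERMIN & FREEWILL => ...)" would need an intermediate node per conjunction, as suggested.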
I wasn't sure I liked this idea, but I'm definitely in favor of it now.
I suspect that for non-trivial discussions, turning your arguments into one of these formal models would take more skull sweat than the usual ad hoc methods of identifying where exactly two people disagree. And the people who really need help with this are the people who would take one look at the premise of the worldview manager and laugh contemptuously, then not think about it any more. I also suspect that the process of actually making one of these formal models would be a good exercise in breaking down your ideas, and could give interesting insights. The model of libertarian ideas is a good example of this.

I'd love to see more tools like Bcisive! Thanks!

[This comment is no longer endorsed by its author]

I had been using Rationale from Austhink back when I was taking Philosophy classes. It was actually pretty good compared to what others offer, but unfortunately limited to Windows. Ever since that time I have been thinking on and off about what software could be developed to both share ideas and refine our collective thinking in a better way. While there are tons of projects attempting such things, most are either horribly designed or targeted at a non-existent market.

I've read through the discussion above and I wonder why a semantic web approach wasn't mentioned... (read more)

I thought a lot about creating such a system and how it would look a number of years ago, but never did make any good progress on it. The point where I got stuck was to take a particular blog post with lots of debate in the comments and try to dissect it in different ways and see what ended up being the most useful. I found I didn't have the focus to do so.

Anyway, there's Truth Mapping, which I think sucks for quite a number of reasons.

I would love to see this turn into a software development project. I would be happy to participate, though I won't have much time any time soon.

Some thoughts

  • Sentences are powerful because we are hard-wired to use language, so if we can leverage this then all the better. It sounds like TakeOnIt is going down this path (though I haven't looked at that project).

  • Re crunching numbers: Ensuring the argument is always crunchable will mean the software will have to place strong restrictions on the structure and relations at all points during the argument's con

... (read more)