I put a substantial amount of effort into making my post Many Weak Arguments vs. One Relatively Strong Argument as clear as possible, but as Luke said:
This is a messy subject, and one that's difficult to write about, and I appreciate you tackling the topic. I think there are some important qualifications to make about this post, as others have noted. But I know that when writing about messy subjects, it's hard to avoid "death by a thousand qualifications."
So a lot of aspects of my thinking didn't percolate into my post. With this in mind, I decided to address some of the questions and concerns that people raised in the comments on my post. The discussion below is intended for people who read the aforementioned article, and who have nagging concerns and questions.
I'll address various questions and comments in turn.
Motivated cognition: Unnamed wrote:
Motivated reasoning is a bigger risk when dealing with weak arguments, since it is relatively easy to come up with weak arguments on the side that you favor, but it is hard to make an argument rigorous just because you want it to be true. It also seems easier to ignore various weak arguments on the other side (or dismiss them as not even worth considering) than to dismiss a strong argument on the other side.
I think that Unnamed raises a genuine weakness of the "many weak arguments" approach, but I think that the issue is smaller than it initially appears.
I didn't mean to suggest that one could consider a question, choose a position, come up with a bunch of weak arguments in favor of the position, note that they're largely independent, and conclude that the position is true. "Weak argument" is relative, and in general, the weak arguments on one side of a question will be weaker than those on the other side.
My suggestion is that one should make a list of many weak arguments for a position and against a position, consider them all in juxtaposition, and then make an assessment of the direction in which the principle of consilience points, and how strongly it points in that direction. My reason for highlighting the argument that "Penrose is a great physicist and so is unusually likely to be right in his views about consciousness" was to give an example of a weak argument on "the other side" that one should pay (a small amount of) attention to.
If a question is a high stakes question, one should solicit weak arguments from people who support the position opposite to one's own, and consider them in juxtaposition with the weak argument that one has generated oneself.
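One way to make "considering them all in juxtaposition" concrete is to treat each weak argument as a small multiplicative update on the odds, with arguments for and against the position pulling in opposite directions. The likelihood ratios below are illustrative assumptions, not measurements, and the calculation assumes the arguments are roughly independent:

```python
# Hypothetical likelihood ratios for weak arguments (values are assumptions).
# A ratio > 1 favors the position; a ratio < 1 counts against it.
arguments_for = [2.0, 1.5, 1.8, 1.4]   # four weak arguments in favor
arguments_against = [0.7, 0.8]         # two weak arguments against

prior_odds = 1.0  # start indifferent: 1:1 odds

# If the arguments are roughly independent, their likelihood ratios multiply.
posterior_odds = prior_odds
for lr in arguments_for + arguments_against:
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))  # → 0.81
```

The point of the sketch is only that weak evidence on both sides enters the same calculation; the direction and strength of the net update is what "consilience" summarizes.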
The ostensibly unbalanced quality of my discussion of the sample claim
The "majoring in a quantitative subject increases earnings (on average, for those on the fence)" example may have come across as unbalanced, on account of my not giving arguments against the claim (beyond the counterarguments). However, note that if a position is in fact true, one would expect "many weak arguments" to systematically support the position more than its negation.
I started under the presumption that it's more likely than not that majoring in a quantitative subject increases earnings (on average, for those on the fence), which I believed with low confidence, and then I investigated further. Considering weak arguments raised my confidence in its truth.
This is exactly what one would expect if the statement is in fact true. If the statement were false, then I would have come across more, and stronger, arguments against its truth. I'm fully open to considering arguments against it (beyond the counterarguments that I gave), but nobody offered any such arguments. This suggests that my asymmetrically finding arguments favoring the claim rather than opposing it was not driven by my having a predetermined bottom line, but rather by there genuinely being many more reasons to believe the claim than to disbelieve it.
Selection effects and non-independence: Unnamed wrote:
Selection effects will tend to expose you to more weak arguments on one side of an issue; e.g. if you are surrounded by Blues then you will be exposed to lots of weak arguments in favor of Blue positions, and few arguments in favor of Green positions. A person in this Blue-slanted situation has a better chance of finding their way into the pro-Green camp on an issue if they ignore the argument count and instead only compare the strongest pro-Blue argument that they have seen with the strongest pro-Green argument that they have seen (or, even better, the steel-manned version).
Nonindependence: a set of arguments on a given issue are rarely independent; arguments which share a conclusion often have strong (and perhaps hidden) dependencies and interrelationships. For example, a large fraction of the set of arguments may all rely on the same methodology, or come from the same group of people, or be (perhaps indirect) consequences of a single piece of evidence, or share a single auxiliary assumption. So a set of seemingly independent arguments often provides less evidence than it appears.
I acknowledge these as serious weaknesses of the "many weak arguments" approach. See my "bubble" example in this comment. I believe that using both approaches in conjunction yields better results than using the "many weak arguments" approach exclusively.
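A toy calculation (with made-up numbers) shows how much shared dependence can deflate a set of weak arguments:

```python
# Suppose five weak arguments each carry a likelihood ratio of 2 (assumed value).
lr = 2.0
n = 5

# Treated as fully independent, the ratios multiply:
naive_combined = lr ** n   # a 32x update on the odds

# If all five actually trace back to a single shared piece of evidence
# (same methodology, same source), they jointly give only one update:
fully_dependent = lr       # a 2x update

print(naive_combined, fully_dependent)
```

Real cases fall somewhere between these extremes, which is why checking for hidden dependencies matters before multiplying.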
Argument structure: the structure of a complex argument is often important but neglected, and it is not accounted for by listing simple points in favor of each side. To take one example, the claim IF (A or B or C or D or E) THEN Z has a very different structure from the claim IFF (A & B & C & D & E) THEN Z, but moderate evidence against D would appear similarly as "a weak argument against the claim" in both cases. Making a strong argument requires engaging with the structure of the argument.
As above, I acknowledge this as a weakness of the "many weak arguments" approach, and think that using both approaches in conjunction is better than using the "many weak arguments" approach exclusively.
At the same time, I think that the degree to which an argument is conjunctive vs. disjunctive is often highly nonobvious, so that the advantage that the "one relatively strong argument" approach has over the "many weak arguments" approach on this front is smaller than it might initially seem. See, for example, Holden Karnofsky's Objection 3 in his post about the former version of MIRI.
The 80/20 rule: in many domains, a small fraction of the things carry a large portion of the weight, and a useful heuristic is to focus on that small fraction (e.g., the 20% of effort that produces 80% of the results). Which suggests that, in this domain, the strongest few arguments will carry most of the evidential weight on an issue, and the long tail of weak arguments will not matter much.
and Utilitarian wrote:
My main comment on your post is that it's hard to keep track of all of these things computationally. Probably you should try, but it can get messy. It's also possible that in keeping track of too many details, you introduce more errors than if you had kept the analysis simple.
I believe that it's possible to keep track of ~ four to eight weak arguments without too much difficulty, and that this number of weak arguments often suffices to beat the "one relatively strong argument" approach. I also believe that tacit rationality implicitly picks up on still more weak arguments, so that using one's gut feeling as an input makes it possible to use even more weak lines of evidence than one would be able to otherwise.
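Under the same independence idealization as before, a handful of weak arguments can outweigh a single strong one. The likelihood ratios here are purely illustrative:

```python
# Assumed likelihood ratios (not from the post):
weak_lr = 1.7      # each individual weak argument
strong_lr = 10.0   # one relatively strong argument

# Six roughly independent weak arguments multiply together:
combined_weak = weak_lr ** 6   # ~24x update on the odds

print(combined_weak > strong_lr)  # six modest updates beat one large one here
```

Of course, this flips if the weak arguments are heavily correlated or if their individual ratios are closer to 1.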
Another commenter wrote:
I don't see how you can think that the probability of coming up with a weak argument for a position depends more on whether the position is true than on whether you think the position is true. The arguments clearly aren't independent, as they have the commonality that they all support the same position. The validity of your meta-argument has such a huge dependence on your ability to search argument space and evaluate all of them that I don't see how you can think that this compares to a single strong argument.
Given a claim, consider the question "What would the world look like if this claim were true?" and try to generate ~4-8 independent predictions. Then, look at the world, and check whether these predictions are borne out. If they are, then by Bayes' theorem, you can develop some confidence in the truth of the claim. If the predictions are not borne out, then by Bayes' theorem, you can develop some confidence that the claim is not true.
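As a sketch of the Bayes'-theorem step, suppose (purely for illustration) that each borne-out prediction is somewhat more likely under the claim than under its negation:

```python
# Assumed per-prediction probabilities (illustrative, not from the post):
p_pred_if_true = 0.8    # P(prediction holds | claim true)
p_pred_if_false = 0.5   # P(prediction holds | claim false)
n_borne_out = 6         # number of predictions that checked out

prior = 0.5
prior_odds = prior / (1 - prior)

# Independent predictions multiply the likelihood ratio:
lr = (p_pred_if_true / p_pred_if_false) ** n_borne_out
posterior_odds = prior_odds * lr
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 2))  # → 0.94
```

Each prediction is only weak evidence on its own (a 1.6x update), but six of them together move an even prior to fairly high confidence, which is the quantitative content of the "many weak arguments" approach.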