Two persons are trapped in a prison cell. The warden gives them a controversial question they disagree about, and promises to set them free if they manage to reach an honest agreement on the answer. They can discuss and debate for as long as they need, and all the relevant empirical data are available. Importantly, they are not allowed to just pretend to agree: they must genuinely find common ground with each other for the door of the prison cell to open. Needless to say, both participants want to escape the room as soon as possible, so they will do their best to reach an honest agreement (I know some of you would love to stay forever in a room with unlimited time and data – just pretend you want to leave the room for the sake of the thought experiment).
In most cases, a handful of good arguments from each side may be enough to settle the case. Sometimes, they will disagree on the meaning of the question itself, in which case they will first spend some time arguing about terminology before arguing about the content of the question. In more complicated cases, the subjects might turn to a meta-discussion about the best method to reach agreement and get out of the room. If they must debate whether to rely on the Scientific Method or the double-crux or any other advanced epistemic jutsu, they have all the time in the world to do that. The question is: is it always possible to escape the Argumentative Escape Room? Given unlimited time, will any two persons necessarily reach an agreement on any possible question, or are there cases where the two persons will never agree, despite their best efforts?
Of course, it is easy to find trivial cases where this will not work. For sure, if one participant is a human and the other is a pigeon, agreement might be hard to reach (although you can't say the pigeon really disagrees either, right?). If one participant has Alzheimer's and forgets everything you say after two minutes, it will be hard to change their mind on any somewhat complicated topic. But these are edge cases.
A more difficult question is whether some people just lack the fundamental intelligence to understand certain arguments, or if anybody can eventually understand anything given enough time. To take an extreme case, suppose one of the participants is a rudimentary AI with a very limited amount of memory space. Some arguments based on experimental data will never fit in that memory. It might be possible, in principle, to compress the data by carefully building layers of abstraction on top of each other, but there is a limit. Likewise, many mathematical proofs require logical disjunction, where you split the claim into a number of particular cases, and prove you are right for each case taken separately. If you are arguing with an AI that firmly disbelieves the 4-color theorem but lacks the hardware to survey the 1482 distinct cases, it is going to be very hard to truly convince it. Without knowing how the brain works, I am not sure how this would translate to humans debating "normal" controversial questions. Let's say your argument involves some advanced quantum mechanics. Most people won't understand it at first, but since you have all the time you want, you could just teach QM to the other participant until she gets your point and can agree or disagree with you. I have good hopes that most humans could eventually understand QM given enough time and patience. But it is not clear what the absolute limits of one particular human brain are, and whether these limits differ from person to person.
The problems I mentioned so far are merely "technical" difficulties. If we leave these aside, it seems reasonable to me that the two players will reach agreement on pretty much any factual statement or belief. If everything else fails, both parties can agree that they do not know the correct answer to the question, that more research is needed, that the question does not make sense, or that the problem is undecidable. The real problem lies on the other branch of Hume's fork. What happens if we ask the two participants to agree on moral values?
Is it okay to kill a cow for food? Is it okay to steal bread if your family is starving? Is it okay to kill a stolen cow for food if your family is starving? There is a Nature Versus Nurture kind of problem here. If values are entirely cultural, or come entirely from lived experience, then there is no reason to think that, after a sufficient time spent together, the two participants could not put their sacred values into perspective and find common ground about what is okay or not. On the other hand, if values are in part influenced by your brain's mechanisms for emotion, empathy or instinct, like the structure of your amygdala or the sensitivity of your oxytocin receptors, then it's entirely possible that two people will simply have different values, no matter how long they discuss it. We already know from classical twin studies that political opinions are in large part influenced by genetics. In developed countries, genetic factors are responsible for about half of the variance in attitudes towards egalitarianism, immigration and abortion. They might explain one third of the variance in patriotism, nationalism, and homophobia. One study suggested that an intra-nasal administration of oxytocin leads to increased ethnocentrism (but check out this skeptical paper for good measure). There is even a strange study where researchers could bias the reported political opinions of participants by stimulating parts of their brain with magnetic fields (that's right, scientists MANIPULATED people's views on IMMIGRATION using MAGNETS. Please, never tell my grandmother about this study). Thus, it is pretty clear that our opinions and values are not just the result of experience and reasoning, but also involve a lot of weird brain chemistry that we might not be able to change. Genetic differences are only one obvious factor of inescapable disagreement, but they are likely not the only one.
For example, it is easy to imagine that some experiences will leave irreversible marks on one's psyche (for an interesting illustration, look at the story of Gudrun Himmler). Can such barriers ever be overcome through discussion? I'm not sure.
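For readers curious where "about half of the variance" figures like those above typically come from: classical twin studies compare correlations between identical (MZ) and fraternal (DZ) twins, often via Falconer's formula. Here is a minimal sketch of that arithmetic, with hypothetical correlations chosen only for illustration, not taken from any of the studies mentioned:

```python
# Falconer's formula decomposes trait variance from twin correlations:
#   h2 = 2 * (r_mz - r_dz)   additive genetic variance ("heritability")
#   c2 = 2 * r_dz - r_mz     shared-environment variance
#   e2 = 1 - r_mz            unique environment + measurement error
# The correlations below are invented to illustrate a trait whose
# variance is roughly half genetic.

def falconer(r_mz, r_dz):
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return h2, c2, e2

h2, c2, e2 = falconer(r_mz=0.65, r_dz=0.40)
print(h2, c2, e2)  # h2 ≈ 0.50, c2 ≈ 0.15, e2 ≈ 0.35
```

This is only a back-of-envelope model (it assumes, among other things, that MZ and DZ twins share environments to the same degree), but it shows how a single pair of correlations translates into a "half the variance is genetic" claim.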
But that is just a fun thought experiment with mildly philosophical implications about the existence of objective truth. Since unlimited time is quite uncommon in the real world, and since reaching honest agreement is rarely the only goal of people who argue with each other, does it ever matter in practice? I think this thought experiment is important, because it clarifies our underlying assumptions about how we collectively handle disagreement.
When one defends the marketplace of ideas, deliberative democracy and absolute free speech, it is implicitly assumed that, for all practical purposes, any disagreement can eventually be solved through discussion and explanation. If it turns out some people will simply never agree because their minds operate in fundamentally different ways, then the marketplace of ideas probably needs a patch. The scenario that Karl Popper describes in his "paradox of tolerance" is precisely such a situation: there are very intolerant people out there who simply can't be reasoned with, so the best thing you can do is silence them. One essay from Scott Alexander describes two approaches to politics: mistake and conflict. Mistake theory is when you believe everybody wants to benefit the collective, and disagreements come from people being mistaken about the best way to achieve that. Conflict theory is when you believe that people are just advocating for their own personal advantage, and disagreements come from people serving different goals. At first sight, those who believe it is usually possible to escape the room might gravitate towards Mistake Theory, while those who think otherwise might be driven to Conflict Theory. However, things are more complicated.
In a recent study, Alexander Severson found that, when people are presented with evidence that political opinions have genetic influences, they typically become more tolerant of the other side. From the paper's conclusion:
“We proudly weaponize bumper stickers and traffic in taunt-infused comment-thread witticisms in the war against the political other, all in part because we believe that the other side chooses to believe what they believe freely and unencumbered. [...] In disavowing this belief and accepting that our own ideologies are partially the byproduct of biological and genetic processes over which we have no control, we may end up promoting a more tolerant and kinder civil society.”
Somehow, since the outgroup's obviously wrong opinions are altered by their genes, it's not entirely their fault if they disagree with you, so it becomes a forgivable offense. Alternatively, if differences in our opinions partially reflect differences in our bodies, then peace is only possible if we accept the coexistence of a plurality of opinions, and we may as well embrace it. Interestingly, in this study, about 20% of the participants ignored all the presented evidence, firmly rejecting the idea of any possible genetic influence on opinions. Perhaps the evidence that Severson showed them was not all that convincing, or perhaps the belief that genetics can influence beliefs is itself influenced by genetics, which, at least, would be fun to argue.
I'm curious about whether this question has already been treated by other people, in theory or – even better – experimentally. If you know of anything like that, please let me know.
One factor here: say there is a tradeoff between the tax rate and how good the roads are. Higher taxes mean better roads, and vice versa. It may be impossible for two people who each weigh road quality against tax percentage differently to agree on what the exact tax rate should be. However, it should be possible for them to analyze the data, find a formula that relates tax rate to road quality, and then determine at least what a given proposed policy will actually do.
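The proposal above can be sketched in a few lines: fit a simple relation between tax rate and road quality from observed data, then use it to predict what a proposed policy would buy. The data points and the linear form are invented purely for illustration:

```python
# Hypothetical observations: (tax rate %, road quality index 0-100).
data = [
    (10, 40), (15, 55), (20, 62), (25, 71), (30, 75),
]

# Ordinary least-squares line: quality ≈ a * rate + b.
n = len(data)
sx = sum(r for r, _ in data)
sy = sum(q for _, q in data)
sxx = sum(r * r for r, _ in data)
sxy = sum(r * q for r, q in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def predicted_quality(rate):
    """Predicted road quality for a proposed tax rate."""
    return a * rate + b

# The two people can still disagree about which point on the tradeoff
# is best, but they now share a model of what a 22% rate would buy.
print(round(predicted_quality(22), 1))
```

A real analysis would of course need far more data and a justified functional form; the point is only that the factual half of the dispute ("what will this policy do?") is separable from the value half ("is that outcome worth it?").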
Right now in politics, people are making decisions on critical issues in knee-jerk, tribal ways, without any reliable way to agree on what the consequences of each decision will even be. For example, tax rates versus economic activity: "trickle-down economics" and the "Laffer curve" make empirically testable predictions. Therefore any decision about the optimal tax rate, or about an economic policy's consequences, should at least be analyzed by its rationally predicted outcome, not by tribal identity.
Again, it may be impossible for the two people to agree on tradeoffs. For example, some policies may increase total wealth while others increase median wealth: one makes the nation as a whole more influential and powerful, while the other makes the 50th-percentile citizen better off.
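A toy computation makes this tradeoff concrete. The two five-person wealth distributions below are invented, chosen so that each policy wins on exactly one of the two metrics:

```python
# Hypothetical wealth distributions (arbitrary units) for a
# five-person economy under two policies.
policy_a = [10, 20, 30, 40, 400]   # concentrated gains
policy_b = [40, 50, 60, 70, 80]    # broad gains

def total(wealth):
    return sum(wealth)

def median(wealth):
    s = sorted(wealth)
    return s[len(s) // 2]  # middle element of an odd-length list

# Policy A wins on total wealth; policy B wins on median wealth.
print(total(policy_a), median(policy_a))  # 500 30
print(total(policy_b), median(policy_b))  # 300 60
```

Which policy is "better" is then a pure value question; the numbers themselves are not in dispute.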
But at a minimum, it should be honest and clear what the outcome will likely be, rather than the current situation, where it is possible that both policies under consideration by voters are terribly suboptimal, decreasing both median and total wealth relative to other possible choices.
In theory, two rational agents could compare data and eventually arrive at an algorithm that produces a consensus view of what the past outcomes even were, and then of the probable outcomes of new proposals.
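One standard way to formalize this: two Bayesian agents who start from very different priors but observe the same data and apply the same updating rule end up with nearly identical posteriors. A minimal sketch, using a coin-flip stand-in for "policy outcome" (all numbers hypothetical):

```python
import random

# Shared record of 1000 past outcomes (success/failure), generated
# from a hypothetical true success rate of 0.7.
random.seed(0)
true_rate = 0.7
data = [random.random() < true_rate for _ in range(1000)]

# Agent 1 starts very optimistic, agent 2 very pessimistic:
# Beta(9, 1) has prior mean 0.9; Beta(1, 9) has prior mean 0.1.
a1, b1 = 9.0, 1.0
a2, b2 = 1.0, 9.0

# Both agents perform the same conjugate Bayesian update on each datum.
for success in data:
    a1, b1 = a1 + success, b1 + (not success)
    a2, b2 = a2 + success, b2 + (not success)

mean1 = a1 / (a1 + b1)  # agent 1's posterior estimate of the rate
mean2 = a2 / (a2 + b2)  # agent 2's posterior estimate of the rate
print(round(mean1, 2), round(mean2, 2))  # both close to the true rate
```

With enough shared data, the priors are swamped: the two posterior means differ by less than 0.01 here, whatever values each agent started from. The hard part in practice is agreeing on the data and the model, not running the update.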
The term "honest agreement" does a lot of heavy lifting in your argument. There are various belief-changing techniques, commonly called dark arts, that can shift someone's beliefs in a desired direction, and these might be used instead of double crux, which is designed to lead to epistemically sound agreements.
Irreconcilable positions exist. The point isn't religious conversion; the point is to come up with a way for the irreconcilable to exist side by side without everyone having to murder each other.
As far as I'm concerned that's a matter of borders and sovereignty in the case of the truly irreconcilable. It is only through a unifying principle that any community can continue to exist, much less thrive.
Given that values have a huge effect on outcomes it makes sense they'd be subject to natural selection.