[Link] Eleven dogmas of analytic philosophy


In the alternative, intuitions still give evidence; the suggestion is merely to make more discriminating use of them. And a claim that intuitions are uncorrelated with truth would be pretty hard to defend.

And a claim that intuitions are uncorrelated with truth would be pretty hard to defend.

Depends on whose intuitions, and the truth of which class of statements. Truth value of a randomly constructed statement? Almost certainly no correlation. Truth value of a "relevant" statement, relevant in the context in which our intuitions were honed? Probably correlated.

EDIT: Requesting explanation for the downvote.

Truth value of a randomly constructed statement? Almost certainly no correlation.

What do you mean by "randomly constructed statement"? (I request an example.)

Requesting explanation for the downvote.

Please don't, especially if it's a single vote. Someone (not me) didn't like your comment, probably for no deep and important reasons. Karma discussions are an inefficient use of resources.

What do you mean by "randomly constructed statement"? (I request an example.)

Boolean formula (e.g. 3-SAT) with a random number of clauses, say around one googol (a very small example if the number of clauses is truly a random number).
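For concreteness, here is one way such a "randomly constructed statement" could be generated; a minimal sketch, where the uniform clause model and all parameter choices are my own illustrative assumptions:

```python
import random

def random_3sat(num_vars, num_clauses, seed=0):
    """Generate a uniform random 3-SAT instance: each clause samples three
    distinct variables and negates each with probability 1/2.
    Literals are nonzero ints; a negative sign means negation (DIMACS style)."""
    rng = random.Random(seed)
    return [
        [v if rng.random() < 0.5 else -v
         for v in rng.sample(range(1, num_vars + 1), 3)]
        for _ in range(num_clauses)
    ]

# A tiny instance for illustration; the statements discussed above would
# have on the order of a googol (10**100) clauses.
print(random_3sat(num_vars=5, num_clauses=4))
```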

Then I would agree that intuitions don't provide evidence for such statements, although it is a little bit confusing to say that there is no correlation between intuitions and truth for this class of statements since people *don't have any* intuitions pertaining to members of this class of statements.

To further nitpick, one googol is small compared to what, exactly?

(I'd find it way more natural for "randomly constructed statement" to be selected from statements people actually construct and have intuitive opinions about, given the context of the discussion.)

Maybe I can get my point across using this analogy:

"You would expect a human to be able to remember a random natural number."

"No - because only an infinitesimal fraction of random numbers fall into the range of having a number of digits that fit into a typical human's memory."

"Ah, I thought we were only talking about typical natural numbers in human use, from which we randomly choose one."

The discussion we're having has some similarities with talking about a "random mind" - discussing our little corner of the mindspace versus choosing from the whole of it.

Similarly, "truth" is a property of a vastly, vastly bigger class of statements than just those our intuitions were conditioned on. Of course there are heuristics we can use even on e.g. enormous k-SAT statements, but would we still call that a human intuition? I'm not talking about uncomputability here, merely about the inapplicability of human intuition to the vast majority of potentially true statements.

(I'd find it way more natural for "randomly constructed statement" to be selected from statements people actually construct and have intuitive opinions about, given the context of the discussion.)

If indeed you only choose from statements that people "have intuitive opinions about", it would of course follow that people have intuitive opinions about said statements.

The googol number of clauses was referred to as small if, e.g., the number of clauses is randomly chosen as a natural number.

The original post was about analytic philosophy and, in particular, about how much intuitions should be trusted. This is a pretty interesting question when we are investigating hypotheses whose truth or falsity are subject to human intuitions. Therefore I find it natural to interpret the statement you were initially responding to, i.e.

And a claim that intuitions are uncorrelated with truth would be pretty hard to defend.

as referring to statements inside the domain of human intuition. There is certainly no disagreement between us about the fact that human intuition isn't applicable to all statements.

The googol number of clauses was referred to as small if, e.g., the number of clauses is randomly chosen as a natural number.

I understood you to be saying that a googol is small compared to a (typical?) random natural number, but I am not sure what exactly that means. What is a typical random natural number? Do you have in mind some specific distribution whose mean is greater than a googol? What would be a number that wasn't small in comparison to random natural numbers?

(I have a feeling that my responses may appear somewhat obnoxious. If it is so, it's not intentional.)

Actually, 3-SAT exhibits an interesting behavior where there's a 'phase transition' between likely satisfiable and unlikely to be satisfiable:

In the early 1990s, it was observed that the probability that a random 3-SAT instance is satisfiable exhibits sharp threshold behavior when the control parameter α = c/v passes a critical value (Cheeseman, Kanefsky, and Taylor 1991; Mitchell, Selman, and Levesque 1992). The width of the window in which this solubility phase transition takes place becomes narrower as the instance size grows. Most interestingly, a wide range of state-of-the-art SAT solvers exhibit dramatically longer runtimes for instances in this critical region. Intuitively, note that instances are underconstrained when α is small (there are few constraints, and therefore many solutions), and overconstrained when α is large (there are many constraints, making it relatively easy to derive a contradiction). The so-called phase transition point occurs between these extremes, when the probability of generating a satisfiable instance is 0.5.

Crawford and Auton (1996) confirmed these findings in an extensive empirical study and proposed a more accurate formula for identifying the phase transition point: c = 4.258·v + 58.26·v^(−2/3). Kirkpatrick and Selman (1994) used finite-size scaling, a method from statistical physics, to characterize size-dependent effects near the transition point, with the width of this transition narrowing as the number of variables increases. Yokoo (1997) studied the behavior of simple local search algorithms on uniform random 3-SAT instances, observing a peak in the hardness for solving satisfiable instances at the phase transition point. He attributed this hardness peak to a relatively larger number of local minima present in critically constrained instances, as compared to overconstrained satisfiable instances. Hoos and Stützle (1999) further investigated the behavior of stochastic local search algorithms on random 3-SAT instances at the phase transition, demonstrating substantial runtime variability across sets of instances with the same number of variables and clauses, and showing that the runtime over independent runs on the same instance tends to be exponentially distributed (for near-optimal parameter settings of the algorithm).

So for a random 3-SAT with a googol clauses we could make a very accurate prediction about whether it is satisfiable.

(I was reading about this because even with tinier 3-SATs there's now polynomial-time algorithms with 70% classification accuracy, and I thought this might be useful in a successor to Bitcoin.)
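A sketch of that prediction, using the Crawford-Auton estimate quoted above (the function names and the test values are my own illustrative choices; only the threshold formula comes from the quote):

```python
def transition_clauses(v):
    """Crawford & Auton (1996) estimate of the number of clauses at the
    phase-transition point for v variables: c = 4.258*v + 58.26*v**(-2/3)."""
    return 4.258 * v + 58.26 * v ** (-2 / 3)

def predict_satisfiable(c, v):
    """Predict satisfiability of a uniform random 3-SAT instance with
    c clauses over v variables by comparing c to the transition point.
    Far from the threshold the prediction is almost certainly right;
    near it, it is close to a coin flip."""
    return c < transition_clauses(v)

# Underconstrained (alpha = 2): almost surely satisfiable.
print(predict_satisfiable(c=2_000, v=1_000))   # True
# Overconstrained (alpha = 10): almost surely unsatisfiable.
print(predict_satisfiable(c=10_000, v=1_000))  # False
```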

Interesting, although I'm not sure what you say follows:

As the number of clauses increases, so should the number of variables (randomly chosen). That is, the alpha that your result mentions can well fluctuate between over- and underconstrained values even with a googol clauses. Alpha, being the clauses-to-variables ratio, does not increase solely because the number of clauses increases, since the number of variables is randomly chosen as well.

Even for overconstrained values of alpha, the problem is still NP-complete, i.e. exponential in the worst case for all known algorithms. Being able to make an accurate prediction for overconstrained values is certainly possible, but would we call that heuristic an "intuition"?

As the number of clauses increases, so should the number of variables (randomly chosen).

Why should it? You didn't say anything about that.

Even for overconstrained values of alpha, the problem is still NP-complete, i.e. exponential in the worst case for all known algorithms. Being able to make an accurate prediction for overconstrained values is certainly possible, but would we call that heuristic an "intuition"?

I don't think there's a difference between heuristics and intuitions, or rather, I think many intuitions may just be how heuristics feel on the inside.

As the number of clauses increases, so should the number of variables (randomly chosen). That is, the alpha that your result mentions can well fluctuate between over- and underconstrained values even with a googol clauses.

For any particular 3-SAT instance, you can calculate α. If the random algorithm that generated it doesn't favor any particular clause types, then the distribution of outputs with c clauses and v variables should be uniform among all such 3-SAT instances, and therefore we can use α to figure out if the formula is (most likely) satisfiable.

Furthermore, unless you deliberately try to generate random instances with a particular value of α, I imagine under most models the value of α will be (with high probability) well on one side of the threshold, and so the probability that your formula is satisfiable will be either very nearly 1 or very nearly 0, depending on the random model.
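That claim can be illustrated with a toy Monte Carlo check; the generation model (clause and variable counts drawn independently and uniformly) and the window width are my own illustrative assumptions, not anything canonical:

```python
import random

def fraction_near_threshold(trials=10_000, threshold=4.27, window=0.5, seed=1):
    """Estimate how often a naive random model lands alpha = c/v within
    `window` of the 3-SAT phase-transition threshold, when c and v are
    drawn independently and uniformly from 1..10**6."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        v = rng.randint(1, 10**6)
        c = rng.randint(1, 10**6)
        if abs(c / v - threshold) < window:
            hits += 1
    return hits / trials

# Only a few percent of sampled (c, v) pairs land near the threshold; for
# the rest, the satisfiability prediction is nearly certain either way.
print(fraction_near_threshold())
```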

Yes, you can calculate alpha, then depending on its value predict whether the formula is satisfiable with a high probability. Is that still intuiting an answer?

The question is not whether - faced with a typically very, very large (if randomly chosen) number of clauses - you can implement smarter algorithms; the question is whether human-level intuitions will still help you with randomly constructed truth statements.

Not whether they can be resolved, or whether there are neat tricks for certain subclasses.

Of course if the initial use of "intuition" that sparked this debate is used synonymously with "heuristic", the point is moot. I was referring to the subset that is more typically referred to as human intuitions.

(I should've said k-SAT anyway.)

The Eleven Dogmas article has its points, but it isn't perfect.

3: A Coherentist has good reason to use moral intuitions as evidence for moral conclusions (and arguably others as well).

6: As a prescriptive method, arguments using formal logic can be quite reasonable for detailing what a person SHOULD conclude, given the assumption that they want their beliefs to fit the evidence. (Depends on context)

7: The claim that "The best thinking is both cognitive and emotional" is only backed up for ethics. Other than goal selection (which cannot be purely cognitive), I don't see any reason to believe it to be so.

8: Granted the absence of variants on the Evil Demon Argument (and arguably even if not so granted), some truths, if not much of any practical importance, can be demonstrated to be true in any possible world.

9: Treating thoughts as propositional attitudes may not reflect reality, but as a means to determine a thought's correlation with reality they are, though not perfect, reasonable in the proper context.

11: Different individuals, particularly across cultures, will have different ethical sentiments and thus different ethical goals on any level. Either you adopt subjectivism or nihilism and say there is no such thing as an objective ethics (admittedly reasonable, but contrary to what the article seems to imply), or you accept that attempting to reason ethically based on common goals of morality doesn't work.

Closely related to some of Luke's recent discussions about philosophy, philosopher Paul Thagard has recently called for changes to the way we do philosophy:

In the same article, Thagard also lists eleven areas where modern philosophy goes awry. For example:

Source: philosopher Paul Thagard