I'm looking for hard philosophical questions to give to people to gauge their skill at philosophy.

So far, I've been presenting people with Newcomb's problem and the Sleeping Beauty problem. I've also been presenting them with contrarian opinions and asking them to evaluate them, and I have a higher opinion of them if they avoid just icking away from the subject.

What other problems should I use? 

What query are you trying to hug?

I'm trying to test their philosophical ability. Some people immediately and intuitively notice bad arguments and spot good ones.

What decision rests on the outcome of your test?

I think there's a problem with your thinking on this: people can spot patterns of good and bad reasoning, but whether they notice a flaw in a given argument depends on a wide variety of factors. Someone who is pretty smart probably notices the most common fallacies naturally - they could spot at least a few while watching the news or listening to talk shows.

People who study philosophy will have been exposed to many more diverse examples of poor reasoning, and will have had practice identifying weak points and exploiting them to attack an argument. This increases your overall ability to dissolve or decompose arguments by broadening your exposure and by equipping you with a bag of heuristic tricks. People who argue on well-moderated forums or take part in discussions regularly will likely pick up some tricks of this sort too.

However, there are going to be people who can dissolve one problem but not another, because they have been exposed to something sufficiently similar to the first (and thus probably have some cached details relevant to solving it) but not to the second:

E.g. a student of logic will probably make the correct choice in the Wason Selection Task and may avoid the conjunction fallacy, but may two-box because they fall into the CDT reasoning trap. A student of the sciences or statistics, on the other hand, may slip up in the selection task but one-box, following the EDT logic.
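For concreteness, here is a minimal sketch of the selection task's logic (the A/K/4/7 card set is the standard illustration, not something from this thread):

```python
# Wason selection task: each card has a letter on one side and a number on
# the other. Visible faces: A, K, 4, 7. Rule to test: "if a card has a vowel
# on one side, it has an even number on the other." A card must be turned
# over only if its hidden side could falsify the rule.

def must_turn(visible: str) -> bool:
    """Return True if the card's hidden side could falsify 'vowel -> even'."""
    if visible.isalpha():
        # A visible vowel could hide an odd number, so it must be checked;
        # a consonant can never falsify the rule.
        return visible.lower() in "aeiou"
    # A visible odd number could hide a vowel, so it must be checked;
    # a visible even number can never falsify the rule.
    return int(visible) % 2 == 1

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # ['A', '7'] -- 'A' and '4' is the common wrong answer
```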

So if you're using this approach as an intelligence test, I'd worry about committing the fundamental attribution error pretty often. However, I doubt you're carrying out this test in isolation. In practice, it probably is reasonable to engage people you know or meet in challenging discussions if you're looking for people who are sharp and enjoy that sort of thing. I do it every time I meet someone who seems like they might have some inclination that way.

It might help if you provide some context though - who are you asking and how do you know them? Are you accosting strangers with tricky problems or are you probing acquaintances and friends?

How did you respond to Newcomb and Sleeping Beauty the first time you encountered them, before reading any discussion of them?

I came across both on LW, and I read discussion immediately.

What in the world is "skill at philosophy"?

I've also been presenting them with contrarian opinions and asking them to evaluate them, and I have a higher opinion of them if they avoid just icking away from the subject.

You have a higher opinion of people who make socially foolish decisions?

I bestow a higher likelihood of long-term closeness on persons who "avoid just icking away from the subject."

Oh, I apologize. I entirely misread what you were doing, I think.

I sorta think you can't possibly disagree with this, or you wouldn't be here.

Um... kind of? I guess it depends on what sort of contrarian opinions you were sharing and what sort of setting you were doing it in.

The latter part assumed you were mainly replying to the second question I asked. I also apologize for the bluntness of those questions. However, I would like to clarify my first question slightly.

When I see the phrase "skill at philosophy" it makes me think of professional philosophers. You are probably not trying to test for the kinds of skills found in professional philosophers, since most of those skills cannot be tested through informal questioning. I now realize that you were trying to test for, I think, the ability to think logically about philosophical topics and an openness to unpopular ideas. Sorry for the misinterpretation.

What in the world is "skill at philosophy"?

On the other hand, I suspect that it is possible to rank people according to their skill at philosophy, and come up with an ordering that's reasonably widely agreed upon, as long as the points are not too close. Just for fun, here are a few to rank...

So I guess there is such a thing.

Beyond the obvious signaling opportunity of saying that creationists are the worst people ever, I'm not having an easy time figuring out which way the ranking should go between a celebrity who appears to be totally apathetic towards philosophy and a creationist apologist who is enthusiastically doing very bad philosophy.

I also wonder how much agreement there would be if we tried to establish the ranking between Richard Dawkins and Jerry Fodor.

I do not really agree with Fodor on most issues, but Jerry Fodor (2010) is very different from Jerry Fodor (1978).

Are you looking for problems with a counter-intuitive, yet widely accepted answer among academics?

Well, I'm mostly using these on people who haven't read much or any philosophy, so those would work. That said, I think that a lot of smart people can get to the right answer even when there isn't any consensus in the philosophical community.

If there is no consensus, how do you know what answer is "right"? Surely if it was a simple matter of computation or logic, there would be a consensus.

As far as I can tell, he is judging "rightness" by how closely it approximates Less Wrong doctrine.

There are so many places where someone's thinking could be biased or incomplete that, if one is going to take these questions seriously, I think a heuristic approach would be more helpful than seeing whether someone independently comes to your conclusion.

Off the top of my head, I would give points for trying to falsify their own position, taking into account human bias (if they already have knowledge of the literature on bias), asking clarifying questions instead of running with an incomplete interpretation of the problem, a willingness to accept criticism when it is correct, and a willingness to brush badly constructed criticism aside.

Surely if it was a simple matter of computation or logic, there would be a consensus.

Optimist, eh? :D

A standard Bayesian problem would work great. I paid my 13-year-old nephew $1 to solve one.
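For instance, the classic base-rate puzzle is the sort of thing meant here; a minimal sketch, using the standard illustrative numbers (1% prevalence, 80% hit rate, 9.6% false-positive rate) rather than anything from this thread:

```python
# Classic base-rate problem: 1% of patients have the disease; the test
# detects 80% of true cases and false-alarms on 9.6% of healthy patients.
# Given a positive result, how likely is the disease?

def posterior(prior, p_pos_given_disease, p_pos_given_healthy):
    """Bayes' theorem: P(D|+) = P(+|D)P(D) / P(+)."""
    p_positive = (p_pos_given_disease * prior
                  + p_pos_given_healthy * (1 - prior))
    return p_pos_given_disease * prior / p_positive

print(posterior(0.01, 0.80, 0.096))  # ~0.078 -- most people guess far higher
```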

Also: If you call a tail a leg, how many legs does a horse have?

Be careful how you reward people for mental tasks if you care about the long term cultivation of their mind.

If you call a tail a leg, how many legs does a kangaroo have? If you call an arm a leg, how many legs does a human have? There's a whole sequence on the trouble with putting too much store in the meanings assigned to words.

I'd settle for a well-thought-out answer, even if it's not the one I agree with.

Searle's Chinese Room is a great (awful) case for testing how well people think. The argument can be attacked (successfully) in so many different ways that it's a good marker of both the ability to analyze an argument and the ability to think creatively. Even better if, after your interlocutor kills the argument one way, you ask him or her to kill it another, different way. (Then repeat as desired.)

What do you mean by "great (awful)"? Do you mean that the thought experiment itself is an awful argument against AI, but describing the argument is a good way to test how people think?

Yes, that's exactly what I mean. The argument itself is terrible. But it invites so many reasonable challenges that it is still very useful as a test of clear thinking. So, awful argument; great test case.

On a related note, I remember the day my PhD advisor (a computability theorist!) revealed that he believed the argument against AI from Gödel's incompleteness theorem. It was not reassuring.

Smarter than human AI, or artificial human level general intelligence?

The latter.

Ya.

Picture a room larger than the Library of Congress that takes a million years to answer the simplest question, and the argument entirely dissolves. Imagine some nonsense the way Searle wants you to (a small room, talking fast enough), take the possibility of such a thing as a postulate, and you'll have created a logically inconsistent system* in which you can prove anything, including the impossibility of AI.

*Postulating that, say, the good ol' ZX Spectrum can run a human-mind-equivalent intelligence in real time on 128 kilobytes of RAM is ultimately postulating a mathematical impossibility, and you should in principle be able to get all the way to 1=2 from there.
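A back-of-envelope version of that impossibility claim (the synapse count and bits-per-synapse figures are rough textbook estimates I'm assuming, not numbers from the comment):

```python
# Order-of-magnitude comparison of a human brain's state to 128 KB of RAM.
# Assumed figures: ~1e14 synapses, a few bits of state per synapse.

synapses = 1e14
bits_per_synapse = 4                      # conservative assumption
brain_bits = synapses * bits_per_synapse  # ~4e14 bits

spectrum_bits = 128 * 1024 * 8            # 128 KB of RAM ~ 1e6 bits

print(f"brain state ~ {brain_bits:.0e} bits")
print(f"128 KB RAM  ~ {spectrum_bits:.0e} bits")
print(f"shortfall   ~ {brain_bits / spectrum_bits:.0e}x")  # ~4e8x too small
```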

I'm not sure I understand the Library of Congress bit, but the footnote is exactly right. Even so, that is only one way of resisting Searle's argument. The point for me is that we can measure cleverness to some tolerance by how many ways one finds to fault the argument. For example:

a. The architecture is completely wrong. People don't work by simple look-up tables.

b. Failure of imagination. We are asked to imagine something that passes the Turing test. Anyone convinced by the argument is probably not imagining that premiss vividly enough.

c. The argument depends on a fallacy of division/composition. Searle argues that the system does not understand Chinese since none of its parts understand Chinese. But some humans understand Chinese, and it is implausible that any individual human cell understands Chinese. So, the argument is logically flawed.

d. In order to have an interactive conversation, the room needs to have something like a memory or history. Understanding isn't just about translation but about connecting language to other parts of life.

e. Similarly to (d), the room is not embodied in any interesting way. The room has no perceptual apparatus and no motor functions. Understanding is partly about connecting language to the world. Intelligence is partly about successful navigation in the world. Connect the room to a robot body and then present the case again.

...

Further challenges could be given, I think. But you get the idea.

I meant that the room has to store many terabytes of information, very well organized too (the state dump of a Chinese-speaking person). It's a very big room, library-sized, and an enormous amount of paper gets processed, over an enormous timespan, before it says anything.

The argument relies on imagining a room that couldn't possibly have understood anything; imagine the room 'to scale' and the timing to scale, and the assertion that the room couldn't possibly have understood anything loses ground.

There's another argument like the Chinese Room, about a giant archive of answers to all possible questions. It works by severely under-imagining the size of the archive, too.

There's another argument like the Chinese Room, about a giant archive of answers to all possible questions. It works by severely under-imagining the size of the archive, too.

Agreed.

Brief discussion of free will / determinism, followed by "What observations make you think you have free will?"

If the question is novel, this seems like a fairly straightforward (and open-ended) test of question-answering.

By "hard problem" do you mean harder than "If a tree falls in a forest does it make a sound?" or as hard as the hard problem of consciousness?

Would a Star Trek-style teleporter teleport you or result in a new person (in a universe where you can be made of different atoms)? What if it creates a duplicate without destroying the original? Is there any action you can take that preserves identity?

Trolley problem. For that matter, utilitarianism vs. deontological ethics.

Copenhagen vs. Many Worlds. Many Worlds vs. Timeless. Those require an understanding of quantum physics, though.

Those are good ideas. I've been using the trolley problem.

Ask them if they think preservation of identity over a normal human lifespan is a coherent desire.

[This comment is no longer endorsed by its author]