Just a short table summary of an article by James Shanteau on which areas and tasks experts developed good intuition in, and which ones they didn't. Though the article is old, its results seem to be in agreement with more recent summaries, such as Kahneman and Klein's. The heart of the article is a decomposition of characteristics (of professions, and of tasks within those professions) for which we would expect experts to develop good performance:


| Good performance | Poor performance |
|---|---|
| Static stimuli | Dynamic (changeable) stimuli |
| Decisions about things | Decisions about behavior |
| Experts agree on stimuli | Experts disagree on stimuli |
| More predictable problems | Less predictable problems |
| Some errors expected | Few errors expected |
| Repetitive tasks | Unique tasks |
| Feedback available | Feedback unavailable |
| Objective analysis available | Subjective analysis only |
| Problem decomposable | Problem not decomposable |
| Decision aids common | Decision aids rare |

I do feel that this may go some way to explaining the expert's performance here.

The headers could be changed to "easy problem", "hard problem". I'd make the same breakdown for machine learning.

Yeah, that's kind of an issue here. What if you've got to work on a hard problem? I'd rather know what influences expert ability in a given domain, in case I want to be a good expert but don't have a free choice of which problems to work on.

The lower entries in the table seem susceptible to being moved from right to left. As for the top ones, well, they proclaim that you should widen your error bars.

In practice, it's often been found that simple algorithms can perform better than experts on the right-hand problems. We can't really have an algorithm for designing AI, but maybe for timeline work it could be good?
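(For the curious: the "simple algorithms" in that literature are often just unit-weighted linear models in the style of Dawes's improper linear models: standardize each cue and add them up with equal weights. A minimal sketch, where the cues and data are entirely hypothetical:)

```python
# Minimal sketch of a "simple algorithm" of the kind that has beaten
# expert judgment: a unit-weighted (Dawes-style improper) linear model.
# The cues and data below are hypothetical, purely for illustration.
import numpy as np

def unit_weighted_score(cues: np.ndarray) -> np.ndarray:
    """Standardize each cue column, then sum them with equal weights."""
    z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
    return z.sum(axis=1)

# Three hypothetical candidates scored on three cues
# (test score, GPA, interview rating).
cues = np.array([
    [620.0, 3.1, 4.0],
    [710.0, 3.8, 2.0],
    [540.0, 2.9, 5.0],
])
print(unit_weighted_score(cues))  # higher total = predicted better outcome
```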

I strongly suspect that the primary result of such an algorithm would be very wide error bars on the timeline, and that it would indeed outperform most experts for this reason. You can't get water from a stone, nor narrow estimates out of ignorance and difficult problems, no matter what simple algorithm you use. Though I would be quite intrigued to be proven wrong about this, and I have seen Fermi estimates for quantities such as the mass of the Earth apparently extract narrow and correct estimates out of the sums of multiple widely erroneous steps.
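(A toy simulation of why that can happen, with an assumed error model rather than anything from the article: if each step of a Fermi estimate is off by an independent multiplicative factor, the errors add in log-space, so the typical error of the product grows only like the square root of the number of steps rather than compounding fully:)

```python
# Toy simulation (assumed error model): each step of a Fermi estimate is
# off by an independent lognormal factor. Log-errors add, so the product's
# typical error grows like sqrt(n_steps), not like n_steps.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_steps = 100_000, 10
sigma = np.log(3)  # each step is typically wrong by a factor of ~3

log_errors = rng.normal(0.0, sigma, size=(n_trials, n_steps))
total = log_errors.sum(axis=1)

print(np.exp(sigma))            # typical per-step error factor: ~3
print(np.exp(total.std()))      # typical overall factor: ~3**sqrt(10) ~ 32
print(np.exp(sigma * n_steps))  # fully compounded worst case: 3**10 ~ 59049
```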

and I have seen Fermi estimates for quantities like e.g. the mass of the Earth apparently extract narrow and correct estimations out of the sums of multiple widely erroneous steps.

out of how many wrong/wide estimates using the same method?

You still might be able to get some use out of knowing which claims of expertise are at least plausible.

The chart gives one and only one hint: do something to move your problem from the hard column to the easy column.

Some of the examples given with good performance, e.g. firefighting and chess, are not easy problems; they can be phenomenally hard to master. It's just that they can be mastered.

Those characteristics aren't static. Consider the development of metallurgy over the last thousand years as an example: during the early development of steel, metalworking had many of the second set of characteristics, and blacksmiths of the time would not be considered experts today (although they would do better than most people).

Those are characteristics of tasks and professions where we should expect that experts exist and have good performance, because many of those characteristics are created along with experts.

The author made that point in the paper! As fields improve, they move from the right to the left of the table.

I misunderstood; I thought that it was a way to determine how well experts would perform in a field, based on those characteristics.

One could also read it as a way to determine if experts exist in a field, based on those characteristics.

(retracted, see below)

[This comment is no longer endorsed by its author]

To me this reads like changing the subject to your favorite topics. But, in fact, you don't want to have a public discussion about them, so this winds up seeming pretty useless.

Perhaps I'm wrong: do you have some line of investigation into institutional incentives or ideology that would, e.g., greatly help Stuart in his effort to parse expert opinion on AI timelines? Or is his problem an exception?

You're right, these topics do make me sound like a broken record, and I also didn't take into account the broader context. It's just that I'm really irritated with papers like these.

Why not make a top-level post or two that you can just link back to occasionally? This would also help to avoid derailing new comment threads, as discussion could take place at said posts.

I tried that a while ago, but the results were disappointing enough that in the meantime I've grown somewhat embarrassed by that post. (Disappointing both in terms of the lack of interesting feedback and the ruckus occasioned by some concrete examples that touched on controversial topics, which I avoided with less scrupulousness back then.) For whatever reason, insofar as I get interesting feedback here, it looks like I get more of it per unit of effort when I stick to run-of-the-mill commenting than if I were to invest effort in quality top-level posts. (I don't think this is a general rule for all posters here, though.)

The post has 63 upvotes and has been repeatedly linked to. Talking about controversial hypotheses in the hypothetical and presenting them by citation/quotation seem like manageable ways to reduce some of those downsides.

I'm not complaining about a lack of upvotes and links, but about the lack of responses that leave me with more insight than I started with, and also a general lack of understanding of the nature and relevance of the problems I'm trying to discuss. I'd rather have a comment buried deep in some obscure subthread with zero upvotes, which however occasions a single insightful response, than a top-level post upvoted to +200 and admiringly linked from all over the internet, which however leaves me with no significant advance in insight (and possibly only reinforces my biases with the positive attention).

(Not that I'm always optimizing for feedback, of course -- sometimes I just fall prey to the "someone is wrong on the internet" syndrome. But, for whatever reason, as embarrassing as such episodes may be, they fill me with less dissatisfaction in retrospect than failures of systematic and planned effort.)

Even within the pure feedback-egoist framework (really?), do you think people haven't had that post in mind in later discussions with you?

It's hard to tell, but if they have been influenced by that post, then considering the lack of adequate reception of the post in the first place, this probably didn't improve their understanding of my comments, and has perhaps even worsened it.

Also, I don't claim to be anywhere near the ideal of optimizing for feedback in practice. After all, "When vanity is not prompting us, we have little to say." But I would certainly change my posting patterns if I were convinced that it would improve feedback.

I also don't think that low returns from top-level posts are a general rule; it's probably mainly my lack of writing skill (particularly in English) that makes my writing more readable and cogent when I'm confined to the shorter space and pre-established context of a comment.

(Although, on the other hand, one general problem is the lack of any clear and agreed-upon policy for what is on-topic for LW, which makes me, and I suspect also many other people, reluctant to start discussions about some topics, but ready to follow up when others have already opened them and found a positive reception.)

For whatever reason, insofar as I get interesting feedback here, it looks like I get more of it per unit of effort when I stick to run-of-the-mill commenting than if I were to invest effort in quality top-level posts. (I don't think this is a general rule for all posters here, though.)

What about comments that are basically article-length, posted in the open thread? I've found it a useful place to write drafts, and I've had some interesting commentary on them.

Also, I would say you are heavily underestimating just how useful it is to other LWers to have a well-thought-out article on a certain subject available for future reference, even if the comment section isn't that great.

Incentives explain a lot. Add to that incomplete information and intellectual and emotional biases. There are reasons that large organizations are often such hell holes.

Another help is evolutionary theory. If an organization actually solves a problem, it will cease to exist. But if it makes the problem worse, it will be eternal and grow without bounds.

I think your evolutionary theory explanation is a bit underspecified.

Since organizations don't have offspring, classical natural selection can't be occurring. It's conceivable that even if organizations have the tendency to go bad as you describe, new uncorrupted organizations may be created and old organizations may be dissolved at such rates that the vast majority of extant organizations are effectively solving the problems they set out to solve. Or the steady state may be bleaker as you suggest.

Since organizations don't have offspring, classical natural selection can't be occurring.

Natural selection can occur without offspring. Natural selection is just having differential kill rates for differing features.

It's conceivable that ...

Yes, conceivable. The major counterforces are the ability of the first organization to crowd out newcomers by starving them of resources, or to infect them with its corruption.

Only if it makes the problem worse in a way that is not obvious to those who care about solving it. If the American Lung Association went around giving out cigarettes, people might eventually catch on.

Not necessarily if it simultaneously produced research saying that cigarettes are good for your lungs.

Nah. Eventually some folks would decide to enter the lung-doctoring profession because their parents had died of lung cancer and they actually wanted to cure it. People are not predictably 100% short-sighted and mercenary; the Prisoner's Dilemma is not the "Prisoner's Stupid Question – Obviously Everyone Defects All The Time."

Well, medicine was dominated by crackpots and charlatans for millennia, and no one seems to have noticed until recently. For example, mercury was used as a cure for all sorts of things for quite a long time. Not to mention how long bloodletting was used in medicine. I could give more examples, but you get the idea.