by [anonymous]
2 min read · 30th Sep 2010 · 17 comments


Sometimes it's obvious who the good scientists are.  They're the ones who have the Nobel Prize, or the Fields Medal.  They're the ones with named professorships.  But sometimes it's not obvious -- at least, not obvious to me.  

In young, interdisciplinary fields (I'm most familiar with certain parts of applied math) there are truly different approaches.  So trying to decide between approaches is at least partly tied to whether you think something is, say, really a biology problem, a computer science problem, or a mathematics problem.  (And that's influenced by your educational background.)  There are issues of taste: some people prefer general, elegant solutions, while some people think it's more useful to have a precise model geared to a specific problem.  There are issues of goals: do we want to build a tool that can be brought to market, do we want to prove a theorem, or do we want to model what a biological brain does?  And there's always tension between making assumptions about the data that allow you to do prettier math, versus permitting more "nastiness" and obtaining more modest results.

There's a lot of debate, and it's hard for a novice to make comparisons; usually the only thing we can do is grab the coattails of someone who has proven expertise in an older, more traditional field.  That's useful for becoming a scientist, but the downside is that you don't necessarily get a complete picture (as I get my education in math, I'm going to be more inclined to believe that the electrical engineers are doing it all wrong, even though the driving *reason* for that belief is that I didn't want to be an electrical engineer when I was 18).

I'm hankering for some kind of meta-science that tells you how to judge between different avenues of research when they're actually different.  (It's much easier to say "Lab A used sounder methodology than Lab B," or "Proof A is more general and provides a stronger result than Proof B.")  Maybe it's silly on my part -- maybe it's asking to compare the incomparable.  But it strikes me as relevant to the LW community -- especially when I see comments to the effect that such-and-such approach to AI is a dead end, not going to succeed, written as though the reason why should be obvious.  I don't know about AI, but it does seem that correctly predicting which research approaches are "dead ends" is a hard problem, and it's relevant to think about how we do it. What's your methodology for deciding what's worth pursuing?

(Earlier I wrote an article called "What is Bunk?" in which I tried to understand how we identify pseudoscience.  This is roughly the same question, but at a much higher level, when the subjects of comparison are all professional scientists writing in peer-reviewed journals.)

17 comments
[anonymous] · 14y

I'm not sure if I can answer, but I can give historical background. This is by no means a new question--philosophy of science has been going at this for decades. What we are really talking about is comparing methodologies and standards of evidence. There are a lot of responses to this question, the most important probably being Thomas Kuhn's and John Dewey's.

Kuhn's response is that it is almost impossible; we will always use arguments centered in our own standard of evidence to claim that our standards are good. This isn't quite relativist, but it suggests that we cannot do much to compare standards because we would have to jump out of the system to do so. However, Kuhn did believe that better methodologies and standards of evidence have "more explaining power," but he didn't really quantify what this means.

Dewey's response is a little more useful. He argued that we can compare the effectiveness of particular solutions, methodologies, and standards of evidence by looking at their ability to enhance our instrumental rationality. If one particular methodology does not give us as much instrumental rationality as another, the former is clearly inferior. (The advantage of this method is that it allows us to make a factual statement about methodologies, not just a normative one.)

In terms of deciding what's worth pursuing, I will direct you to Larry Laudan and Imre Lakatos. They discussed the idea of competing "research programs" and "research traditions" in science and explored how we should handle this. David Hull also offers some interesting (and much more well-defined and practical) solutions. But this takes you deeper into philosophy of science and is a bit harder to understand.

(Edited for grammar and clarity)

(as I get my education in math, I'm going to be more inclined to believe that the electrical engineers are doing it all wrong, even though the driving reason for that belief is that I didn't want to be an electrical engineer when I was 18.)

Reminded me of this post.

(as I get my education in math, I'm going to be more inclined to believe that the electrical engineers are doing it all wrong, even though the driving reason for that belief is that I didn't want to be an electrical engineer when I was 18.)

I'm reminded of the infamous "delta function" - it's something that makes no sense from the perspective of standard undergraduate calculus, but engineers used it all the time... and it always gave the right answer. That last part is the part that really drove the mathematicians up the wall. ;) (Someone eventually did come up with a formalization in which it made sense, though.)
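(A minimal sketch of the property the engineers relied on, for readers who haven't met it; these are standard facts rather than anything specific to this thread. The delta "function" is used as if it satisfied

$$\int_{-\infty}^{\infty} f(x)\,\delta(x-a)\,dx = f(a),$$

and one common way to picture it is as the limit of ever-narrower unit-area spikes,

$$\delta(x) = \lim_{\varepsilon \to 0^{+}} \frac{1}{\varepsilon\sqrt{\pi}}\, e^{-x^{2}/\varepsilon^{2}},$$

where the limit only makes sense when taken inside an integral against a smooth f. No ordinary function has both properties, which is exactly what bothered the mathematicians.)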

Is anything known about how the delta function was developed?

I was unable to find firm online references, but your Google-fu may be stronger than mine.

I'd venture to suggest that this can be a problem in any cutting edge field. For example, look at computational complexity. Resolving questions like whether P = NP is tough. There are many different proposed methods of attack and lots of people saying that methods that other people are working on don't stand a chance or are extremely unlikely to succeed. So even in fields that are not young and are not interdisciplinary, this sort of problem can exist. This difficulty may be a natural corollary of simply doing research on the cutting edge of a field. The only reason this might seem more prominent in young fields is that, if I may abuse a metaphor, in those fields there's a much larger surface area that is the cutting edge.

Find the best published case anyone has made for each research program. If you have access, ask the most capable people in each program to share their reasons for working on it as opposed to alternatives.

This should at least be interesting. I expect that there are smart people with good aesthetic sense, intuition, or just luck, working in a great direction, who will fail to communicate any of that effectively (obviously luck is incommunicable). But there should be some smart people in all of the viable programs who have spent time thinking about precisely these questions.

Robin Hanson recently described (and partially endorsed) the tendency for many bright people to avoid thinking about the big picture once they've committed to a course (instead focusing on achieving the rewards they expect from it).

So trying to decide between approaches is at least partly tied to whether you think something is, say, really a biology problem, a computer science problem, or a mathematics problem.

This is an especially interesting problem, because it seems very difficult to rationally assess what kind of approach to an especially open problem is most likely to work. If we're looking at, for example, AI as a research problem, then what kind of evidence could we gather that would lead us to believe that one approach is likely to be more fruitful than another?

Gathering this evidence would seem to require that we know which features of intelligence are most important (so we can decide what details can be abstracted away in our approach and which details need to be modeled), but we really don't have access to that kind of information, and it's not clear what would give us access to it (this insight alone would constitute a large amount of progress on understanding intelligence).

This suggests important questions about the role of rationality in science. Namely, for all the talk of the "weapons-grade rationality" that Less Wrong offers, are such rationality techniques very useful for really hard scientific problems where we're ignorant of large chunks of the hypothesis space, and where accurately assessing the weight of the hypotheses we do know is highly nontrivial?

Edit: see comment below for why I think the last paragraph is wrong.

are such rationality techniques very useful for really hard scientific problems...?

I now think that this was hyperbole. It seems obvious to me that the first and second fundamental questions of rationality are of fundamental importance to science. Namely:

  • What do I think I know, and why do I think I know it?
  • What am I doing, and why am I doing it?

The first question is essential for keeping track of the evidence you (think you) have, and the second question is essential for time management and for fighting akrasia, which is useful to those scientists who are mere mortals in their ability to be productive.

Rationality won't magically solve science, but it clearly makes it (at least slightly) easier.

[anonymous] · 14y

Exactly my point.

And in AI in particular, it's hard to judge by the standards of "instrumental rationality." You could say "The best guys are the ones who make the best prototypes." But there's always going to be someone who could say "That's not a prototype, that has nothing to do with general AI," and then there's someone else who'll say "General AI is an incoherent notion and a pipe dream; we're the only ones who can actually build something."

This is essentially tangential, but I would promptly walk away from anyone who said "General AI is an incoherent notion," given that the human brain exists.

No ... that's NATURAL intelligence. Also organic, non-GMO, and pesticide free :)

What's your methodology for deciding what's worth pursuing?

Just pick something and go with it. My working assumption is that there is a lot of worthwhile stuff out there. I pick an approach, and instead of spending a lot of time worrying about whether it's the "best", I spend that time working on the one I've picked.

[anonymous] · 14y

In practice, sure, that's fine. As a career choice, I actually want to get some research done, so I'm likely to take an approach similar to that of a professor at my school.

The thing is, I'm starting to hear people making all sorts of claims like "Those people aren't really doing science" and "That researcher isn't going to get anywhere with his approach," and I want to know when I should find those comments credible.  I get curious, you know?

[anonymous] · 14y

The problem with this is that there is a nearly infinite number of hypotheses to explore, and we can't examine them all. So we need some kind of criterion, even something as simple as k-complexity (for hypotheses) or ease of use (for models and methodologies). A good exploration of this idea can be found in chapter 7 of this introductory book.
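(For concreteness, one standard way a k-complexity criterion gets written down, as a gloss of my own rather than something spelled out above, is as a prior that penalizes description length, so that among hypotheses fitting the data equally well, the one with the shortest program wins:

$$P(h) \propto 2^{-K(h)}, \qquad K(h) = \min\{\, |p| : U(p) = h \,\},$$

where U is a fixed universal Turing machine and |p| is the length of the program p.)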

This seems reasonable to me. On the one hand, every available approach to an open problem is going to be deficient in some way, but on the other hand it's difficult to figure out the missing insights if you don't know what insights already exist. The best way to deal with this is probably to just study lots of different things (everything, if possible).

This also probably pertains more to theoretical science. Empirical science seems like it operates more in the realm of "what kinds of facts have we not gathered that might be important?" rather than necessarily talking about theoretical insight that could be gained.

Ugh. Science methodology is hard.

Ideal solution: "test" the competence of researchers by keeping them in the dark about new results and seeing if they can rediscover them. Practical solution: seek out the opinions of those well versed in more than one field.