Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

I think the distinction I was trying to make in my previous post, ""Cars and Elephants": a handwavy argument/analogy against mechanistic interpretability", is basically the distinction between engineering and reverse engineering.

Reverse engineering is analogous to mechanistic interpretability; engineering is analogous to "well-founded AI" (to borrow Stuart Russell's term).

So it seems worth exploring the pros and cons of these two approaches to understanding x-safety-relevant properties of advanced AI systems.  

As a gross simplification,[1] we could view the situation this way:

  • Using deep learning approaches, we can build advanced AI systems that are not well understood.  Better reverse engineering would make them better understood.
  • Using "well-founded AI" approaches, we can build AI systems that are well understood, but not as advanced.  Better engineering would make them more advanced. 

Under this view, these two approaches are working towards the same end from different starting points.  

A few more thoughts:

  • Competitiveness arguments favor reverse engineering.  Safety arguments favor engineering.
  • We don't have to choose one.  We can work from both ends, and look for ways to combine approaches.
  • I'm not sure which end is easier to start from.  My intuition says that there is the same underlying difficulty that needs to be addressed regardless of where you start from,[2] but the perspective I'm presenting seems to suggest otherwise.
  • There may be some sort of P vs. NP kind of argument in favor of reverse engineering, but it seems likely to rely on some unverifiable assumptions (e.g. that we will in fact reliably recognize good mechanistic interpretations).
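The P vs. NP flavor of that last point can be sketched with the standard toy example, boolean satisfiability: verifying a candidate answer is cheap, while finding one naively requires exhaustive search. Everything below (the formula, the `check` and `search` helpers) is purely illustrative, not anything from the post; the analogy is that recognizing a good mechanistic interpretation would have to be like `check`, while producing one is like `search`.

```python
from itertools import product

# Toy CNF formula: each clause is a list of (variable_index, is_positive)
# literals, e.g. the first clause below is (x0 OR NOT x1).
FORMULA = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]

def check(assignment):
    """Verifying a candidate: time linear in formula size (the 'easy' side)."""
    return all(any(assignment[var] == pos for var, pos in clause) for clause in FORMULA)

def search(n_vars=3):
    """Finding a satisfying assignment by brute force: 2^n checks (the 'hard' side)."""
    for bits in product([False, True], repeat=n_vars):
        if check(bits):
            return bits
    return None

solution = search()
print(solution, check(solution))
```

The unverifiable assumption the post flags maps onto whether we actually possess a reliable `check` for interpretations at all; the asymmetry only helps if verification is genuinely easy.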
  1. ^

     I know people will say that we don't actually understand how "well-founded AI" approaches work any better.  I don't feel equipped to evaluate that claim beyond extremely simple cases, and I don't expect most readers are either.

  2. ^

    At least if your goal is to get something like an AGI system in whose safety we have justified confidence.  This is perhaps too ambitious a goal.


In reality you often use both?

For example, many DL ideas have a neuroscience inspiration, which is essentially reverse engineering the brain. You then combine that with various other knowledge to engineer some new system. But then you want to debug it, and interpretability tools are essentially debugging tools, so debugging is a form of targeted reverse engineering (figuring out how a system actually works in practice so you can improve it).

An angle I think is relevant here is that a sufficiently complex "well-founded" AI system is still going to be fairly difficult to understand. E.g. a large codebase, where everything is properly commented and labeled, might still have lots of unforeseen bugs and interactions the engineers didn't intend.

So I think before you deploy a powerful "Well Founded" AI system, you'll probably still need a kind of generalized reverse-engineering/interpretability skill to explain how the entire process works in various test cases.

I don't really buy this argument.  

  • I think the following is a vague and slippery concept: "a kind of generalized reverse-engineering/interpretability skill".  But I agree that you would want to do testing, etc. of any system before you deploy it.
  • It seems like the ambitious goal of mechanistic interpretability, the one that would get you the kind of safety properties we are after, would indeed require explaining how the entire process works.  But when we are talking about such a complex system, the main obstacle to understanding, for either approach, is our ability to comprehend such an explanation.  I don't see a reason to say that we can surmount that obstacle more easily via reverse engineering than via engineering.  It often seems to me that people are assuming either that mechanistic interpretability addresses this obstacle (I'm skeptical), or that the obstacle doesn't effectively exist (in which case, why can't we just do it via engineering?).