LESSWRONG

Interpretability (ML & AI) · AI · Frontpage

The Computational Complexity of Circuit Discovery for Inner Interpretability

by Bogdan Ionut Cirstea
17th Oct 2024
1 min read

This is a linkpost for https://arxiv.org/abs/2410.08025

Authors: Federico Adolfi, Martina G. Vilas, Todd Wareham.

Abstract:

Many proposed applications of neural networks in machine learning, cognitive/brain science, and society hinge on the feasibility of inner interpretability via circuit discovery. This calls for empirical and theoretical explorations of viable algorithmic options. Despite advances in the design and testing of heuristics, there are concerns about their scalability and faithfulness at a time when we lack understanding of the complexity properties of the problems they are deployed to solve. To address this, we study circuit discovery with classical and parameterized computational complexity theory: (1) we describe a conceptual scaffolding to reason about circuit finding queries in terms of affordances for description, explanation, prediction and control; (2) we formalize a comprehensive set of queries that capture mechanistic explanation, and propose a formal framework for their analysis; (3) we use it to settle the complexity of many query variants and relaxations of practical interest on multi-layer perceptrons (part of, e.g., transformers). Our findings reveal a challenging complexity landscape. Many queries are intractable (NP-hard, \Sigma_2^p-hard), remain fixed-parameter intractable (W[1]-hard) when constraining model/circuit features (e.g., depth), and are inapproximable under additive, multiplicative, and probabilistic approximation schemes. To navigate this landscape, we prove there exist transformations to tackle some of these hard problems (NP- vs. \Sigma_2^p-complete) with better-understood heuristics, and prove the tractability (PTIME) or fixed-parameter tractability (FPT) of more modest queries which retain useful affordances. This framework allows us to understand the scope and limits of interpretability queries, explore viable options, and compare their resource demands among existing and future architectures.
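
For intuition about where the hardness comes from (a toy sketch under illustrative assumptions, not the paper's formalization): one family of queries asks for a small sub-circuit of an MLP whose outputs match the full network on a set of probe inputs. The brute-force version below, on a hypothetical 2-3-1 ReLU network with one "edge" per weight entry, makes the combinatorial blow-up concrete: m candidate edges give 2^m sub-circuits to check.

```python
# A minimal, illustrative sketch (the toy network, edge encoding, probe set,
# and tolerance are all hypothetical; this is not the paper's code).
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)

# Toy ReLU MLP: 2 inputs -> 3 hidden -> 1 output. Hidden unit 2 is
# disconnected from the output, so a strictly smaller faithful circuit exists.
W1 = rng.normal(size=(3, 2))
W2 = rng.normal(size=(1, 3))
W2[0, 2] = 0.0

def forward(x, m1, m2):
    """Forward pass with boolean edge masks applied to the weight matrices."""
    h = np.maximum(0.0, (W1 * m1) @ x)
    return (W2 * m2) @ h

X = rng.normal(size=(16, 2))  # probe inputs the circuit must match
target = np.stack([forward(x, np.ones_like(W1, bool), np.ones_like(W2, bool))
                   for x in X])

# One edge per weight entry: 3*2 + 1*3 = 9 edges, hence 2^9 = 512 circuits.
edges = [(0, i, j) for i in range(3) for j in range(2)] + \
        [(1, 0, j) for j in range(3)]

def masks(subset):
    m1 = np.zeros_like(W1, bool)
    m2 = np.zeros_like(W2, bool)
    for layer, i, j in subset:
        (m1 if layer == 0 else m2)[i, j] = True
    return m1, m2

def smallest_faithful_circuit(tol=1e-9):
    # Exhaustive search over edge subsets, smallest first: exponential in the
    # number of edges, which is the blow-up the hardness results target.
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            m1, m2 = masks(subset)
            out = np.stack([forward(x, m1, m2) for x in X])
            if np.max(np.abs(out - target)) <= tol:
                return subset
    return None

print(smallest_faithful_circuit())  # expect a 6-edge circuit pruning unit 2
```

The paper's positive results point the other way: some restricted query variants are PTIME or fixed-parameter tractable, meaning the exponential cost can sometimes be confined to a small parameter rather than the full network size.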

Seems like bad news for enumerative interp agendas.

2 comments

[-]Lucius Bushnaq · 11mo

At a very brief skim, the problem classes this paper looks at don't seem like ones I'd care about much. Seems like a case of scoping everything broadly enough that something in the defined problem class ends up very hard.

[-]Noosphere89 · 11mo

Is that because you already believed that enumerative interpretation agendas were unlikely to succeed?
