In your preferred area of AI alignment, what is the simplest concrete unsolved problem?

By "simplest", ideally the problem has been solved when any of the conditions are weakened. However, this isn't always possible, so a simpler solved version of the problem could also work (e.g., Goldbach's weak conjecture is known to be true.)

By "concrete", I mean something where given the statement of the problem and a proposed solution, a neutral third party would be able to consistently determine whether it's solved or not (e.g., not "explain [some theory] in a good way").



Evan R. Murphy

Jan 27, 2023


I would check out the 200 Concrete Open Problems in Mechanistic Interpretability post series by Neel Nanda. Mechanistic interpretability has been considered a promising research direction by many in the alignment community for years, but only in the past couple of months has an experienced researcher in the area laid out specific concrete problems and provided detailed guidance for newcomers.

Caveat: I haven't myself looked closely at this post series yet, as in recent months I have been more focused on investigating language model behaviour than on interpretability. So I don't have direct knowledge that these posts are as useful as they look.

I have the impression that Neel Nanda means something different by the word "concrete" than agg does, given that agg does not consider problems of the type "explain something in a good way" to be concrete.

For example, I would think that "Hunt through Neuroscope for the toy models and look for interesting neurons to focus on." would not match agg's bar for concreteness. But maybe other problems from Neel Nanda's list would.

agg (1y)
Well, I don't consider "explain something in a good way" an example of a concrete problem (at least for the purposes of this question)—that was a counterexample. Some of the other problems listed definitely do seem interesting!
harfe (1y)
Yes, sorry, I meant to say the opposite. I changed it now.