Mark Xu

I do alignment research at the Alignment Research Center. Learn more about me at markxu.com/about

Sequences

Intermittent Distillations
Training Regime


Comments


I agree ergonomics can be hard to verify. But some ergonomics are easy to verify, and chairs conform to those (e.g. having a backrest is good, not having sharp stabby parts is good, etc.).

I agree that there are some properties of objects that are hard to verify. But that doesn't mean verification is as hard as generation in general. The central property of a chair (that you can sit on it) is easy to verify.

Mark Xu

I think catastrophe detectors in practice will be composed of neural networks interacting with other stuff, like scientific literature, python, etc.

With respect to the stuff quoted, I think all but "doing experiments" can be done by a neural net doing chain of thought (though I'm not making claims about quality).

I think we're trying to solve a different problem than trusted monitoring, but I'm not that knowledgeable about what issues trusted monitoring is trying to solve. The main thing I don't think you can do with monitoring is produce a model that you think is unlikely to result in catastrophe. Monitoring lets you do online training when you find a catastrophe, but e.g. there might be no safe fallback action that allows you to do monitoring safely.

Separately, I do think it will be easy to go from "worst-case" NN-tail-risk estimation to "worst-case" more general risk estimation. I do not think it will be easy to go from "typical-case" NN-tail-risk estimation to more general "typical-case" risk estimation, but I think "typical-case" NN-tail-risk estimation can meaningfully improve safety despite not being able to do that generalization.

Re: more specific hopes: if your risk estimate is conducted by a model with access to tools like python, then we can try to do two things (roughly sketched in code after this list):

  • get an estimate that is roughly as good as the estimate you would get if you replaced "python" with your model's subjective distribution over the output of whatever it runs through python.
  • learn some "empirical regularities" that govern how python works (as expected by your model/SGD)

(these might be the same thing?)
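
As a purely illustrative sketch of the first bullet, the two estimators being compared look something like the following, where `model_propose`, `run_python`, `model_predict_output`, and `is_catastrophic` are hypothetical stand-ins rather than real APIs:

```python
def estimate_risk_with_real_python(model_propose, run_python, is_catastrophic, n=1000):
    """Monte Carlo risk estimate in which the model's python calls are actually executed."""
    hits = 0
    for _ in range(n):
        code = model_propose()               # the model decides what to run
        result = run_python(code)            # ground-truth tool output
        hits += is_catastrophic(result)
    return hits / n


def estimate_risk_with_subjective_python(model_propose, model_predict_output, is_catastrophic, n=1000):
    """Same estimate, but python's output is replaced by a sample from the model's
    subjective distribution over what python would return; the hope in the first
    bullet is roughly that this estimate ends up about as good as the one above."""
    hits = 0
    for _ in range(n):
        code = model_propose()
        result = model_predict_output(code)  # sample from the model's beliefs
        hits += is_catastrophic(result)
    return hits / n
```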

Another argument: one reason why doing risk estimates for NNs is hard is that the estimate can rely on facts that live in some arbitrary LLM ontology. If you want to do such an estimate for an LLM bureaucracy, some fraction of the relevant facts will live in LLM ontology and some fraction of facts will live in words passed between models. Some fraction of facts will live in a distributed way, which adds complications, but those distributed facts can only affect the output of the bureaucracy insofar as they are themselves manipulated by an LLM in that bureaucracy.

Mark Xu

This post uses "I can identify ways in which chairs are bad" as an example. But it's easier for me to verify that I can sit in a chair and that it's comfortable than to make a chair myself. So I don't really know why this is a good example for "verification is easier than generation".

More examples:

  • I can tell my computer is a good typing machine, but cannot make one myself.
  • I can tell a water bottle is watertight, but do not know how to make a water bottle.
  • I can tell that my pepper grinder grinds pepper, but do not know how to make a pepper grinder.

If the goal of this post is to discuss the crux https://www.lesswrong.com/posts/fYf9JAwa6BYMt8GBj/link-a-minimal-viable-product-for-alignment?commentId=mPgnTZYSRNJDwmr64:

"evaluation isn't easier than generation, and that claim is true regardless of how good you are at evaluation until you get basically perfect at it"

then I think there is a large disconnect between the post above, which posits that for this claim to be false there has to be some "deep" sense in which delegation is viable, and the more mundane sense in which I think this crux is obviously false: all humans interface with the world by optimizing over the products other people create, and are therefore more capable than they would have been if they had to make every product for themselves from scratch.

I think "basically obviates" is too strong. imitation of human-legible cognitive strategies + RL seems liable to produce very different systems that would been produced with pure RL. For example, in the first case, RL incentizes the strategies being combine in ways conducive to accuracy (in addition to potentailly incentivizing non-human-legible cognitive strategies), whereas in the second case you don't get any incentive towards productively useing human-legible cogntive strategies.

Mark Xu

I don't think this characterization is accurate at all, but don't think I can explain the disagreement well enough for it to be productive.

Mark Xu

If you train a model on (x, f(x)) pairs, and you ask it to predict f(x') on some novel input x', and also to write down what it thinks f is, do you know if these answers will be consistent? For instance, the model could get f wrong, and also give the wrong prediction for f(x'), but it would be interesting if the prediction for f(x') was "connected" to its sense of what f was.
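
A minimal sketch of the check this suggests, assuming some callable `query_model` interface to the model (all names here are illustrative, not a real API):

```python
def consistency_check(query_model, pairs, x_new):
    """Compare the model's direct prediction for f(x_new) with the f it writes down.

    `query_model` is a hypothetical callable mapping a prompt string to the
    model's completion; `pairs` is the list of (x, f(x)) training examples.
    """
    examples = "\n".join(f"f({x}) = {y}" for x, y in pairs)

    # 1. Direct prediction on the novel input.
    predicted = query_model(f"{examples}\nf({x_new}) = ").strip()

    # 2. Ask the model to write down what it thinks f is, as Python source.
    stated_src = query_model(
        f"{examples}\nWrite a Python function f(x) implementing the rule above."
    )

    # 3. Run the stated f on x_new (sketch only: this executes model-written code).
    namespace = {}
    exec(stated_src, namespace)
    from_stated_f = str(namespace["f"](x_new))

    # Both answers can be wrong; the interesting question is whether they agree.
    return predicted, from_stated_f, predicted == from_stated_f
```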

Mark Xu

It's important to distinguish between:

  • the strategy of "copy P2's strategy" is a good strategy
  • because P2 had a good strategy, there exists a good strategy for P1

The strategy-stealing assumption isn't saying that copying strategies is a good strategy; it's saying that the possibility of copying means there exists a strategy P1 can take that is just as good as P2's.
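
To make the distinction concrete, here is a toy symmetric zero-sum game (purely illustrative): P1 never observes or copies P2's play, but because the game is symmetric, whatever value P2 can guarantee, some strategy guarantees P1 the same value.

```python
# Toy 2x2 symmetric zero-sum game: PAYOFF[(p1_move, p2_move)] is P1's payoff.
# The point is an existence claim about P1's options, not a "watch P2 and copy" procedure.
PAYOFF = {
    ("A", "A"): 0, ("A", "B"): 2,
    ("B", "A"): -2, ("B", "B"): 0,
}
MOVES = ["A", "B"]

def guaranteed_value(payoff):
    # Best payoff a player can guarantee with a pure maximin strategy.
    return max(min(payoff[(mine, theirs)] for theirs in MOVES) for mine in MOVES)

# P2's payoff, keyed by (p2_move, p1_move); by symmetry it is identical to PAYOFF.
P2_PAYOFF = {(b, a): -v for (a, b), v in PAYOFF.items()}

assert guaranteed_value(PAYOFF) == guaranteed_value(P2_PAYOFF)
```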
