ChrisCundy

Comments
Avoiding AI Deception: Lie Detectors can either Induce Honesty or Evasion
ChrisCundy · 3mo · 10

That's a great question, sorry for the delayed reply!

One of the challenges with work in this area is that there are lots of different summary statistics, and they are often dependent on each other. I don't think any of them tells the whole story alone. The most relevant ones are the false positive rate (FPR), true positive rate (TPR), AUC, Matthews correlation coefficient (MCC), and the ground-truth lie fraction.

In our case (as in others' previous work on lie detection) we found it useful to focus on the TPR and FPR as the key parameters of interest, because they seem the most decision-relevant to a provider looking to deploy models. In particular, if a provider has a fixed budget to spend on investigating cases where the lie detector flags a response, then this essentially fixes a maximum feasible FPR (we often use a 1% FPR as an example of a pretty low FPR, but depending on how widespread the deployment is it might need to be much lower). In our paper, the lie detector is always the same network; we just vary the decision boundary to trade off increased FPR (and therefore cost) against increased TPR.
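To make the thresholding concrete, here's a minimal sketch (not our actual code; the score distributions are made up for illustration) of how a fixed review budget pins down the decision boundary for a fixed detector:

```python
# Minimal sketch: pick a decision boundary for a *fixed* detector so that
# the FPR stays within a review budget. Scores here are hypothetical; in
# practice they would come from the detector's probe on held-out labelled data.
import numpy as np

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 1.0, 10_000)  # detector scores on honest responses
lies = rng.normal(2.0, 1.0, 1_000)     # detector scores on lies

target_fpr = 0.01
# The (1 - target_fpr) quantile of honest scores flags ~1% of honest
# responses, so the cost of investigating false alarms stays bounded.
threshold = np.quantile(honest, 1.0 - target_fpr)
tpr = (lies > threshold).mean()
print(f"threshold = {threshold:.2f}, TPR at {target_fpr:.0%} FPR = {tpr:.2%}")
```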

I think if you were changing the detector itself, e.g. by choosing a different type of probe with a different AUC, you would see a connection between the MCC and our lie rate. However, in our situation with a fixed detector, as we increase the TPR towards 100% we move the decision boundary away from the boundary that would maximize the MCC. Therefore the MCC actually decreases as we increase the TPR at the extremes. You could reasonably object that this is a slightly artificial consequence of our setup, where there is no downside to increased FPR except increased cost.
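Here's the same toy setup extended to show that effect (again a sketch with made-up scores, using `sklearn` for the MCC): as the threshold drops and the TPR climbs toward 100%, the MCC peaks at an intermediate boundary and then falls, even though the detector (and hence its AUC) never changes.

```python
# Sweep the decision boundary of a fixed detector and track TPR vs MCC.
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1.0, 10_000),   # honest
                         rng.normal(2.0, 1.0, 1_000)])   # lies
y_true = np.concatenate([np.zeros(10_000, dtype=int),
                         np.ones(1_000, dtype=int)])

for t in [3.0, 2.0, 1.0, 0.0, -1.0]:       # lower threshold -> higher TPR
    y_pred = (scores > t).astype(int)
    tpr = y_pred[y_true == 1].mean()
    mcc = matthews_corrcoef(y_true, y_pred)
    print(f"t = {t:+.1f}   TPR = {tpr:.2f}   MCC = {mcc:.3f}")
```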

In the appendix we have an initial analysis of a scheme that we call SoLID-Defer, where false positives do have a cost in terms of the alignment of the model.

Transformers Represent Belief State Geometry in their Residual Stream
ChrisCundy · 1y · 153

The figures remind me of figures 3 and 4 from Meta-learning of Sequential Strategies (Ortega et al., 2019), which also studies how autoregressive models (RNNs) infer underlying structure. Could be a good reference to check out!

SolidGoldMagikarp (plus, prompt generation)
ChrisCundy · 3y · 30

Thanks for the elaboration; I'll follow up offline.

SolidGoldMagikarp (plus, prompt generation)
ChrisCundy · 3y · 40

Would you be able to elaborate a bit on your process for adversarially attacking the model? It sounds like a combination of projected gradient descent and clustering? I took a look at the code, but a brief mathematical explanation / algorithm sketch would help a lot!
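To make the question concrete, here's roughly the procedure I'm imagining (a hypothetical sketch of my guess, not your actual code; `target_loss_fn` is a stand-in for whatever objective you attacked, and I've left out the clustering part, which I'm guessing is something like k-means over the embedding table):

```python
# Guess at "PGD on embeddings": optimize a continuous prompt embedding by
# gradient descent, then project each position back onto the nearest row of
# the model's token-embedding matrix after every step.
import torch

def pgd_prompt(model, emb_matrix, target_loss_fn, n_tokens=8, steps=100, lr=0.1):
    # emb_matrix: (vocab_size, d_model) token-embedding table
    x = emb_matrix[torch.randint(len(emb_matrix), (n_tokens,))].clone()
    x.requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        target_loss_fn(model, x).backward()  # e.g. -log p(target string | x)
        opt.step()
        with torch.no_grad():
            # Projection step: snap each soft embedding to its nearest token.
            dists = torch.cdist(x, emb_matrix)  # (n_tokens, vocab_size)
            x.copy_(emb_matrix[dists.argmin(dim=-1)])
    return x
```

Is that the right shape of thing, or does the clustering interact with the optimization more directly?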

A couple of colleagues and I are thinking about this approach to demonstrate some robustness failures in LLMs; it would be great to build on your work.

Posts

22 · Avoiding AI Deception: Lie Detectors can either Induce Honesty or Evasion · 3mo · 2
37 · Ω · Illusory Safety: Redteaming DeepSeek R1 and the Strongest Fine-Tunable Models of OpenAI, Anthropic, and Google · 7mo · 0