LESSWRONG
LW

Wikitags

Distributional Shifts

Edited by abramdemski, et al. last updated 25th Aug 2022

Many learning-theoretic setups (especially in the Frequentist camp) make an IID assumption: that data can be split up into samples (sometimes called episodes or data-points) which are independently sampled from identical distributions (hence "IID"). This assumption sometimes allows us to prove that our methods generalize well; see especially PAC learning. However, in real life, when we say that a model "generalizes well", we really mean that it works well on new data which realistically has a somewhat different distribution. This is called a distributional shift or a non-stationary environment.

This framework (in which we initially make an IID assumption, but then, model violations of it as "distributional shifts") has been used extensively to discuss robustness issues relating to AI safety -- particularly, inner alignment. We can confidently anticipate that traditional machine learning systems (such as deep neural networks) will perform well on average, so long as the deployment situation is statistically similar to the training data. However, as the deployment distribution gets further from the training distribution, catastrophic behaviors which are very rare on the original inputs can become probable.

This framework makes it sound like a significant part of the inner alignment problem would be solved if we could generalize learning guarantees from IID cases to non-IID cases. (Particularly if loss bounds can be given at finite times, not merely asymptotically, while maintaining a large, highly capable hypothesis class.)

However, this is not necessarily the case.

Solomonoff Induction avoids making an IID assumption, and so it is not strictly meaningful to talk about "distributional shifts" for a Solomonoff distribution. Furthermore, the Solomonoff distribution has constructive bounds, rather than merely asymptotic. (We can bound how difficult it is to learn something based on its description length.) Yet, inner alignment problems still seem very concerning for the Solomonoff distribution. 

This is a complex topic, but one reason why is that inner optimizers can potentially tell the difference between training and deployment. A malign hypothesis can mimic a benign hypothesis until a critical point where a wrong answer has catastrophic potential. This is called a . 

So, although "distributional shift" is not technically involved, we can see that a critical difference between training and deployment is still involved: during training, wrong answers are always inconsequential. However, when you use a system, wrong answers become consequential. If the system can figure this difference out, then parts of the system can use it to "gate" their behavior in order to accomplish a treacherous turn. 

This makes "distributional shift" seem like an apt metaphor for what's going on in non-IID cases. However, buyer beware: eliminating IID assumptions might eliminate the literal source of the distributional shift problem without eliminating the broader constellation of concerns for which the words "distributional shift" are being used. 

Subscribe
1
Subscribe
1
treacherous turn
Discussion4
Discussion4
Posts tagged Distributional Shifts
48how 2 tell if ur input is out of distribution given only model weights
dkirmani
2y
10
13Nonlinear limitations of ReLUs
Q
magfrump
2y
Q
1
6Mesa-optimization for goals defined only within a training environment is dangerous
Rubi J. Hudson
3y
2
83Ambiguous out-of-distribution generalization on an algorithmic task
Wilson Wu, Louis Jaburi
5mo
6
39Speculative inferences about path dependence in LLM supervised fine-tuning from results on linear mode connectivity and model souping
Ω
RobertKirk
2y
Ω
2
35Have you heard about MIT's "liquid neural networks"? What do you think about them?
Q
Ppau
2y
Q
14
33Requirements for a STEM-capable AGI Value Learner (my Case for Less Doom)
Ω
RogerDearnaley
2y
Ω
3
30Breaking down the training/deployment dichotomy
Ω
Erik Jenner
3y
Ω
3
23Disentangling inner alignment failures
Ω
Erik Jenner
3y
Ω
5
21Causal representation learning as a technique to prevent goal misgeneralization
Ω
PabloAMC
3y
Ω
0
17Distribution Shifts and The Importance of AI Safety
Ω
Leon Lang
3y
Ω
2
16Why do we need RLHF? Imitation, Inverse RL, and the role of reward
Ω
Ran W
1y
Ω
0
11Thoughts about OOD alignment
Catnee
3y
10
3We are misaligned: the saddening idea that most of humanity doesn't intrinsically care about x-risk, even on a personal level
Christopher King
2y
5
1Is there a ML agent that abandons it's utility function out-of-distribution without losing capabilities?
Christopher King
2y
7
Load More (15/15)
Add Posts