Bogdan Ionut Cirstea

Comments

This paper connects the inductive biases of pre-trained [language] models (including some related to simplicity measures like MDL), path dependence, and sensitivity to label evidence/noise: https://openreview.net/forum?id=mNtmhaDkAr

Here's a recent article on the inductive biases of pre-trained LMs and how they affect fine-tuning: https://openreview.net/forum?id=mNtmhaDkAr

There also seems to be some theoretical and empirical ML evidence for viewing in-context learning as Bayesian inference: http://ai.stanford.edu/blog/understanding-incontext/
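
To make the framing concrete, here's a minimal toy sketch (my own illustration, not code or numbers from the linked post): the in-context demonstrations act as evidence about a latent "concept", and the next prediction marginalizes over the posterior p(concept | prompt). The two hypothetical concepts and their label probabilities below are made up purely for illustration.

```python
import numpy as np

# Hypothetical latent concepts: each emits label 1 with a different probability.
# These numbers are invented for the toy example.
concepts = {"concept_A": 0.9, "concept_B": 0.2}
prior = {"concept_A": 0.5, "concept_B": 0.5}

def posterior(labels):
    """p(concept | in-context labels) via Bayes' rule."""
    unnorm = {}
    for c, p1 in concepts.items():
        likelihood = np.prod([p1 if y == 1 else 1 - p1 for y in labels])
        unnorm[c] = prior[c] * likelihood
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

def predict_next(labels):
    """p(next label = 1 | prompt) = sum_c p(1 | c) * p(c | prompt)."""
    post = posterior(labels)
    return sum(concepts[c] * post[c] for c in post)

# More in-context examples -> sharper posterior -> prediction approaches concept_A's 0.9.
for prompt in ([], [1], [1, 1, 1], [1, 1, 1, 1, 1]):
    print(prompt, round(predict_next(prompt), 3))
```

The point of the toy model is just that "learning from the prompt" can fall out of inference over what latent process generated the prompt, without any weight updates, which is (roughly) the perspective the linked post argues for.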

'We conjecture that reinforcement strengthens the behavior-steering computations that guide a system into reinforcement events, and that those behavior-steering computations will only form around abstractions already represented inside of a system at the time of reinforcement. We bet that there are a bunch of quantitative relationships here just waiting to be discovered -- that there's a lot of systematic structure in what learned values form given which training variables. To ever get to these quantitative relationships, we'll need to muck around with language model fine-tuning under different conditions a lot.' -> this could be (somewhat) relevant: https://openreview.net/forum?id=mNtmhaDkAr

From https://mailchi.mp/b3dc916ac7e2/an-80-why-ai-risk-might-be-solved-without-additional-intervention-from-longtermists : 'On the point about lumpiness, my model is that there are only a few underlying factors (such as the ability to process culture) that allow humans to so quickly learn to do so many tasks, and almost all tasks require near-human levels of these factors to be done well. So, once AI capabilities on these factors reach approximately human level, we will "suddenly" start to see AIs beating humans on many tasks, resulting in a "lumpy" increase on the metric of "number of tasks on which AI is superhuman" (which seems to be the metric that people often use, though I don't like it, precisely because it seems like it wouldn't measure progress well until AI becomes near-human-level).'