
ojorgensen

AI Safety Researcher, my website is here.

Comments

You should consider applying to PhDs (soon!)
ojorgensen · 10mo · 108

Strong upvote!

One thing I'd emphasise is that there's a pretty big overhead to submitting a single application (getting recommendation letters, writing a generic statement of purpose), but it doesn't take much effort to apply to more after that (you can rejig your SOP quite easily to fit different universities). Given the application process is noisy and competitive, if you're submitting one application you should probably submit loads if you can afford the application costs. Good luck to everyone applying! :))

Because of LayerNorm, Directions in GPT-2 MLP Layers are Monosemantic
ojorgensen · 2y · 10

Yeah I think we have the same understanding here (in hindsight I should have made this more explicit in the post / title).

I would be excited to see someone empirically try to answer the question you mention at the end. In particular, given some direction u and a LayerNormed vector v, one might try to quantify how smoothly rotating from v towards u changes the output of the MLP layer. This seems like a good test of whether the Polytope Lens is helpful / necessary for understanding the MLPs of Transformers, with smooth changes corresponding to your "random jostling cancels out" picture, i.e. not needing to worry about Polytope Lens style issues.
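As a rough illustration of the kind of experiment I have in mind (a minimal sketch only; the use of TransformerLens, the choice of layer, and the sqrt(d_model) scaling are my assumptions, not anything from the post):

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
mlp = model.blocks[0].mlp                      # the MLP of an arbitrary layer
d_model = model.cfg.d_model

# A target direction u and a LayerNormed activation v, both as unit vectors.
u = torch.randn(d_model); u = u / u.norm()
v = torch.randn(d_model); v = v / v.norm()

def slerp(a, b, t):
    """Spherical interpolation from a towards b (both unit norm)."""
    omega = torch.acos(torch.clamp(a @ b, -1.0, 1.0))
    return (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)

# LayerNormed activations lie (roughly) on a sphere of radius sqrt(d_model),
# so scale the interpolated unit vectors back up before feeding them to the MLP.
ts = torch.linspace(0, 1, 101)
with torch.no_grad():
    outs = torch.stack([
        mlp((slerp(v, u, t) * d_model ** 0.5)[None, None, :])[0, 0] for t in ts
    ])

# How far the MLP output moves per small rotation step: large spikes would
# suggest polytope-boundary effects, while smooth values support "jostling cancels out".
step_sizes = (outs[1:] - outs[:-1]).norm(dim=-1)
print(step_sizes)
```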

Open Thread – Winter 2023/2024
ojorgensen · 2y · 167

It would save me a fair amount of time if all LessWrong posts had an "export BibTeX citation" button, exactly like the feature on arXiv. This would be particularly useful for Alignment Forum posts!
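For what it's worth, here's a rough sketch (entirely my own illustration, not an existing LessWrong feature) of the kind of @misc entry such a button might emit:

```python
def lesswrong_bibtex(author: str, title: str, year: int, url: str) -> str:
    """Format a LessWrong / Alignment Forum post as a BibTeX @misc entry (illustrative)."""
    key = f"{author.split()[0].lower()}{year}{title.split()[0].lower()}"
    return (
        f"@misc{{{key},\n"
        f"  author = {{{author}}},\n"
        f"  title = {{{title}}},\n"
        f"  year = {{{year}}},\n"
        f"  howpublished = {{\\url{{{url}}}}},\n"
        f"  note = {{LessWrong / AI Alignment Forum post}}\n"
        f"}}"
    )

print(lesswrong_bibtex("Jane Doe", "Example Post", 2023,
                       "https://www.lesswrong.com/posts/example"))
```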

Against Almost Every Theory of Impact of Interpretability
ojorgensen · 2y · 10

One central criticism of this post is its pessimism towards enumerative safety (i.e. finding all features in the model, or at least all important features). I would be interested to hear how the author / others have updated on the potential of enumerative safety in light of recent progress on dictionary learning, and on finding features which appear to correspond to high-level concepts like truth, utility, and sycophancy. It seems clear that there should be some positive update here, but I would be interested in understanding which issues these approaches will not contribute to solving.
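To make concrete what I mean by "dictionary learning" here, a minimal sparse-autoencoder sketch (sizes, sparsity coefficient, and names are illustrative assumptions, not taken from any particular paper):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Learn an overcomplete dictionary whose sparse combinations reconstruct activations."""
    def __init__(self, d_model: int = 768, d_dict: int = 8 * 768):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, acts):
        codes = torch.relu(self.encoder(acts))    # sparse feature activations
        return self.decoder(codes), codes         # reconstruction, codes

sae = SparseAutoencoder()
acts = torch.randn(1024, 768)                     # stand-in for residual-stream activations
recon, codes = sae(acts)
loss = (recon - acts).pow(2).mean() + 1e-3 * codes.abs().mean()  # reconstruction + L1 sparsity
```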

Some ML-Related Math I Now Understand Better
ojorgensen · 2y · 62

But this does not hold for tiny cosine similarities (e.g. 0.01 for n=12288, which gives a lower bound of 2 using the formula above). I'm not aware of a lower bound better than n for tiny angles.

Unless I'm misunderstanding, a better lower bound for almost orthogonal vectors when cosine similarity is approximately 0 is just n, by taking an orthogonal basis for the space. 

My guess for why the formula doesn't give this is that it is derived by covering a sphere with non-intersecting spherical caps, which is sufficient for almost orthogonality but not necessary. This is also why the lower bound of 2 vectors makes sense when we require cosine similarity to be approximately 0, since then the only way you can fit two spherical caps onto the surface of a sphere is by dividing it into 2 hemispheres.

This doesn't change the headline result (there is still room for exponentially many almost orthogonal vectors), but the actual numbers might be substantially larger, since almost orthogonality is a weaker condition than spherical cap packing.
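A quick numerical illustration of the gap (my own, with arbitrary sample sizes): random unit vectors in n = 12288 dimensions already have cosine similarities of order 1/sqrt(n), so almost orthogonality is far easier to satisfy than non-intersecting cap packing would suggest.

```python
import torch

n = 12288                                   # e.g. a large residual-stream width
m = 4096                                    # number of random unit vectors to sample
vecs = torch.randn(m, n)
vecs = vecs / vecs.norm(dim=1, keepdim=True)

sims = vecs @ vecs.T                        # pairwise cosine similarities
sims.fill_diagonal_(0)
print(sims.abs().mean().item())             # around 0.007, i.e. roughly sqrt(2 / (pi * n))
print(sims.abs().max().item())              # even the worst pair is only ~0.05
```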

Mech Interp Puzzle 1: Suspiciously Similar Embeddings in GPT-Neo
ojorgensen · 2y · 20

(Potential spoilers!)

There is some relevant literature which explores this phenomenon, also looking at the cosine similarity between words across layers of transformers. I think the most relevant is (Cai et al., 2021), where they also find this higher-than-expected cosine similarity between residual stream vectors in some layer for BERT, D-BERT, and GPT. (Note that they use somewhat confusing terminology: they define inter-type cosine similarity to be the cosine similarity between embeddings of different tokens in the same input, and intra-type cosine similarity to be the cosine similarity of the same token in different inputs. Inter-type cosine similarity is the one that is most relevant here.)

They find that the residual stream vectors for GPT-2 small tend to lie in two distinct clusters. Once you re-centre these clusters, the average cosine similarity between residual stream vectors falls to close to 0 throughout the layers of the model, as you might expect.
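A rough sketch of checking the re-centring effect for oneself (this assumes TransformerLens-style activation names and just subtracts a single global mean rather than per-cluster means, so it is only an approximation of their analysis):

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
_, cache = model.run_with_cache("The quick brown fox jumps over the lazy dog")

def mean_cosine_sim(x):
    """Average pairwise cosine similarity between the rows of x."""
    x = x / x.norm(dim=-1, keepdim=True)
    sims = x @ x.T
    off_diag = sims - torch.diag(torch.diag(sims))
    return off_diag.sum() / (x.shape[0] * (x.shape[0] - 1))

layer = 6
resid = cache["resid_post", layer][0]          # [seq_len, d_model]
print(mean_cosine_sim(resid))                  # anomalously high before re-centring
print(mean_cosine_sim(resid - resid.mean(0)))  # drops towards 0 after re-centring
```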

 

ojorgensen's Shortform
ojorgensen · 2y · 10

Problem: we want to make it hard for ML systems (trained via SGD) to perform naive gradient hacking. By naive gradient hacking, I mean "being able to keep some weights of the network constant for an arbitrary step of SGD".

Solution: do "stochastic" regularisation, e.g. sample the amount of regularisation we perform randomly (we could use quantum randomness if we want true randomness). This seems like it should make naive gradient hacking almost impossible: in order to keep some target weights unchanged, you'd have to match their positive contribution to the loss to the degree of regularisation. If the degree of regularisation is stochastic, the loss contribution must also be stochastic, which is not possible with a deterministic NN!
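A minimal sketch of what this could look like in a standard training loop (the names and the coefficient range are illustrative assumptions): resample the L2 coefficient each step so the regularisation pressure cannot be anticipated.

```python
import torch

def training_step(model, loss_fn, batch, optimizer, max_reg=1e-3):
    inputs, targets = batch
    reg_coeff = max_reg * torch.rand(1).item()   # regularisation strength resampled every step
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    loss = loss_fn(model(inputs), targets) + reg_coeff * l2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```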

The question of how useful this is for preventing any gradient hacking depends on how stable the loss landscape around some "deceptive / gradient hacking" minimum is. It seems possible that the surrounding loss landscape could be pretty unstable under random perturbations?

Excessive AI growth-rate yields little socio-economic benefit.
ojorgensen · 2y · 51

Just a nit-pick, but to me "AI growth-rate" suggests economic growth due to progress in AI, as opposed to simply technical progress in AI. I think "Excessive AI progress yields little socio-economic benefit" would make the argument more immediately clear.

EIS XI: Moving Forward
ojorgensen · 3y · Ω110

Rando et al. (2022)

This link is broken btw!

Abuse in LessWrong and rationalist communities in Bloomberg News
ojorgensen · 3y · 50

Didn't get that impression from your previous comment, but this seems like a good strategy!

Posts

Understanding Counterbalanced Subtractions for Better Activation Additions · 21 karma · 2y · 0 comments
Because of LayerNorm, Directions in GPT-2 MLP Layers are Monosemantic · 13 karma · 2y · 3 comments
UK Foundation Model Task Force - Expression of Interest · 64 karma · 2y · 2 comments
ojorgensen's Shortform · 2 karma · 2y · 1 comment
(Extremely) Naive Gradient Hacking Doesn't Work · 17 karma · 3y · 0 comments
Which Issues in Conceptual Alignment have been Formalised or Observed (or not)? [Question] · 4 karma · 3y · 0 comments
Strange Loops - Self-Reference from Number Theory to AI · 20 karma · 3y · 6 comments
Evaluating OpenAI's alignment plans using training stories · 4 karma · 3y · 0 comments
Disagreements about Alignment: Why, and how, we should try to solve them · 11 karma · 3y · 2 comments
Wikitag Contributions

Distributional Shifts · 3 years ago · (+4/-4)