7vik

I research intelligence and its emergence and expression in neural networks, to ensure advanced AI is safe and beneficial.

Current interests: neural network interpretability, alignment/safety, unsupervised learning, and deep learning theory. 

For more, check out my scholar profile and personal website.

Comments (sorted by newest)
Among Us: A Sandbox for Agentic Deception
7vik · 4mo · 30

Thanks! Yep, makes sense - that's one of the things we'll be working on and hope to share some results soon!

Among Us: A Sandbox for Agentic Deception
7vik · 5mo · 10

Sorry - fixed! They should match now - I'd forgotten to update the figure in this post. Thanks for pointing it out.

Some lessons from the OpenAI-FrontierMath debacle
7vik · 7mo · 31

They say it was an advanced math benchmark to test the limits of AI, not a safety project. But a number of the people who contributed were presumably safety-aligned and would not have wanted to contribute had they known OpenAI would have exclusive access.

Some lessons from the OpenAI-FrontierMath debacle
7vik · 7mo · 50

I don't think this info was about o3 (please correct me if I'm wrong). While this suggests not all of the solved problems were from the first tier, it would be much better to know what the breakdown actually was. In particular, since the most famous quotes about FrontierMath ("extremely challenging" and "resist AIs for several years at least") were about the hardest 25% of problems, the accuracy on that subset seems more important to update on. (Not to say that 25% is a small feat in any case.)

Some lessons from the OpenAI-FrontierMath debacle
7vik · 7mo* · 52

Re: "I definitely don't see a problem with taking lab funding as a safety org. (As long as you don't claim otherwise.)"

 

I don't have a problem with this either - just that it needs to be much more transparent and carefully thought-out than it was here.

 

Re: "If you think they didn't train on FrontierMath answers, why do you think having the opportunity to validate on it is such a significant advantage for OpenAI?"

My concern is that "verbally agreeing not to use it for training" leaves a lot of room to still use it to significant advantage. For instance, do we know that they did not use it indirectly to validate a PRM (process reward model) that could in turn help a lot? I don't think making a validation set out of their own training data would be as effective.
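To make the concern concrete, here is a minimal hypothetical sketch (none of these names or steps describe anything OpenAI is confirmed to have done) of how a held-out benchmark can steer development as a model-selection signal without its answers ever being trained on:

```python
# Hypothetical sketch only: a held-out benchmark used purely for model selection.
# No benchmark answers are ever used as training targets, yet the benchmark still
# confers an advantage by deciding which process reward model (PRM) gets deployed.

def benchmark_accuracy(policy, problems) -> float:
    """Fraction of held-out problems the policy solves (answers only used for scoring)."""
    return sum(policy.solve(p.statement) == p.answer for p in problems) / len(problems)

def select_best_prm(prm_candidates, guided_policy, benchmark_problems):
    """Return the PRM whose guided policy scores highest on the held-out benchmark.

    `prm_candidates`, `guided_policy` (e.g. best-of-n search steered by a PRM), and
    `benchmark_problems` are all placeholders for illustration.
    """
    scores = {prm: benchmark_accuracy(guided_policy(prm), benchmark_problems)
              for prm in prm_candidates}
    return max(scores, key=scores.get)
```

Even with no gradient updates on the benchmark itself, repeatedly selecting checkpoints or reward models this way leaks information about what works on exactly the distribution the benchmark was supposed to measure, which is why a validation set carved out of their own training data would not be an equally strong signal.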

Re: "maybe it would have took OpenAI a bit more time to contract some mathematicians, but realistically, how much more time?": Not much, they might have done this indepently as well. (assuming the mathematicians they'd contact would be equally willing to contribute directly to OpenAI)

The Geometry of Feelings and Nonsense in Large Language Models
7vik · 11mo · 32

Thanks a lot! We had an email exchange with the authors and they shared some updated results with much better random shuffling controls on the WordNet hierarchy.

They also argue that some contexts should promote the likelihood of both "sad" and "joy" since they are causally separable, so they should not be expected to be anti-correlated under their causal inner product per se. We’re still concerned about what this means for semantic steering.

The Geometry of Feelings and Nonsense in Large Language Models
7vik · 1y · 40

I agree. Yes - would be happy to chat and discuss more. Sending you a DM.

The Geometry of Feelings and Nonsense in Large Language Models
7vik · 1y · 30

They use a WordNet hierarchy to verify their orthogonality results at scale, but it doesn't look like they do any other shuffle controls.
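For concreteness, here's a minimal sketch of the kind of shuffle control I have in mind; `concept_vectors` (name → direction) and `hierarchy_pairs` ((parent, child) name pairs from e.g. WordNet) are placeholders, not their actual data:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two concept directions.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def shuffle_control(concept_vectors, hierarchy_pairs, n_shuffles=1000, seed=0):
    """Mean similarity on real (parent, child) pairs vs. a null of shuffled pairings."""
    rng = np.random.default_rng(seed)
    real = np.mean([cosine(concept_vectors[p], concept_vectors[c])
                    for p, c in hierarchy_pairs])
    parents, children = zip(*hierarchy_pairs)
    null = np.array([
        np.mean([cosine(concept_vectors[p], concept_vectors[c])
                 for p, c in zip(parents, rng.permutation(children))])
        for _ in range(n_shuffles)
    ])
    return real, null.mean(), null.std()
```

If the statistic on the real hierarchy sits well inside the shuffled null distribution, the geometric structure isn't specific to the semantic hierarchy, which is exactly the worry raised by the random-word results.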

The Geometry of Feelings and Nonsense in Large Language Models
7vik · 1y · 50

Thanks @TomasD, that's interesting! I agree - most words in my random list seem like random "objects/things/organisms", so there might be some conditioning going on there. I'm going over your code to see if there's anything else that's different.

No wikitag contributions to display.
Posts (sorted by new)

32 · Sparsity is the enemy of feature extraction (ft. absorption) · 4mo · 0 comments
110 · Among Us: A Sandbox for Agentic Deception · 5mo · 7 comments
141 · Auditing language models for hidden objectives · Ω · 6mo · 15 comments
71 · Some lessons from the OpenAI-FrontierMath debacle · 7mo · 9 comments
71 · Intricacies of Feature Geometry in Large Language Models · 9mo · 0 comments
61 · The Geometry of Feelings and Nonsense in Large Language Models · 1y · 10 comments