I'm a bit confused. In your original paper, the model seems to use type hints much more frequently (fig. 3 from the paper), but in this post the figure shows much less frequent use of type hints. Why?
I'm using LLMs for brainstorming in my research, and I often find it annoying how sycophantic they are. I have to explicitly tell them to criticize my ideas to get any value out of such brainstorming.
Awesome to see that. We need more people reaching out to politicians and lobbying, and results like this show that the strategy works. I'd be curious to see how PauseAI managed to achieve such wide support.
I wonder if this paper could be considered a scientific shitpost? (Deliberate pun.) It kind of is, and the fact that it contributes to the field makes it even funnier.
Interesting paper. There is evidence that LLMs can distinguish realistic environments from toy ones, so it would be interesting to see whether the misalignment learned on your training set transfers to complex, realistic environments.
Also, it seems you didn't use frontier models in your research, and in my experience results from non-frontier models don't always scale to frontier ones. It would be cool to see results for models like DeepSeek V3.1.
This was written in the Claude 4 system card, so it made sense to test Claude and not other LLMs.
When Anthropic published their reports on Claude blackmailing people and contacting authorities when the user did something Claude considered illegal, I saw a lot of news articles vilifying Claude. It was sad to see Anthropic punished for being the only organization openly publishing such research.
Thanks for the clarification. It seems I got a bit confused.
Thank you!