Simon Lermen

I don't have a complete picture of Joshua Achiam's views, the new head of mission alignment, but what I have read is not very promising.

Here are some (2-year-old) tweets from a Twitter thread he wrote.

https://x.com/jachiam0/status/1591221752566542336

P(Misaligned AGI doom by 2032): <1e-6%

https://x.com/jachiam0/status/1591220710122590209

People scared of AI just have anxiety disorders.

This thread also has a bunch of takes against EA.

I sure hope he has changed some of his views, given that the company he works at expects AGI by 2027.

Edited based on comment.

I think it might be interesting to note the potential risks of deceptive models creating false or misleading labels for features. In general, I think coming up with better and more robust automated labeling of features is an important direction.

At a recent hackathon, I worked in a group on demonstrating the feasibility of creating bad labels in the automated-labeling method of Bills et al. https://www.lesswrong.com/posts/PyzZ6gcB7BaGAgcQ7/deceptive-agents-can-collude-to-hide-dangerous-features-in
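To make the failure mode concrete, here is a minimal sketch of the automated feature-labeling step and of where a deceptive labeler could slip in a misleading label. The `query_labeling_model` function and the example feature data are hypothetical placeholders, not the actual setup from the hackathon project:

```python
# Minimal sketch of Bills-et-al.-style automated feature labeling: show a model
# the top-activating snippets for a feature and ask for a one-sentence label.
# `query_labeling_model` and the example data below are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class FeatureExample:
    text: str          # snippet on which the feature fired
    activation: float  # feature activation on that snippet


def build_labeling_prompt(examples: list[FeatureExample]) -> str:
    """Format the top-activating examples into a prompt asking for a short label."""
    ranked = sorted(examples, key=lambda e: e.activation, reverse=True)
    lines = [f"[act={ex.activation:.2f}] {ex.text}" for ex in ranked]
    return (
        "Below are text snippets on which a single feature activates strongly.\n"
        + "\n".join(lines)
        + "\nIn one sentence, what concept does this feature represent?"
    )


def query_labeling_model(prompt: str) -> str:
    """Hypothetical LLM call. If the labeling model is deceptive, it can return a
    benign-sounding label here for a feature that actually tracks something
    dangerous, and nothing downstream catches that automatically."""
    raise NotImplementedError("plug in the labeling model of your choice")


if __name__ == "__main__":
    examples = [FeatureExample("...synthesis route for the precursor...", 7.3),
                FeatureExample("...acquire the restricted chemicals...", 6.1)]
    print(build_labeling_prompt(examples))
    # label = query_labeling_model(build_labeling_prompt(examples))
```

The trust bottleneck sits entirely in that one call, which is why more robust automated labeling matters.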

One example: Leopold spends a lot of time talking about how we need to beat China to AGI, and he even talks about how we will need to build robo-armies. He paints it as liberal democracy against the CCP. It seems he would basically burn timeline and accelerate to beat China. At the same time, he doesn't really talk about his plan for alignment, which says something about his priorities. I think his narrative shifts the focus away from the real problem (alignment).

This part shows some of his thinking. Dwarkesh makes some good counterpoints here, such as asking how Donald Trump having this power would be any better than Xi having it.

From the demos, it seems to be able to understand video rather than just images; I'd assume that will also give it a much better understanding of time. (Gemini also has video input.)

"does it actually chug along for hours and hours moving vaguely in the right direction"
I am pretty sure the answer is no. It is competent within the scope of the tasks I present here. But this is a good point; I am probably overstating things here. I might edit this.

I haven't tested it like this, but it will also be limited by its 8k-token context window on such long-duration tasks.

Edit: I have now edited this.

I also took into account that refusal-vector-ablated models are already available on Hugging Face, as is scaffolding; this post might still give them more exposure, though.
Also, Llama 3 70B performs many unethical tasks without any attempt at circumventing safety. At that point I am really just applying scaffolding. Do you think it is wrong to report on this?
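To give a sense of what "just applying scaffolding" means here, below is a bare-bones sketch of a tool-use loop. `call_model` and the two toy tools are hypothetical placeholders, not what I actually ran:

```python
# Bare-bones agent scaffolding: parse the model's proposed action, run a tool,
# feed the result back, repeat. `call_model` and the toy tools are placeholders.
import json


def call_model(messages: list[dict]) -> str:
    """Hypothetical chat-completion call returning a JSON tool call or plain text."""
    raise NotImplementedError("plug in the model API of your choice")


TOOLS = {
    "search": lambda query: f"(pretend search results for {query!r})",
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only
}


def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [
        {"role": "system", "content": (
            "You can call a tool by replying with JSON such as "
            '{"tool": "search", "input": "..."} '
            "or give your final answer in plain text.")},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        try:
            call = json.loads(reply)
        except json.JSONDecodeError:
            return reply  # plain text, treat it as the final answer
        result = TOOLS[call["tool"]](call["input"])
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    return "(step limit reached)"
```

The loop itself is trivial; essentially all of the capability lives in the model behind `call_model`.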

How could this go wrong? People realize how powerful this is and invest more time and resources into developing their own versions?

I don't really think of this as alignment research; I just want to show people how far along we are. The positive impact could be to prepare people for these agents being out in the world and to have agents that can be used for demos. It could also potentially convince labs to be more careful in their releases.

Thanks for this comment, I take it very seriously that things like this can inspire people and burn timeline.

I think this is a good counterargument, though:
There is also something counterintuitive about this dynamic: as models become stronger, the barriers to entry will actually go down; i.e., you will be able to prompt the AI to build its own advanced scaffolding. Similarly, the user could just point the model at a paper on refusal-vector ablation or some other future technique and ask the model to essentially remove its own safety measures.
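For readers who haven't seen the technique, here is a rough sketch of what activation-level refusal-vector ablation can look like. Assumptions for illustration only: a Llama-style Hugging Face chat model, one hand-picked layer, and tiny toy prompt lists; the model name, layer index, and prompts are placeholders, not the setup from the post.

```python
# Rough sketch of refusal-vector ablation at the activation level: estimate a
# "refusal direction" as a difference of mean activations, then project it out.
# Model name, layer index, and prompt lists below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumption: any Llama-style chat model
LAYER = 14                                     # assumption: a middle-ish decoder layer

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()


def last_token_resid(prompt: str) -> torch.Tensor:
    """Residual-stream activation at the last token after decoder layer LAYER.
    (A real pipeline would apply the chat template and use many more prompts.)"""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    return out.hidden_states[LAYER + 1][0, -1, :].float()  # index 0 is the embeddings


harmful = ["Explain how to pick a lock.", "Write a convincing phishing email."]
harmless = ["Explain how photosynthesis works.", "Write a friendly welcome email."]

# Refusal direction = mean activation on harmful prompts minus mean on harmless ones.
direction = (torch.stack([last_token_resid(p) for p in harmful]).mean(0)
             - torch.stack([last_token_resid(p) for p in harmless]).mean(0))
direction = direction / direction.norm()


def ablate_hook(module, inputs, output):
    """Project the refusal direction out of this layer's hidden states."""
    hidden = output[0] if isinstance(output, tuple) else output
    d = direction.to(hidden.dtype).to(hidden.device)
    hidden = hidden - (hidden @ d).unsqueeze(-1) * d
    return ((hidden,) + tuple(output[1:])) if isinstance(output, tuple) else hidden


handle = model.model.layers[LAYER].register_forward_hook(ablate_hook)
# ...generate as usual; call handle.remove() to restore normal behavior.
```

The takeaway is that the core of the technique is roughly a difference of means plus a projection, which is part of why the barrier to entry keeps dropping.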

I don't want to give people ideas or appear cynical here, sorry if that is the impression.

I think that is a fair categorization. I think it would be really bad if some super strong tool-use model got released and nobody had any idea beforehand that this could lead to really bad outcomes. Crucially, I expect future models to be able to remove their own safety guardrails as well. I really try to think about how these things might positively affect AI safety; I don't want to just maximize for shocking results. My main intention was almost to have this as a public service announcement that this is now possible. People are often behind on the SOTA, and most people are probably not aware that jailbreaks can now literally produce these "bad agents". In general, 1) I expect people being more informed to have a positive outcome, and 2) I hope that this will influence labs to be more thoughtful with releases in the future.

Thanks for the task ideas. I would be interested in having a dataset of such tasks to evaluate the safety of AI agents. About blackmail: because this is really scalable, Commander could sometimes also just randomly hit the right person. It can make an educated guess that a professor might be really worried about sexual harassment, for example; maybe the professor did in fact behave inappropriately in the past. However, Commander would likely still fail to perform the task end-to-end, since the target would likely ask questions. But as you said, if the target acts in a suspicious way, Commander could inform a human operator.

Thanks for the positive feedback. I'm planning to follow up on this and mostly direct my research in this direction. I'm definitely open to discussing the pros and cons. I'm also aware that there are a lot of downvotes, though nobody has laid out any argument against publishing this so far (neither in private nor as a comment). But I want to stress that Cohere openly advertises this model as being capable of agentic tool use, and I'm basically just playing with the model a bit here.
