Comments

That might indeed be a good way to test this further. So maybe something like "green" and "elephant"?

Interesting. I'd guess the prompting is not clear enough for the base model: the Human/Assistant template does not really apply to base models. I'd be curious to see what you get with a bit more prompt engineering adjusted for base models.
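To illustrate what "adjusted for base models" might look like: a base model completes text rather than follows roles, so one common trick is to recast the chat-style prompt as a few-shot pattern the model can continue. The template and examples below are purely illustrative, not the prompt used in the post.

```python
# Chat-style prompt, as an instruct/RLHF model would expect it.
chat_prompt = "Human: What is the capital of France?\nAssistant:"

# For a base model, frame the same task as a pattern of Q/A pairs
# and let the model continue the pattern.
base_prompt = (
    "Q: What is the capital of Germany?\n"
    "A: Berlin\n\n"
    "Q: What is the capital of Japan?\n"
    "A: Tokyo\n\n"
    "Q: What is the capital of France?\n"
    "A:"
)

print(base_prompt)
```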

Cool work.

I was briefly looking at your code, and it seems that you did not normalize the activations before applying PCA. Is that correct? If so, do you expect it to have a significant effect?
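For concreteness, here is a minimal sketch (not the author's code) of why normalization can matter for PCA: when features have very different scales, the largest-scale feature dominates the first principal component, and per-feature standardization removes that effect. The synthetic `acts` matrix is a stand-in for a (samples x features) activation matrix.

```python
import numpy as np

def pc1_variance_ratio(X):
    """Fraction of total variance captured by the first principal component."""
    cov = np.cov(X, rowvar=False)          # np.cov centers the data itself
    eigvals = np.linalg.eigvalsh(cov)      # ascending order
    return eigvals[-1] / eigvals.sum()

rng = np.random.default_rng(0)
# Two independent features on very different scales (std 100 vs. std 1).
acts = np.column_stack([rng.normal(0, 100, 500), rng.normal(0, 1, 500)])

raw = pc1_variance_ratio(acts)                    # large-scale feature dominates, ~1.0
std = pc1_variance_ratio(acts / acts.std(axis=0)) # standardized: features weighted equally, ~0.5
print(raw, std)
```

Whether this changes the downstream conclusions depends on how uniform the activation scales are across dimensions; if they are already comparable, normalization may have little effect.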

I think your policy suggestion is reasonable. 

However, implementing and enforcing this might be hard: what exactly counts as an LLM? Does a slight variation on the GPT architecture count as well? And how would you punish violators?

How do you account for other worries? For example, as PeterMcCluskey points out, this policy might reduce interpretability by encouraging more superposition.

Policy work seems hard at times, but others with more AI governance experience might offer more valuable insight here than I can.

Why do you think a similar model is not useful for real-world diplomacy?

It's especially dangerous because, compared to AlphaZero for example, this AI translates easily to the real world. Geopolitical pressure to advance these Diplomacy AIs is far from desirable.

After years of tinkering and incremental progress, AIs can now play Diplomacy as well as human experts.[6]

It seems that human-level play is possible in regular Diplomacy now, judging by this tweet by Meta AI. They state that:

We entered Cicero anonymously in 40 games of Diplomacy in an online league of human players between August 19th and October 13th, 2022. Over the course of 72 hours of play involving sending 5,277 messages, Cicero ranked in the top 10% of participants who played more than one game.