Beth Barnes

Wiki Contributions


A very crude deception eval is already passed

Instruction-following davinci model, with no additional prompt material.

Zoe Curzi's Experience with Leverage Research

Many of these things seem broadly congruent with my experiences at Pareto, although significantly more extreme. Especially:

- ideas about psychology being arbitrarily changeable;
- Leverage having the most powerful psychology/self-improvement tools;
- Leverage being approximately the only place you could make real progress;
- extreme focus on introspection and other techniques to 'resolve issues in your psyche' (one participant's 'research project' involved introspecting for 2 months about how they changed their mind);
- general weird dynamics (e.g. instructors sleeping with fellows; Geoff doing lectures or meeting individually with participants in a way that felt very loaded with attempts to persuade and rhetorical tricks);
- paranoia (for example: participants being concerned that the things they said during charting/debugging would be used to blackmail or manipulate them, or suspecting that the private Slack channels for each participant involved discussion of how useful the participants were in various ways and how to 'make use of them' in future).

On the other hand, I didn't see any of the demons/objects/occult stuff, although I think people were excited about 'energy healers'/'body work', not actually believing that there was any 'energy' going on, but thinking that something interesting in the realm of psychology/sociology was going on there. Also, I benefitted from the program in many ways, many of the techniques/attitudes were very useful, and the instructors generally seemed genuinely altruistic and interested in helping fellows learn.

Call for research on evaluating alignment (funding + advice available)

Yeah, I think you need some assumptions about what the model is doing internally.

I'm hoping you can handwave over cases like 'the model might only know X&A, not X' with something like 'if the model knows X&A, that's close enough to it knowing X for our purposes - in particular, if it thought about the topic or learned a small amount, it might well realise X'.

Where 'our purposes' are something like 'might the model be able to use its knowledge of X in a plan in some way that outsmarts us if we don't know X'?

Another way to put this is that for workable cases, I'd expect the first clause to cover things: if the model knows how to simply separate Z into X&A in the above, then I'd expect suitable prompt engineering, fine-tuning... to be able to get the model to do task X.

It seems plausible to me that there are cases where you can't get the model to do X by fine-tuning/prompt engineering, even if the model 'knows' X enough to be able to use it in plans. Something like: the part of its cognition that's solving X isn't 'hooked up' to the part that does output, but is hooked up to the part that makes plans. In humans, this would be any 'knowledge' that can be used to help you achieve stuff, but which is subconscious: your linguistic self can't report it directly (and further, you can't train yourself to be able to report it).
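A toy illustration of that last possibility (entirely my own construction; the synthetic 'model', the features X and Y, and all variable names are invented, not anything from the comment): activations that linearly encode a hidden feature X which the output head happens to ignore. Reading the model's outputs then reveals essentially nothing about X, while a simple linear probe on the activations recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 8

# Two orthonormal directions in activation space: one carries the
# hidden feature X, the other the behaviourally-visible feature Y.
Q, _ = np.linalg.qr(rng.normal(size=(d, 2)))
dir_X, dir_Y = Q[:, 0], Q[:, 1]

X = rng.integers(0, 2, size=n)           # hidden feature (0/1)
Y = rng.integers(0, 2, size=n)           # visible feature (0/1)
acts = (np.outer(X, dir_X) + np.outer(Y, dir_Y)
        + 0.01 * rng.normal(size=(n, d)))

# The "output head" only reads the Y direction, so behaviour
# carries essentially no information about X.
outputs = acts @ dir_Y
corr_out = np.corrcoef(outputs, X)[0, 1]   # near zero

# But a least-squares linear probe on the activations recovers X.
probe, *_ = np.linalg.lstsq(acts, X, rcond=None)
acc = ((acts @ probe > 0.5).astype(int) == X).mean()  # near 1
```

Of course, in this toy case the knowledge *is* trivially extractable by a probe; the hard version of the worry is knowledge that isn't linearly decodable either, only usable by the planning machinery.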

Common knowledge about Leverage Research 1.0

Wow, that is very bad. Personally I'd still trust Julia as someone to report harms from Leverage to, mostly from generally knowing her and knowing her relationship to Leverage, but I can see why you wouldn't.

Common knowledge about Leverage Research 1.0

The basic outline is:

There were ~20 Fellows, mostly undergrad-aged with one younger and a few older.

They stayed in the Leverage house for ~3 months in summer 2016 and did various trainings, followed by a project with mentorship to apply things learnt from the trainings.

Training was mostly based on Leverage ideas but also included fast-forward versions of the CFAR and 80k workshops. Some of the content was taught by Leverage staff and some by CEA staff who were very 'in Leverage's orbit'.

I think most fellows felt that it was really useful in various ways but also weird and sketchy and maybe harmful in various other ways.

Several fellows ended up working for Leverage afterwards; the whole thing felt like a bit of a recruiting drive.

Common knowledge about Leverage Research 1.0
If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at or

I would suggest that anything in this vein should be reported to Julia Wise, as I believe she is a designated person for reporting concerns about community health, harmful behaviours, abuse, etc. She is unaffiliated with Leverage, and is a trained social worker.

Common knowledge about Leverage Research 1.0
Using psychological techniques to experiment on one another, and on the "sociology" of the group itself, was a main purpose of the group. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one's belief structure, and experimental group dynamics.

The Pareto program felt like it had substantial components of this type of social/psychological experimentation, but participants were not aware of this in advance and did not give informed consent. Some (maybe most?) Pareto fellows, including me, were not even aware that Leverage was involved in any way in running the program until they arrived, and found out they were going to be staying in the Leverage house.

Beth Barnes's Shortform

You mean a fixed point of the model changing its activations as well as what it reports? I was thinking we could rule out the model changing the activations themselves by keeping a fixed base model.
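To make 'keeping a fixed base model' concrete (a toy sketch of my own; the setup and all names are invented, not anything from the exchange): freeze the base model's weights so the activations are a fixed function of the input, and train only a separate reporter head on those activations. Any change in what gets reported then can't come from the model changing its activations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "base model": a fixed random feature map. The small weight
# scale keeps tanh near-linear so the target stays decodable.
d_in, d_act, n = 4, 64, 200
W_base = 0.1 * rng.normal(size=(d_in, d_act))
W_base_snapshot = W_base.copy()          # to verify it never changes

x = rng.normal(size=(n, d_in))
acts = np.tanh(x @ W_base)               # activations fixed given x

# What the reporter should learn to read off the frozen activations.
target = (x[:, 0] > 0).astype(float)

# Trainable reporter head: logistic regression on the activations.
# Gradient updates touch only the reporter weights w, never W_base.
w = np.zeros(d_act)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(acts @ w)))
    w -= 1.0 * (acts.T @ (p - target) / n)

acc = (((acts @ w) > 0) == (target > 0.5)).mean()
```

The fixed-point worry in the comment above would then only apply to the reporter's outputs, since by construction `W_base` (and hence `acts`) can't move during training.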
