LESSWRONG
LW

198
Vili Kohonen
16130
Message
Dialogue
Subscribe

Posts

Sorted by New

Wikitag Contributions

Comments

Sorted by
Newest
Lessons from organizing a technical AI safety bootcamp
Vili Kohonen2d*10

For example, while inviting speakers was mainly Dmitrii's responsibility, it wasn't that easy and some of them were invited by me as I knew relevant people personally. We went over several smaller tasks together, like which questions to include in the feedback forms and what kind of information we needed to provide about Helsinki for people coming from abroad.

In addition, two dynamics really increased the number of our ad hoc meetings:

  1. We really wanted to do things well. Dmitrii liked to ask for my feedback and I readily gave it to him, and sometimes vice versa.
  2. Gather Town made it super easy to have a chat. When another person is online you can wave them to your desk with two clicks and for the other person accepting is only one click.

I think these things improved the program. But again, we were operating reactively and it was quite stressful. Had we specced the bootcamp properly, used e.g. Trello and had structured meetings where we would have needed to only check everything's going mostly as planned, I think we would have saved loads of time and reduced stress levels. We just didn't have the time nor experience this time.

Reply
Prompt optimization can enable AI control research
Vili Kohonen9d30

Have to highlight here that this wasn't even the initial project of Mia and Zach; they pivoted halfway through the week from trying to craft better red team policies to this. Automated prompt engineering and fine-tuning from that seems to fit the blue team's bill better. The monitor as a classifier is arguably more straightforward than creating effective attack policies for the red team, although it's important to pursue strategies to find upper bounds for both teams.

Reply
Making deals with early schemers
Vili Kohonen3mo10

Good post, thanks. I especially agree with the statements on current research teaching AIs synthetic facts and following through with deals during research. 

It seems very likely that we have to coordinate with AIs in the future and we want maximum credibility to do so. If we aim to advance capabilities to program lies to AIs, while promising from safety research perspective, we probably should be very clear when and how this manipulation is done for it to not undermine credibility. If we develop such capabilities further we are also making the current most potential pathway to honor our commitments, fine-tuning the contract to the model, more uncertain from the model's perspective.

Committing to even superficial deals early and often is a strong signal of credibility. We want to accumulate as much of this kind of evidence. This is important for human psychology as well. As mentioned, dealmaking with AIs is a fringe view societally at the moment, and if there's no precedence for it by even the most serious safety researchers, it is much a larger step for the safety community to bargain for larger amounts of human-possessed resources if push comes to shove at some point.  

Reply
No wikitag contributions to display.
15Lessons from organizing a technical AI safety bootcamp
4d
3