I think it’s possible that an AI will decide not to sandbag (e.g. on alignment research tasks), even if all of the following are true:
The reason is as follows:
This seems less likely the harder the problem is, since harder problems require the AI to use more of its general intelligence or agency, and those are often the sorts of tasks we're most scared about the AI doing surprisingly well on.
I agree this argument suggests we will have a good understanding of the simpler capabilities the model has, like which facts about biology it knows, which may end up being useful anyway.
On top of what Garrett said, reflection also pushes against this pretty hard. An AI that has gone through a few situations where it has acted against its own goals because of "context-specific heuristics" will be motivated to remove those heuristics, if that is an available option.
Oh, hmm, this seems like a bug on our side. I definitely set up a redirect a while ago that should make those links work. My guess is something broke in the last few months.
Thanks for the heads up. Example broken link (https://agentfoundations.org/item?id=32) currently redirects to the broken https://www.alignmentforum.org/item?id=32, and should redirect further to https://www.alignmentforum.org/posts/5bd75cc58225bf0670374e7d/exploiting-edt (Exploiting EDT[1]), archive.today snapshot.
Edit 14 Oct: It works now, even for links to comments, thanks LW team!
[1] LW confusingly replaces the link to www.alignmentforum.org given in the Markdown comment source text with a link to www.lesswrong.com when displaying the comment on LW.
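For concreteness, a minimal sketch of the redirect resolution described above (the lookup table and function names are hypothetical illustrations, not the actual LW/AF code): a legacy agentfoundations.org item link should resolve all the way to the new alignmentforum.org post URL rather than stopping at the broken /item?id=... form.

```python
# Hypothetical sketch of resolving legacy agentfoundations.org item links
# to new alignmentforum.org post URLs; not the actual LW/AF implementation.
from typing import Optional
from urllib.parse import urlparse, parse_qs

# Hypothetical lookup table from legacy item IDs to new post paths.
LEGACY_ITEM_TO_POST = {
    "32": "/posts/5bd75cc58225bf0670374e7d/exploiting-edt",
}

def resolve_legacy_link(url: str) -> Optional[str]:
    """Return the final alignmentforum.org URL for a legacy /item?id=... link, or None."""
    parsed = urlparse(url)
    if parsed.path != "/item":
        return None
    item_id = parse_qs(parsed.query).get("id", [None])[0]
    post_path = LEGACY_ITEM_TO_POST.get(item_id)
    if post_path is None:
        return None
    return f"https://www.alignmentforum.org{post_path}"

if __name__ == "__main__":
    # The broken chain stops at https://www.alignmentforum.org/item?id=32;
    # the desired behaviour is one more hop to the post URL itself.
    print(resolve_legacy_link("https://agentfoundations.org/item?id=32"))
```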
A framing I wrote up for a debate about "alignment tax":
- A person whose mainline is {1a --> 1b --> 2b or 2c} might say "alignment is unsolved, solving it is mostly a discrete thing, and alignment taxes and multipolar incentives aren't central."
- Whereas someone who thinks we're already in 2a might say "alignment isn't hard; the problem is incentives and competitiveness."
- Someone whose mainline is {1a --> 2a} might say "We need to both 'solve alignment at all' AND either get the tax to be really low or do coordination. Both are hard, and both are necessary."
Results on logarithmic utility and stock market leverage: https://www.lesswrong.com/posts/DMxe4XKXnjyMEAAGw/the-geometric-expectation?commentId=yuRie8APN8ibFmRJD
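For context, a standard result of this flavor (not necessarily the exact one derived in the linked comment): for an asset with drift μ, volatility σ, and risk-free rate r, the leverage that maximizes expected log wealth is the Kelly/Merton fraction.

```latex
% Standard Kelly/Merton log-utility leverage, stated for context only;
% the linked comment may derive something different.
% With leverage f, drift \mu, risk-free rate r, volatility \sigma:
\mathbb{E}[\log W_T] = \log W_0 + \Bigl(r + f(\mu - r) - \tfrac{1}{2} f^2 \sigma^2\Bigr) T,
\qquad
\frac{\partial}{\partial f}\,\mathbb{E}[\log W_T] = 0
\;\Longrightarrow\;
f^{*} = \frac{\mu - r}{\sigma^{2}}.
```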