Jeremy Gillen

I'm interested in doing in-depth dialogues to find cruxes. Message me if you are interested in doing this.

I do alignment research, mostly stuff that is vaguely agent foundations. Currently doing independent alignment research on ontology identification. Formerly on Vivek's team at MIRI.

Comments (sorted by newest)
Some data from LeelaPieceOdds
Jeremy Gillen · 2d · 60

Relevant comment on reddit from someone working on Leela Odds:

Daniel Tan's Shortform
Jeremy Gillen · 2d · 94

Why would models start out aligned by default? 

Some data from LeelaPieceOdds
Jeremy Gillen · 6d* · 140

This is the best I've got so far. I estimated the rating using the midpoint of a logistic regression fit to the games. The first few especially seem to have been inflated because there weren't enough high-rated players in the data, so the fit had to extrapolate. And they all seem inflated by (I'd guess) a couple of hundred points due to the effects I mentioned in the post. (Edit: please don't share the graph alone without this context.)
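A minimal sketch of the midpoint estimate described above, assuming each game is reduced to an (opponent rating, result) pair; the toy data and variable names are hypothetical, not the actual LeelaPieceOdds dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: opponent ratings and game results
# (1 = the odds-giving bot won, 0 = it lost or drew).
opponent_rating = np.array([[1400], [1600], [1800], [2000], [2200], [2400], [2600]])
bot_won = np.array([1, 1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(opponent_rating, bot_won)

# P(bot wins) = sigmoid(w * rating + b); the "midpoint" is the rating
# where this probability crosses 0.5, i.e. where w * rating + b = 0.
w = model.coef_[0, 0]
b = model.intercept_[0]
print("estimated rating:", -b / w)
```

The midpoint is where the fitted win probability crosses 0.5; with few high-rated opponents in the sample, that crossing point is an extrapolation, which is the inflation issue noted above.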

The NN rating in the Blitz data highlights the flaw in this method of estimating the rating.

I haven't found a way to get similar data on human vs human games.

Some data from LeelaPieceOdds
Jeremy Gillen · 6d · 20

Took a while to download all this. I'm curious, what's your blitz rating?

shortplav
Jeremy Gillen · 8d · 20

> Does that sound right?

Can't give a confident yes, because I'm pretty confused about this topic and I'm currently pretty unhappy with the way the leverage prior mixes up action and epistemics. The issue of discounting theories of physics if they imply high leverage seems really bad? I don't understand whether the UDASSA thing fixes this. But yes.

> That avoids the "how do we encode numbers" question that naturally raises itself.

I'm not sure how natural the encoding question is, there's probably an AIT answer to this kind of question that I don't know.

Some data from LeelaPieceOdds
Jeremy Gillen · 8d · 40

By "control plausibly works" I didn't mean "Stuff like existing monitoring will work to control AIs forever". I meant it works if it is a stepping stone allows us to accelerate/finish alignment research, and thereby build aligned AGI. 

Some data from LeelaPieceOdds
Jeremy Gillen · 8d · 20

I think several of the subquestions that matter for whether it'll plausibly work to have AI solve alignment for us are in the second category, like the two points I mentioned in the post. I think there are other subquestions that are more in the first category, which are also relevant to the odds of success. I'm relatively low confidence about this kind of stuff because of all the normal reasons why it's difficult to say how other people should be thinking: it's easy to miss relevant priors, evidence, etc. But still, given what I know about what everyone believes, it looks like these questions should be resolvable among reasonable people.

shortplav
Jeremy Gillen · 8d · 40

Makes sense, but in that case, why penalize by time? Why not just directly penalize by utility, like the leverage prior does?
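Roughly the contrast being drawn here, assuming a Schmidhuber-style speed prior and a Pascal's-Muggle-style leverage penalty (the exact forms in the original proposal may differ):

$$w_{\text{speed}}(p) \propto 2^{-|p|} \cdot \frac{1}{t(p)}, \qquad w_{\text{leverage}}(h) \propto \frac{2^{-|h|}}{U(h)}$$

where $t(p)$ is the running time of program $p$ and $U(h)$ is the utility/leverage that hypothesis $h$ claims is at stake. The question is why the discount should route through $t$ rather than directly through $U$.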

Also, why not allow floating-point representations of utility to be output, rather than just binary integers?

shortplav
Jeremy Gillen · 8d · 40

Aren't there programs that run fast and also return a number that grows much faster than |p|, like up-arrow notation? Why don't these grow faster than your speed prior penalizes them?
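For concreteness, a tiny sketch of the kind of short, fast-growing program being asked about (Knuth's up-arrow notation); whether it also counts as "fast" is exactly the point at issue, since evaluating it for even small arguments takes an astronomical number of steps:

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a with n arrows applied to b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# Hyperexponential growth from a handful of characters:
print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^(3^3) = 7625597484987
# up_arrow(3, 3, 3), i.e. 3↑↑↑3, will not finish in any reasonable time.
```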

Some data from LeelaPieceOdds
Jeremy Gillen · 9d · 40

I think there are reasonable people who look at the evidence and think it plausible that control works, and also reasonable people who look at the evidence and think it implausible that control works. And others who think that OpenAI-superalignment-style plans plausibly work.

Something is going wrong here.

Wikitag Contributions

Eurisko · 7 months ago
Eurisko · 7 months ago · (+7/-6)
Posts (sorted by new)

66 · Some data from LeelaPieceOdds · 10d · 21
70 · Detect Goodhart and shut down · 10mo · 21
31 · Context-dependent consequentialism (Ω) · 1y · 6
161 · Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI (Ω) · 2y · 60
175 · Thomas Kwa's MIRI research experience · 2y · 53
38 · AISC team report: Soft-optimization, Bayes and Goodhart · 2y · 2
119 · Soft optimization makes the value target bigger (Ω) · 3y · 20
6 · Jeremy Gillen's Shortform · 3y · 57
76 · Neural Tangent Kernel Distillation · 3y · 20
37 · Inner Alignment via Superpowers · 3y · 13