orthonormal

Sequences

Staying Sane While Taking Ideas Seriously

Comments

Discussion with Eliezer Yudkowsky on AGI interventions

Fighting is different from trying. To fight harder for X is more externally verifiable than to try harder for X. 

It's one thing to acknowledge that the game appears to be unwinnable. It's another thing to fight any less hard on that account.

Discussion with Eliezer Yudkowsky on AGI interventions

One tiny note: I was among the people on AAMLS; I did leave MIRI the next year; and my reasons for so doing are not in any way an indictment of MIRI. (I was having some me-problems.) 

I still endorse MIRI as, in some sense, being the adults in the AI Safety room, which has... disconcerting effects on my own level of optimism.

Discussion with Eliezer Yudkowsky on AGI interventions

Ditto - the first half makes it clear that any strategy more than about 2 years slower than an unaligned approach will be useless, and that prosaic AI safety falls into that bucket.

Speaking of Stag Hunts

Thanks for asking about the ITT. 

I think that if I put a more measured version of myself back into that comment, it would have one key difference from your version.

"Pay attention to me and people like me" is a status claim rather than a useful model.

I'd have said "pay attention to a person who incurred social costs by loudly predicting one later-confirmed bad actor, when they incur social costs by loudly predicting another". 

(My denouncing of Geoff drove a wedge between me and several friends, including my then-best friend; my denouncing of the other one drove a wedge between me and my then-wife. Obviously those rifts had much to do with how I handled those relationships, but clearly it wasn't idle talk from me.)

Otherwise, I think the content of your ITT is about right. 

(The emotional tone is off, even after translating from Duncan-speak to me-speak, but that may not be worth going into.)

For the record, I personally count myself 2 for 2.5 on precision. (I got a bad vibe from a third person but didn't go around loudly making it known; they've since proven to be untrustworthy, though not nearly as dangerous as I view the other two. I'll accordingly not name them.)

Speaking of Stag Hunts

Thanks, supposedlyfun, for pointing me to this thread.

I think it's important to distinguish my behavior in writing the comment from the behavior of people upvoting it. My comment was emotive rather than optimized - it would even have strengthened my own case to point out that the 2012 workshop was a weeklong experiment with lots of unstructured time, rather than the weekend format CFAR later settled on, or to explain that his CoZE idea was to recruit teens to meddle with the other participants' CoZE.

I expect that many of the upvotes were not of the form "this is a good comment on the meta level" so much as "SOMEBODY ELSE SAW THE THING ALL ALONG, I WORRIED IT WAS JUST ME".

My ML Scaling bibliography

Is this meant to be a linkpost? I don't see any content except for the comment above.

Eutopia is Scary

The subconscious mind knows exactly what it's flinching away from considering. :-)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

A secondary concern, in that it's better to have one org with some people in different locations but everyone communicating heavily than to have two separate organizations.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Sure - and MIRI and FHI complement each other decently, the latter providing a respectable academic face for weird ideas.

Generally, though, it's far more productive to have ten top researchers in the same org than to have five orgs, each with two top researchers and a couple of others to round them out. Geography is a secondary concern to that.
