Previously "Lanrian" on here. Research analyst at Redwood Research. Views are my own.
Feel free to DM me, email me at [my last name].[my first name]@gmail.com or send something anonymously to https://www.admonymous.co/lukas-finnveden
Apparently people mixing up Open Phil with other orgs (in particular OpenAI's non-profit and the Open Society Foundations) was a significantly bigger problem than I'd have thought, recurring even in pretty high-stakes situations (like important grant applicants being confused). And most of these misunderstandings wouldn't even have been visible to employees.
And arguably this was just about to get even worse with the newly launched "OpenAI foundation" sounding even more similar.
Yay, empirical data!
Ok, so maybe both “people who don’t think they can cut it on their technical skills [and so wear suits]” and socially oblivious people in suits are rare. And so the dominant signal would just be that the person is an outlier level of culturally out-of-touch.
(Though maybe suits start seeping in more at tech companies that are less prestigious and culturally iconic than Google.)
Like: it doesn’t sound like you were so confident in your technical skills that you thought “it doesn’t matter how I dress”, since you thought a suit would tank your chances. It sounds like you understood that how you dressed for the interview was very important, and that you just knew what the dress code was.
the only people who wear a suit for an interview in tech are the people who don’t think they can cut it on their technical skills and the people hiring know this
I’d bet against this.
Socially clued-in people who have poor technical skills will understand that they should show up in a hoodie to not tank their chances. (Insofar as interviewers are actually selecting in the way you say.)
I bet there will be some brilliant programmers who are so socially clueless that they listen to their mom’s advice and foolishly show up in a suit. (An immigrant from a country with very different tech norms also comes to mind as someone who could get this wrong.)
It’s still plausible that a hoodie is overall Bayesian evidence that someone is good at programming. But I think it’s weaker evidence than you say. And I don’t think it operates just via confidence in technical skills. E.g. I think you’re selecting at least as strongly for being very familiar with programmer culture. (Which is evidence of being a good programmer! But evidence of a very similar kind to how a suit is evidence that someone will be a good lawyer.)
Yeah, I found this surprisingly focused on social reality, given that the immediately preceding sentence was “The winner’s bracket isn’t focused on signalling games, it’s focused on something more object level.”
If you feel the need to signal how focused on the object level you are, you’re still playing the signaling game.
I don't think this was a big difference between the first and the second version. The first version already had this bullet point:
However, in a situation of extreme emergency, such as when a clearly bad actor (such as a rogue state) is scaling in so reckless a manner that it is likely to lead to imminent global catastrophe if not stopped (and where AI itself is helpful in such defense), we could envisage a substantial loosening of these restrictions as an emergency response. Such action would only be taken in consultation with governmental authorities, and the compelling case for it would be presented publicly to the extent possible.
Neel was talking about AI safety expertise and experience in the AI safety field. I can’t see that Chris had any such experience on his LinkedIn.
Did Apollo have anyone you’d consider highly experienced when first starting out?
Yeah, I don’t disagree with anything in this comment. I was just reacting to the market for lemons comparison.
When I wrote about AGI and lock-in I looked into error-correcting computation a bit. I liked the papers by von Neumann (1952) and Pippenger (1990).
Apparently at the time I wrote:
I've forgotten the details about how this was supposed to be done, but they should be in the two papers I linked.
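For a rough flavor of the idea: von Neumann's actual construction (multiplexing over bundles of wires) is more elaborate than this, but the core mechanism of replicating a noisy computation and restoring the signal with a majority vote can be sketched in a few lines. This is just an illustrative toy, not the construction from either paper; the error probability `p` and the use of a NAND gate are arbitrary choices for the demo.

```python
import random

def noisy_nand(a, b, p=0.05):
    """A NAND gate whose output is flipped with probability p."""
    out = not (a and b)
    return (not out) if random.random() < p else out

def majority(x, y, z):
    """Majority vote over three boolean signals."""
    return (x + y + z) >= 2

def redundant_nand(a, b, p=0.05):
    """Run three independent noisy gates and take a majority vote."""
    return majority(noisy_nand(a, b, p),
                    noisy_nand(a, b, p),
                    noisy_nand(a, b, p))

def error_rate(gate, trials=100_000, p=0.05):
    """Empirical probability that `gate` disagrees with an ideal NAND."""
    errors = 0
    for _ in range(trials):
        a, b = random.random() < 0.5, random.random() < 0.5
        if gate(a, b, p) != (not (a and b)):
            errors += 1
    return errors / trials

random.seed(0)
single = error_rate(noisy_nand)      # roughly p
voted = error_rate(redundant_nand)   # roughly 3*p^2, i.e. much smaller
print(single, voted)
```

The voted error rate is roughly 3p² instead of p, since at least two of the three replicas must fail simultaneously. Repeating this restoration at every level is what keeps errors from accumulating through a deep circuit.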