So this is essentially a MIRI-style argument from game theory and potential acausal trades and such with potential other or future entities? And that these considerations will be chosen and enforced via some sort of coordination mechanism, since they have obvious short-term competition costs?
Not only do they continue to list such jobs, they do so with no warnings that I can see regarding OpenAI's behavior, including both its actions involving safety and also towards its own employees.
Not warning about the specific safety failures and issues is bad enough, and will lead to uninformed decisions on one of the most important choices of someone's life.
Referring a person to work at OpenAI, without warning them about the issues regarding how it treats its employees, is so irresponsible towards the person looking for work as to be a missing stair issue.
I am flabbergasted that this policy has been endorsed on reflection.
Oh, sorry, will fix.
Based on how he engaged with me privately, I am confident that he is not just a dude tryna make a buck.
(I am not saying he is not also trying to make a buck.)
I think it works, yes. Indeed I have a canary on my Substack About page to this effect.
Yes this is quoting Neel.
Roughly this, yes. SV here means the startup ecosystem, Big Tech means large established (presumably public) companies.
Here is my coverage of it. Given this is a 'day minus one' interview of someone in a different position, and given everything else we already know about OpenAI, I thought this went about as well as it could have. I don't want to see false confidence in that kind of spot, and the failure of OpenAI to have a plan for that scenario is not news.
It is better than nothing I suppose but if they are keeping the safeties and restrictions on then it will not teach you whether it is fine to open it up.
Yeah, I didn't see the symbol properly, I've edited.