Primarily interested in agent foundations, AI macrostrategy, and enhancement of human intelligence, sanity, and wisdom.
I endorse and operate by Crocker's rules.
I have not signed any agreements whose existence I cannot mention.
I interpret you as insinuating that the decision not to disclose that the project was commissioned by industry was strategic.
I'm not necessarily implying that they explicitly/deliberately coordinated on this.
Perhaps there was no explicit "don't mention OpenAI" policy, but there was no "person X is responsible for ensuring that mathematicians know about OpenAI's involvement" policy either.
But given that some of the mathematicians haven't heard a word about OpenAI's involvement from the Epoch team, it seems like Epoch at least had a reason not to mention OpenAI's involvement (though this depends on how extensive communication between the two sides was). Possibly because they were aware of how the mathematicians might react, both before the project started and in the middle of it.
[ETA: In short, I would have expected this information to reach the mathematicians with high probability, unless the Epoch team had been disinclined to inform them.]
Obviously, I'm just speculating here, and the non-Epoch mathematicians involved in the creation of FrontierMath know better than anything I might speculate.
The analogy is that I consider living for eternity to be scary, and you say, "well, you can stop any time". True, but it's always going to be rational for me to live for one more year, and that way lies eternity.
The distinction you want is probably not rational/irrational but CDT/UDT or whatever.
Also,
insurance against the worst outcomes lasting forever
Well, it's also insurance against the best outcomes lasting forever (though you're probably going to reply that bad outcomes are more likely than good ones and/or that you care more about preventing bad outcomes than about ensuring good ones).
Our agreement did not prevent us from disclosing to our contributors that this work was sponsored by an AI company. Many contributors were unaware of these details, and our communication with them should have been more systematic and transparent.
So... why did they not disclose to their contributors that this work was sponsored by an AI company?
Specifically, not just any AI company, but the AI company that has (deservedly) perhaps the worst rep among all the frontier AI companies.[1]
I can't help but think that some of the contributors would have declined the offer to contribute had they been told that it was sponsored by an AI capabilities company.
I can't help but think that many more would have declined had they been told that it was sponsored by OpenAI specifically.
I can't help but think that this is the reason why they were not informed.
Though Meta also has a legitimate claim to having the worst rep, albeit with different axes of worseness contributing to their overall score.
https://www.lesswrong.com/posts/RDG2dbg6cLNyo6MYT/thane-ruthenis-s-shortform#GagwbhMwc3y3NYyTZ
When's the application deadline?
This is not quite deathism but perhaps a transition in the direction of "my own death is kinda not as bad":
a big motivator for me used to be some kind of fear of death. But then I thought about philosophy of personal identity until I shifted to the view that there’s probably no persisting identity over time anyway and in some sense I probably die and get reborn all the time in any case.
and in a comment:
I'm clearly doing things that will make me better off in the future. I just feel less continuity to the version of me who might be alive fifty years from now, so the thought of him dying of old age doesn't create a similar sense of visceral fear. (Even if I would still prefer him to live hundreds of years, if that was doable in non-dystopian conditions.)
to the extent this is feasible for us
Was [keeping FrontierMath entirely private and under Epoch's control] feasible for Epoch in the same sense of "feasible" you are using here?
Strong agree.
For a more generalized version, see: https://www.lesswrong.com/posts/4gDbqL3Tods8kHDqs/limits-to-legibility
I think you wanted