Technical staff at Anthropic (views my own), previously #3ainstitute; interdisciplinary, interested in everything, ongoing PhD in CS, bets tax bullshit, open sourcerer, more at zhd.dev
I still think pretty regularly about "green-according-to-blue". It's become my concept handle for explaining away the appeal of the common mistake ('naive green'?) while simultaneously warning against dismissing green on the basis of a straw man.
I read the Otherness & Control essays as they were published, and this was something like the core tension for me:
Musing on attunement has, I think, untangled some subtle confusions. I haven't exactly changed my mind, but I do think I'm a little wiser than I would have been otherwise.
There just aren't that many rivalrous goods in OSS - website hosting etc. tends to be covered by large tech companies as a goodwill/marketing/recruiting/supply-chain expense, cf. the Python Software Foundation. Major conferences usually have some kind of scholarship program for students, and either routinely pay speakers or cover costs for those who couldn't otherwise attend; community-organized conferences like PyCon tend to be more generous on both counts. Honor-system "individual ticket" vs "corporate ticket" pricing is pretty common, often with a cheaper student price too.
A key mechanism, I think, is that lots of people are aware that they have lucrative jobs and/or successful companies because of open source (that being basically the only way to make money off OSS), and are therefore willing to give back, whether directly or for the brand benefits.
I'd strongly encourage most donors to investigate donating appreciated assets; especially for assets with long-term capital gains (LTCG) treatment you get a very neat double benefit: the charity receives the full value with no capital-gains tax subtracted, and you can then claim a deduction for that full value.
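As a rough illustration of why this beats selling and then donating the proceeds, here's a minimal sketch with made-up numbers: an illustrative 20% LTCG rate, a 35% marginal income-tax rate, and a hypothetical $100k position with a $20k cost basis (your actual rates and deduction limits will differ).

```python
# Illustrative comparison: sell-then-donate vs. donate the appreciated asset.
basis = 20_000          # hypothetical cost basis
value = 100_000         # hypothetical current market value
ltcg_rate = 0.20        # assumed long-term capital gains rate
income_tax_rate = 0.35  # assumed marginal income tax rate

# Option A: sell, pay capital gains tax on the gain, donate the remaining cash.
gain = value - basis
donation_a = value - ltcg_rate * gain
deduction_saving_a = donation_a * income_tax_rate

# Option B: donate the asset in kind - the charity gets the full value,
# no capital gains tax is owed, and the deduction is on the full value.
donation_b = value
deduction_saving_b = donation_b * income_tax_rate

print(f"Sell then donate: charity gets ${donation_a:,.0f}, deduction saves ${deduction_saving_a:,.0f}")
print(f"Donate in kind:   charity gets ${donation_b:,.0f}, deduction saves ${deduction_saving_b:,.0f}")
```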
We'll suppose each planet has an independent uniform probability of being at each point in its orbit
This happens to be true of Earth, and it's a very useful assumption, but I think it's pretty neat that some systems have a mean-motion resonance (e.g. Neptune-Pluto, the Galilean moons, or almost Jupiter-Saturn) which constrains the relative positions away from a uniform distribution.
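For concreteness, here's a minimal Monte Carlo sketch of what the uniform-position assumption means: two planets on circular, coplanar orbits (illustrative radii in AU, roughly Earth and Mars), each assigned an independent uniform angle, with the mean separation estimated from the samples. A resonance would instead correlate the two angles, which is exactly what this sampling scheme rules out.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumption from the quoted setup: each planet's position is an independent,
# uniformly distributed angle along its orbit. Circular, coplanar orbits with
# illustrative radii in AU (roughly Earth and Mars).
r1, r2 = 1.00, 1.52
theta1 = rng.uniform(0, 2 * np.pi, n)
theta2 = rng.uniform(0, 2 * np.pi, n)

# Separation between the two planets for each sampled configuration.
dx = r1 * np.cos(theta1) - r2 * np.cos(theta2)
dy = r1 * np.sin(theta1) - r2 * np.sin(theta2)
print("mean separation (AU):", np.hypot(dx, dy).mean())
```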
I would love this list (even) more with hyperlinks.
"yes obviously there's a lot of gift-economy around", some further observations
I am so very tired of these threads, but I'll chime in at least for this comment. Here's last time, for reference.
I continue to think that working at Anthropic - even in non-safety roles; I'm currently on the Alignment team but have worked on others too - is a great way to contribute to AI safety. Most people I talk to, including MIRI employees, agree that the situation would be worse if Anthropic had not been founded or didn't exist.
I'm not interested in litigating an is-ought gap about whether "we" (human civilization?) "should" be facing such high risks from AI; obviously we're not in such an ideal world, and so discussions from that implicit starting point are imo useless.
I have a lot of non-public information too, which points in very different directions from the citations here. Several are from people who I know to have lied about Anthropic in the past, and many more are adversarially construed. For some I agree on the underlying fact but strongly disagree with the framing and implication.
I continue to have written red lines which would cause me to quit in protest.
I'm disappointed that no one (EA-ish or otherwise) seems to have done anything interesting with that liquidation opportunity.
I've spent a lot of time this year on tax-and-donation planning, and on helping colleagues with their plans. Some very substantial, largely still confidential, things have indeed been done, and I think they will pay off very nicely starting (probably) next year and scaling up over time.
"Yes, obviously!"
...except that this is apparently not obvious, for example to those who recommend taking a "safety role" but not a "capabilities role", rather than making an all-things-considered analysis. The latter is harder and often aversive, but solving a different, easier problem doesn't actually help.