Interesting. I really hope that some of them do something, soon. Time is fast running out. There's no point being a rich philanthropist (or rich, or a philanthropist) if the world gets destroyed before you deploy your resources.
Thanks, that's good to hear. What form does the pledge take? Do you have a DAF that contains half your shares? When do you think the next liquidation opportunity might be? (I guess you weren't eligible for the one in May[1]?)
I'm disappointed that no one (EA-ish or otherwise) seems to have done anything interesting with that liquidation opportunity.
Have you donated any of your equity yet? If not, why not?
Shared on the EA Forum, with some commentary on the state of the EA Community (I guess the LessWrong rationality community is somewhat similar?)
In practice, bans can be lifted, so "never" is never going to become an unassailable law of the universe. And right now, it seems misguided to quibble over "Pause for 5, 10, 20 years" versus "Stop for good", given the urgency of the extinction threat we are currently facing. If we're going to survive the next decade with any degree of certainty, we need an alliance between B1 and B2, and I'm happy for one to exist.
Re "invest in AI and spend the proceeds on AI safety" - another consideration other than the ethical (/FDT) concerns, is that of liquidity. Have you managed to pull out any profits from Anthropic yet? If not, how likely do you think it is that you will be able to[1] before the singularity/doom?
Maybe this would require an IPO?
One problem is that this assumption of the ASI society being mostly structured as well-defined persistent individuals with long-term interests is questionable.
Very questionable. Why would it be separate individuals in a society, rather than being - or very rapidly collapsing into - a singleton? In fact, the dominant narrative here on LW has always featured a singleton ASI as the main (existential) threat. And my story here reflects that.
being able to discover new laws of nature and to exploit the consequences of that.
Ok, but I think that still basically leads to the same end result: all humans (and all biological life) dead.
It seems odd to think that such a discovery would be more likely to lead to the AI disappearing into its own universe (as in Egan's Crystal Nights) than to it simply obliterating our Solar System with its newfound powers. Nothing analogous has happened in the history of human science and tech development (we have only become more destructive of other species and their habitats).
then it would be better to use an example not directly aimed against “our atoms”
All the atoms are getting repurposed at once, with no special focus on those in our bodies (the story focuses on them only to get the reader to empathise). Maybe I could've included more description of non-living things getting destroyed.
mucking with quantum gravity too recklessly, or something in that spirit
I'm trying to focus on plausible science/tech here.
they need to do experiments in forming hybrid consciousness with humans to crack the mystery of human subjectivity, to experience that first-hand for themselves, and to decide whether that is of any value to them based on the first-hand empirical material (losing that option without looking is a huge loss)
Interesting. But even if they do find something valuable in doing that, there's not much reason to keep the vast majority of humans around. And as you say, they could just end up as "scans", with very few being run as oracles.
(I say this as someone who has already put a lot of their money where their mouth is.)