Your dedication to acting morally is admirable. However, I think the underlying mindset behind this post is a bit counterproductive. First of all, you did not steal the money in any meaningful sense. If someone robbed a bank and used some of the money to buy dinner from Outback Steakhouse, nobody would accuse the steakhouse of robbing the bank. Furthermore, you did everything a reasonable person would do in your shoes: emailing the FTX estate, calling them, etc. It's not as though you have done nothing to return the funds, or are simply using them for your own personal interests.
Your argument about the global economy is similarly over-idealistic. Yes, theft harms the economy by eroding trust, but your actions would not meaningfully contribute to that. The vast majority of people would agree that you have fulfilled your duty.
Thus, I think simply donating the money to a worthwhile cause is the best thing you can do with it at this point. You have already done right by most deontological and virtue-ethics frameworks, so there is no reason not to lean into utilitarianism and do the most good, rather than contorting yourself to avoid everything FTX may have thought was good. As long as people who benefited from FTX attempted to return their unspent funds before donating them to charity, that would get a full pass from me.
Furthermore, if we extend the principle that you cannot use money derived from unethical sources, you would not be able to spend any money at all, since much of it derives from arguably more unethical sources. People who run massive factory farms that cause mass suffering to animals contribute lots of money to the economy; defense contractors who sell weapons to oppressive governments do as well. Refusing to touch any "tainted money" could consistently commit you to refusing payments from such people, which is obviously flawed: your doing so has little bearing on their actions and would be very difficult to implement in practice.
Could you link me to his work? If he is correct, that seems a little counterintuitive.
Given your response, it seems there should be a stronger push toward AI divestment from within the LessWrong and EA communities. Assuming that many members are heavily invested in index funds like the S&P 500, millions of dollars from the LessWrong community alone are going into the stock of companies pursuing AI capabilities research (Microsoft, Google, and Nvidia alone make up more than 10% of the index's market cap), which is not an intuitively negligible effect in my view. One could rationalize this by saying the excess gains could be invested in AI safety, but you seem to disagree with this (I am uncertain myself, given my lack of experience with AI safety non-profits).
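To put a very rough number on the scale involved, here is a minimal back-of-envelope sketch. The total holdings figure is a purely hypothetical assumption for illustration; only the ">10% of the index" weight comes from the figures above.

```python
# Hypothetical back-of-envelope: capital a community's index-fund holdings
# indirectly allocate to AI-capabilities companies.
# Both inputs are illustrative assumptions, not measured figures.

community_index_holdings = 50_000_000  # assumed total S&P 500 holdings in dollars (hypothetical)
ai_capabilities_weight = 0.10          # combined index weight of Microsoft, Google, and Nvidia (">10%" per above)

indirect_exposure = community_index_holdings * ai_capabilities_weight
print(f"Indirect exposure to AI-capabilities firms: ${indirect_exposure:,.0f}")
# With these assumed inputs, this prints $5,000,000
```

The point is only that the exposure scales linearly with total holdings, so even modest collective index investment implies a non-trivial allocation to these companies.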
Very interesting paper. Thanks for sharing! I agree with several of the limitations raised in the paper, such as the correlation between the number of uses of the oracle AI and catastrophic risk, the analogy of the AI to a nuclear power plant (obviously with the former having potentially much worse consequences), and the disincentives for corporations to cooperate with containment safety measures. However, one area I would like to question you on is the potential dangers of superintelligence. It's referred to throughout the paper, but never really explicitly explained.
I agree that superintelligent AI, as opposed to human-level AI, should probably be avoided, but if we design the containment system well enough, I would like to know how having a superintelligent AI in a box would really be that dangerous. Sure, the superintelligent AI could theoretically make subtle suggestions that end up changing the world (à la the toothpaste example you use), and exploit other strategies we are not aware of, but in the worst case I feel that still buys us valuable time to solve alignment.
Regarding open-weight models, I agree that at some point regulation has to be put in place to prevent unsafe AI development (possibly at an international level). This may not be very feasible, but regardless, I view comprehensive alignment as unlikely to be achieved before 2030, so I feel this is still the best safety strategy to pursue if existential risk mitigation is our primary concern.
As a full-throated defender of pulling the lever (given traditional assumptions such as a lack of an audience, complete knowledge of each outcome, and the productivity of the people on the tracks), there are numerous issues with your proposals:
1.) Vague Alternative: You seem to be pushing toward some form of virtue ethics/basic intuitionism, but there are numerous problems with this approach. Besides determining whose basic intuitions count and whose don't, or which virtues are important, there are very real problems when these virtues conflict. For instance, imagine you are walking at night and trying to cross a street. The signal is red, but no cars are around. Do you jaywalk? In this circumstance, one is forced to make a decision which pits two virtues/intuitions against each other. The beauty of utilitarianism is that it allows us to choose in these circumstances.
2.) Subjective Morality: Yes, utilitarianism may not be "objective" in the sense that there is no intrinsic reason to value human flourishing, but I believe utilitarianism to be the viewpoint that most closely conforms to what most people value. To illustrate why this matters, I take an example from Alex O'Connor. Imagine you need to decide what color to paint a room. Nobody has very strong opinions, but most people in your household prefer blue. Yes, blue might not be "objectively" the best, but if most of the people in your household like blue the most, there is little reason not to choose it. We are all individually going to seek what we value, so we might as well collectively agree to a system which reflects the preferences of most people.
3.) Altruism in Disguise:
Another thing to notice is that virtue ethics can be a form of effective altruism when practiced in specific ways. In general, bettering yourself as a person by becoming more rational, less biased, etc., will in fact make the world a better place, and giving time to forming meaningful relationships, engaging in leisure, etc., can actually increase productivity in the long run.
You also seem to advocate for fundamental changes in society, changes I am not sure I would agree with, but if your proposed changes are indeed the best way to increase the general happiness of the population, then achieving them would be, by definition, the goal of the EA movement. I think a lot of people look at the recent events with SBF and AI research and conclude that the EA movement is only concerned with lofty existential risk scenarios, but there is a lot more to it than that.
Edit:
Almost forgot this, but a citation: Alex O'Connor (in this video) formulated the blue-room example. We use it differently (he uses it to argue against objective morality), but he verbalized it.
Thanks for stating your objection to my argument. I agree with you, and that is why I argue for the formation of a centrist party with AI Safety as an important issue, rather than a party which literally only speaks about AI, as the latter would come across as unserious (even the Green Party does not do this with climate change). You can see the "What about other issues?" section for some general positions I think such a party should take.
I also think an effective political party could grow support for AI Safety by leveraging media coverage in order to bring to light the dangers of AI and the necessity of regulation, so even if it starts as a fringe prospect, it will gain momentum over time.
To your second point, about existing parties being able to adopt this as part of their platform, along with the potential bad optics of EAs/AI doomers "invading" existing parties: I think it is simply harder to succeed in a major party's primaries than as a third-party candidate, because primaries have an unlevel playing field that rewards catering to the idiosyncrasies of the party's base and to major party players.
To back this up, I do not see Chase Oliver getting anywhere as a Republican nominee, but he did gain some recognition as a third-party candidate. While it is in theory possible for pro-AI-Safety rationalists to gain power through the traditional parties, I feel this would take a very long time and is less likely to succeed.
Ultimately, even if you think this has a slim chance of succeeding, I see some electoral effort as the only plausible way of achieving the necessary government support for AI Safety, so I feel something like this is still worth pursuing absent a clear alternative.