lexande

Some altruistically-motivated projects would be valid investments for a Checkbook IRA. I guess if you wanted to donate 401k/IRA earnings to charity you'd still have to pay the 10% early-withdrawal penalty (though not the tax, if the donation was deductible), but that seems the same whether it's pretax or a heavily appreciated Roth.
The math in the comment I linked works the same whether the chance of money ceasing to matter in five years' time is for happy or unhappy reasons.
My impression is that the "Substantially Equal Periodic Payments" option is rarely a good idea in practice because it's so inflexible in not letting you stop withdrawals later, potentially even hitting you with severe penalties if you somehow miss a single payment. I agree that most people are better off saving into a pretax 401k when possible and then rolling the money over to Roth during low-income years or when necessary. I don't think this particularly undermines jefftk's high-level point that tax-advantaged retirement savings can be worthwhile even conditional on relatively short expected AI timelines.
I prefer pre-tax contributions over Roth ones now because of my expectation that probably there will be an…
You have to be really confidently optimistic or pessimistic about AI to justify a major change in consumption rates; if you assign a significant probability to "present rate no singularity"/AI winter futures then the benefits of consumption smoothing dominate and you should save almost as much (or as little) as you would if you didn't know about AI.
Note that it is entirely possible to invest in almost all "non-traditional" things within a retirement account; "checkbook IRA" is a common term for a structure that enables this (though the fees can be significant and most people should definitely stick with index funds). Somewhat infamously, Peter Thiel did much of his early angel investing inside his Roth IRA, winding up with billions of dollars in tax-free gains.
In particular it seems very plausible that I would respond by actively seeking out a predictable dark room if I were confronted with wildly out-of-distribution visual inputs, even if I'd never displayed anything like a preference for predictability of my visual inputs up until then.
It seems like a major issue here is that people often have limited introspective access to what their "true values" are. And it's not enough to know some of your true values; in the example you give the fact that you missed one or two causes problems even if most of what you're doing is pretty closely related to other things you truly value. (And "just introspect harder" increases the risk of getting answers that are the results of confabulation and confirmation bias rather than true values, which can cause other problems.)
Here's an attempt to formalize the "is partying hard worth so much" aspect of your example:
It's common (with some empirical support) to approximate utility as proportional to log(consumption); I'll use base-10 logs here, since the units are arbitrary anyway. Suppose Alice has $5M of savings and expected future income that she intends to consume at a rate of $100k/year over the next 50 years, and that her zero utility point is at $100/year of consumption (since it's hard to survive at all on less than that). Then she's getting log(100000/100) = 3 units of utility per year, or 150 over the 50 years.
Now she finds out that there's a 50% chance that the world will be destroyed in 5 years. If she maintains her old…
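The comparison above can be sketched numerically. This is a rough sketch under the stated assumptions (base-10 log utility, a $5M lifetime budget spread over 50 years, a $100/year subsistence zero point, and a simple two-phase plan: one consumption rate for the first 5 years, then the remainder spread evenly over the last 45); the function names are illustrative, not from the original comment.

```python
import math

ZERO_POINT = 100       # $/year subsistence level where utility is zero
BUDGET = 5_000_000     # Alice's total savings plus expected future income
HORIZON = 50           # years

def utility(consumption, years):
    """Base-10 log utility accumulated over `years` at a flat consumption rate."""
    return years * math.log10(consumption / ZERO_POINT)

# Baseline: $100k/year for 50 years -> 3 utility/year, 150 total
baseline = utility(100_000, HORIZON)

# Now a 50% chance the world is destroyed in 5 years. Consider plans that
# spend c_early per year for the first 5 years, then spread what's left
# over the remaining 45 years (which only count with probability 0.5).
def expected_utility(c_early, p_doom=0.5):
    c_late = (BUDGET - 5 * c_early) / 45
    return utility(c_early, 5) + (1 - p_doom) * utility(c_late, 45)

# Grid-search for the best amount of front-loading.
best = max(range(100_000, 900_000, 1_000), key=expected_utility)
print(baseline)  # 150.0
print(best)      # optimum is only around $182k/year, ~1.8x the baseline
```

Under these assumptions, the optimal response to a 50% five-year doom probability is to raise consumption from $100k/year to only about $182k/year rather than to spend everything immediately, which is the consumption-smoothing point made above.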
Why would you expect her to be able to diminish the probability of doom by spending her million dollars? Situations where someone can have a detectable impact on global-scale problems by spending only a million dollars are extraordinarily rare. It seems doubtful that there are even ways to spend a million dollars on decreasing AI xrisk now when timelines are measured in years (as the projects working on it do not seem to be meaningfully funding-constrained), much less if you expected the xrisk to materialize with 50% probability tomorrow (less time than it takes to e.g. get a team of researchers together).
It's true that claims that poor people now are much richer than poor or even rich people 300 years ago rely somewhat on cherrypicking which axes to measure, but the cited claims of "100-fold productivity increase" since then *also* rely on cherrypicking which axes to measure.
We haven't gotten 100x more productive in obtaining oxygen, certainly, nor in many still-scarce resources people care about (childcare might be a particularly clear example). So people still experience poverty because civilization is still tightly bottlenecked on some resources.
I don't think there are any resources which have gotten 100x more abundant per capita but that people still desperately scrabble to afford basic levels of. And for resources that are abundant but not hyperabundant, it's clear how redistribution like UBI can help.