Just someone who wants to learn about the world. I think about AI risk sometimes, but I still have a lot to learn.
You're right about (1). I seem to have misread the chart, presumably because I was focused on worms.
Concerning (2), I don't see how your argument implies that the marginal returns to new resources are high. Can you clarify?
The formulations are lifted from the post essentially verbatim, so this response might be some evidence that the post would benefit from being reworked a bit before people vote on it.
But I think I already addressed the fundamental reply at the beginning of section 2. The theses themselves are lifted from the post verbatim; however, I state there that they are incomplete.
Maybe you'd class that under "background knowledge"? Or maybe the claim is that, modulo broken parts, motivation, and background knowledge, different people can meta-learn the same effective learning strategies?
I would really rather avoid making strict claims about learning rates being "roughly equal." I'd prefer to say that, given the same learning environment (say, a lecture) and backgrounds, human learning rates are closer to equal than human performance in learned tasks.
I think it's important to understand that the two explanations I gave in the post can work together. After more than a year, I would state my current beliefs as something closer to the following thesis:
Given equal background and motivation, there is far less inequality in the rates at which humans learn new tasks than in how humans perform learned tasks. By "less inequality" I don't mean "roughly equal," as your prediction-specifications would indicate; human learning rates are still highly unequal, despite the fact that nearly all humans have similar neural architectures. As I explained in section two of the post, a similar architecture does not imply similar performance. A machine with a broken part is nearly structurally identical to a machine with no broken parts, yet it does not work.
The personal strategies for slowing aging are interesting, but I was under the impression that your post's primary thesis was that we should give money to, work for, and volunteer for anti-aging organizations. It's difficult to see how doing any of that would personally make me live longer, unless we're assuming unrealistic marginal returns to more effort.
In other words, it's unclear why you're comparing anti-aging and cryonics in the way you described. In the case of cryonics, people are looking for a selfish return. In the case of funding anti-aging, people are looking for an altruistic return. A more apt comparison would be about prioritizing cryonics vs. personal anti-aging strategies, but your main post didn't discuss personal anti-aging strategies.
I appreciate the detailed and thoughtful reply. :)
I and others think that anti-aging and donating to SENS are probably more important than most EA cause areas (especially short-term ones) besides X-risk, for the reasons below.
I agree that anti-aging is neglected in EA compared to other short-term, human focused cause areas. The reason is likely because the people who would be most receptive to anti-aging move to other fields. As Pablo Stafforini said,
Longevity research occupies an unstable position in the space of possible EA cause areas: it is very "hardcore" and "weird" on some dimensions, but not at all on others. The EAs in principle most receptive to the case for longevity research tend also to be those most willing to question the "common-sense" views that only humans, and present humans, matter morally. But, as you note, one needs to exclude animals and take a person-affecting view to derive the "obvious corollary that curing aging is our number one priority". As a consequence, such potential supporters of longevity research end up deprioritizing this cause area relative to less human-centric or more long-termist alternatives.
I wrote a post about how anti-aging might be competitive with longtermist charities here.
Data from human trials suggest many of these approaches have already been shown to reduce the rate of cognitive impairment, cancer, and many other features of aging in humans. Given these changes are highly correlated with biological aging, the evidence strongly suggests the capacity of the approaches mentioned to slow biological aging in humans.
Again, this is nice, and I think it's good evidence that we could achieve modest success in the coming decades. But in the post you painted a different picture. Specifically, you said,
The 'white mirror' of aging is a world in which biological age is halted at 20-30 years, and people maintain optimal health for a much longer or indefinite period of time. Although people will still age chronologically (exist over time) they will not undergo physical and cognitive decline associated with biological aging. At chronological ages of 70s, 80s, even 200s, they would maintain the physical appearance and much lower disease risk of a 20-30-year-old.
If humans make continuous progress, then eventually we'll get here. I have no issue with that prediction. But my objection concerned the pace and tractability of research. And it seems like there's going to be a ton of work going from modest treatments for aging to full cures.
One possible response is that the pace of research will soon speed up dramatically. Aubrey de Grey has argued along these lines on several occasions. In his opinion, there will be a point at which humanity wakes up from its pro-aging trance. From this perspective, the primary value of research in the present is to advance the timeline when humanity wakes up and gets started on anti-aging for real.
Unfortunately, I see no strong evidence for this theory. People's minds tend to change gradually in response to gradual technological change. The researchers who said this year that "I'll wait until you have robust mouse rejuvenation" will just say "I'll wait until you have results in humans" when you have results in mice. Humans aren't going to just suddenly realize that their whole ethical system is flawed; that rarely ever happens.
More likely, we will see gradual progress over several decades. I'm unsure whether the overall project (i.e. longevity escape velocity) will succeed within my own lifetime, but I'm very skeptical that it will happen within, e.g., 20 years.
In addition, in the past 2 years, human biological aging has already been reversed using calorie restriction, and with thymic rejuvenation, as measured by epigenetic (DNAm) aging.
I don't think either of these results is strong evidence of recent progress. Calorie restriction has been known about for at least 85 years. The thymic rejuvenation result came from a tiny trial with ten participants, and the basic result has been known since at least 1992.
The recent progress in epigenetic clocks is promising, and I do think that's been one of the biggest developments in the field. But it's important to see the bigger picture. When I open up old Alcor Magazine archives, or old longevity books from the 1980s and 1990s, I find pretty much the same arguments that I hear today for why a longevity revolution is near. People tend to focus on a few small laboratory successes without considering whether the rate of laboratory successes has gone up, or whether it's common to go quickly from laboratory success to clinical success.
Given that 86 percent of clinical trials eventually fail, and the marginal returns to new drug R&D have fallen exponentially over time, I want to know what, specifically, should make us optimistic about anti-aging that's different from previous failed predictions.
I understand that the number of longevity biotech companies may (wrongly) suggest that the field is well-funded. But this number is not an accurate proxy for the relative funding received by basic geroscience to develop cures for aging, from which these companies are spun out.
If the number of companies working on rejuvenation biotechnology did not accurately represent the amount of total effort in the field, then what was the point of bringing it up in the introduction?
I think many EAs assume academia is an efficient market that will self-correct to prioritise research with the greatest potential impact.
Interestingly, I get the opposite impression. But maybe we talk to different EAs.
Aubrey de Grey, who has significant insight into the landscape of funding for anti-aging, believes that $250-500 million over 10 years is required to kickstart the field sufficiently so that larger sources of funding will flow in.
I don't doubt Aubrey de Grey's expertise or his intentions. But I've heard him say this line too, and I've never heard him give any strong arguments for it. Why isn't the number $10 billion or $1 trillion? If you think about comparably large technological projects in the past, $500 million is a paltry sum; yet, I don't see a good reason to believe that this field is different from all the others. Moreover, there is a well-known bias whereby people within a field are more optimistic about their work than people outside of it.
For example, a drug or cocktail of therapies that extend life of all humans on Earth by 10 years essentially allows 10-years' worth of people who would otherwise have died of aging (~400 million people) to potentially reach the point at which AI solves aging and hence, longevity escape velocity.
This is only true so long as the drug can be distributed widely almost instantaneously. By comparison, it usually takes vaccines several decades to be widely distributed. I also find it very unlikely that any currently researched treatment will add 10 years of healthy life discontinuously. Again, progress tends to happen gradually.
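For what it's worth, the ~400 million figure in the quoted passage is at least consistent with simple back-of-the-envelope arithmetic. Here's a minimal sanity check, assuming roughly 100,000 age-related deaths per day worldwide (that input is my assumption, a commonly cited estimate, not a figure from the comment):

```python
# Back-of-the-envelope check of the "~400 million" figure.
# Assumption (mine, not from the original comment): roughly 100,000
# age-related deaths per day worldwide.
age_related_deaths_per_day = 100_000
years_of_delay = 10  # the hypothetical therapy adds ~10 years of life

people_spared = age_related_deaths_per_day * 365 * years_of_delay
print(f"{people_spared / 1e6:.0f} million")  # → 365 million, i.e. roughly 400 million
```

Of course, this only bounds the potential impact; distribution lags and gradual rollout would shrink the realized number considerably.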
Oops, that was a typo. I meant curing cancer. And I overlooked the typo twice! Oops.
This seems untrue on its face. What we mean by "curing aging" is negligible senescence.
And presumably what the cancer researcher meant by curing cancer was something like, "Can reliably remove tumors without them growing back"? Do you have evidence that we have not done this in mice?
In addition to the reasons you mentioned, there's also empirical evidence that technological revolutions generally precede the productivity growth that they eventually cause. In fact, economic growth may even slow down as people pay costs to adopt new technologies. Philippe Aghion and Peter Howitt summarize the state of the research in chapter 9 of The Economics of Growth,
Although each [General Purpose Technology (GPT)] raises output and productivity in the long run, it can also cause cyclical fluctuations while the economy adjusts to it. As David (1990) and Lipsey and Bekar (1995) have argued, GPTs like the steam engine, the electric dynamo, the laser, and the computer require costly restructuring and adjustment to take place, and there is no reason to expect this process to proceed smoothly over time. Thus, contrary to the predictions of real-business-cycle theory, the initial effect of a “positive technology shock” may not be to raise output, productivity, and employment but to reduce them.
As an effective altruist, I like to analyze how altruistic cause areas fare on three different axes: importance, tractability and neglectedness. The arguments you gave for the importance of aging are compelling to me (at least from a short-term, human-focused perspective). I'm less convinced that anti-aging efforts are worth it according to the other axes, and I'll explain some of my reasons here.
The evidence is promising that in the next 5-10 years, we will start seeing robust evidence that aging can be therapeutically slowed or reversed in humans.[...]In the lab, we have demonstrated that various anti-aging approaches can extend healthy lifespan in many model organisms including yeast, worms, fish, flies, mice and rats. Life extension of model organisms using anti-aging approaches ranges from 30% to 1000%:
When looking at the graph you present, a clear trend emerges: the more complex and larger the organism, the less progress we have made on slowing aging for that organism. Given that humans are much more complex and larger than the model organisms you presented, I'd caution against extrapolating lab results to them.
I once heard from a cancer researcher that we had, for all practical purposes, cured cancer in mice, but the results have not yet translated into humans. Whether or not this claim is true, it's clear that progress has been slower than the starry-eyed optimists had expected back in 1971.
That's not to say that there hasn't been progress in cancer research, or biological research more broadly. It's just that progress tends to happen gradually. I don't doubt that we can achieve modest success; I think it's plausible (>30% credence) that we will have FDA approved anti-aging treatments by 2030. But I'm very skeptical that these modest results will trigger an anti-aging revolution that substantially affects lifespan and quality of life in the way that you have described.
Most generally, scientific fields tend to have diminishing marginal returns, since all the low-hanging fruit tends to get plucked early on. In the field of anti-aging, even the lowest-hanging fruit (i.e. the treatments you described) doesn't seem very promising. At best, it might deliver an impact roughly equivalent to adding a decade or two of healthy life. At that level, human life would be meaningfully affected, but the millennia-old cycle of birth-to-death would remain almost unchanged.
Today, there are over 130 longevity biotechnology companies
From the perspective of altruistic neglectedness, this fact counts against anti-aging as a promising field to go into. The fact that there are 130 companies working on the problem, with only minor laboratory success in the last decade, indicates that the marginal returns to new inputs are low. One more researcher, or one more research grant, will add little to the rate of progress.
In my opinion, if robust anti-aging technologies do exist in say, 50 years, the most likely reason would be that overall technological progress sped up dramatically (for example, due to transformative AI), and progress in anti-aging was merely a side effect of this wave of progress.
It's also possible that anti-aging science is a different kind of science than most fields, and we have reason to expect a discontinuity in progress some time soon (for one potential argument, see the last several paragraphs of my post here). The problem is that this argument is vulnerable to the standard reply usually given against arguments for technological discontinuities: they're rare.
(However, I do recommend reading some material investigating the frequency of technological discontinuities here. Maybe you can find some similarities with past technological discontinuities? :) )