I'm really sorry for your loss. It's a crushing thing and as my parents get older I often feel that gnawing terror and anxiety as well. I hope you can find some peace eventually.
I'm sorry for your loss. I would just like to point out that proceeding cautiously with AGI development does not mean that we'll reach longevity escape velocity much later. Actually, I think if we don't develop AGI at all, the chances for anyone celebrating their 200th birthday are much greater.
To make the necessary breakthroughs in medicine, we don't need a general agent who can also write books or book a flight. Instead, we need highly specialized tool AI like AlphaFold, which in my view is the most valuable AI ever developed, and there's zero chance that it will seek power and become uncontrollable. Of course, tools like AlphaFold can be misused, but the probability of destroying humanity is much lower than with the current race towards AGI that no one knows how to control or align.
It is my opinion as an aging researcher, for what it is worth, that the chances of living 200 years by anyone currently alive round to 0% if we do not develop AGI. We may get away with not developing strong superintelligence, but I consider the development of AGI a necessity. Knowing this, you may proceed accordingly and do your EV calculations. Maybe it is worth the risk or maybe it is not.
Could you explain why exactly AGI is "a necessity"? What can we do with AGI that we can't do with highly specialized tool AI and one or more skilled human researchers?
Not the person you're responding to, but my guess is that without general AI, we wouldn't know the right questions to ask or which specialized AIs to create.
Thanks for your comment! If we talk about AGI and define this as "generally as intelligent as a human, but not significantly more intelligent", then by definition it wouldn't be significantly better at figuring out the right questions. Maybe AGI could help with that by enhancing our capacity for searching for the right questions, but it shouldn't be a fundamental difference, especially if we weigh the risk of losing control over AI against it. If we talk about superintelligent AI, it's different, but the risks are even higher (however, it's not easy to draw a clear line between AGI and ASI).
All in all, I would agree that we lose some capabilities to shape our future if we don't develop AGI, but I believe that this is the far better option until we understand how to keep AGI under control or safely and securely align it to our goals and values.
Fair point. I basically agree with that - AGI would give us broader capabilities than narrow AI, but certainly would also carry greater risk.
What about the enhancement of human intelligence that was discussed here? (For example How to make superkids - LessWrong).
Those approaches probably have more than a 1% chance of success and could accelerate anti-aging research, even if you consider the current research situation critically stalled.
I am sorry for your loss. Death is natural but it is so, so bad.
I'm assuming you're posting this here in part to foster a discussion of this tension between preventing deaths in the short term and taking more risks on killing everyone by getting alignment wrong if we rush toward AGI.
This tension is probably going to become more widespread as the concept of AGI becomes more prominent. Many people will want faster progress, in hopes of saving themselves or their loved ones. Longtermism is pretty dominant here on LW, but it is very much a minority view in society at large. Thus, this urge to rush will have to be countered by spreading an awareness of how rushing toward AGI improves the odds of survival for older people while risking the lives of their children and grandchildren. And all of the glorious generations to follow - while most people aren't longtermist, the idea of unimaginable flourishing does hold some weight in their minds, and it isn't that hard to imagine in broad form.
I face this dilemma myself. At 50 and in imperfect health, my likely end falls somewhere in the middle of my predicted range of hitting longevity takeoff. Any small speedup or slowdown might shift my odds substantially. I don't know what I'd do if I had real power over our rate of progress, but I don't. So I'll continue advocating that we slow down as much as we can, while also working as fast as we can to align our first AGI/ASIs. That speed will improve our odds of collective survival in the likely case that we can't slow down substantially. And it might even save a few more of the precious unique minds now alive.
🕯️
My mom died last December, and part of the grief is in how hard it is to say (to people who loved her, and miss her, like I do, but don't have the same awareness of history) what you've said here about your mom, and timelines, and how much potentially fantastic future our mothers missed out on. Thank you for putting some of that part of "that lonely part of the grief" into words.
Sorry that you also lost your mom. 🫂
A sentiment that didn't quite make it into the piece is that my anger and grief has been transformed into steadfastness by my love for her. The idea for this post came from a sense of determination that her death would mean something to others. That steadfastness has also given new fuel to my other projects. I'm determined to get my book finished in time to influence the course of AI. I'm also determined to live the best life I can, and one worthy of my mom's sense of fun, if we really do only have dozens of months left.
That seems nice. I have not acquired steadfastness (yet (growth mindset?)) but perhaps "find things from which I could justifiably draw steadfastness as a resulting apparent trait" would be a useful tactic to try to apply. I have mostly optimized for flexibility, such as to be able to react to whatever happens, and then be able to nudge everything closer back towards The Form Of The Good... but the practical upshot doesn't look like steadfastness from the outside, I don't think.
Mom would have approved of less "apparent chaos from a distance without the ability to see the causal details" in my life. One of her folksy mantras was "be normal and good" and it was a family joke that my brother and I would always object "we can't do that! look at the world, you have to pick one!"
Sorry for your loss.
I believe that if we stop AI research then everyone alive today will be dead by 2150 with near certainty. LEV without AGI does not seem realistic at all…
In the next two decades we're likely to reach longevity escape velocity: the point at which medicine can increase our healthy lifespans faster than we age.
I have the same belief and have thought about how bad it’d be if my loved ones died too soon.
Sorry for your loss.
My sympathies to you and your family. This is one of life's sadnesses that we all experience at some point. While I don't think this will help ease anything (what could I say that would?), I have to say that when my father died it was too soon -- or felt that way. He lived a long life and died from aging (at nearly 102), lasting a few days after the pain became so severe one morning that the hospice nurse began regularly administering morphine. It was time, and expected. When it happened, it was still too soon.
It was easier for me when my mother died about 6 months later, as I did not have the same sense of loss, of being lost, of wishing there was more I had done, of regrets regarding the relationship, and more. And in her case there was even a bit of your experience of an initial mis-assessment (in our case by a sibling) that was the final straw, with my mother dying a couple of weeks later.
I hope your father is taking it well (that sounds bad because I know it's nothing but pain, but I cannot think of a better way to say it) and has other reasons to view life as worth living, rather than feeling that same deep "too soon" loss and a lack of interest in what life offers.
I'm sorry for your loss. It is something no one should have to go through.
My father was diagnosed with Parkinson's last year. I have processed and accepted the fact that he is going to die.
Under the circumstances, he is most likely going to die from artificial intelligence at about the same time that I do.
There is no temptation you could give me that would make me risk the end of all things. Not prevention of my father's death. Not the prevention of my death. Not the prevention of my partner's death. I do not need AGI. Humanity as a whole does not need AGI, nor do most people even want it.
Death is horrible, which is why everyone should be strongly advocating for AGI to not be built, until it is safe. By default, it will kill literally everyone.
If you find yourself weighing the lives of everyone on earth and deciding for yourself whether they should be imperiled, then you have learned the wrong lesson from stories of comic book supervillains. It's not our choice to make, and we are about to murder everyone's mothers.
Angry that doctors had spent years teaching her to delay treatment by dismissing her concerns.
Sorry for your loss, but thank you for reminding us how precious life is.
The quoted sentence from your post is, I believe, the main reason why doctors (doctors, not surgeons) will be one of the first high-status professions to be replaced by AI in the next couple of years. If you can get comprehensive blood work done (blood could be drawn at a drop-in booth in a mall and sent to a lab), have some pictures taken of your body, and then have a conversation with an AI about the symptoms you're experiencing, there is no longer any need to visit a grumpy, stressed doctor in person. Nor would you want to, when you can get much better and more consistent results from an AI doctor that treats you with respect and dignity and doesn't check its watch every minute.
Death is bad and should go away.
Consider whether working on cryonics does better both on would-have-saved-your-mom and on risks-the-lightcone than working on AI.
I don't understand the argument about the trillion expected lives. Why is it assumed that after the singularity we will have trillions more human children? Why is this not a mind upload and the end of reproduction in its current form? And the people alive at the time of the singularity will simply exist in virtual worlds with AI agents?
I find it strange to ask an ASI to emulate a human so that you can raise it in your virtual world. And the roughly 8 billion improved, uploaded human minds may not strive to emulate as many variations of human brains as possible in order to fill the universe with them.
It was a cold and cloudy San Francisco Sunday. My wife and I were having lunch with friends at a Korean cafe.
My phone buzzed with a text. It said my mom was in the hospital.
I called to find out more. She had a fever, some pain, and had fainted. The situation was serious, but stable.
Monday was a normal day. No news was good news, right?
Tuesday she had seizures.
Wednesday she was in the ICU. I caught the first flight to Tampa.
Thursday she rested comfortably.
Friday she was diagnosed with bacterial meningitis, a rare condition that affects about 3,000 people in the US annually. The doctors had known it was a possibility, so she was already receiving treatment.
We stayed by her side through the weekend. My dad spent every night with her. We made plans for all the fun things we would do when she was feeling better.
Monday the doctors did more tests.
Tuesday they told us the results.
My mom wasn't going to wake up. The disease had done too much damage to her brain.
We cried. We said our goodbyes. We kept her company.
A little over a week later, she passed away in peace.
My mom's name was Judy. Not Judith; just Judy.
She grew up in a small town in Illinois. The kind of place people come from but never go to.
She left as soon as she could. She enrolled in the nearby community college, where she met my dad, a recent transplant from Baltimore. He got her attention with his motorcycle. She won him over with a plate of cookies.
They got their degrees, then got married. They honeymooned at Disney World. My mom loved it so much she decided they were going to move there.
That took a few years. First they earned their bachelor's and master's of fine arts. Then they spent time teaching in Ohio. They saved their money, and before long they had enough. They made the move to Florida.
Within the year my mom was pregnant, and in a sign of her connection to Disney, I was due the day of EPCOT's grand opening. My dad wanted us to go so I could be born at Disney. My mom reasonably refused. I must have been disappointed, because I waited three more weeks to join them.
My three sisters soon followed, and my mom took a decade off from teaching to stay home with us. We spent many fun days playing in the yard, swimming in the pool, and tending the garden. When the afternoon sun rose high and threatened to melt us, we'd hide inside to play games, watch TV, and make art.
My mom made a lot of art. She threw pots. She wove fibers. She painted in oils and drew with colored pencils. A few times a year she'd show her art, along with my dad's, at weekend art festivals. They'd sell a few pieces, but never a lot. They always kept the best stuff on our walls at home.
Most other weekends we were at Disney. On a typical Saturday we'd wake up early, drive through the sunrise to get there before opening, then race to do as many rides as we could before the lines got long. We'd spend the whole day playing, as my mom liked to say, until we were too tired to have fun anymore.
We all had our favorite rides. One of my sisters loved Big Thunder Mountain. Another always wanted to ride the Carousel. I liked Mr. Toad's Wild Ride, and my dad was a fan of anything he could take a nap on.
My mom also had a favorite ride: Peter Pan's Flight.
She never said this to me, but I think she loved that ride because she saw herself in Wendy. Like Wendy, she had to grow up, but that didn't mean she had to stop having fun. With help from my dad—her Peter—and us kids—her Lost Boys—she had many great adventures for many happy years.
This past year was an especially happy one. She and my dad celebrated their 50th wedding anniversary. She got to meet her second grandchild. And everyone came home to help her celebrate her favorite time of year—Christmas.
My mom loved almost everything about Christmas. Decorating the tree. Baking cookies. Finding the perfect present to give someone. But what she loved most of all was spending time with family. Having everyone together, all in one place, sharing the little moments of our lives.
Now, we have to go on living our little moments without her.
I've had a lot of feelings about my mom's death.
I've felt sad. I've felt lost. I've felt frustrated and depressed and like wallowing in despair.
I've also been angry.
Angry that she didn't go to the doctor at the first sign of infection.
Angry that doctors had spent years teaching her to delay treatment by dismissing her concerns.
But most of all angry that the world couldn't keep her alive for just a few years more.
Family history suggests she could have easily lived another 5 or 10 or 20 years. And if she had managed to live that long, then she might have lived a lot longer.
In the next two decades we're likely to reach longevity escape velocity: the point at which medicine can increase our healthy lifespans faster than we age. That might sound like science fiction to you, but we're surprisingly close. And with rapid advances in AI, we're accelerating the research necessary to make it a reality.
So while I'm sad my mom didn't live longer, I'm devastated that she didn't live long enough.
And at the same time I'm terrified, because the very AI progress that could have saved her may ultimately be the end of us all.
I've spent 25 years thinking about the potential dangers of AI. Not everyone agrees, but I believe the creation of artificial superintelligence will pose an existential threat to all living beings. I also believe we can avoid this fate, but only if we develop the theories and techniques necessary to steer AI towards supporting life's flourishing.
Before my mom's death, it was easy for me to say we should go slow. That we should take the time to do additional safety research.
Now I know firsthand the terrible price of taking even one day more.
The accounting is grim. If we go faster, we might save more lives, but risk the extinction of all life on Earth. If we go slower, we sacrifice millions, but protect the future lives of trillions yet to be born.
Most days I don't think about this tradeoff. I just do what I can toward creating AI safely.
Unfortunately, it's not a lot.
Others are able to do more. I applaud their efforts. I support their work. And I worry that, despite doing our best, we'll still fail.
Yet, I have hope. Maybe I shouldn't, but I do. It's a trait I inherited from my mother. She was a fundamentally hopeful person. I am, too.
I don't know how we'll do it, but I have hope we'll find our way through.
And if we do, it won't bring back my mom. But she will be among the last of the moms we had to lose. I hope that will be her legacy, and the legacy of every person we lost too soon.
Cross-posted from my blog.