I think if someone put the argument succinctly as "would you be OK then living in a world in which you suffer no disease but also matter nothing and are just kept pampered and ineffectual in a gilded cage?", then views would rightly be a lot more split. While playing cancer's advocate is the logically sound endpoint of this - yes, some more people dying of cancer is an acceptable lesser evil compared to humans simply losing meaning - it may help to step back from that particular motte and direct the assault from a different perspective. We have plenty of dystopian stories in which the dystopia is "things are really good, except there's no freedom".
That said, there's also another angle to this: a lot of people don't get to do cancer research, or any other intellectually meaningful activity. They just do menial, often crushing work. To them cancer is just a danger, not an enemy they can fight on even ground. Anyone who already feels like they have no control or meaning has only to gain from a world in which they still have no control or meaning, but at least have their material needs met.
(Of course, realistically, that is also a ridiculously optimistic view of where AI-mediated disempowerment leads us...)
Brilliant points - thank you @dr_s.
One small counterpoint. People might be doing crushing menial work, but we should not disregard their humanity. Like everyone else in the world, they also try to achieve more, may feel jealous, may take pride in their achievements, may look up to others or look down on others. They also have human agency, and I think they might not agree either.
Unfortunately, they might be the ones most affected by the incoming AI wave. When every warehouse installs Unitree robots, I don't see how they will survive - because they might not have a safety net.
I almost fully agree.
In a strict sense, the people saying that scientists losing the privilege of doing research is obviously not that big a deal compared to all the progress are right, and I think Togelius is wrong to double down on this.
However, it clearly won't only be the small fraction of people who are scientists who will be affected, and what happens to scientists (who are relatively privileged) is a poor guide to how the general population will fare; looking at everyone else paints a rather different picture.
They are daring us to attack the Motte, knowing that it makes us look like monsters, while they quietly annex the Bailey.
The reason you guys seem like monsters isn't anything specific to cancer. It's that you are willing to sacrifice anything and everything for "meaning"[1] ... and not only on your own behalf, but on behalf of everybody else.
Using words like "pre-eminence" doesn't make you sound less scary, either.
[1] ... which none of you ever seem to be able to define in any satisfying way.
On one hand, this is an astute observation: cancer (and aging and mortality in general) is used in a similar fashion to "think about the children" - to justify things which would be far more difficult to justify otherwise.
That’s definitely the case.
However, there are two important object-level differences, and those differences make this analogy somewhat strained. Both of these differences have to do with the “libertarian dimension” of it all.
The opposition to "think about the children" mostly comes from libertarian impulses, and as such it notes that children are equally hurt (or possibly even more hurt) by "think of the children" measures. So the "ground case" for "think of the children" is false: those measures are not about protecting the children, but about establishing authoritarian control over both children and adults.
Here is the first object-level difference. Unlike "think about the children", "let's save ourselves from cancer" is not a fake goal. Most of us are horrified and tired of seeing people around us dying from cancer, and are rather unhappy about our own future odds in this sense. (And don't get me started on what I think about aging, and about the current state of anti-aging science. We absolutely have to defeat obligatory aging ASAP.)
And that’s a rather obvious difference. But there is also another difference, also along the dimension of “libertarian values”. “Think of the children” is about imposing prohibition and control, about not letting people (children and adults) do what they want.
Here we are not talking about some evil AI companies trying to prohibit people from doing human-led research. We are talking about people wanting to restrict and prohibit creation of AI scientists.
So, in this sense, it is a false analogy. Mentioning the badness of the "think of the children" approach first of all appeals to the libertarian impulse within us, the libertarian impulse which reminds us how bad those restrictive measures are, how costly they are for all of us.
The same libertarian impulse reminds us that in this case the prohibitionist pressure comes from the other side. And yes, a case, and perhaps even a very strong case, can be made for the need to restrict certain forms of AI. But I don’t think it makes sense to appeal to our libertarian impulse here.
Yes, it might be necessary to impose restrictions, but let's at least not pretend that imposing those restrictions is somehow libertarian. (And no, we have to find a way to impose those restrictions in such a fashion that they are consistent with rapid progress against cancer and aging. Sorry to say, but it's intolerable to keep having so much of both cancer and aging around us. We really can't agree to postpone progress in these two areas; the scale of ongoing suffering and loss of life is just too great.)
Epistemic Status: Philosophical. Based on the debate featuring Togelius at NeurIPS 2025.
"The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all" - H.L. Mencken
At the NeurIPS 2025 debate panel, when most panelists were discussing replacing humans at all levels of scientific progress, Dr. Julian Togelius stood up and protested vehemently. In fact, he went as far as to call it "evil".
His argument was that people need agency, love what they do, and find happiness in discovering something themselves, and that cutting humans out of this loop removes that agency. He pointed to the young researchers gathered there and noted that we would be depriving them of an activity they love and a key source of meaning in their lives.
The first question that came up was: what if AI finds a cure for cancer? By stopping it, we would be causing harm to people with cancer. Dr. Togelius was, in fact, fine with some people still dying of cancer if it means humans don't lose their meaning in life.
He then posted the following tweet, and the response was quite an eye-opener about how people think. A vast majority of people called him an evil man who, for his personal interests, was trying to kill people, and the like.
There is a difficult reality in defending fundamental rights: to support a fundamental principle, one often has to defend the most hated thing in the room.
As the journalist H.L. Mencken so eloquently put it, the trouble with defending rights is that you spend most of your time defending scoundrels. In this case, the so-called scoundrel is not a person. The scoundrel is cancer.
Dr. Togelius has been forced into the unenviable position of serving as the defence attorney for cancer (or at least, for a slower cure). He has to argue for a world in which this villain persists longer, simply to ensure that it is we humans who defeat it. He is defending the enemy's right to stay on the battlefield, because he does not want the enemy to be defeated by magic if that magic also defeats humans by proxy.
We have all heard "Think of the Children" used as a thought-terminating cliché. It has been used so much, and in so many varied circumstances, that we understand it for what it is: invoking an unassailable moral good to shut down every argument. My view is that the "Think of the Cancer" argument is exactly the same - a way to shut down every argument without even hearing out the defence of human agency.
We see this everywhere. Radical technologies are introduced through the unassailable moral shield of the "medical edge case".
Once the infrastructure is built for the edge case, driven by our compassion, it is inevitably scaled to the general case, which is rarely to our liking.
The critics of Dr. Togelius are doing the exact same thing. They are using the medical edge case of curing cancer to smuggle in the general case - which is the total obsolescence of human intellectual effort. They are daring us to attack the Motte, knowing that it makes us look like monsters, while they quietly annex the Bailey.
Right now, Dr. Togelius is being painted as evil - an egotistical person who values his own intellectual satisfaction over the lives of dying patients. But his point is much larger, and far more important. He is holding the line for a (now fragile) principle: the principle of human pre-eminence.
For the last 10,000 years, which is but a blink of an eye in nature's terms, humanity has clawed its way to a position of pre-eminence over nature. We moved from being hunted and living in panic to a life of luxury and moderate happiness. We no longer live in abject fear, worrying about every rustle of the leaves, struggling to survive. We have our ancestors to thank for this, each of them improving our lives a little, until we arrived where we are today - able to move around without fear, enjoying our leisure, and free to pursue our ambitions and hopes.
At this stage, bringing an otherworldly and superior entity to life - one which could send us back to where we started - is hubris of the highest degree. Even if, by some absolute miracle, the superintelligence turns out to be benign, we still lose our pre-eminence. We become pets - well-kept, healthy, immortal pets, living in a terrarium managed by a superintelligence - losing the very reason for our existence.
As a comment in a Guardian article so succinctly put it, "We did not vote for this - a few people changing the lives of our species for ever".
We are on the verge of giving up our agency and pre-eminence over our world. We should not let the "Cure for Cancer" bully us out of discussing whether that trade is actually worth it.
Human agency and AI are themes I have been exploring for years. In 2017, I wrote an open-source novel, UTOPAI. It depicts a world where the protagonists (Don Quixote and Sancho Panza) live as pets. Desperate for human agency, and inflamed by reading old novels, they try to get jobs in a world that has optimised away the need for human struggle.
GitHub link - https://github.com/rajmohanutopai/utopai/blob/main/UTOPAI_2017_full.pdf