"There was a threshold crossed somewhere," said the Confessor, "without a single apocalypse to mark it.  Fewer wars.  Less starvation.  Better technology.  The economy kept growing.  People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from.  They came even to me, in my time, and rescued me.  Earth cleaned itself up, and whenever something threatened to go drastically wrong again, the whole attention of the planet turned in that direction and took care of it.  Humanity finally got its act together."

— Eliezer Yudkowsky, Three Worlds Collide


A common sentiment among people worried about AI x-risk is that our world is on track to stagnate, collapse, or otherwise come to a bad end without (aligned) AGI to save the day.

Scott Alexander:

[I]f we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality.

@disturbance, in a recent LW post that got lots of comments:

Statement: I want to deliberately balance the caution and the recklessness in developing AGI, such that it gets created in the last possible moment so that I and my close ones do not die.


A seemingly straightforward implication of this view is that we should therefore be willing to take on some amount of risk in order to build towards AGI faster than we would in a world where we had the luxury to take our time.

I think some of these sentiments and their implications are based on a mistaken view of the relative difficulty of particular technical and social challenges, but here I want to focus on a totally different point: there are lots of ways that things could go well without AGI (at least for a while).

Even if positive scenarios without AGI are unlikely or unrealistic given our current circumstances and trajectory, it's useful to have a concrete vision of what a good medium-term future without AGI could look like. I think it's especially important to take a moment to reflect on these possible good futures because recent preliminary governance wins, even if they succeed without qualification, are mainly focused on restriction and avoidance of bad outcomes rather than on building towards particular positive outcomes.

The rest of this post is a collection of examples of technologies, ideas, projects, and trends unrelated to AGI that give me hope and joy when I see them being worked on or talked about. It's not meant to be exhaustive in any sense - mostly it is just a list of areas that I personally enjoy reading about and would consider pursuing professional opportunities in.

Most of them involve solving hard technological and social problems. Some are quite speculative, and likely to be intractable or extremely unlikely to come to pass in isolation. But making incremental progress on any one of them is probably robustly positive for the world, and lucrative and fulfilling for the people working on it[1]. And progress tends to snowball, as long as there's no catastrophe to stop it.

As you read through the list, try to set aside your own views and probabilities on AGI, other x-risks, and fizzle or stagnation scenarios. Imagine a world where it is simply a given that humanity has time and space to flourish unimpeded for a time. Visualize what such a world might look like, where solutions are permitted to snowball without the threat of everything being cut short or falling to pieces. The purpose of this post is not to argue that any such world is particularly likely to be actualized; it is intended to serve as a concrete reminder that there are things worth working towards, and concrete ways to do so.


Energy abundance; solar and nuclear energy

[Figure: global solar capacity added vs. forecasts; from AukeHoekstra on Twitter]

Energy abundance has the potential to revolutionize many aspects of civilization, and there are fewer and fewer technological barriers holding it back. Regulatory barriers (e.g. to nuclear power, and to all kinds of construction and growth generally) may remain, but in a hypothetical world free of unrelated obstacles, energy abundance is looking more and more like a case of "when" and "how," rather than "if".

Prediction markets

Robin Hanson has been talking about the benefits of prediction markets since 1988, but it feels like with the recent growth of Manifold and the continued operation of real-money prediction markets (PredictIt, Kalshi, Polymarket) things are starting to snowball.

I'm most excited about the future of real-money policy prediction markets on all sorts of legal and governance issues. The possibility of such markets might seem distant given the current regulatory climate around prediction markets themselves, but that seems like the kind of thing that could change rapidly under the right circumstances. Imagine if, instead of a years-long slog through the FDA, the process for getting a new drug approved was a matter of simply convincing a prediction market that the efficacy and harms met certain objective risk thresholds, with private insurers competing to backstop the liability.
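Here's a minimal sketch of how such an approval rule might work, assuming binary markets that pay out $1 on YES; every name, price, and threshold below is invented for illustration, not a proposal for actual regulatory parameters:

```python
# Hypothetical sketch of a prediction-market-based approval rule.
# Assumes binary markets that pay $1 on YES, so the YES price (0-1)
# is a rough market-implied probability (ignoring fees and interest).

def approve_drug(efficacy_yes_price: float,
                 severe_harm_yes_price: float,
                 efficacy_floor: float = 0.90,
                 harm_ceiling: float = 0.02) -> bool:
    """Approve iff the market judges the efficacy endpoint likely
    enough and the severe-harm endpoint unlikely enough."""
    return (efficacy_yes_price >= efficacy_floor
            and severe_harm_yes_price <= harm_ceiling)

# Markets price "drug meets primary endpoint" at $0.93 and
# "severe adverse events exceed control group" at $0.01:
print(approve_drug(0.93, 0.01))  # True under these illustrative thresholds
```

The interesting design work is in choosing the resolution criteria and thresholds; the rule itself is almost trivially simple, which is part of the appeal.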


Space travel and space tech

Kinda self-explanatory. Starlink and SpaceX seem to be leading the way at the moment, but I'm excited to follow other private and national space efforts.

Life extension

Bryan Johnson is all over Twitter these days, documenting his self-experimentation in a quest to extend his own lifespan, ideally indefinitely. This is also a favorite topic of OG transhumanists on LessWrong and elsewhere, and it's nice to see it get some attention from someone with serious resources.

I haven't looked too closely into the specifics of the "Blueprint protocol", but from a distance it strikes me as about the right mix of correctly combining and synthesizing a bunch of existing science in an evidence-based way, and mad experimentation of his own design.

Occupational licensing reform

A lot of occupational licensing in the U.S. and around the world is mostly about rent-seeking. Reform efforts have enjoyed some success in recent years, and it's nice to see some feel-good / commonsense wins in U.S. politics, even if they're probably relatively minor in the grand scheme of things. Shoshana Weissmann is a great Twitter follow on this issue. More generally, efforts to reduce credentialism seem like a promising avenue towards incremental reclamation of territory ceded to Moloch.

Automation of all kinds

I think efforts to combine software, robotics, logistics, mechanical engineering, and the AI we already have to reduce the amount of labor that humans do, especially grunt work, are very cool and likely to lead to large and real productivity gains.

Self-driving cars and trucks are perhaps the flashiest example of this kind of automation, but I expect more gradual automation in restaurants, factories, cleaning, maintenance, and other areas of the economy that currently rely on a lot of relatively low-skill, low-paying human labor to unlock a lot of human capital. Avoiding squandering that unlocked potential or ceding it right back to Moloch will be a challenge, but it's a challenge that looks a bit more solvable with a lot more collective wealth.

Semi-related: AI Summer Harvest

Charter cities

Scott Alexander's posts on this topic are always a treat. More so than any specific project or proposal, what excites me about charter cities is the abstract appeal of starting from scratch. In software, legacy systems often complicate and slow development of new features, and I think the same dynamic exists in many traditional cities. Beneath the streets of New York City are layers and layers of utilities and pipes, some over a century old. Charter cities offer an opportunity to start fresh: literal greenfield projects.

Genetic engineering and biohacking

In humans, plants, and animals. There's too much cool stuff here to talk about or even list.

Rationality

The original sequences hold up amazingly well, and modern LessWrong is still pretty great. I think there is a lot more room to grow, whether it is by writing and re-writing key ideas with better (or just different) presentation, spreading such ideas more widely, or developing more CFAR-style curriculum and teaching it to people of all ages.

Some of my own favorite recent rationality writing is buried in Eliezer's million word collaborative BDSM fiction, and could really use a more accessible presentation. (Though some of it, e.g. on Bayesian statistics and the practice of science, already stands pretty well on its own, despite the obscure format.)

Effective altruism

Also pretty great. I could criticize a lot of things about EA, but my overall feelings are pretty well-captured by adapting a classic Churchill quote: "EA is the worst form of altruism except for all those other forms that have been tried from time to time..."

I think an EA that was free from the threat of AGI x-risk for a while has the potential to be even more amazing. In such a world, my guess is that EA would succeed at directing a sizable fraction of the wealth and productivity gains unlocked by other areas in this list towards tackling a lot of big problems (global poverty, global health, animal welfare, non-AGI x-risks, etc.) much faster than those problems would get addressed in a world without EA.

Cryonics

A common view is that current cryonics tech is speculative, and a hard sell to friends and family. Another worry is that civilization hardly seems stable enough to keep you well-preserved and on track until technology advances to the point where you can reliably be brought back as yourself, into a world you're happy to be brought back into.

Those concerns might be valid, but they look like problems on which it is possible to make incremental progress. If you mainly care about immortality for yourself and your loved ones, my guess is your best bet is to push hard for cryonics research and adoption, as well as overall civilizational stability and adequacy. Incremental progress in any of those areas seems far more likely to increase the likelihood you are eventually brought back, compared to pushing towards AGI as fast as possible.[2]
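To make the incremental-progress argument concrete, here's a toy decomposition of revival probability; the factors and the numbers are pure illustration, not estimates:

```python
# Toy model: revival requires several things to go right, so the
# probabilities (assumed independent here) roughly multiply.
# All numbers are made up for illustration.

p_preserved = 0.3   # preservation quality turns out good enough
p_stability = 0.4   # your provider and civilization stay intact
p_tech      = 0.5   # revival tech is eventually developed and used

print(f"{p_preserved * p_stability * p_tech:.2f}")        # 0.06

# Incremental progress on any single factor raises the product
# proportionally, e.g. 50% better preservation:
print(f"{p_preserved * 1.5 * p_stability * p_tech:.2f}")  # 0.09
```

Because the factors multiply, pushing on whichever one is cheapest to improve raises the overall odds, without needing to gamble on AGI timelines.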

Georgism

Georgism and land value taxes are an elegant idea for aligning incentives, funding public goods, and combating rent-seeking. I think land value taxes are simple enough that there is some hope of incremental adoption in some form or jurisdiction in the relatively near future. A decent introduction and source for further reading is this winner from Scott's book review contest a couple of years ago.
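To see the incentive difference in miniature (with invented figures), compare a conventional property tax to a land value tax; only the latter leaves the decision to build completely untaxed:

```python
# Compare a conventional property tax to a land value tax (LVT).
# All values and rates here are invented for illustration.

land_value        = 300_000   # unimproved site value
improvement_value = 700_000   # value of the building on the site

property_tax   = 0.01 * (land_value + improvement_value)  # 1% of everything
land_value_tax = 0.05 * land_value                        # 5% of land only

print(property_tax)    # 10000.0 -- grows if you build more, taxing improvement
print(land_value_tax)  # 15000.0 -- unchanged no matter what you build, so
                       # holding valuable land idle is what gets penalized
```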

Balsa Research

Zvi's thing, focused on addressing low-hanging fruit and finding achievable, shovel-ready wins through public policy. Whether Balsa in particular succeeds at its mission or not, I think it is an exciting model for how rationalists can enter the space of public policy-making and politics in an effective and robustly positive way.

dath ilan

dath ilan is Eliezer's fictional medianworld. The defining characteristic is that the real Eliezer's most important personality traits are the median among the entire planet's population. Because Eliezer himself is unusually smart, kind, and honorable compared to the average earthling, the population of dath ilan is much smarter, kinder, and better coordinated as a whole compared to Earth.

Many details and features of dath ilan would probably look pretty strange to an average earthling. Some of this strangeness is due to Eliezer's other unusual-for-Earth traits and personal preferences, but the deeper lesson of dath ilan is that many of their systems of governance, coordination mechanisms, methods of doing science, and modes of thought are actually derived from a combination of reasoning from first principles and relatively few assumptions about human nature and behavior.

It's not that there's a particular "dath ilani" way of aggregating information or resolving conflicts or running an economy. Rather, the desire to flourish, cooperate, build, and enjoy the benefits of technological civilization and other nice things is common among a very wide distribution of possible humans. It might take a certain baseline level of population-level intelligence, coordination ability, and lucky history to actually achieve all those nice things, but civilizations which do manage the feat will probably share some commonalities with each other and with dath ilan that Earth currently lacks.

dath ilan is currently getting along pretty well without AGI, making it an inspiring picture of what one possible future of Earth without AGI could look like. In worlds where Earth succeeds in the long-term, I expect that it will share many features with dath ilan. But I think a more interesting and exciting question involves thinking about ways that the best versions of Earth and dath ilan will still be different.


When I imagine a world that manages to solve problems or make significant progress in even a few of the areas above, I feel a lot more optimistic about the possibility of such a world confronting the challenge of building an AGI safely, eventually. And even if we still fail at that ultimate challenge, such a world would be a nicer place to live in the meantime.

If you're feeling despair about recent AI progress, consider turning some of your attention or professional efforts elsewhere. There are lots of opportunities in research, startups, and established companies that have nothing to do with pushing the frontier of AI capabilities[3], and lots of ways to have fun, build meaning, and enjoy life that don't involve thinking about AGI at all.

[1] Also, none of them come with the moral hazards, uncertainties, and x-risks associated with working directly towards AGI.

[2] My guess is that reconstructing a well-preserved brain and developing indefinite life extension technology for biological humans are probably about equally hard (i.e. both trivial) for a superintelligence.

[3] Even startups that are focused on applying current AI in service of automating and improving existing business processes in diverse industries are probably net-positive for the world under a wide variety of progress models, and likely to be personally lucrative if you choose wisely.

Though for both short-term viability and moral hazard reasons, you probably want to be careful to avoid pure "wrapper" startups whose business model depends on continued access to current and future third-party frontier models. But automating business processes or building integrations using current AI at established companies seems positive or at least non-harmful, as long as you're confident in your ability not to depend on or contribute to "hype" around future advances.

Comments
Roko:

dath ilan is currently getting along pretty well without AGI

I hate to have to say it, but you are generalizing from fictional evidence

Dath ilan doesn't actually exist. It's a fantasy journey in Eliezer's head. Nobody has ever subjected it to the rigors of experimentation and attempts at falsification.

The world around us does exist. And things are not going well! We had a global pandemic that was probably caused by government labs that do research into pandemics, and then covered up by scientists who are supposed to tell us the truth about pandemics. THAT ACTUALLY HAPPENED! God sampled from the actual generating function for the universe, and apparently that outcome was sufficiently likely to get picked.

A world without any advanced AI tech is probably not a good world; collapsing birthrates, rent extraction, dysgenics and biorisk are probably fatal.

I'm not updating about what's actually likely to happen on Earth based on dath ilan.

It seems uncontroversially true that a world where the median IQ was 140 or whatever would look radically different (and better) than the world we currently live in. We do not in fact, live in such a world.

But taking a hypothetical premise and then extrapolating what else would be different if the premise were true, is a generally useful tool for building understanding and pumping on intuitions in philosophy, mathematics, science, and forecasting.

If you say "but the premise is false!!111!" you're missing the point.

What you should have said, therefore, is "Dath ilan is fiction; it's debatable whether the premises of the world would actually result in the happy conclusion depicted. However, I think it's probably directionally correct -- it does seem to me that if Eliezer was the median, the world would be dramatically better overall, in roughly the ways depicted in the story."

So I think the remaining implied piece is that humans are getting smarter? I actually think we are and will continue to, in the relevant ways. But that's quite debatable.

I don't think that will happen as a foregone conclusion, but if we pour resources into improved methods of education (for children and adults), global health, pronatalist policies in wealthy countries, and genetic engineering, it might at least make a difference. I wouldn't necessarily say any of this is likely to work or even happen, but it seems at least worth a shot.

I was thinking more of the memetic spread of "wisdom" - principles that make you effectively smarter in important areas. Rationality is one vector, but there are many others. I realize the internet is making us dumber on average in many ways, but I think there's a countercurrent of spreading good advice that's recognized as good advice. Anyone with an internet connection can now get smarter by watching attractively-packaged videos. There's a lot of misleading and confusing stuff, but my impression is that a lot of the cream does rise to the top if you're actively searching for wisdom in any particular area.

O O:

Life extension as Bryan Johnson lays it out is mostly pseudoscience. He is optimizing for biomarkers, instead of the actual issue. It remains to be proven if these proxies remain useful proxies when optimized for.
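This worry is essentially Goodhart's law. A toy simulation, with completely invented dose-response curves, shows how a proxy can keep improving while the true target quietly gets worse:

```python
# Toy Goodhart's-law simulation: optimizing a proxy (a biomarker)
# instead of the true target (aging rate). Both curves are invented.

def true_aging_rate(dose: float) -> float:
    # Hypothetical: mild benefit at low doses that reverses when overdone.
    return 1.0 - 0.3 * dose + 0.4 * dose ** 2

def biomarker(dose: float) -> float:
    # The proxy just keeps "improving" (dropping) as the dose increases.
    return 1.0 - 0.5 * dose

doses = [i / 10 for i in range(11)]
proxy_best = min(doses, key=biomarker)        # dose that optimizes the proxy
true_best  = min(doses, key=true_aging_rate)  # dose that is actually best

print(proxy_best, true_aging_rate(proxy_best))  # ~1.0, 1.1: worse than dose 0
print(true_best, true_aging_rate(true_best))    # ~0.4, 0.94
```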

The key problem is that it seems difficult to iterate on anti-aging tech without potentially failing to extend life over many generations. Bryan Johnson's ideas may work (although if I had to bet on it, I'd put it at no chance at all), but we won't find out for sure until he has irreversibly aged. AI can theoretically let us predict the effects of life extension drugs without having to wait for the drugs to fail on people.

Don't see why we necessarily need AGI for this, but AlphaFold-7 or something like it probably helps a lot.

I see a similar trend in other points like genetic screening and space travel. There's a rose-tinted view here of current efforts succeeding or making any substantial progress in these categories.

SpaceX itself isn’t even economically viable without government subsidies. Substantial space exploration is probably nowhere in sight. We still can’t guarantee rockets won’t explode on launch. (Space flight is hard, and our progress is nowhere near enough for something like space tourism.)

Similarly, the state of genetic screening broadly seems to be that there is weak evidence you can reduce the odds of rare diseases by some amount (with large error bars). A far cry from selecting for higher IQ or stronger children.

The default view for most of these fields seems hopeless without much progress in our lifetimes.

We also probably need lots of new ideas to solve climate change, and new ideas will become scarce in the world as populations decline and society collectively shifts to serve the needs of old people. AGI helps us solve this.

SpaceX itself isn’t even economically viable without government subsidies

I'm pretty sure that's false. Starlink is a money printer and SpaceX dominates the commercial market.

And not only false, but completely backwards. The government has been much more favorable to SpaceX's competitors than to SpaceX, and on an objective scale I think if SpaceX didn't have to delay so much to get approval from FAA and Fish & Wildlife, they'd be significantly farther ahead right now.

It's probably true that SpaceX wouldn't have gotten off the ground early on without some help (mostly in the form of contracts) from the govt, but that's not what you said -- you said "isn't even economically viable," not "would have needed more VC investment to get started but would have easily paid it back by now"

As for guaranteeing rockets won't explode on launch... SpaceX is getting there. Give them another decade & they'll be there I think. The key, as they always say, is to do it so freaking many times that every possible way things can go wrong has in fact gone wrong in the past and been fixed. Like with commercial flights.

O O:

Starlink may be, but space flight is not. This is an important distinction because it means it will be hard to scale space flight without scaling up Starlink to cover losses in space flight first.

SpaceX got $2.3B in government funding in the last year [1], which is not an insignificant amount. (For reference, “In 2022, revenue doubled to $4.6 billion, helping the company reduce its loss last year to $559 million from $968 million, the WSJ reported.”) If you separate space flight from Starlink revenue, the government is probably a proportionally larger source of funding.
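Taking those two quoted figures at face value (and treating a year of funding and a year of revenue as comparable, which they may not exactly be):

```python
# Quick arithmetic on the two figures quoted above.
govt_funding  = 2.3e9   # government funding over the last year, per [1]
total_revenue = 4.6e9   # 2022 revenue, per the WSJ figure

print(govt_funding / total_revenue)  # 0.5 -- roughly half of total revenue
```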

The government is willing to take a loss on contracts to fund activities like space exploration because it’s not constrained by the need for profit. There is no good commercial reason to do these flights, so I would imagine it would be difficult to raise VC money if it would take decades to potentially recoup investments. 

As for guaranteeing rockets won't explode on launch... SpaceX is getting there. Give them another decade & they'll be there I think. The key, as they always say, is to do it so freaking many times that every possible way things can go wrong has in fact gone wrong in the past and been fixed. Like with commercial flights.

The bar for interesting space travel is more like colonization of planets that are far away. I don't doubt we will eventually fix this problem in the future, but progress is probably a lot more limited than what people are imagining or hoping.

"With commercial entities like Virgin Galactic, SpaceX and Blue Origin set to be joined by new players in the coming years, experts in the field are predicting that affordable suborbital travel will be available to most of us within a couple of decades." [2]

Assuming no AGI, this is the likely trajectory. Maybe in decades we can take these suborbital flights without needing to be a billionaire, but I'd imagine most people are thinking of going to Mars or somewhere farther, rather than just leaving Earth momentarily then dipping back. 

I think if we wanted something like commercial flights to Mars or Europa in our lifetimes, we would already have suborbital flights commercialized and semi-frequent non-commercial flights to those destinations.

 

  1. https://futurism.com/the-byte/spacex-tesla-government-money-npr
  2. https://www.space.com/not-far-out-can-games-predict-the-future-of-commercial-space-travel#:~:text=With%20commercial%20entities%20like%20Virgin,within%20a%20couple%20of%20decades.

So, you retract your claim that SpaceX is not economically viable without government subsidies?

O O:

What would SpaceX look like without government subsidies or contracts? 

"X got its start with government subsidies and contracts" is a veeeeerrry different claim from "X is not even economically viable without government subsidies." The distinction between subsidies and contracts is important, and the distinction between getting started and long-term viability is important.

O O:

I don’t think, if almost half their revenue is still coming from the government (and probably more of their space flight revenue), you can say SpaceX only got its start with government subsidies and contracts.

I also find the comparison with plane flights strange. There is a lot of value for consumers in going between countries. Business flights and tourism mean many flights produce more value than they cost, giving us reasons to fund them. In comparison, there aren’t many ways for a space tourist flight to produce more value than it consumes.

So plane flights should not be the parallel drawn. Maybe deep-sea submarine rides are a more accurate comparison, which are still very expensive and dangerous. The primary customer of deep-sea submarine rides is still the government and government funded researchers.

I hope to drive the point home that no one should expect much progress in space tourism by default without AI advancements. We landed on the moon 53 years ago and despite the overwhelming scientific progress since, you still can’t take a flight there by choice.

Look, I'm not here to argue about the long-term trajectory of space flight with you, I'm here to object to your false and misleading claim about SpaceX. If you concede that point then I'll go away.

O O:

SpaceX would exist, but it would look very different.

Climate change is exactly the kind of problem that a functional civilization should be able to solve on its own, without AGI as a crutch.

Until a few years ago, we were doing a bunch of geoengineering by accident, and the technology required to stop emitting a bunch of greenhouse gases in the first place (nuclear power) has been mature for decades.

I guess you could have an AGI help with lobbying or public persuasion / education. But that seems like a very "everything looks like a nail" approach to problem solving, before you even have the supposed tool (AGI) to actually use.

O O:

We use fossil fuels for a lot more than energy and there’s more to climate change than fossil fuel emissions. Energy usage is roughly 75% of emissions. 25% of oil is used for manufacturing. My impression is we are way over targets for fossil fuel usage that would result in reasonable global warming. Furthermore, a lot of green energy will be a hard sell to developing nations.

Maybe replacing as much oil with nuclear as politically feasible reduces emissions, but does it reduce them enough? Current models[1] assume we invent carbon capture technology somewhere down the line, so things are looking dire.

It’s clear we have this idea that we will partially solve this issue in time with engineering, and it does seem that way if you look at history. However, recent history had the advantage of constant population growth with an emphasis on new ideas and entrepreneurship. If you look at what happened to a country like Japan when its age pyramid shifted, you can see that a country gets stuck with backward tech as society restructures itself to take care of the elderly.

So I think any assumption that we will have exponential technological progress is “trend chasing”, so to speak. A lot of our growth curves almost require mass automation or AGI to work. Without that you probably get stagnation. Economists projected this in 2015 [2], and it seems not much has changed since [3].

I think it’s fine to have the opinion that the risk of failure with AGI could be higher than the risks from stagnation and other existential risks, but I also think having an unnecessarily rose-tinted view of progress isn’t accurate. In that case, you may be overestimating AGI risk relative to other risks.

  1. https://newrepublic.com/article/165996/carbon-removal-cdr-ipcc-climate-change

  2. https://www.mckinsey.com/featured-insights/employment-and-growth/can-long-term-global-growth-be-saved

  3. https://www.conference-board.org/topics/global-economic-outlook#:~:text=Real GDP growth%2C 2023 (%25 change)&text=Global real GDP is forecasted,to 2.5 percent in 2024.

[anonymous]:

Max, many of these problems have stayed stagnant longer than your remaining life expectancy already.

Calibrate yourself to "the future will be similar to the past", which is the baseline case without AGI, and assume, for the sake of argument, that no current AI really gets used - the government bans it all before it becomes strong enough to be more than a toy.

Invert the view the other way. What could you do on these problems with a human-level AI? With a moderate superintelligence? You could solve all the problems listed with a human-level AGI, except the medical ones, where you require superintelligence at some level.

People complain about my posts being too long, so I won't make an exhaustive list, but current AI models have difficulty with robotics. It is a hard problem. You likely need several more OOMs of compute to solve robotics and to run your solution in real time.

If you can solve robotics - AI models controlling various forms of robot could do nearly all manufacturing and maintenance tasks for almost all industry on Earth - many of the problems trivialize.

Energy, climate change, and space travel become trivial.

Georgism and land use aren't really solved, but areas that do allow prebuilt robotic modular buildings could transform themselves at incredible rates. Essentially from an empty field to a new Hong Kong in a year. Buildings could be replaced in a month. This is because everything would be made out of large modules that can fold up and fit on a truck, and robots guided by AGI would do all the steps. A lot of maintenance would be done by module swaps: a robot comes and grabs the entire bathroom and pulls it off the side of the building instead of fixing the plumbing fault.

The reason you need superintelligence for medicine is that it is a complex system with too many variables for living human brains to consider them all. And you need to give every person on earth, or at least first-world residents, the benefit of a medical super-expert if you want them to live as long as possible. Human medical expert decision making does not scale. Current AI is not strong enough, and to bypass regulatory barriers, medical AI needs to be overwhelmingly and obviously superhuman in order to convince regulators.

Just to add a detail paragraph: the simple reason is that an older person, even if anti-aging medicine worked partially, is an unstable system. It's like balancing on a ball bearing. Any small disturbance destabilizes the system, the body fails to compensate because of degraded components, and death eventually follows. You would need someone to be monitored 24/7, with an implant able to emit drugs - including new ones invented on demand to handle genetic variants - to keep them stable past a certain age. There are thousands of variables, and they interrelate in complex ways a single human cannot learn before they themselves are dead of aging.

So no, a more calibrated view of the future is that if humans do manage to coordinate and ban AI without defecting, the next 60-100 years of your life, Max - and that's all you will see; I would put 999:1 odds on it - are going to be like the last 60 years, but with more regulations and an older population.

This means that banning superintelligence from ever existing is choosing death - for yourself and probably every human being alive right now - because solving these problems will likely require generations of trial and error, as aging mechanisms get missed and the human patients die anyway, and only gradually do lifespans extend. If you think you can coordinate forever, then theoretically 10^50 humans could one day exist, vs. the small cost of 10^11* humans dead now (10 generations of trial and error before correct life extension), but I will leave the problems with this for another post.

*Approximately 80 billion people, or slightly less than all humans who have ever lived. Lesser pauses, like a 6-month delay, are megadeaths instead of gigadeaths.

I feel like adding human brain uploading / mind emulation to the list changes things. On the one hand, that's kinda cheating since having digital humans is close to AGI in a lot of ways. On the other hand, if the 'no AGI for now' was because of lack of a robust alignment solution, and digital humans was a way around that, then it would fit the hypothetical.

I would also emphasize meta developments, like better forms of governance and contract forming (automated negotiations to reduce overhead?). In so far as one of the major things holding us back from tackling some of the engineering-feasible-but-socially-hard problems is coordination failures, then improving coordination ability would be a win on many fronts.

A funny thing: The belief that governments won't be able to make coordinated effective decisions to stop ASI, and the belief that progress won't be made on various other important fronts, are probably related. I wonder if seeing the former solved will inspire people into thinking that the others are also more solvable than they may have otherwise thought. Per the UK speech at the UN, "The AI revolution will be a bracing test for the multilateral system, to show that it can work together on a question that will help to define the fate of humanity." Making it through this will be meaningful evidence about the other hard problems that come our way.

I believe you misinterpreted the quote from disturbance. They were implying that they would bring about AGI at the last moment at which their brain would still be salvageable by it, so that they could be repaired, presumably in expectation of immortality.

I also don't think the perspective that we would likely fail as a civilization without AGI is common on LessWrong. I would guess that most of us would expect a smooth-ish transition to The Glorious Future in worlds where we coordinate around [as in don't build] AI. In my opinion the post is good even without this claim however.

Ah, you're right that the surrounding text is not an accurate paraphrase of the particular position in that quote.

The thing I was actually trying to show with the quotes is that "AGI is necessary for a good future" is a common view, but the implicit and explicit time limits that are often attached to such views might be overly short. I think such views (with attached short time limits) are especially common among those who oppose an AI pause.

I actually agree that AGI is necessary (though not sufficient) for a good future eventually. If I also believed that all of the technologies here were as doomed and hopeless as the prospect of near-term alignment of an artificial superintelligence, I would find arguments against an AI pause (indefinite or otherwise) much more compelling.

This post received a lot of objections of the flavor that many of the ideas and technologies I am a fan of either won't work or wouldn't make a difference if they did.

I don't even really disagree with most of these objections, which I tried to make clear up front with apparently-insufficient disclaimers in the intro that include words like "unrealistic", "extremely unlikely", and "speculative".

Following the intro, I deliberately set aside my natural inclination towards pessimism and focused on the positive aspects and possibilities of non-AGI technology.

However, the "doomer" sentiment in some of the comments reminded me of an old Dawkins quote:

We are all atheists about most of the gods that humanity has ever believed in. Some of us just go one god further.

I feel the same way about most alignment plans and uses for AGI that a lot of commenters seem to feel about many of the technologies listed here.

Am I a doomer, simply because I (usually) extend my pessimism and disbelief one technology further? Or are we all doomers?

I don't really mind the negative comments, but it wasn't the reaction I was expecting from a list that was intended mainly as a feel-good / warm-fuzzy piece of techno-optimism. I think there's a lesson in empathy and perspective-taking here for everyone (including me) which doesn't depend that much on who is actually right about the relative difficulties of building and aligning AGI vs. developing other technologies.

These things are either unlikely to succeed or just not that important.

Including dath ilan is especially strange. It is a work of fiction, it is not really appealing, and it is not realistic (that is, not internally consistent) either.

The world population is set to decline over the course of this century. Fewer humans will mean fewer innovations as the world grows greyer, and a smaller workforce must spend more effort and energy taking care of a large elderly population. Additionally, climate change will eat into some other fraction of available resources to simply keep civilization chugging along that could have instead been used for growth. The reason AGI is so important is because it decouples intelligence from human population growth.

The world population is set to decline over the course of this century.

This is another problem that seems very possible to solve or at least make incremental progress on without AGI, if there's a will to actually try. Zvi has written a bunch of stuff about this.