I've been trying my best to think of something that AGI could do which I really love and deeply endorse.

I can think of some nice things. New kinds of animals. Non-habit-forming heroin. A stupid-pointless-disagreements-between-family-members fixomatic, maybe. Turn everyone hot.

None of this makes me joyful and hopeful. It just sounds neat and good. Humans seem pretty damn good at inventing tech etc. ourselves anyway.

I think I might have assumed or copy-pasted the "AI is truly wonderful if it goes truly well" pillar of my worldview. Or maybe I've forgotten the original reasons I believed it.

What exactly did that great AI future involve again?

10 Answers

the gears to ascension

Jan 30, 2024

I want to be able to calculate a plan that converts me from biology into a biology-like nanotech substrate that is made of sturdier materials all the way down, one that can operate smoothly at 3 kelvin, with a correspondingly low rate of energy use; more clockwork-like - or would it be almost a superfluid? Both, probably: clockwork-like, but sliding through wide, shallow energy wells in a superfluid-like synchronized dance of molecules. Then I'd like to spend 10,000 years building an artful airless megastructure out of similarly strong materials as a series of rings in orbit around Pluto. I want to take a trip to Alpha Centauri every few millennia for a big get-together of space-native beings in the area. I want to replace information death with cryonic sleep, so that nothing that was part of a person is ever forgotten again. I want to end all forms of unwanted suffering. I want to variously join and leave low-latency hiveminds, retaining my selfhood and agency while participating in the dance of a high-trust, high-bandwidth organization that respects the selfhood of its members and balances their agency smoothly as we create enormous works of art in deep space. I want to invent new kinds of culinary arts for the 2-to-3-kelvin lifestyle. I want to go swimming in Jupiter.

I want all of Earth's offspring to ascend.

quetzal_rainbow

Jan 28, 2024

I mean, like, immortality? Abundance? Perfect health? World peace, via all the means mentioned plus improved coordination? Human intelligence augmentation, where by intelligence I mean everything, including but not limited to creativity, wisdom, clarity of perception, and self-awareness? Space colonization, which is, first of all, the ability to opt out of our current civilization and try something else?

You can say "but humans can invent all of this eventually anyway", but I dare to remind you that there are 14,000 children dying every day and, conditional on alignment, AGI is the fastest way to stop it.

Do you care that much about which way is fastest? Just "get the things you like a bit sooner" doesn't feel super compelling to me.

14,000 children dying every day means that getting a solution even an hour earlier saves ~583 of them in expectation, which seems really worth it. "Children not dying" is a pretty compelling thing to want even a bit sooner.
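For what it's worth, the ~583 figure is just that daily number spread evenly over 24 hours; a minimal back-of-the-envelope check (assuming the 14,000-per-day figure quoted above):

```python
# Back-of-the-envelope only; the 14,000/day figure comes from the comment above.
deaths_per_day = 14_000
deaths_per_hour = deaths_per_day / 24
print(f"~{deaths_per_hour:.0f} children per hour")  # prints: ~583 children per hour
```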

lukehmiles
Good point.

O O
Do you think there's a pathway to immortality without AGI? We still haven't made any more progress on aging than the Romans did.

MondSemmel

Jan 28, 2024

Check out the Fun Theory sequence, if you haven't already.

Thanks for the pointer. Haven't read it.

Writer

Jan 28, 2024

"Turn everyone hot"

If we can do that with AGI, then almost surely we can also solve aging, which would be truly great.

We'll solve it either way, right?

Writer
I'd guess so, but with AGI we'd go much, much faster. Same for everything you've mentioned in the post.

O O
Without AGI, no chance in our lifetimes or any that come soon. Possibly never, given dysgenic effects and a declining world population.

lukehmiles
AGI? Not just a few tricks with chemistry and proteins?

[anonymous]
Current biomedical knowledge says no: it's extremely complex, and simple tricks have unacceptable failure rates. Remember, you want to turn everyone hot without killing 10-50 percent of them during the first treatment, or having all the subjects develop untreatable fatal cancers a few years afterwards. These aren't hypotheticals: cellular reprogramming, one of the few actual techniques that seems to reverse aging, has side effects like these. If you want to make everyone hot and keep them alive for centuries, you need many thousands, maybe millions, of separate techniques, many specific to a single living patient. Or, essentially, a network of powerful AGI and ASI systems that model each patient using a model of human bodies too complex for any human to learn, then choose the drugs or genetic edits to make, maximizing the chance of success according to the model. The simulation models would also be updated with every patient treated, which is not something any study or any living doctor can benefit from. And all of this could happen in seconds, so the medical system could save patients in the process of dying from failure modes current medicine is unaware of.
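For concreteness, here is a minimal toy sketch of the select-treat-update loop that comment is gesturing at. Everything in it (PatientModel, the intervention names, the update rule) is a hypothetical illustration, not a real system or anyone's actual proposal:

```python
import random

# Purely illustrative toy sketch; PatientModel, the intervention names, and the
# update rule are hypothetical stand-ins, not any real medical system.

class PatientModel:
    """Toy stand-in for a patient-specific simulation model."""

    def __init__(self, shared_prior):
        # Start from the shared prior learned across all previously treated patients.
        self.estimates = dict(shared_prior)

    def predict_success(self, intervention):
        # Estimated probability that this intervention helps this patient.
        return self.estimates.get(intervention, 0.5)

    def update(self, intervention, succeeded):
        # Crude online update: nudge the estimate toward the observed outcome.
        old = self.estimates.get(intervention, 0.5)
        self.estimates[intervention] = 0.9 * old + 0.1 * (1.0 if succeeded else 0.0)


def treat(model, interventions):
    # Choose the intervention the model scores highest, observe a (simulated)
    # outcome, and fold that outcome back into the model.
    best = max(interventions, key=model.predict_success)
    succeeded = random.random() < model.predict_success(best)  # stand-in for reality
    model.update(best, succeeded)
    return best, succeeded


if __name__ == "__main__":
    shared_prior = {"drug_a": 0.6, "gene_edit_b": 0.4}
    model = PatientModel(shared_prior)
    print(treat(model, ["drug_a", "gene_edit_b"]))
```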

Perhaps

Jan 28, 2024

I would say value preservation and alignment of the human population. I think these are the hardest problems the human race faces, and the ones that would make the biggest difference if solved. You're right, humanity is great at developing technology, but we're very unaligned with respect to each other and are constantly losing value in some way or another. 

If we could solve this problem without AGI, we wouldn't need AGI. We could just develop whatever we want. But so far it seems like AGI is the only path for reliable alignment and avoiding Molochian issues.

I agree deeply with the first paragraph. I was going to list coordination as the only great thing I know of where AI might be able to help us do something we really couldn't do otherwise. But I removed it because it occurred to me that I have no plausible story for how that would actually happen. How do you imagine that going down? All I've got is "some rogue benevolent actor does CEV or a pivotal act", which I don't think is very likely.

Charlie Steiner

Jan 31, 2024

Bah, nobody's mentioned social applications of superhuman planning yet.

You could let an AI give everyone subtle nudges, and a month later everyone's social life will be great. You'll see your family the right amount, and you'll have friends who really get you, whom you see often and do fun things with. Sex will occur. Parties and other large gatherings will be significantly better.

The people needed to make this possible are all around us; it's just really hard to figure out how to make it happen.

Oh, I love this answer. Seems like pretty narrow AI would be adequate, though. Also, the same tech could probably be used to, e.g., start or stop revolutions. Inspiring anyway.

Ben Matthews

Jan 28, 2024

Human potential is the big one for me.

Personally, I feel that my imagination is limited - not just around the capabilities of AGI, but in common work scenarios.

There are lots of people out there who are a lot smarter than me, but AGI can help me realise more of my human potential.

This applies at a personal level first, but eventually at a societal level and, beyond that, at a species level.

What this looks like and how it is made safe through safeguards, I don’t know. But I’m interested in how AGI can help us achieve our human potential in ways that I as an individual can’t imagine without the help of AGI / the sum of human knowledge.

Richard_Kennaway

Jan 28, 2024

How super/general the AI is is a knob you can set to whatever you want. With zero set to the present day, if you turn it up far enough you get godlike capability of which I find it impossible to say anything.

More modest accomplishments I would pay, say, the price of a car for would include a robot housekeeper that can cope with all of a human’s clutter, and clean everything, make beds, etc. as well as I can and better than in practice I will. Or a personal assistant that I can rely on for things like making complex travel arrangements, searching for stuff on the internet with the accuracy of Google and the conversational interface of ChatGPT, and having serious discussions with on any subject.

Beyond that my creativity falters. In 1980 I couldn’t even have foreseen Google, smartphones, or cat videos.

Nathan Helm-Burger

Jan 29, 2024

So, I don't think we need AGI for this... but: digital humans. Uploads from preserved brains. Freedom from a carbon-based substrate, and the benefits that go with that, like immortality and speed-of-light travel.

I suggest that ideally we keep AI as weak as possible to get us quickly to digital humans, and then have the digital humans do the AI work. Never go down the path of non-human intelligence which is fully superior to human intelligence. Keep the future human! Empower Us, not Them!

(This response gives me a human-chauvinist vibe. I'm sympathetic to really carefully thinking things through before handing control to quite alien beings, but at some point we'll probably want control in the hands of beings which look very different from current humans. Also, the direct value might come from entities which aren't well described as human.)

Nathan Helm-Burger
Yes, that's correct. I'm a transhuman-chauvinist. I want our present and future to belong as much as possible to humans and our enhanced descendants, not to alien minds who may not share my values. There absolutely are non-human minds I'd like to create, experience living with, and share the future with in an equitable way, but it is a highly restricted set based on compatibility with my own values: for instance, uplifted mammals or digital humans. Many people might not describe either of those groups as 'human', but I'd still consider them within the human-ish group. Of course, the human-ish group isn't a team; it's a category. Within it there are opposed factions actively killing each other. I'd prefer if that weren't the case. I don't have a plan for aligning human-ish creatures to each other sufficiently to achieve a peaceful society, but I do suspect that this would be easier than aligning a group of actors that includes both human-ish actors and very alien actors. Until we have such a plan, we probably shouldn't hand much power over to potentially non-peaceful alien actors.

Jacob G-W

Jan 28, 2024

I really want a brain computer interface that is incredibly transformative and will allow me to write in my head and scaffold my thinking.

This one might actually be doable without super-powerful AIs. Current progress in non-invasive brain-computer interfaces is rather impressive...

I do think people should organize and make it go faster, but this should be achievable with the current level of AI.

(Unlike practical immortality and eventually becoming God-like, which both do seem to require super-intelligence and which are what a lot of people really want: being able to personally know, understand, and experience everything worth experiencing in the current world, and more beyond. That does require the power of super-intelligence.)

2 comments
Ben

This question can be interpreted two ways:

(1) What really great things do we not have yet, which we might have in the future, possibly accelerated by AI?

(2) What really great things absolutely require futuristic AI, such that we can never have them without first inventing it?

I think (2) contains things like robot butlers that clean your house and do your chores (whether they qualify as "really great" is maybe up to taste). (1) contains things like cures for {cancer/malaria/Alzheimer's/depression/suicide headaches}, and machines that are more {cost/energy/carbon} efficient, all of which could plausibly be invented faster with AIs helping.

When I imagine the future, I imagine first an age of insane abundance: one where some random university students replace their self-driving cars every few months just to keep up with the new season's fashion. (This summer, drive a convertible. No black cars after Labour Day.) Then last month's self-driving cars all take themselves to a nanobot factory to recycle themselves into new ones. Sure, automobile fast fashion is basically just posing and a bit crazy, and I would be scoffing at the insanity of it and the shallowness of the people involved, like the angry moralising old man I would be. But none of those people in those cars are starving, and none of them are in danger of malaria. Perhaps they would all be immune to aging. The abundance is a symptom of things being sorted.

Then, at some point after that, I imagine an age where physical resource goods cease to have status associations. Like, even some poor loser could have a private jet if they wanted; it wouldn't prove anything. Why would you want one anyway? The gravity train is faster and the airstrip is miles away from your home. I don't know what that world looks like. Maybe people find new ways of claiming status (everyone wants to tell you about their new book or their wonderfully status-worthy politics), or maybe people just move on and have fun.

Most people, when they try to imagine paradise, imagine their present life with all the things they don’t like removed, and all the things they do like available in abundance without effort. My own answer does not escape that pattern. The mediaeval peasant’s paradise was the mythical land of Cockaigne, where there was endless feasting and no work and no masters. Surveys have found that people generally think that the maximum income anyone could possibly want is about 10 times whatever their current income is. Few have imagined new joys.

How many people just 50 years ago managed to imagine anything like the present? “A Logic Named Joe” did a pretty good job of anticipating the Internet and the issues we are currently dealing with around AI, but how many other bullseyes like that are there? Or other truly novel futures that could have been?