Context

This is a place to explore visions of how AI can go really well. Conversations about AI, both in this community and in mainstream media, focus on dystopian scenarios and failure modes. Even communities that lean techno-utopian (Silicon Valley) are having an AI hangover. More broadly, many people in my life think the future will be worse than the present, and this makes me sad.

So I think it's time to revisit the science fiction books of our teenage years and imagine what amazing applications of AI or AGI in society look like. AI that doesn't destroy us is great. AI that unlocks human flourishing is even better. I've personally found it much easier to think of negative scenarios than positive ones, so this is me enlisting your help.

Discussion norms

I'd like this to be a yes-and, generative, bursty, collective brainstorming thread. Ideas that make no sense, that you might not endorse later, that you can't even explain or defend are all welcome. The person who starts an idea might not be the person who finishes it, so don't self-censor too early.

I hope this discussion style shows us a different way to get to better reasoning: as opposed to one person contributing a well-developed position and others pointing out inconsistencies, we can build ideas collectively, with everyone contributing different parts. I also hope it encourages people who don’t normally participate on LessWrong to contribute their thoughts.

Prompts for idea generation 

(Feel free to ignore these and share whatever comes to mind.)

  1. What does AI going well look like?
  2. If you feel pessimistic about the future, what vision of AI would make you feel more optimistic?
  3. What are problems we don't know how to solve that you'd be excited to solve with AI?
  4. What great companies could be built in the next decade, where the product is built primarily around modern Machine Learning? (If you find this prompt overly large, can you think of any companies that could be built around GPT-3?)
     

I'll seed some of my ideas below :) Thanks to Amanda, Ben, and Andreas for discussing these ideas with me.


15 Answers

johnswentworth

Oct 07, 2020

140

One class of ideas which I think is underrated: AI theory used as theory for things other than building autonomous AI/ML systems. Examples:

  • Using insights/tools from AI theory to improve human reasoning (a popular topic on LW!)
  • Using insights from subsystem alignment or search theory to design better human institutions
  • Using tools developed for microscope AI to understand the world-models of bacteria or cancer cells.
  • Future higher-level programming languages which allow robust programmatic reasoning about abstract real-world objects (while still retaining unambiguous semantics)

Daniel Kokotajlo

Oct 07, 2020

90

This hilarious classic work of fiction is brimming with ideas for how things could go well (alongside some ideas for how things could go badly).

jungofthewon

Oct 07, 2020

90

Instant translation across neuroatypical people, just like instant translation between English and Korean. An AI system that helps me understand what an autistic individual is currently experiencing and helps me communicate more easily with them.

Yes, and something that could build up a model of different people's minds and figure out the optimal way of transferring a representation between the two. What if we had a language for master Go players to communicate their intuitions about board positions? Master composers to communicate how to write a symphony? Mathematicians to communicate the right intuitions for a particular theorem?

I mean, we do have human language, which is remarkable in the extent to which it enables these things. But I have an intuition that we could do a lot better here (e.g. ...)

jungofthewon
Yes! For example, I am often amazed by people who are able to explain complex technical concepts in accessible and interesting ways.

Vermillion

Oct 08, 2020

80

I think if we get AGI/ASI right the outcome could be fantastic, not just from the changes made to the world but from the changes made to us as conscious minds, and that an AGI/ASI figuring out mind design (and how to implement it) will be the most significant thing that happens from our perspective.

I think that the possible space of mind designs is a vast ocean, and the homo sapiens mind design/state is a single drop within those possibilities. It is very unlikely that our current minds are what you would choose for yourself given knowledge of all the options. Given that happiness/pleasure (or at least that kind of thing) seems to be a terminal value for humans, our quality of experience could be improved a great deal.

One obvious thought: if you increase the size of a brain or otherwise alter its design, could you increase its potential magnitude of pleasure? We think of animals like insects, fish, mammals, etc. as lying on a curve of increasing consciousness, generally with humans at the top. If that is the case, humanity need not be an upper limit on the 'amount' of consciousness you can possess. And of course, within the mind design ocean, more desirable states than pleasure may exist for all I know.

I think that imagining radical changes to our own conscious experience is unintuitive and its importance as a goal is underappreciated, but I cannot imagine anything else AGI/ASI could do that would be more important or rewarding for us.

I love the idea of exploring different minds and seeing how they fit. Getting chills thinking about what it means for humanity's capacity for pleasure to explode. And loving the image of swimming through a vast, clear, blue mind design ocean.  

avturchin

Oct 07, 2020

80

An AI capable of finding a win-win solution for any conflict. It would be good as a judge, in preventing wars, in helping spouses understand each other, in psychotherapy, in solving global problems like global warming without stopping technological progress, etc.
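Yes-anding this: one classical way to formalize "win-win" is the Nash bargaining solution, which picks the outcome that maximizes the product of each party's utility gain over their no-deal baseline. A minimal sketch, where the options, utilities, and disagreement point are all made-up placeholders:

```python
# Toy Nash bargaining: pick the option that maximizes the product of
# each party's gain over their disagreement (no-deal) utility.
options = {
    "split_resource": (6.0, 5.0),  # (utility to A, utility to B)
    "A_takes_all":    (9.0, 1.0),
    "B_takes_all":    (1.0, 9.0),
}
disagreement = (2.0, 2.0)  # payoffs if negotiation fails

def nash_product(utilities, disagreement):
    (ua, ub), (da, db) = utilities, disagreement
    return max(ua - da, 0.0) * max(ub - db, 0.0)

best = max(options, key=lambda o: nash_product(options[o], disagreement))
print(best)  # -> split_resource: the win-win beats both lopsided outcomes
```

An AI mediator of the kind described would need far richer preference models, but the bargaining-theory framing at least gives it a concrete objective.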

jungofthewon

Oct 07, 2020

70

An interactive, conversational system that makes currently expensive and highly manual therapy much more accessible. Something that talks you through a cortisol spike, an anxiety attack, or a panic attack.

Adapts to the user's preferred method of coping/comfort and blends that with 'scientific' methods?

Ben Pace

Oct 07, 2020

60

I'd be interested in a GPT product that did the following for editing/writing:

  • Suggests rewrites for sentences or wordings that you don't like
  • Suggests a new sentence/paragraph at the end of your essay, to help get you unstuck
  • Provides info about where people will stop reading
  • Offers titles for the essay

I'd like a GPT-enabled journal, trained on a corpus of creativity techniques, that could ask you clarifying questions about whatever you're writing about. This sounds on the surface like it wouldn't work, but if the questions are unobtrusive (in a sidebar) they only need to

  1. be a good match some percent of the time
  2. be close enough to a good question that you can think of the good version for yourself
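A minimal sketch of that journal loop, assuming a hypothetical `complete(prompt)` wrapper around whatever GPT-style completion API is available (both the wrapper and the prompt text are illustrative, not a real product):

```python
import textwrap

def complete(prompt: str) -> str:
    """Stand-in for any GPT-style text-completion API."""
    raise NotImplementedError("plug in your completion backend here")

PROMPT = textwrap.dedent("""\
    You are a gentle writing coach trained in creativity techniques.
    Read the journal entry below and ask ONE short clarifying question
    that might help the writer get unstuck.

    Entry:
    {entry}

    Question:""")

def sidebar_question(entry: str) -> str:
    # Shown unobtrusively in a sidebar; per the two criteria above,
    # it only needs to be a good match some percent of the time.
    return complete(PROMPT.format(entry=entry)).strip()
```

Because a near-miss question can still jog the good version of itself in the writer's head, the quality bar here is unusually forgiving.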

FactorialCode

Oct 08, 2020

50

Assuming we can solve the relevant ethical dilemmas, there is exactly one thing I want:

Waifus. Sentient, optimized, and personalized romantic partners.

supposedlyfun

Oct 08, 2020

50

Using AI/ML to fix some edge problems with government would be so neat.  My optimistic scenario is that AI/ML gets common enough that nobody thinks it's weird, and then the state legislature of, say, California says, "How about we let an AI try to draw our Congressional districts?" and a nonpartisan committee comes up with a reasonably fair utility function, or maybe there's even a statewide ranked-choice vote on ten different values for the AI to consider, and the AI gives back a map that everyone agrees is not perfect for their side but pretty close to fair.  

It would almost be like seeing Rawls' veil of ignorance play out in real life, if everyone is comfortable enough with AI in general to not reject whatever it comes up with just because they aren't winning as much as they'd like to.  (It's possible that voters would be on board for this particular issue.  Cyborg government!  We're not that far off.)
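As a toy illustration of what a "reasonably fair utility function" might contain, here is a hypothetical score trading off equal district population against a crude partisan-fairness term; the metrics and weights are invented for illustration, not a serious proposal:

```python
# Hypothetical districting score: higher is fairer.
# Each district is summarized as (population, two_party_vote_share).
def fairness_score(districts, w_balance=1.0, w_partisan=1.0):
    n = len(districts)
    total_pop = sum(pop for pop, _ in districts)
    ideal = total_pop / n
    # Penalize deviation from equal district population.
    balance_penalty = sum(abs(pop - ideal) / ideal for pop, _ in districts) / n
    # Penalize statewide seat share drifting from statewide vote share.
    vote_share = sum(pop * share for pop, share in districts) / total_pop
    seat_share = sum(1 for _, share in districts if share > 0.5) / n
    partisan_penalty = abs(seat_share - vote_share)
    return -(w_balance * balance_penalty + w_partisan * partisan_penalty)
```

A statewide ranked-choice vote over values, as suggested above, would amount to letting voters set the weights.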

Yes-anding you: our limited ability to run "experiments" and easily get empirical results for policy initiatives seems to really hinder progress. Maybe AI can help us organize our values, simulate a bunch of policy outcomes, and then find the best win-win solution when our values diverge. 

jefflab

Oct 08, 2020

40

An AI that reads all the scientific literature and finds where similar ideas are being independently explored in unrelated fields, and then introduces those researchers. 

Credit: this idea was proposed by Norbert Wiener in 1961 in his book Cybernetics (p. 2): "...shared the conviction that the most fruitful areas for the growth of the sciences were those which had been neglected as a no-man's land between the various established fields. Since Leibniz there has perhaps been no man who has had a full command of all the intellectual activity of his day."
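A crude version of this is buildable today: embed paper abstracts, then flag high-similarity pairs that come from different fields. A minimal sketch using TF-IDF (a real system would use better embeddings and real metadata; the papers below are placeholder data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [  # (field, abstract) -- placeholder data
    ("ecology",   "feedback loops stabilizing population dynamics"),
    ("economics", "feedback mechanisms stabilizing market dynamics"),
    ("ml",        "gradient descent convergence on convex objectives"),
]

vectors = TfidfVectorizer().fit_transform(abstract for _, abstract in papers)
similarity = cosine_similarity(vectors)

# Introduce authors whose abstracts match across field boundaries.
for i in range(len(papers)):
    for j in range(i + 1, len(papers)):
        if papers[i][0] != papers[j][0] and similarity[i, j] > 0.3:
            print(f"introduce {papers[i][0]} <-> {papers[j][0]} "
                  f"(similarity {similarity[i, j]:.2f})")
```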

Idan Arye

Oct 08, 2020

30

Instead of creating a superintelligent AGI to perform some arbitrary task and watching it allocate all the Earth's resources (and the universe's resources later, though we won't be there to watch) to optimizing that task, we decide to give it the one task that justifies that kind of power and control - ruling over humanity.

The AGI is more competent than any human leader, but we wouldn't want a human leader whose values we disagree with even if they are very competent - and the same applies to robotic overlords. So, we implement something like Futarchy, except:

  • Instead of letting the officials generate policies, the AGI will do it.
  • Instead of using betting markets we let the AGI decide which policy best fulfills the values.
  • Instead of voting for representatives that'll define the values, the AGI will talk with each and every one of us to build a values profile, and then use the average of all our values profiles to build the values profile used for decision making.
    • Even better - if it has enough computation power it can store all the values profiles, calculate the utility of each decision according to each profile, calculate how much the decision will affect each voter, and do a weighted average.

So the AGI takes over, but humanity is still deciding what it wants.
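The weighted-average step is simple enough to write down. A minimal sketch, where `utility` and `impact` are hypothetical functions the AGI would have to supply from the values profiles it builds:

```python
# Score each candidate policy against every voter's values profile,
# weighting each voter by how much the policy affects them.
def policy_score(policy, profiles, utility, impact):
    total = weight_sum = 0.0
    for profile in profiles:
        w = impact(policy, profile)            # how affected is this voter?
        total += w * utility(policy, profile)  # how good is it by their values?
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

def choose_policy(candidates, profiles, utility, impact):
    return max(candidates,
               key=lambda p: policy_score(p, profiles, utility, impact))
```

All the difficulty, of course, hides inside eliciting `utility` and `impact` honestly.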

Andrew Jacob Sauer

Oct 08, 2020

30

A common concern around here seems to be that, without massive and delicate breakthroughs in our understanding of human values, any superintelligence will destroy all value by becoming some sort of paperclip optimizer. This is what Eliezer claims in Value is Fragile. Therefore, any vision of the future that manages to do better than this without requiring huge philosophical breakthroughs (in particular, a future that doesn’t know how to implement CEV before the Singularity happens) is encouraging to me as a proof of concept for how the future might be more likely to go well.

In a future where uploading minds into virtual worlds becomes possible before an AI takeover, there might well be a way to salvage quite a lot of human value with a comparatively simple utility function: simply create a big virtual world and upload lots of people into it, then have the AI's whole goal be to run this simulation for as long as possible.

This idea of "just run this program" seems a lot more robust, more likely to work, and less likely to be exploited than attempting to maximize some utility function meant to represent human values, and the result would probably be better than what would happen if the latter went wrong. I suspect it would be well within the capability of a society that can upload minds to create a virtual world for these minds where the only scarce resource is computation cycles and there is no way to forcibly detain someone, so this virtual world would not have many of the problems our current world has.

This is far from a perfect outcome, of course. The AI would likely destroy everything it touches for resources, killing everyone not fortunate enough to get uploaded. And there are certainly other problems with any idea of "virtual utopia" we could come up with. But this idea gives me hope because it might be improved upon, and because it is a way that we don't lose everything even if CEV proves too hard a problem to solve before the Singularity.

jungofthewon

Oct 07, 2020

30

I tweeted an idea earlier: a tool that explains, in words you understand, what the other person really meant. Maybe it has settings for "gently nudge me if I'm unfairly assuming negative intent."

jungofthewon

Oct 08, 2020

20

Doesn't directly answer the question, but: AI tools / assistants are often portrayed as having their own identities. They have their own names, e.g. Samantha, Clara, Siri, Alexa. But it doesn't seem obvious that they need to be represented as discrete entities. Can an AI system be so integrated with me that it just feels like me on a really really really good day? Suddenly I'm just so knowledgeable and good at math!

And also the inverse: helping you avoid doing things you don't want to do. For example, observing that you are over-reacting to an ambiguous email rather than giving the sender the benefit of the doubt. Or, more seriously, recognizing that you are about to fall off the wagon with substance abuse and prompting you to reconsider. (E.g. mitigating the part of human nature described in Romans 7:15: "I do not understand what I do. For what I want to do I do not do, but what I hate I do.")

Filipe Marchesini

Oct 07, 2020

-10

Yes, jungofthewon, and AI going well will also give all of humanity the ability to get an adequate quantity of super-delicious green food, giving everyone unlimited access to super-delicious, perfectly sustainable, healthy, and ethical food.

Also, AI going well will provide humanity the best algorithm for allocating different people to different places, choosing the best place to accommodate every person on earth. You wouldn't believe that living with this specific set of 7 people in this specific city and in this specific house would give you the maximum expected happiness points you could ever achieve just by choosing the right people to live with.

Furthermore, a perfectly self-sustaining, large-scale medical diagnostic system, with the cumulative knowledge of the best medical sources on earth and the best statistical analysis of data from all patients on earth, such that no person on earth ever develops any kind of disease, because the system always helps you avoid it.

I feel optimistic about the future when I think about the possibility of choosing an AI system as my political representative, one I choose to coordinate my resources toward optimizing large-scale systems of education, health, safety, housing, basic sanitation, and entertainment. I would give all my resources to such a system that proves to me it can handle these systems better than traditional human political representatives, and even more if it provided us drones to protect our community from human systems that threaten our peaceful way of coordinating resources for all living beings.

We don't know how to allocate the best person to each possible task when optimizing a human system. I would be in awe if we got an AI system that could choose the best job for each human, based on their past data plus their iterated input about what they want, what they like to do, and what they would like to be doing.
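The matching step, at least, is a classical solved problem once fit scores exist: given a matrix of predicted fit between people and jobs, the Hungarian algorithm finds the globally optimal one-to-one assignment. A minimal sketch, where the score matrix is a placeholder for what the imagined ML system would predict from people's data and stated preferences:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# fit[i, j] = predicted fit of person i for job j (placeholder numbers).
fit = np.array([
    [0.9, 0.2, 0.4],
    [0.3, 0.8, 0.5],
    [0.6, 0.4, 0.7],
])

people, jobs = linear_sum_assignment(fit, maximize=True)
for p, j in zip(people, jobs):
    print(f"person {p} -> job {j} (fit {fit[p, j]:.1f})")
```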

Instead of great companies in the traditional sense, where problems are solved by a bunch of humans, I believe great ML algorithms running on decentralized systems could be built in the next decade, where the "algorithm product" is coordinating humans to get more happiness points, more interactions among them, and more healthy and fulfilling actions, by helping them choose the best actions to optimize their own systems. For example, using GPT-5 we could actually describe in natural language "I would like to have a super-cheap, easy-to-build, automatic, ethical green food production system that uses the least quantity of resources", and then it would just tell me a set of actions to take to get that, and I could distribute this to the community.

Your first sentence came off as quite patronizing, so I wasn't able to do a good-faith reading of the rest of your post.

jungofthewon
Haha I didn't find it patronizing personally but it did take me an embarrassingly long time to figure out what Filipe did there :) Resource allocation seems to be a common theme in this thread.