In a recent interview with ABC News, Sam Altman emphasised multiple times that he sees individual AI tutoring as one of the greatest applications of AI:

[One of the things] I'm most excited about is the ability to provide individual learning -- great individual learning for each student.

However, I’m very sceptical that this individual tutoring technology will be widely used in developed countries, or, one generation down the line, in developing countries.

Apart from an engaging tutor, learning cognitively demanding disciplines and skills requires motivation. The main motivations that people have for learning are[1]:

  • Competitive: people learn skills to outcompete others in some (status) game. This might be linked to mating.
  • Economic: people learn difficult disciplines to get a job and earn a living. This motivation is based on fear of the loss of livelihood.
  • Intrinsic: people learn difficult disciplines because they are intrinsically motivated to learn the subject, or find the process of learning difficult skills intrinsically rewarding. The latter could also be seen as self-actualisation motivation.
  • Altruistic: people learn skills for doing good.
  • (GPT-4 also suggested there is a “social” motivation of “fitting in”, but I think this is hardly the case: is there any society where people are ostracised for not possessing difficult skills? I doubt it. This is especially so in the social groups of children, where, if anything, the opposite is true: kids may be ostracised for trying to become “too smart” or for “learning too much”.)

Outside of the contexts where competitive motivation is a proxy for economic success (and, by extension, mating success), competitive motivation seems to apply primarily in games, such as poker, chess, and Counter-Strike. Indeed, AI tutors have the potential to increase the competition and skill of human players, as has already happened in chess.

On the other hand, the economic motivation for learning will crumble. In a few years, the whole population of the developed countries will realise that we are heading towards the complete automation of all cognitive labour. Universal basic income will appear inevitable. Kids will realise this, too; they are not stupid. By extension, the mating motivation that was conditioned on learning difficult skills (”learn math or programming” → “find a high-paying job or start a business” → “attract a good mate”) will crumble, too. And so will the altruistic motivation: once AI becomes superhuman, assuming it is aligned, the most altruistic thing to do will be to not try to “help” it with any cognitive tasks.

I believe that true intrinsic motivation for learning is either very rare or requires a long, well-executed process of learning with positive feedback, so that the brain literally rewires itself to self-sustain motivation for cognitive activity (see Di Domenico & Ryan, 2017). Almost certainly, this requires being raised in a family where someone already has intrinsic motivation for learning and is very determined to develop such motivation in the kids. So, we cannot expect the percentage of people who are intrinsically motivated to learn to grow beyond the tiny percentage of those who are naturally born with this predisposition[2].

Realistic futures of motivation and education

In a world with UBI and complete cognitive automation, it seems more or less plausible to me that humans will channel their activity in the following directions:

  • Physical games and competitions: football, basketball, mountaineering, bodybuilding, surfing.
  • Cognitive games and competition: poker, chess, eSports.
  • Learning advanced science: AGI will develop science that is, in principle, far beyond the capacity of unaided and unmodified human brains to grasp. People who are intrinsically motivated to learn will simply try to climb this ladder as high as possible. Or this could become a sort of status game, albeit likely a very niche one.
  • Manual and physical labour: in a world with UBI, labour that is still not automated should automatically become highly paid, to motivate anyone to do it at all; if the world is (almost) abundant by that point and money can no longer be a serious motivation, such work must instead become socially praised and a status occupation.
  • Spirituality and compassion: I can imagine that some people will respond to the meaning crisis by turning towards meditation, Buddhism, or developing as much love, gratitude, and compassion as possible for everyone and everything around them.
  • Debating beauty (art, fashion) and developing a taste for beauty and art: if the value of art is fundamentally subjective (rather than objective, as David Deutsch conjectured), then even though the art itself may be created mostly by AI, people can indulge endlessly in the open-ended development of artistic thought; this is effectively a random walk through the spaces of form and style. If the beauty and/or value of art is in some sense objective, this could be turned into a “climbing the ladder” competitive exercise, akin to learning advanced science.
  • Just being frustrated and/or addicted to food, sex, or simple visual stimuli like TikTok or games.

If you are pessimistic, you may think that the last point will dominate the condition of humanity post-AGI. One can even argue that the downward spiral of depression, frustration, and addiction will inevitably lead to the extinction of the human race if humanity does not merge with AI in some way, either because such technology proves infeasible or because AGI does not permit humanity to do so for some reason.

Most of the directions outlined above don't require the excellent AI tutors Altman is excited about creating. The exceptions are “cognitive competition”, “learning advanced science”, and, to some degree, “developing a taste for beauty and art”. If a large proportion of humanity does indeed engage in these types of activities in the post-AGI future, I would consider this a very good outcome of the AI transition, but this outcome intuitively seems very unlikely to me, even conditioned on solving technical alignment and on solving the governance and global coordination challenges that will surround the AI transition.

To sum up: my P(at least 30% of all people are flourishing by learning and doing other cognitively difficult stuff|no human-AI merge, AI is aligned and doesn’t kill everyone) is less than 10%.

Does anyone know the opinions of expert sociologists, psychologists, educators, and anthropologists on this topic?

Conclusion

If Sam Altman truly believes that a future in which a lot of people thrive by learning is likely, and that OpenAI's strategy leads to such a future, then I think either his thinking is flawed, or "AI tutors for everyone” is just a convenient slogan from the perspective of marketing and politics rather than his actual belief about the future.

I don’t see how OpenAI’s product and licensing strategy could be different in a way that brings the “enlightened future” closer with higher probability than otherwise. For example, I think that their licensing of generative AI technology to Snapchat and to Microsoft Copilot is bad for society, at least in the short and medium term. On the other hand, the first might be good for OpenAI financially, while the second was vital for OpenAI financially and will probably be good for the economy by some metrics, though bad for the resilience and antifragility of the economy (business processes dependent on OpenAI → OpenAI is down → half of the economy is down). However, none of these factors seem to me to directly impact the prospects of the “enlightened future”.

I’m not sure anything can actually bring about the “enlightened future” (again, conditioned on the human-AI merge not happening). If this is the case, I think it would be more truthful of Altman to say: “I’m excited for AI to automate all labour; then, if the human-AI merge is feasible, we can tap into unfathomable knowledge. If not, or if aligned AI does not permit us to do this, we can at least meditate, play, and live happy, untroubled lives, while the minority of geeks indulge in trying to learn the finest theories of physics and science developed by the AI”.

For similar reasons, I’m sceptical of Altman’s invocation of “creativity”:

[…] human creativity is limitless, and we find new jobs. We find new things to do.

New things? Yes. New jobs? There is no need to call these new things “jobs”. Will these “things to do” be creative? I doubt it. Only subjective artistic exploration may count as such, but, as with learning advanced science, I think only a small portion of the population is intrinsically motivated by artistic exploration.

  1. ^

    Please forgive me that this classification doesn’t follow any convention from the psychological literature, such as self-determination theory. After reading Chater’s “The Mind Is Flat”, I’m very sceptical of the scientific standing of any such theories. For storytelling purposes, the classification proposed above is more convenient than SDT.

  2. ^

    There are also gradations of this disposition. I consider myself perhaps within the top 10% of the most intrinsically motivated learners in the population, yet I struggle to read dry books and papers for the knowledge contained therein alone. Pop-science videos on YouTube and podcasts are, for me, more a form of entertainment than media for learning difficult disciplines and applying serious cognitive effort.

Comments

It is somewhat alarming that many participants here appear to accept the notion that we should cede political decision-making to an AGI. I had assumed it was a widely held view that such a course of action is to be avoided, yet it appears that I may be in the minority.

The question I'm currently pondering is: do we have any other choice? As far as I can see, we have four options for dealing with AGI risks:

A: Ensure that no AGI is ever built. How far are we willing to go to achieve this outcome? Can anything short of burning all GPUs accomplish it? Is even that enough, or do we need to burn all CPUs as well and go back to a pre-digital age? Regulation of AI research can help us gain some valuable time, but not everyone adheres to regulation, so eventually somebody will build an AGI anyway.

B: Ensure that there is no AI apocalypse, even if a misaligned AGI is built. Is that even possible?

C: Ensure that every AGI created is aligned. Can we somehow ensure that there is no accident with misaligned AGIs? What about bad actors that build a misaligned AGI on purpose?

D: What I describe in this post: actively build one aligned AGI that controls all online devices and eradicates all other AGIs. For that purpose, the aligned AGI would need to control at least 51% of the world’s total computing power. While that doesn’t necessarily mean total control, we’d already be giving away a lot of autonomy just by doing that. And surely, some human decision-makers will turn their duties over to the AGI. Eventually, all or most decision-making will be either AGI-guided or fully automated, since that’s more efficient.

Am I overlooking something?

For me, it's primarily due to having far more optimistic models of AI risk than most LWers have, so I count as one of the people who would accept ceding political decision-making to an AGI.

(GPT-4 also suggested there is a “social” motivation of “fitting in”, but I think this is hardly the case: is there any society where people are ostracised for not possessing difficult skills? I doubt it. This is especially so in the social groups of children, where, if anything, the opposite is true: kids may be ostracised for trying to become “too smart” or for “learning too much”.)

I disagree about children. I've seen classrooms where the competition is over good grades, and the kids with good grades are the bullies / are socially on top. In general, there are many social contexts in which you have to be good at something difficult to fit in. I've seen this in selective schools, where you might expect it, but also in ghettos.

I have only anecdotal evidence. I studied at one of the "top" middle schools in Moscow, Russia. Children (myself included) weren't ostracised for being too nerdy or too curious in their studies, but being relatively bad at studies also wasn't a factor that lowered someone's status in that social group at all. I think there was almost no correlation between social status and classroom success. However, in ordinary schools, I know for sure that there is a negative correlation between social status and classroom success.

Maybe in the US, it's very different.

The idea that all cognitive labor will be automated in the near future is a very controversial premise, not at all implied by the idea that AI will be useful for tutoring. I think that’s the disconnect here between Altman’s words and your interpretation.

Altman has gestured multiple times (including in this very interview, but also elsewhere) at a "single-digit-years" HLAI timeline. HLAI must imply the automation of all cognitive labour, because it will be vastly cheaper and faster than people and will make fewer mistakes, right?

I think people generally overrate the importance of economic incentives, and to a huge degree.

Economic incentives can coerce. They can create the bare minimum of motivation. But a truly high level of motivation -- the burning passion, the captivating and intoxicating feeling of purpose -- can only come from intrinsic sources.

Yes, that's for sure. But that doesn't seem to be Altman's and Gates's talking point: "we will provide those <5% (maybe far fewer) of nerds who have a burning passion for abstract mathematical knowledge with excellent tutors to teach it". The PR point is meant to be much more egalitarian.

is there any society where people are ostracised for not possessing difficult skills?

It depends on what you call "difficult". I think you will try to fit in if 80%+ of your peers have the skill; but OTOH, if 80%+ of people have the skill, is it really "difficult"?

In the general population, I feel this way about driving cars: some people hate it and have to deal with a lot of stress/anxiety to learn the skill, even though they could live without it. But living without it means they can't do some things -- they have less power.

I'd say the power motivation (increasing the space of your actions) could then be lumped together with the economic one, or form a separate category.

A similar skill is learning a language. In the general population it's seldom the case, unless you live in a country where three out of four people know a second language (sadly not the case in my country). But AFAIK it worked in the past in some elite societies, where people learned French (the lingua franca), Greek, or Latin to fit in. Languages also increase the space of your actions (you can understand more by yourself, talk to more people, etc.), so this again could be "power"-motivated.

OK, I buy the language example -- indeed, in 19th-century Russia, learning French was a marker of "fitting in" with the aristocracy, although it was not required for life. But I don't know of any society today where some social group learns a language for status reasons alone.

I believe that true intrinsic motivation for learning is either very rare or requires a long, well-executed process of learning with positive feedback, so that the brain literally rewires itself to self-sustain motivation for cognitive activity (see Di Domenico & Ryan, 2017).

A lot of what I found reading over this study suggests that this is already the case, not just in humans but in other mammals as well. Or take Dörner’s PSI-Theory (of which I’m a proponent). According to Dörner, uncertainty reduction and competence are the most important human drives, which must be satisfied on a regular basis, and learning is one method of reducing uncertainty.

One might argue that in the “utopian” scenario you outlined, this need is constantly being satisfied, since we all welcome our AI overlords and therefore have no uncertainty. In that case, the competence drive would help us out. 

Simplified, we can say that everything humans do has the end goal of satisfying their competence drive, and satisfying any other drive (e.g., by eating, sleeping, working/earning money, social interaction, uncertainty reduction) is only a sub-goal of that. With all physiological needs being taken care of by the AI overlords, the focus for satisfying the competence drive would shift more towards the “higher” drives (affiliation and uncertainty reduction) and direct displays of competence (e.g., through competition).

In the “Realistic futures of motivation and education” section, you mention some of the things humans could do in a utopian post-AGI scenario to satisfy their competence drive, with the frustration path reserved for those unfortunate souls who cannot find any other way to do so. Those people already exist today, and it’s possible that their number will increase post-AGI, but I don’t think they will be the majority.

Just think about it in terms of mate-selection strategies. If providing resources disappears as a criterion for mate selection because of AGI abundance, we will have to look for other criteria, and that will naturally lead people to engage in one or several of the other activities you mentioned, including learning, as a means of increasing their sexual market value.

The question is what percentage of people will go the learning route. I expect this percentage to decrease relative to the present level because, at present, learning difficult disciplines and skills is still required for earning a stable living. Today, you cannot confidently go into bodybuilding, Counter-Strike gaming, or chess, because only a tiny minority of people earn money off these activities alone. For example, as far as I remember, only a few hundred top chess players earn enough prize money to sustain themselves; the others also need to work as chess teachers, or do something else, to get by. Same with fitness and bodybuilding: only a minority of bodybuilders earn enough prize money; the others need to work as personal trainers, do fitness blogging on the side, etc. Same story for surfing, too.

When the economic factor goes away, I suspect that even more people will go into fitness, bodybuilding, surfing, chess, poker, and eSports, because these activities are often joyful in themselves and have lower entry barriers than serious science learning.

As I also noted below in the comments, the fact that few people will choose to try to learn SoTA science is not necessarily "bad". It just isn't compatible with Altman's emphasis. Remember that he brought up excellent AI tutoring in response to a question like "Why do you build this AI? What do we need it for?" I think there are many honest answers that would be more truthful than "Because AI will teach our kids very well and they will exercise their endless creativity". But maybe the public is less prepared for those more truthful answers; they are still too far outside the Overton window.

When the economic factor goes away, I suspect that even more people will go into fitness, bodybuilding, surfing, chess, poker, and eSports, because these activities are often joyful in themselves and have lower entry barriers than serious science learning.

These activities aren't mutually exclusive, you know. Even if you make mastering eSports or surfing your main goal in life, you'll still engage in other activities in your "spare time", and for a lot of people, that will include gaining basic scientific knowledge. Sure, that will be "armchair science" for most of these people, but that's already the case today.

Those who study a scientific field in its entirety and become PhDs or professors today rarely do so out of financial interest. For example, a mathematics professor could earn much more money by working in the private economy. As such, I would expect the number of people in the world with the proficiency of a mathematics professor to even grow in a utopian post-AGI scenario. The same goes for other scientific fields as well.

Today, most academics are somewhere in between: for example, a doctor who has enough medical knowledge to practice as a surgeon, but not enough to teach medicine or write academic papers. These are likely the ones who are most influenced by extrinsic rewards, so let's take a closer look at what happens to those in-betweeners in your scenario.

With AGI surgeons, the demand for human surgeons would dramatically decrease, so there would be no financial incentive to become a better surgeon, or to practice surgery at all. Some of the existing surgeons would likely follow the academic path and keep increasing their medical knowledge out of intrinsic motivation. The remaining surgeons would turn their interest to, as you said, surfing, poker, eSports, etc., or to other studies.

I think the most likely outcome for academia will be a strengthening of the interdisciplinary sciences. Right now, academics can expect the highest salary by studying a scientific discipline in depth and becoming an in-betweener. When that incentive structure disappears because there is little need for in-betweeners post-AGI, they will either study science more broadly, or focus on other activities and study armchair science in their free time.

In both cases, AI tutoring can have practical applications, so Altman wasn’t lying. Anyway, I think he is referring to current practical AI use cases, which do include AI tutoring, and not to a post-AGI future. So overall, I don't think he is somehow trying to suppress an inconvenient truth that sits outside the Overton window, but it's definitely worthwhile to think about AGI implications from this angle.

"When the economic factor will go away, I suspect that even more people will go into fitness, body-building, surfing, chess, poker, and eSports, because these activities are often joyful in themselves and have lower entry barriers than serious science learning."

This strikes me as similar to the death of the darkroom. Yeah, computers do it better, cheaper, etc. However, almost no one who has ever worked seriously in a darkroom producing photographs is happy that darkrooms basically don't exist anymore. The experience itself teaches a lot of skills in a very kinaesthetic and intuitive way (with saturation curves that are pretty forgiving, to boot).

But more than this, the simple pleasures of math, computer programming, and engineering skills are very worthwhile in themselves. However, in John Stuart Mill-style utilitarianism, you have to do a lot of work before you get to enjoy those pleasures. Will the tingle of the lightbulb coming on when learning PDEs just die out in the next 20 years, like the darkroom has over the past 20 years? Meanwhile, maybe darkrooms will make a big comeback?

I guess people will always want to experience pleasures. Isn't learning complex topics a uniquely human pleasure?

I don't think I ever learned something because it would make me a better worker or provide me with more economic resources. When I was a child and in need of tutoring, I was lucky to have a somewhat curious mind, and I tried to satiate it.


Of course, as an adult I choose to do things that are useful overall, which usually translates into being a human with skills that other people pay for. But the explicit Bayesian calculation about knowledge and money is not one I tend to do: what interests me interests me. Of course, when trying to learn something as an adult, the friction of the subject is a marker that determines how likely I am to try to acquire the information. For example, I have read about laser gyros, but the mountain of knowledge was too insurmountable to actually learn how laser gyros work, really.

If LLMs can lower the friction, I guess everybody will be more likely to learn things. Also, there is no "big mystery" in most fields; you just need a structured idea of a certain set of concepts, some of them more palatable than others. (I know what a sigmoid activation is, but I don't have the high-school knowledge about the function very fresh in my mind.) These tools could help with that.
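For reference, the sigmoid activation mentioned above is the standard logistic function,

$$\sigma(x) = \frac{1}{1 + e^{-x}},$$

which maps any real-valued input into the interval $(0, 1)$.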


(I tried running this comment through the Bing writing tool to get a better output, and it just came out more corporate. English is not my first language, but the version below just feels worse.)

 I have always been curious about learning new things, regardless of their economic value or usefulness for my career. When I was a child and needed tutoring, I did not choose subjects based on how they would make me a better worker or provide me with more resources. Instead, I followed my interests and tried to satisfy them. Of course, as an adult, I also consider the practical aspects of learning something new, such as how it can benefit me professionally or personally. 

But I do not usually make explicit Bayesian calculations about knowledge and money; rather, I learn what interests me. However, sometimes the difficulty of learning something can discourage me from pursuing it further. For example, I have read about laser gyros, but the amount of knowledge required to understand how they work was too overwhelming for me. 

If there were tools that could lower the friction of learning new things, such as language models that could explain concepts in simple terms or provide structured overviews of different fields, I think everyone would be more likely to learn more things. 

After all, most fields do not have "big mysteries" that are impossible to grasp; they just require familiarity with certain concepts and their relationships. Some of these concepts may be more intuitive than others (for instance, I know what a sigmoid activation is in neural networks, but I do not remember much about the function itself from high school math). These tools could help bridge these gaps and make learning easier and more enjoyable.

If LLMs can lower the friction, I guess everybody will be more likely to learn things. Also, there is no "big mystery" in most fields; you just need a structured idea of a certain set of concepts.

This is not the case. Maybe your memory capacity is naturally at the upper end of the human range (e.g., 9 items rather than 4), as well as your IQ, which makes learning seem doable to you. The fact is, most people, no matter how hard they try, are probably incapable of learning calculus, let alone tensor algebra or something even more abstract or complex, such as the math needed even to begin to approach string theory. Or something that requires keeping simultaneous track of many moving pieces. For example, it's thought that no human can properly understand how the brain works: it requires simultaneously holding dozens or hundreds of moving pieces in one's head. AI can do this, but a human can't.

I've seen this mistake made by David Deutsch: because he himself is a genius and can relatively easily understand anything that any other human can write, he conjectured that "human understanding has universal reach". Stephen Wolfram, another genius, concurred.

This is why I don't place much confidence in projections from people like Sam Altman about how the population will be affected by TAI either. You have to consider that they are very likely to be completely out of touch with the average person, and so have absolutely terrible intuitions about how average people respond to anything, let alone about the long-term implications TAI holds for them. If you got some ordinary people together and made sure they took the proposition of TAI, and everything it entails (such as widespread joblessness), seriously, I suspect you would encounter a lot more fear/apprehension about the kinds of behaviours and ways of living it is going to produce.

The human brain was made for task-solving. Stuff like pain, joy, grief, or happiness exists only to help the brain realise which task to solve.

If the brain is not solving tasks, it will either invent tasks for itself (that's what nearly 100% of entertainment revolves around) or suffer from frustration and boredom. Or both.

So, completely taking away someone's need to think is not altruism. It's one of the cruellest things one could enact on a person, comparable to physically killing or torturing them.

I don't think competing in chess, sports, bodybuilding, or Dota is boring. So, conditioned on successfully transitioning to aligned AI and keeping this AI aligned, I don't think people are doomed to frustration and boredom (although one can defend such an argument, which I labelled "pessimistic" in the post, I don't personally think this is the case).

In fact, there is probably nothing wrong with this future. Or, rather, we shouldn't label this future "right" or "wrong", because it is simply the only possible future: people cannot stay at the forefront of cognitive achievement in the presence of superhuman AI.

I doubt that this future's most exciting part is excellent AI tutoring.

Yeah, it could be fun, but it could feel empty, as you can't actively work to make the world better or other people happier. Except, maybe, if it is also some cooperative game/sport, or if other people's needs specifically require human assistance. Maybe they need a specifically human-provided hug :) I think I would.

In this hypothetical, you have a superhuman AI actively trying to make the world nice. Therefore, whatever happens will be very nice. 

Maybe the AI leaves some things to humans, you know, like art or something. 

Isn't the point of improving the world to make the world better? If you are working on a cancer cure and hear the news that someone else succeeded, do you go "How dare they. I wanted to cure cancer. Me me me." or do you go "Yay"?

Why should it be any different with AIs?

More to the point: people still play chess, even though AI is totally better at it than they are. People will find something to do. People do that. People don't go around moping about the "hollow purposelessness of their existence".

Help with solving health problems and prolonging life I can accept from an AI, if we can't solve these problems by ourselves.

But what if the AI decides that, say, being constantly maximally happy is nice, and so turns everyone into happy vegetables?

Then you have screwed up your alignment. Any time you're worried that the AI will make the wrong decision (rather than that the AI will make the right decision, but you wanted to be the one to make it), you're worried about an alignment failure. Now, maybe it's easier to make a hands-off AI than a perfectly aligned AI: an AI that asks us its moral dilemmas because we can't figure out how to make an AI choose correctly on its own.

But this is the "best we could do given imperfect alignment". It isn't the highest form of AI to aim for.

The problem is, our alignment is glitchy too. We are wired to keep running after a carrot that we will never get to keep for long, because we will always strive for more. But an AI could just teleport us to the "maximum carrot" point. Meanwhile, what we really need is not the destination but the journey. At least, that's what I believe. Sadly, not many people understand/agree with this.

I'm more optimistic about this, because a lot of people already don't care about whether they make the world a better place (apart from sharing joy and sex and friendship with other people, which is a form of service to the world, too), and they seem (mostly) fine with that. I don't think the meaning crisis and global psychological health will radically worsen just because AI strips the ability to make a good impact on the world through cognitive achievement from the (relative) minority of people who are still oriented towards this.

I expect a benevolent AGI sovereign would purposefully return a lot of power and responsibility to humans, despite being able to do everything itself, just to give us something to do and an honest feeling that our lives have meaning. I think that most people actually do want to feel as if they are serving their community and making their local world a better place - in fact, this is kind of a fundamental human need - and a world in which that is not possible would be hellish for most people.

It will feel like (and will be) a game in low-stakes situations, like when parents allow children to cook dinner occasionally, even though they know the kids will burn the pie, forget to season the salad, etc. A superintelligent AI won't seriously stake its own future on the capability and execution of unaided humans.

So, it could be a sort of game, a competition -- "who helped the AI the most" -- like in a family: "who cooked the best dish for dinner".

Just doing small favours for the people around you, sharing laughs with them, or even smiling at them and receiving a smile back is already a kind of service that makes the local world a better place -- so realistically, AI won't completely strip people of that. However, serving the world by applying cognitive effort will become a thing of the past.

I understand why you say this, but I honestly think you are wrong. I think the AGI should be very hands-off, and basically be a "court of last resort" / minarchist state, with the ability (and trusted tendency) to intervene when a truly bad decision is about to be made, but which heavily discourages people from relying on it in any way. I think you're underestimating how much responsibility humans need in order to be happy.

Firstly, happiness is a combination of many things. A fair bit of that is pleasant sensations: not being in pain, nice food, comfort, pretty surroundings...

It's an average, not an AND gate, so you can still be happy with some things not great, if the rest are good.

Secondly, a lot of people don't want responsibility -- e.g., prospective parents who aren't sure whether they want the responsibility. People often want to escape their responsibilities.

Thirdly, I am not responsible for global steel production. Do I have any reason to prefer some other humans holding that responsibility over no one holding it? No, not really. Who holds responsibility in today's world? A few rich and powerful people. But mostly, responsibility is held by paperwork monstrosities, where each person just follows arcane bureaucratic rules.

You're totally right about all this! But still, many people do want responsibility to some extent. And re: paperwork monstrosities, this is objectively abnormal for the human race; civilization is utterly alien to our evolved context, and imo mostly a bad thing. The only good thing that's come out of it is technology, and of that, only medical and agricultural technology is really purely good. People in the state of nature have responsibility distributed roughly evenly among the tribe members, though with more going to those who are older or considered more expert in some way.

In a recent interview, Sutskever said something very similar: "I would much rather have a world where people are free to make their own mistakes [...] and AGI provide more like a base safety net".

However, all the AI products being made today actively take away people's freedom to "make mistakes". If people want to stay competitive in the market, they will soon find it obligatory to use ChatGPT to check their strategies, Copilot to write code, Microsoft Business Chat to make business decisions, etc., because all these tools will soon make fewer mistakes than people.

Same in romance: I wouldn't be surprised if people soon need to use AI assistance both online and offline (via real-time speech recognition and communication through AirPods or smart glasses à la rizzGPT) to stay competitive in the dating market.

So, if we don't see these things as part of the future world, why are we introducing them today (which means we already plan to remove these things in the future)? 🤔

Because there is no coherent "we". Moloch rules the earth, alongside Mammon. The enslavement of humanity benefits many corporate egregores.