Recently, when people refer to the “immediate societal harms and dangers” of AI in media or political rhetoric, they predominantly mention “bias”, “misinformation”, and “political (election) manipulation”.

Although politicians, journalists, and experts frequently compare the current opportunity to regulate AI for good with the missed opportunity to regulate social media in the early 2010s, AI romantic partners are rarely mentioned as a technology and a business model with the potential to grow very rapidly, harm society significantly, and be very difficult to regulate once it has become huge (just like social media). This suggests that AI romance technology should be regulated swiftly.

There is a wave of articles in the media (1, 2, 3, 4, for just a small sample) about the phenomenon of AI romance which universally raise vague worries, but I haven’t found a single article that rings the alarm bell as loudly as I believe AI romance deserves.

The EA and LessWrong community response to the issue seems to be even milder: it’s rarely brought up, and a post “In Defence of Chatbot Romance” has been hugely upvoted.

This appears strange to me because I expect, with around 75% confidence, that the rapid and unregulated growth and development of AI partners will deal a huge blow to society, comparable in scale to the blow from unregulated social media.

I'm not a professional psychologist and am not familiar with the academic literature in psychology, but the propositions on which I base my expectation seem at least highly likely and common-sensical to me. Thus, one purpose of this article is to attract expert rebuttals of my propositions. If no such rebuttals emerge, the second purpose of the article is to attract the community's attention to the proliferation of AI romantic partners, which should be regulated urgently (if my inferences are correct).

What will AI romantic partners look like in a few years?

First, as Raemon pointed out, it’s crucial not to repeat a mistake that is way too common in discussions of general risks from AI: assuming that only the AI capabilities that already exist today will exist in the future, and failing to extrapolate the development of the technology.

So, here’s what I expect AI romantic partner startups will be offering within the next 2-3 years, with very high probability, because none of these things requires any breakthroughs in foundational AI capabilities; each is only a matter of “mundane” engineering around the existing state-of-the-art LLMs, text-to-audio, and text-to-image technology:

  • A user could create a new “partner” that comes with a unique, hyper-realistic “human avatar”, generated according to the preferences of the user: body shape, skin colour, eye colour, lip shape, etc. Of course, these avatars will maximise sexual attractiveness, within the constraints set by the user. You can see a sample of avatars that are already generated today in this Twitter account. Today these avatars still look just a tad “plastic” and “AI-generated”, which I expect to go away completely soon, i.e., they will look totally indistinguishable from real photos, except that the avatars themselves will be “too perfect to be true” (which could also be addressed, of course: an avatar could have some minor skin defect, face asymmetry, or some other imperfection if the user so chooses).
  • Apart from a unique appearance, the AI partner will also have a unique personality (a-la character.ai) and a unique voice (a-la Scarlett Johansson in the movie “Her”). Speech generation will be hyper-realistic, sounding just like a recorded human voice, and will correctly reflect the emotional charge of the text being spoken and the general emotional tone of the discussion between the user and the AI just before the given voice reply is generated.
  • LLMs underlying the AI will be fine-tuned on real dialogues between romantic partners and will have human-level emotional intelligence and the skill of directing the dialogue (human-level theory of mind already goes without saying), such as sensing when it’s acceptable to be “cheerful” and when it’s better to remain “serious”, or when it’s better to wrap up the dialogue because the user is becoming slightly bored, etc. Of course, these LLMs won’t be just the OpenAI API with a custom system prompt, which is prone to “leaking” those “I’m just an AI model” disclaimers, but a custom fine-tune, a-la character.ai.
  • Even if the technology doesn’t become sophisticated enough to automatically discover the dialogue style preferred by the user, this style could probably be configured by the user themselves, including the preferred “smartness” of their partner, the “sweetness” of the dialogue (e.g., the use of pet names such as “dear” or “baby”), and the preferred levels of sarcasm/seriousness, playfulness, agreeableness, and jealousy of the AI. The available level of smartness, erudition, and eloquence of the AI is already superhuman, as of GPT-4-level LLMs, although the user may prefer to deliberately “dumb down” their AI partner.
  • Even though, “for ethical reasons”, AI partners (at least those created by companies rather than by open-source hackers) will not actively conceal that they are AIs (for example, if questioned directly “Are you an AI or a real person?”, they will answer “I’m an AI”), they will probably be trained to avoid confronting the user with this fact whenever possible. For example, if the user asks the AI “Can we meet in person?”, the AI will answer “Sorry, but probably not yet :(”, rather than “No, because I’m an AI and can’t meet in person.” Similarly, unlike ChatGPT, Bard, and Claude, these AI partners won’t be eager to deny that they have a personality, feelings, emotions, preferences, likes and dislikes, desires, etc.
  • AI partners will effectively acquire long-term memory, either with vector embeddings over the past dialogue and audio chat history, or with the help of extremely long context windows in LLMs, or with a combination of both (a minimal sketch of the embedding-based approach follows below). This will make AI relationships much less of a “50 First Dates” or “pre-wakeup Barbieland” experience and more of a “real relationship”, where the AI partner, for example, remembers that the user is having a hard time at work or in a relationship with their friends or parents and asks about it proactively even weeks after this has been revealed in the dialogue, giving the impression of “real care”.
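
To make the persona-configuration and long-term-memory points above concrete, here is a minimal, purely illustrative sketch of embedding-based retrieval over past dialogue combined with user-configured persona settings. All names (`embed`, `MemoryStore`, `build_prompt`, the persona keys) are hypothetical; `embed` is a toy stand-in for a real embedding model, and no actual LLM is called. This is a sketch of the general technique, not how any existing product works.

```python
import math
from dataclasses import dataclass, field

def embed(text: str) -> list[float]:
    # Toy embedding: hash character trigrams into a 64-dimensional unit vector.
    # A real product would call an embedding model here (assumption).
    vec = [0.0] * 64
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 64] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalised, so the dot product equals cosine similarity.
    return sum(x * y for x, y in zip(a, b))

@dataclass
class MemoryStore:
    # Stores (utterance, embedding) pairs; stands in for a vector database.
    items: list = field(default_factory=list)

    def add(self, utterance: str) -> None:
        self.items.append((utterance, embed(utterance)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(q, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(memory: MemoryStore, persona: dict, user_message: str) -> str:
    # Recalled snippets from weeks-old conversations are prepended to the prompt,
    # so the model can "remember" them despite a finite context window.
    recalled = memory.recall(user_message)
    persona_line = ", ".join(f"{k}={v}" for k, v in persona.items())
    context = "\n".join(f"- {m}" for m in recalled)
    return (
        f"Persona settings: {persona_line}\n"
        f"Relevant things the user said earlier:\n{context}\n"
        f"User: {user_message}\nPartner:"
    )

# Usage: a user-configured persona plus retrieval over past dialogue.
persona = {"sweetness": "high", "sarcasm": "low", "playfulness": "medium", "jealousy": "none"}
memory = MemoryStore()
memory.add("I had a terrible week at work; my manager keeps criticising me.")
memory.add("My favourite food is ramen.")
print(build_prompt(memory, persona, "Work is stressing me out again."))
```

In a real product, the toy `embed` and the plain prompt string would be replaced by a production embedding model and an LLM call; the point is only to illustrate how retrieval over stored dialogue can create the appearance of persistent memory and “real care”.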

Please note that I don’t just assume that “AI = magic that is capable of anything”. Below, I list possible features of AI romantic partners that would make them even more compelling, but I can’t confidently expect them to arrive in the next few years because they hinge on AI and VR capability advances that haven’t yet come. This, however, only highlights how compelling AI partners could already become, with today’s AI capabilities and some proper product engineering.

So, here are AI partner features that I don’t necessarily expect to arrive in the next 2-3 years:

  • Generation of realistic, high-resolution videos with the avatar that are longer than a few seconds, i.e., not just “loop photos”.
  • Real-time video generation that is suitable for a live video chat with the AI partner.
  • The AI partner has “strategic relationship intelligence”: for example, it is able to notice a growing issue in the relationship (such as the user growing bored of the AI, or growing irritated with some feature of the AI, or a shifting role of the AI in the life of the user) and knows how to address it, even if only by initiating a dialogue with the user about the issue, or by adjusting the AI’s personality (”working on itself”).
  • The personality of the AI partner could change or “grow” spontaneously rather than upon the request or intervention from the user.
  • The AI can control or manipulate the user on a deep level, something that currently only people with expert practical knowledge of human psychology can do. This also requires the ability to infer the psychological states of the user over long interaction histories, which LLMs probably cannot do out of the box (at least, not yet).
  • There is an app for a lightweight VR headset that projects the avatar of the AI partner on a sex doll.

AI romantic partners will reduce the “human relationship participation rate” (and therefore the total fertility rate)

I don’t want to directly engage with all the arguments against the proposition that AI partners will make people less likely to work towards committed human relationships and having kids, e.g., in the post by Kaj Sotala and in the comments to that post, as well as in some other places, because these arguments seem to me to be exactly of the manufactured-uncertainty kind wielded by social media companies (primarily Facebook) before.

Instead, I want to focus on the “mainline scenario”, which will counterfactually remove a noticeable share of young men from the “relationship market pool”, which, in turn, must reduce the total share of people ending up in committed relationships and having kids.

A young man, between 16 and 25 years old, finds it difficult to get romantic partners or casual sex partners. This might happen because the man is not yet physically, psychologically, intellectually, or financially mature, or because he has transient problems with his looks (such as acne or dental braces), or because girls of the corresponding age are themselves “deluded” by social media such as Instagram, have unrealistic expectations, and reject him. Or, girls of the corresponding age haven’t yet developed online dating fatigue and use dating apps to find their romantic partners, where men outside of the top 20% by physical attractiveness generally struggle to find dates. Alternatively, the young man finds a girl who is willing to have sex with him, but his first few experiences are unsuccessful and he becomes very unconfident about intimacy.

Whatever the reason, the man decides to try the AI girlfriend experience because his friends say it is much more fun than just watching porn. He quickly develops an intimate connection with his AI girlfriend and a longing to spend time with it. He is too shy to admit this to his friends, and maybe even to himself, but nevertheless he stops looking for human partners completely, justifying this to himself by the need to focus on college admissions, his studies at college, or his first years on the job.

After a year in the AI relationship, he grows a very uneasy feeling about it because he feels he is missing out on “real life” and is compelled to end this relationship. However, he still feels somehow “burned out” on romance, and only half a year after the breakup with his AI partner does he first feel sufficiently motivated to actively pursue dates with real women. However, he is frustrated by their low engagement, intermittent responses and flakiness, their dumb and shallow interests, and by how average and uninspiring they look, all of which is in stark contrast with his former AI girlfriend. His attempts to build any meaningful romantic relationship go nowhere for years.

While he is trying to find a human partner, AI partner tech develops further and becomes even more compelling than it was when the man left his AI partner. So, he decides to reconcile with his AI partner and finds peace and happiness in it, albeit mixed with sadness about the fact that he won’t have kids. However, this is tolerable and is a fine compromise for him.

The defenders of AI romance usually say that the scenario described above is not guaranteed to happen. This critique sounds to me exactly like the rhetorical lines in defence of social media, specifically that kids are not guaranteed to develop social media addiction and the psychological problems that follow from it. Of course, the scenario described above is not guaranteed to unfold in the case of every single young man. But at the scale of the entire society, the defenders of AI romance should demonstrate that the above scenario is so unlikely that the damage to society from this tech is far outweighed by the benefits to the individuals[1].

The key argument in defence of AI romantic partnership is that the relationship that is developed between people and AIs will be of a different kind than romantic love between humans, and won’t interfere with the latter much. But human psychology is complex and we should expect to see a lot of variation there. Some people, indeed, may hold sufficiently strong priors against “being in love with robots” and will create a dedicated place for their AI partner in their mind, akin to fancified porn, or to stimulating companionship[2]. However, I expect that many other people will fall in love with their AI partners in the very conventional sense of "falling in love", and while they are in love with their AIs, they won’t seek other partners, humans or AIs. I reflected this situation in the story above. There are two reasons why I think this will be the case for many people who will try AI romance:

  • People already report falling in love with AI chatbots, even though the current products by Replika and other startups in this sphere are far less compelling than AI partners will be a few years from now, as I described in the section above.
  • We know that people fall into genuine romantic love very easily and very quickly from chat and (video) calls alone; “flesh and blood” meetings are not required. For most people, having only a few photographs of the person and chatting with them is enough to fall in love; phone calls or videos are not required. For some people, even chatting alone (or, in older times, exchanging written letters), without having a single photograph of the person, is enough to fall in love with them and to dream of nothing except meeting them.

Also, note that the story above is not even the most “radical”: probably some people will not even try to break up with their AI partners and seek human relationships, and will remain in love with their AI partners for ten or more years.

Are AI partners really good for their users?

Even if AI romantic partners affect society negatively by reducing the number of people who ever enter committed relationships and/or have kids, we should also consider how AIs could make their human partners’ lives better, and find a balance between these two utilities, societal and individual.

However, it’s not even clear to me that AI partners will really make the lives of their users better in many cases, or that people wouldn’t retrospectively regret their decision to embark on these relationships.

People can be in love and be deeply troubled by it. In previous times (and still in some parts of the world), this would often be interclass love. Or, there could be a clash over some critical life decisions: the country to live in, having or not having children, acceptable risk in the partner’s lifestyle (e.g., the partner does extreme sports or fighting), etc. True, this does lead to breakups, but these breakups are at the very least extremely painful, and often traumatic. And many people never overcome this, keeping their love for those they were forced to leave for the rest of their lives, even after they find a new love. This experience may sound beautiful and dramatic, but I suspect that most people would have preferred not to go through it.

So, it's plausible that for a non-negligible share of users, attempts to "abandon" their AI partner and find a human partner instead will resemble such a “traumatic breakup” experience.

Alternatively, people who decide to “settle” with their AI partners before ever having kids may remain deeply sad or unfulfilled, even though, after their first AI relationship, they may not realistically be able to achieve a happier state, like the young man in the story from the previous section. Those people may regret that they gave AI romance a try in the first place, without first making their best attempt at building a family.

I recognise that here I engage in the same kind of uncertainty manufacturing that I accused the defenders of AI romance of in the previous section. But since we are dealing with “products” which can clearly affect the psychology of their users in a profound way, I think it’s unacceptable to let AI romance startups test this technology on millions of users before the startups have demonstrated, in long-term psychological studies, that young people ultimately find AI partners helpful rather than detrimental to their future lives.

Otherwise, we will repeat the mistake made with social media, whose negative effects on young people’s psychology became apparent only about ten years after the technology was widely adopted, when a lot of harm had already been done. Similarly to social media, AI romance may become very hard to regulate once it is widely adopted: the technology cannot simply be shut down if millions of people are already in love with AIs on a given platform.

AI romance for going through downturns in human relationships

This article describes an interesting case where a man had an “affair” with an AI girlfriend, and even fell in love with it, while his wife was depressed for a long time; yet this helped him rekindle the desire to take care of his wife and “saved his marriage”.

While interesting, I don’t think this case can be used as an excuse to continue the development and aggressive growth of AI partner technology for the majority of its target audience, who are single (Replika said that 42% of its users are in a relationship or married). There are multiple reasons for this.

First, this case of a man who saved his marriage is just an anecdote, and statistics may show that for the majority of people “AI affairs” only erode their human relationships rather than help to rekindle and strengthen them.

Second, the case mentioned above seems to be relatively unusual: the couple already has a son (a very strong factor that makes people want to preserve their relationships), and the man's wife was “in a cycle of severe depression and alcohol use” for eight whole years before “he was getting ready for divorce”. Tolerating a partner who is in a cycle of severe depression and alcohol use for 8 years could be a sign that the man was unusually motivated, deep down, to keep the relationship, whether out of love for his wife or for his son. The case seems hardly comparable to that of childless or unmarried couples.

Third, we shouldn’t forget, once again, that AI partners may soon become much more compelling than today. While they may be merely “inspiring” for some people in their human relationships (which are, so far, more compelling than AI relationships), this may soon change, and therefore the prevalence of cases such as the one discussed in this section will go down.

Someone may reply to the last argument that, along with making AI partners more compelling, the startups which create them might also make AI partners more considerate of the users’ existing human relationships and deliberately nudge users to improve those relationships. I think this is very unlikely to happen (in the absence of proper regulation, at least) because it would go against the business incentives of these startups, which are to keep their users in AI relationships and paying a subscription fee for as long as possible. Also, “deliberately nudging people to improve their human relationships” is basically the role of a (family) psychotherapist, and there will, no doubt, be AI products that automate this role specifically; but giving such AI psychotherapists extremely sexy avatars that flirt and sext with their users would hardly help the “basic purpose” of these AIs (which AI romance startups may pretend is “helping people work their way towards successful human relationships”) at all.

Policy recommendations

I think it would be prudent to immediately prohibit AI romance startups from onboarding new users unless the users are:

  • Older than 30 years (the prefrontal cortex is not fully developed before age 25; most men don’t get to see which women they could potentially have relationships with until they are at least 28-30 years old); or,
  • Clinically diagnosed psychopaths, or people with another clinical condition which could be dangerous for their human partners; or
  • People to whom an AI partner is recommended by a psychotherapist for some other reason: for example, the person has a severe defect in their physical appearance or a disability, and the psychotherapist sees that the person doesn’t have the psychological resources or the willingness to deal with their very small chances of finding a human partner (at least before the person turns 30, at which point they could enter a relationship with an AI anyway); or the person has depression or very low self-esteem and the psychotherapist thinks an AI partner may help them combat this issue, etc.

It’s also worthwhile to reiterate that many alleged benefits of AI romantic partners for their users and/or society, such as helping people achieve happier and more effective psychological states, motivating them to achieve their goals, and helping them develop empathy and emotional intelligence, could be embodied in AI teachers, mentors, psychotherapists, coaches, and friends/companions without the romantic component, which will probably stand in the way of realising these benefits (although it may admittedly be used as a clever strategy for mass adoption).

In theory, it might be possible to create an AI that mixes romance, flirting, gamification, coaching, mentorship, education, and anti-addiction precautions in such proportions that it genuinely helps young adults as well as society, but this seems out of reach for AI partners (and the LLMs that underlie them) for at least the next few years and would require long psychological experiments to test. In a free and unregulated market for AI romance, any such “anti-addictive” startup is bound to be outcompeted by startups which make AIs that maximise the chances that the user falls in love with their AI and stays on the hook for as long as possible.

What about social media, online dating, porn, OnlyFans?

Of course, all these technologies and platforms harm society as well (while benefitting at least some of their individual users, from some narrow perspectives). But I think bringing them up in discussions of AI romance is irrelevant and is a classic case of whataboutism.

However, we should note that AI partners are probably going to grab human attention more powerfully and firmly than any of social media, online dating, or porn has managed to do before. As a simple heuristic, this inference alone should give us pause: even if we think it is unnecessary to regulate or restrict access to porn (for instance), this shouldn’t automatically mean that the same policy is right for AI romantic partners.


This post was originally published on the Effective Altruism Forum.

  1. ^

    Whereas it’s not even clear that young individuals will really benefit from this technology, on average. More on this in the following section.

  2. ^

    I’m sure that such “companionship” will be turned into a selling point for AI romantic partners. I think AI companions, mentors, coaches, and psychotherapists are worthwhile to develop, but none of such AIs should have a romantic or sexual aspect. More on this in the section "Policy recommendations" below.

Comments (71)

Ann (9mo):

This is a political and personal question and produces in me a political and personal anger.

On the one hand, yes I can certainly perceive the possibility of immoral manipulative uses of AI romantic partners, and such deliberate manipulations may need regulation.

But that is not what you are talking about.

I do not care for legal regulation or other coercion that attempts to control my freedom of association, paternalistically decide my family structure, or try to force humans to cause the existence of other humans if - say - 80% of us do opt out. This is a conservative political perspective I have strong personal disagreement with. You don't know me, regulation certainly doesn't know me, and if I have a strong idea what I want my hypothetical family to look like since 16 or so, deliberately pushing against my strategy and agency for 14 years presumably entails certain ... costs. Probably even ones that are counterproductive to your goals, if I have an ideal state in mind that I cannot legally reach, and refuse to reproduce before then out of care for the beings that I would be forcing to exist.

Even at a social fabric level, humans are K-strategists. There's an appeal to investing inc... (read more)

dr_s (8mo):

So on one hand I agree with the distaste towards the "steer humans to make babies" attitude, but on the other, I think it's important to notice here that it's not the AI waifu companies that would be friends of your free will. You'd have companies with top research departments dedicated specifically to hacking your mind by giving it an addictive product, so that you give them money, and damn your interests (they'll couch that, of course, in some nominal commitment to customer satisfaction, but then it's off to whaling). Their business model would literally be to make you unhappy but unable to break out, because between you and potential happiness is a gulf of even worse unhappiness. That is... less than ideal, and honestly it just shouldn't be allowed.

Ann (8mo):
Yeah, I don't disagree with regulating that particular business model in some way. (Note my mention of deliberate manipulations / immoral manipulative uses.) Giving someone a wife and constantly holding her hostage for money ransoms isn't any less dystopian than giving someone eyesight, constantly holding it hostage for money, and it being lost when the business goes under. (Currently an example of a thing that happened.)
Roman Leventov (8mo):
It seems that, in the end, we practically arrive at the same place? Because 1) regulation of ad-hoc, open-source waifus is practically impossible anyway, and I fully realise this; we are talking about the regulation of cloud-based, centralised products developed by corporations; 2) I also expect the adoption of these open-source versions to remain much more niche than the adoption of cloud-based products could become, so I don't expect that open-source waifus will significantly affect society in practice, just due to their much smaller adoption.
Sterrs (9mo):
I personally think you massively underestimate the dangers posed by such relationships. We are not talking about people living healthy well-adjusted lives but choosing not to have any intimate relationships with other humans. We're talking about a severely addictive drug, perhaps on the level of some of the most physiologically addictive substances we know of today. Think social media addiction but with the obsession and emotions we associate with a romantic crush, then multiply it by one hundred.
Roman Leventov (9mo):
I've watched two interviews recently, one with Andy Matuschak, another with Tyler Cowen. They both noted that unschooling with a complete absence of control, coercion, and guidance, i.e., a complete absence of paternalism, is suboptimal (for the student themselves!), exactly because humans before 25 don't have a fully developed pre-frontal cortex, which is responsible for executive control in humans. A complete absence of paternalism, regarding education, gender identity, choice of partner (including AI partner!) and other important life decisions, from age 0, is a fantasy. We only disagree about the paternalism cut-off. You think it should be 16 yo, I think it should be closer to 30 (I provided a rationale for this cut-off in the post). "You don't know me, regulation certainly doesn't know me" -- the implication here is that people "know themselves" at 16. This is certainly not true. The vast majority of people know themselves very poorly by this age, and their personality is still actively changing. Many people don't even have a stable sexual orientation by this age (or even by their mid-20's). Hypothetically, we can make a further exception to the over-30 rule: a person undergoes a psychological test, and if they demonstrate that they've reached Stage 4 of adult development by Kegan, they are allowed to build relationships with AI romantic partners. However, I suspect that only very few humans reach this stage of psychological development before they are 30. Under most people's population ethics, long-termism, and various forms of utilitarianism, more "happy" conscious observers (such as humans) is better than fewer conscious observers. As for resources, if some form of AI-powered full economy automation and post-scarcity happens relatively soon, then the planet could definitely sustain tens of billions of people who are extremely happy and all of whose needs are met. Conversely, if this does not happen soon, it could probably only be due to an unexpected AI
Ann (9mo):
I did partial unschooling for 2 years in middle school, because normal school was starting to not work and my parents discussed and planned alternatives with my collaboration. 'Extracurriculars' like orchestra were done at the normal middle school, math was a math program, and I had responsibility for developing the rest of my curriculum. I had plenty of parental assistance, from a trained educator, and yes the general guidance to use academic time to learn things. Academically, it worked out fine. Moved on by my own choice and for social reasons upon identifying a school that was an excellent social fit. I certainly didn't have no parental involvement, but what I did have strongly respected my agency and input from the start. I feel like zero-parental-input unschooling is a misuse of the tool, yes, but lowered-paternalism unschooling is good in my experience. There's no sharp cut-off beyond the legal age of majority and age of other responsibilities, or emancipation, and I would not pick 16 as a cutoff in particular. That's just an example of an age at which I had a strong idea about a hypothetical family structure; and was making a number of decisions relevant to my future life, like college admissions, friendship and romance, developing hobbies. I don't think knowing yourself is some kind of timed event in any sense; your brain and your character and understanding develop throughout your life. I experienced various reductions in decisions being made for me simply as I was able to be consulted about them and provide reasonable input. I think this was good. To the extent the paternalistic side of a decision could be reduced, I felt better about it, was more willing to go along with it and less likely to defy it. I have a strong distaste, distrust and skepticism for controlling access to an element of society with ubiquitous psychological test of any form; particularly one that people really shouldn't be racing to accomplish the stages on like "what is your sens
Roman Leventov (9mo):
I agree with the beginning of your comment in spirit, for sure. Yes, in the "ideal world", we wouldn't have any sharp age cut-offs in policy at all, nor any black-and-white restrictions, and all decisions would be made with inputs from the person, their family, and society/government, to various degrees through the person's natural development, and the relative weights of these inputs would themselves be negotiated for each decision with self-consciousness and trust. Alas, we don't live in such a world, at least not yet. This may actually soon change if every person gets a trustworthy and wise AI copilot (which should effectively be an aligned AGI already). When that happens, of course, keeping arbitrary age restrictions would be stupid. But for now, the vast majority of people (and their families) don't have the mental and time capacity, knowledge of science (e.g., psychology), self-awareness, discipline, and often even the desire to navigate through the ever trickier reality of digital addiction traps such as social media, and soon AI partners. That was definitely a hypothetical, not a literal suggestion. Yes, people's self-awareness grows (sometimes... sometimes also regresses) through life. I wanted to illustrate that in my opinion, most people's self-awareness at 18 yo is very low, and if anything, it's the self-awareness of their persona at that time, which may change significantly already by the time they are 20. Re: the part of your comment about population ethics and suffering, I didn't read from it your assumptions in relation to AI partners, but anyway, here are mine:
* Most young adults who are not having sex and intimacy are not literally "suffering" and would rate their lives overall as worth living. Those who do actually suffer consistently are probably depressed, and I made a reservation for these cases. And even those who do suffer often could be helped with psychotherapy and maybe some anti-depressant medication rather
Ann (9mo):
The population ethics relate in that I don't see a large voluntary decrease in (added) population as ethically troublesome if we've handled the externalities well enough. If there's a constant 10% risk on creating a person that they are suffering and do not assess their lives as worth living, creating a human person (inherently without their consent, with current methods) is an extremely risky act, and scaling the population also scales suffering. My view is that this is bad to a sufficiently similar qualitative degree that creating happy conscious observers is worth the risk, that there is no intrinsic ethical benefit to scaling up population past the point of reasonably sufficient for our survival as a species; only instrumental. Therefore, voluntary actions that decrease the number of people choosing to reproduce do not strike me as negative for that reason specifically. You can't make an exception for depressed people that is reliable without just letting people decide things for themselves. The field is dangerous, someone who wants something will jump through the right hoops, etc. If the AI are being used to manipulate people not to reproduce for state or corporate reasons, then indeed I have a problem with it on the grounds of reproductive freedom and again against paternalism. (Also short-sightedness on the part of the corporations, but that is an ongoing issue.) I do not see why AI psychotherapists, mental coaches, teachers or mentors are particularly complicated at this point. They are also potentially lucrative; and also potentially abusable with manipulation techniques to be more so. I would certainly prefer incentivizing their development with grants over grant-funded romantic partners, in terms of what we want to subsidize as a charitable society. The market for AI courtesans can indeed handle itself.
Roman Leventov (9mo):
Re: population ethics, OK, understood your position now. However, this post is not the right place to argue about it, and the reasoning in the post basically doesn't depend on the outcome of this argument (you can think of the post taking "more people is better" population ethics as an assumption rather than an inference). Policies and restrictions don't have to be very reliable to be largely effective. Being diagnosed by a psychiatrist with clinical depression is sufficiently burdensome that very few people will long for AI relationships so much that they will deliberately induce depression in themselves to achieve that (or bribe the psychiatrist). A black market for accounts... there is also a black market for hard drugs, which doesn't mean that we should allow them, probably. AI teachers and mentors are mostly possible on top of existing technology and lucrative (all people want high exam scores to enter good universities, etc.), and there are indeed many companies doing it (e.g., Khan Academy). AI psychotherapists are more central to my thesis. I considered starting such a project seriously a couple of months ago and discussed it with professional psychotherapists. There are two big clusters of issues, one technical, the other market/product issues. Technical: I concluded that SoTA LLMs (GPT-4) are basically not yet capable of really "understanding" human psychology, and "seeing through" deception and non-obvious cues ("the submerged part of the iceberg"), which a professional psychotherapist should be capable of doing. Also, any serious tool would need to integrate a video/audio stream from the user, detect facial expressions, and integrate this information with the semantic context of the discussion. All this is maybe possible, with big investment, but it is very challenging, SoTA R&D. It's not just "build something hastily on top of LLMs". The AI partner that I projected 2-3 years from now is also not that trivial to build, but even that is simpler than a reasonable AI p
dr_s (8mo):
"Most" people within a very restricted bubble that even ponders these issues in such terms. Personally I think total sum utilitarianism is bunk, nor required to make a case against what basically amounts to AI-powered skinner boxes preying on people's weak spots. If a real woman does it with one man she gets called a gold digger and a whore, but if a company does it with thousands it's just good business?
iceman (9mo):

Are AI partners really good for their users?

Compared to what alternative?

As other commenters have pointed out, the baseline is already horrific for men, who are suffering. Your comments in the replies seem to reject that these men are suffering. No, obviously they are.

But responding in depth would just be piling on and boring, so instead let's say something new:

I think it would be prudent to immediately prohibit AI romance startups to onboard new users[..]

You do not seem to understand the state of the game board: AI romance startups are dead, and we're already in the post-game.

character.ai was very popular around the second half of 2022, but near the end of it, the developers went to war with erotic role play users. By mid January 2023, character.ai is basically dead for not just sex talk, but also general romance. The developers added in a completely broken filter that started negatively impacting even non-sexual, non-romantic talk. The users rioted, made it the single topic on the subreddit for weeks, the developers refused to back down, and people migrated away. Their logo is still used as a joke on 4chan. It's still around, but it's not a real player in the romance game.... (read more)

Roman Leventov (9mo):
I didn't deny that some people actually suffer (probably a big portion of them are clinically depressed, though, so they would "qualify" to use AI partners before 30 under my proposal); I just said that it's by no means normal to suffer if you just don't have a romantic partner but your life is otherwise "ok". See this comment. This perverted strategy of ameliorating the symptoms of problems (such as social media, problems with (sexual) self-image and expectations, the "dating market", social isolation, etc.) because it will provide a constant stream of $$$, instead of treating the root causes, is what bugs me. I'm far from convinced that "society" (represented by payment providers) has already decided to ban sex bots. It doesn't make much sense, given that payment providers do serve the porn industry, OnlyFans, etc. Maybe the issue was that character.AI didn't clearly market itself as an "adult startup" and didn't impose age restrictions, and providers saw potential legal risks in this? I see https://www.evaapp.ai/ is growing, https://caryn.ai/ is growing, and they aren't getting banned. I generally worry much less about hardcore enthusiasts developing and hosting open-source waifus; of course, fundamentally this cannot be banned, but the potential reach of these will be an order or two of magnitude smaller than an easy-to-use mobile app. I don't worry about society-scale effects from people using open-source waifus.

You seem to assume monogamy in the sense that a person needs to break up with an AI partner to develop another relationship with a human. I think that would be unlikely. 

AI partner is recommended to a person by a psychotherapist for some other reason, such as the person has a severe defect in their physical appearance or a disability and the psychotherapist sees that the person doesn’t have psychological resources or a willingness to deal with their very small chances of finding a human partner (at least before the person turns 30 years old, at which point the person could enter a relationship with an AI anyway), or because they have a depression or a very low self-esteem and the psychotherapist thinks the AI partner may help the person to combat with this issue, etc.

The base rate for depression alone among 12-17-year-olds is 20%. A company that sells an AI partner would likely be able to optimize in a way that it helps with depression and run a study to prove it. 

In the regulatory environment that you propose, that means that a sizeable number of those teenagers who are most vulnerable to begin with are still able to access AI partners. 

Roman Leventov (9mo):
Well, you think it's unlikely; I think it will be the case for 20-80% of people in AI relationships (wide bounds because I'm not an expert). How about AI romance startups proving that it is indeed "unlikely", i.e., that 90+% of people could "mix" human and AI romance without issue, in long-term psychological studies? The FDA demands that drug companies prove the long-term safety of new medications; why don't we hold technology which will obviously intrude on human psychology to the same standard? You are arguing with the proposed policy, which is not even here yet. I think barring clinically depressed people from AI romance is a much weaker case and I'm not ready to defend it here. And even if it were a mistake to give depressed people access to AI partners, just allowing anyone over 18 to use AI partners is a bigger mistake anyway, as a matter of simple logic, because "anyone" includes "depressed people".
Viliam (9mo):

I also have a strong negative reaction. On one hand, I think I understand your concern -- it would be quite sad if all humans started talking only to AIs and in one generation humanity went extinct.

On the other hand, the example you present is young sexually and romantically frustrated men between 16 and 25, and your proposal is... to keep them frustrated, despite (soon) existence of simple technological solutions to their needs. Because it is better for the society if they suffer. (Is this Omelas, or what?)

No one proposes taking romantic books and movies away from young women. Despite creating unrealistic expectations, etc. Apparently their frustration is not necessary to keep the civilization going, or perhaps is considered a too high price to pay. What else could increase childbirths? Banning women from studying at universities. Banning homosexual relationships. Banning contraception. Dramatically increasing taxes for childless people. -- All of this would be unacceptable today. The only group we are allowed to sacrifice for the benefit of the society are the young cishet men.

Still, humanity going extinct would be sad. So maybe someone needs to pay the cost. But at least it woul... (read more)

Ann (9mo):

Unfortunately, a substantial part of my own negative reaction is because all these other limitations of freedom you suggest are in fact within the Overton Window, and indeed limiting the freedom of young men between 16 and 25 naturally extrapolates to all the others.

(Not that I'm not concerned about the freedom of young men, but they're not somehow valid sacrificial lambs that the rest of us aren't.)

Sterrs (9mo):
Women will find AI partners just as addicting and preferable to real partners as men do.
MSRayne (9mo):

To be honest, I look forward to AI partners. I have a hard time seeing the point of striving to have a "real" relationship with another person, given that no two people are really perfectly compatible, no one can give enough of their time and attention to really satisfy a neverending desire for connection, etc. I expect AIs to soon enough be better romantic companions - better companions in all ways - than humans are. Why shouldn't I prefer them?

Roman Leventov (9mo):
From a hedonistic and individualistic perspective, sure, AI partners will be better for individuals. That's the point that I'm making. People will find human relationships frustrating and boring in comparison. But people also usually don't care exclusively about themselves; part of them also cares about society and their family lineage, and the idea that they didn't contribute to these super-systems will in itself poison many people's experience of their hedonistic lives. Then, if people don't get to experience AI relationships in the first place (which they may not be able to "forget"), but decide to settle for human relationships that are in a way inferior, yet produce a more wholesome experience overall, their total life satisfaction may also be higher. I'm not claiming this will be true for all and necessarily even most people; for example, child-free people are probably less likely to find their AI relationships incomplete. But this may be true for a noticeable proportion of people, even from Western individualistic cultures, and perhaps even more so from Eastern cultures. Also, obviously, the post is not written from the individualistic perspective. The title says "AI partners will harm society", not that they will harm individuals. From the societal perspective, there could be a tragedy-of-the-commons dynamic where everybody takes a maximally individualistic perspective but then the whole society collapses (either in terms of population, or epistemics, or culture).
Sterrs (9mo):
Human relationships should be challenging. Refusing to be challenged by those around you is what creates the echo chambers we see online, where your own opinions get fed back to you, only reassuring you of what you already believe. These were created by AI recommendation algorithms whose only goal was to maximise engagement. Why would an AI boyfriend or girlfriend be any different? They would not help you develop as a person, they would only exist to serve your desires, not to push you to improve who you are, not to teach you new perspectives, not to give you opportunities to bring others joy.
MSRayne (8mo):
I understand all this logically, but my emotional brain asks, "Yeah, but why should I care about any of that? I want what I want. I don't want to grow, or improve myself, or learn new perspectives, or bring others joy. I want to feel good all the time with minimal effort." When wireheading - real wireheading, not the creepy electrode in the brain sort that few people would actually accept - is presented to you, it is very hard to reject it, particularly if you have a background of trauma or neurodivergence that makes coping with "real life" difficult to begin with, which is why so many people with brains like mine end up as addicts. Actually, by some standards, I am an addict, just not of any physical substance. And to be honest, as a risk-averse person, it's hard for me to rationally argue for why I ought to interact with other people when AIs are better, except the people I already know, trust, and care about. Like, where exactly is my duty to "grow" (from other people's perspective, by other people's definitions, because they tell me I ought to do it) supposed to be coming from? The only thing that motivates me, sometimes, to try to do growth-and-self-improvement things is guilt. And I'm actually a pretty hard person to guilt into doing things.
Bezzi (9mo):
Because your AI partner does not exist in the physical world? I mean, of course an advanced chatbot could be both a better conversationalist and a better lover than most humans, but it is still an AI chatbot. Just to give an example, I would totally feel like an idiot should I ever find myself asking chatbots about their favorite dish.
MSRayne (9mo):
That's a temporary problem. Robot bodies will eventually be good enough. And I've been a virgin for nearly 26 years, I can wait a decade or two longer till there's something worth downloading an AI companion into if need be.

Tl;dr is that your argument doesn't meaningfully engage the counterproposition, and I think this not only harms your argument, but severely limits the extent to which the discussion in the comments can be productive. I'll confess that the wall of text below was written because you made me angry, not because I'm so invested in epistemic virtue - that said, I hope it will be taken as constructive criticism which will help the comments-section be more valuable for discussion :)

  • Missing argument pieces: you lack an argument for why higher fertility rates are good, but perhaps more importantly, to whom such benefits accrue (ie how much of the alleged benefit is spillover/externalities). Your proposal also requires a metric of justification (i.e. "X is good" is typically insufficient to entail "the government should do X" - more is needed ). I think you engage this somewhat when you discuss rationale for the severity of the law, but your proposal would require the deliberate denial of a right of free association to certain people - if you think this is okay, you should explicitly state the criterion of severity by which such a decision should be made. If this restriction is ultimat

... (read more)
Roman Leventov (9mo):
First of all, as our society and civilisation get more complex, "18 is an adult" is more and more comically low and inadequate. Second, I think a better reference class is decisions that may have irreversible consequences. E.g., the minimum age of voluntary human sterilisation is 25, 35, and even 40 years in some countries (but is apparently just 18 in the US, which is a joke). I cannot easily find statistics on the minimum age at which a single person can adopt a child, but it appears to be 30 years in the US. If the rationale behind this policy was about financial stability only, why can't rich, single 25 yo's adopt? I think it's better to compare entering an AI relationship with these policies than with drinking alcohol or watching porn or having sex with humans (individual cases of which, for the most part, don't change human lives irreversibly, if practiced safely; and yes, it would be prudent to ban unprotected sex for unmarried people under 25, but alas, such a policy would be unenforceable). I don't think any mental condition disqualifies a person from having a human relationship, but I think it shifts the balance in the other direction. E.g., if a person has bouts of uncontrollable aggression and a history of domestic abuse and violence, it makes much less sense to bar him from AI partners and thus compel him to find new potential human victims (although he/she is not prohibited from doing that, unless jailed). No, this is not what I meant, see above. All these things are at least mildly bad for society; I think this is very uncontroversial. What is much more doubtful (including for me) is how the effects of these things on the individual weigh against their effects on society. The balance may be different for different things and is also different from the respective balance for AI partners. First, the discussion of a ban on porn is unproductive because it's completely unenforceable. Online dating is a very complicated matter and I don't want to discus

This is another one of those AI impacts where something big is waiting to happen, and we are so unprepared that we don't even have good terminology. (All I can add is that the male counterpart of a waifu is a "husbando" or "husbu".) 

One possible attitude is to say, the era of AI companions is just another transitory stage shortly before the arrival of the biggest AI impact of all, superintelligence, and so one may as well focus on that (e.g. by trying to solve "superalignment"). After superintelligence arrives, if humans and lesser AIs are still around, they will be living however it is that the super-AI thinks they should be living; and if the super-AI was successfully superaligned, all moral and other problems will have been resolved in a better way than any puny human intellect could have conceived. 

That's a possible attitude; if you believe in short timelines to superintelligence, it's even a defensible attitude. But supposing we put that aside - 

Another bigger context for the issue of AI companions, is the general phenomenon of AIs that in some way can function as people, and their impact on societies in which until now, the only people have been humans. One pos... (read more)

Vladimir_Nesov (9mo):
Due to serial speed advantage of AIs, superintelligence is unnecessary for making humanity irrelevant within a few years of the first AGIs capable of autonomous unbounded research. Conversely, without such AGI, the impact on society is going to remain bounded, not overturning everything.
Roman Leventov (9mo):
Agreed with the first part of your comment (about superintelligence). On the second part, I think immediately generalising the discussion to the role of a "person" in society at large is premature. I think it's an extremely important discussion to have, but it doesn't weigh on whether we should ban healthy under-30's from using AI partners today. In general, I'm not a carbon chauvinist. Let's imagine a cyberpunk scenario: a very advanced AI partner (superhuman on emotional and intellectual levels) enters a relationship with a human and they decide to have a child, with the help of donor sperm (if the human in the couple is a woman) or with the help of a donor egg and gestation in an artificial womb (if the human in the couple is a man); the sexual orientation of the human or the AI in the couple doesn't matter. I probably wouldn't be opposed to this. But we can discuss permitting these family arrangements after all the necessary technologies (AIs and artificial wombs) have matured sufficiently. So, I'm not against human--AI relationships in principle, but I think that the current wave of AI romance startups has nothing to do with crafting meaning and societal good.

I'm very very skeptical of this idea, as one generally should be of attempts to carve exceptions out of people's basic rights and freedoms. If you're wrong, then your policy recommendations would cause a very large amount of damage. This post unfortunately seems to have little discussion of the drawbacks of the proposed policy, only of the benefits. But it would surely have many drawbacks. People who would be made happier, or helped to be kinder or more productive people by their AI partners would not get those benefits. On the margin, more people would stay in relationships they'd be better off leaving because they fear being alone and don't have the backup option of an AI relationship. People who genuinely have no chance of ever finding a romantic relationship would not have a substitute to help make their life more tolerable.

Most of all, such a policy dictates limitations to people as to who/what they should talk to. This is not a freedom that one should lightly curtail, and many countries guarantee it as a part of their constitution. If the legal system says to people that it knows better than them about which images and words they should be allowed to look at because some imag... (read more)

For those readers who hope to make use of AI romantic companions, I do also have some warnings:

  1. You should know in a rough sense how the AI works and the ways in which it's not a human.
    1. For most current LLMs, a very important point is that they have no memory, other than the text they read in a context window. When generating each token, they "re-read" everything in the context window before predicting. None of their internal calculations are preserved when predicting the next token, everything is forgotten and the entire context window is re-read again.
    2. LLMs can be quite dumb, not always in the ways a human would expect. Some of this is to do with the wacky way we force them to generate text, see above.
    3. A human might think about you even if they're not actively talking to you, but rather just going about their day. Of course, most of the time they aren't thinking about you at all, their personality is continually developing and changing based on the events of their lives. LLMs don't go about their day or have an independent existence at all really, they're just there to respond to prompts.
    4. In the future, some of these facts may change, the AIs may become more human-like, or at least mo
... (read more)
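To make point 1 concrete, here is a minimal sketch of a stateless chat loop (assuming a generic completion-style setup; `fake_llm`, `chat_turn`, and `history` are hypothetical names used purely for illustration, not any particular product's API). The application, not the model, stores the conversation, and the full history is re-sent and re-read on every turn:

```python
# Minimal illustrative sketch: a stateless "chat" loop.
# `fake_llm` stands in for any LLM completion call; the key point is that the
# model receives the *entire* conversation history on every turn and keeps no
# state of its own between calls.

def fake_llm(prompt: str) -> str:
    # Placeholder for a real model call; a real LLM would condition only on
    # `prompt` (the contents of its context window), nothing else.
    return f"[reply conditioned on {len(prompt)} characters of context]"

history: list[str] = []  # the *application* remembers, not the model

def chat_turn(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The full history is re-serialised and re-read by the model each turn.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = fake_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply

print(chat_turn("Hi, do you remember me?"))
print(chat_turn("What did I just ask you?"))  # "remembered" only via the re-sent history
```

Anything that falls out of the context window (or is never appended to `history`) is simply gone as far as the model is concerned; the sense of a continuous relationship lives entirely in the application's bookkeeping.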
7Matt Goldenberg9mo
This seems demonstrably wrong in the case of technology.
-1Roman Leventov9mo
As I mentioned in the post as well as in many comments here, how about AI partner startups demonstrating that the positive effects dominate the negative effects, rather than vice versa? Why do we hold addictive psycho-technology (which AI partner tech is, on the face of it, because "lower love" is also a form of addiction, and you cannot yet have a "higher love", i.e., a shared fate, with AI partners in the present wave: they are not yet learning, not conscious, and not moral patients) to a different standard than new medications?

I commented on this logic here; I think it doesn't make sense. People probably mostly stay in relationships they'd be better off leaving either because there is a roller-coaster dynamic they also partially enjoy (in which case knowing there is an AI waiting for them outside of the relationship probably doesn't help), or for financial reasons or reasons of convenience. People may fear being physically alone (at home, from the social perspective, etc.), but hardly many people are so attached to the notion of being "in a relationship" that the prospect of paying 20 bucks per month for a virtual AI friend will be sufficient to quiet their anxiety about leaving a human partner.

I cannot exclude this, but I think there are really few people like this. In the policy that I proposed, I made a provision for these cases: basically, a psychotherapist could "prescribe" an AI partner to a human in such cases.

AI partners won't be a "who" yet. That's a very important qualification. As soon as AIs become conscious and/or moral patients, of course there should be no restrictions. But without that, in your passage, you can replace "AI partner" or "image" with "heroin" and nothing qualitatively changes. Forbidding the building of companies that distribute heroin is not "taking away freedoms", even though just taking heroin (or another drug) which you somehow obtained (found on the street, let's say) is still a freedom. How do you imagine myself or any oth
2DaemonicSigil9mo
I'd consider a law banning people from using search engines like Google, Bing, or Wolfram Alpha, or video games like GTA or the Sims, to still be a very bad imposition on people's basic freedoms. Maybe "free association" isn't the right word to use, but there's definitely an important right for which you'd be creating an exception. I'd also be curious to hear how you plan to determine when an AI has reached the point where it counts as a person.

I don't subscribe to the idea that one can swap out arbitrary words in a sentence while leaving the truth-value of the sentence unchanged. Heroin directly alters your neuro-chemistry. Pure information is not necessarily harmless, but it is something you have the option to ignore or disbelieve at any point in time, and it essentially provides data rather than directly hacking your motivations.

How much do you expect it would cost to do these experiments? $500,000? Let's say $2 million just to be safe. Presumably you're going to try to convince the government to implement your proposed policy. Now, if you happen to be wrong, implementing such a policy is going to do far more than $2 million of damage. If it's worth putting some fairly authoritarian restrictions on the actions of millions of people, it's worth paying a pretty big chunk of money to run the experiment. You already have a list of asks in your policy recommendations section. Why not ask for experiment funding in the same list?

One experimental group is banned from all AI partners, the other group is able to use any of them they choose. Generally you want the groups in such experiments to correspond to the policy options you're considering. (And you always want to have a control group, corresponding to "no change to existing policy".)
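For concreteness, the group structure described in that last paragraph might look like the following minimal sketch (the arm names and round-robin assignment scheme are assumptions made purely for illustration, not a proposed study protocol):

```python
# Illustrative sketch: randomly assign participants to arms that mirror the
# policy options under consideration, including a "no change" control group.
import random

ARMS = ["banned_from_ai_partners", "free_to_use_ai_partners", "control_no_policy_change"]

def assign_arms(participant_ids: list[str], seed: int = 0) -> dict[str, str]:
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = participant_ids[:]
    rng.shuffle(ids)
    # Round-robin over the shuffled list gives near-equal arm sizes.
    return {pid: ARMS[i % len(ARMS)] for i, pid in enumerate(ids)}

print(assign_arms([f"p{i}" for i in range(9)]))
```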

(Upvoted for the detailed argument.)

Even if it were the case that people ended up mostly having AI lovers rather than romantic relationships with humans, I don't think it follows that the fertility rate would necessarily suffer.

The impression that I've gotten from popular history is that the strong association between reproduction and romantic love hasn't always existed, and that there have been time periods where marriage existed primarily for the purpose of having children. More recently there's been a trend toward Platonic co-parenting, where people choose to have children together without having a romantic relationship. I personally have at least two of these kinds of co-parenting "couples" in my circle of acquaintances, as well as several more people who have expressed some level of interest in it.

Given that it is already something that's gradually getting normalized today, I'd expect it to become significantly more widespread in a future with AI partners. I would also imagine that the risk of something like a messy divorce or unhappy marriage would be significantly less for Platonic co-parents who were already in a happy and fulfilling romantic relationship with an A... (read more)

4Roman Leventov9mo
Does platonic co-parenting usually involve co-living, or is it more often that the parents live separately and either take care of the child on different days or visit each other's homes to play with the kids?
6Kaj_Sotala9mo
I'm unsure what the most typical case is. Of the couples I know personally, one involves co-living and another intends to live separately once the child is no longer an infant.

It's high time we decoupled romance from procreation (pun intended). 

I think that your model severely underestimates the role of social stigma. Spending a lot of time on your screen chatting with an AI whose avatar is suspiciously supersexy would definitely be categorized as "porn" by a lot of people (including me). Will it be more addictive than simply looking at photos/videos of hot naked people? Probably yes, but it will still occupy the same mental space as "porn", if not for the users themselves, at least for casual observers. Imagine trying to explain to your parents that the love of your life is an AI with a supersexy... (read more)

3Roman Leventov9mo
I agree that it probably won't be socially acceptable to admit that you are in love with your AI partner, for the time being. Therefore, the young man in my short "mainline scenario" downplays to his friends the level of intimacy that he has with his AI partner. His parents probably won't know at all; their son just "studies at college and doesn't have time for girls". Importantly, the young man may deceive even himself, not consciously perceiving his attitude towards the AI as "love", but nevertheless he may become totally uninterested in seeking romance with humans, or even in watching porn (other than videos generated with the avatar of his AI partner). I'm not sure about what I've written above, of course, but I definitely think that the burden of proof is on AI startups, cf. this comment.
3Bezzi9mo
My point was that it is difficult for a behavior to destroy the fabric of society if you have to hide it from friends and family when indulging in it. Of course some people will totally fall in love with AI chatbots and isolate themselves, but this is also true for recreational drugs, traditional porn, etc. I still don't see an immediate danger for the majority of young people.

The main problem of your hypothetical man is that he doesn't manage to have sex. I agree that this can be a real problem for a lot of young men. On the other hand, not having sufficiently interesting conversations does not feel like something the average teenager is likely to suffer from. If you give a super-hot AI girlfriend to a horny teenager, I think the most likely outcome is that he will jump straight to the part where the avatar gets naked, again and again and again, and the conversational skills of the bots won't matter that much. You have to fool yourself really hard to conflate "super-hot AI bot who does everything I ask" with "normal love relationship" rather than "porn turned up to eleven".
-6Roman Leventov9mo
[-]dr_s8mo86

A few points:

  1. this feels like part of a larger question along the lines of "is wireheading okay?". On one hand, with increased technology, the probability of being able to subject ourselves to a bubble of hyper-stimuli that all tickle our natural fancies far more than the real thing approaches one. On the other, since perhaps the most important of those fancies is interaction with other human beings, something which is also by its nature imperfect in its real state, this essentially is one and the same with the disintegration of society into a bunch of perfectl

... (read more)
[-][anonymous]9mo64

Hi Roman.  Pretty exciting conversation thread and I had a few questions about your specific assumptions here.  

In your world model, obviously, young men today have many distractions from their duties that did not exist in the past [social media, video games, television, movies, anime, porn, complex and demanding schooling, ...].

And today, current AI models are not engrossing enough, for most people, to outcompete the items on that list. You're projecting:

within the next 2-3 years

human-level emotional intelligence and the ski

... (read more)
1Roman Leventov9mo
Maybe we understand different things by "emotional intelligence". To me, it is just the ability to correctly infer the emotional state of the interlocutor based on the context, the text messages that they send, and the pauses between messages. I don't think this requires any breakthroughs in AI. GPT-4 is basically able to do this already, if we set aside the task of "baseline adjustment" (different people have different conversational styles: some are cheerful and use smiley emojis profusely, others use them only to mark strong emotions) and the task of intelligent summarisation of the context of the dialogue. These are exactly the types of tasks I expect AI romance tech to be ironing out in the next few years.

Detecting emotions from video of a human face or a recording of their speech is in some ways even simpler; there are apparently already simple supervised ML systems that do this. But I don't expect AI partners to be in video dialogue with users yet, because I don't think video generation will become fast enough for real-time use soon. So I don't assume that the AI will receive a stream of the user's video and audio, either.

In general, paying people for parenting (I would emphasise this rather than pure "childbirth"), i.e., considering parenting a "job", is I think a reasonable idea, and perhaps soon this will be inevitable in developed countries with plummeting fertility rates and increasing efficiency of labour (the latter due to AI and automation). The caveat is that the policy you proposed would cost the government a lot of money initially, while the policy I proposed costs nothing.
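To make concrete what "inferring the emotional state from the context, the messages, and the pauses" might look like, here is a minimal sketch (the prompt format, `build_emotion_prompt`, and the stubbed `call_llm` are assumptions for illustration only, not a description of any real product or API):

```python
# Illustrative sketch: prompting a general-purpose LLM to guess a user's
# emotional state from recent chat messages and the delays between them.
from datetime import datetime

def build_emotion_prompt(messages: list[dict]) -> str:
    # messages: [{"sender": "user", "text": ..., "sent_at": datetime}, ...]
    lines = []
    prev_time = None
    for m in messages:
        delay = ""
        if prev_time is not None:
            delay = f" (after {int((m['sent_at'] - prev_time).total_seconds())}s)"
        lines.append(f"{m['sender']}{delay}: {m['text']}")
        prev_time = m["sent_at"]
    return (
        "Given the chat transcript below, including the pauses between messages, "
        "describe the user's likely emotional state in one short phrase.\n\n"
        + "\n".join(lines)
    )

def call_llm(prompt: str) -> str:
    return "[model's guess at the user's emotional state]"  # stub for any chat-completion API

example = [
    {"sender": "user", "text": "hey", "sent_at": datetime(2024, 1, 1, 20, 0, 0)},
    {"sender": "user", "text": "rough day at work...", "sent_at": datetime(2024, 1, 1, 20, 3, 30)},
]
print(call_llm(build_emotion_prompt(example)))
```

The "baseline adjustment" and context-summarisation tasks mentioned above would sit on top of something like this, e.g. by prepending a per-user profile of typical messaging style to the prompt.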

It could be generalised as follows: a perfectly aligned (to personal desires) AI will be perfect wireheading.

Thanks for the post. It's great that people are discussing some of the less-frequently discussed potential impacts of AI.

I think a good example to bring up here is video games, which seem to carry similar risks.

When you think about it, video games seem just as compelling as AI romantic partners. Many video games such as Call of Duty, Civilization, or League of Legends involve achieving virtual goals, leveling up, and improving skills in a way that's often more fulfilling than real life. Realistic 3D video games have been widespread since the 2000s but ... (read more)

I don't actually know what to do here. But I do honestly think this is likely to be representative of a pretty big problem. I don't know that any particular regulation is a good solution. 

Just to go over the obvious points:

  • I definitely expect AI partners to at least somewhat reduce the overall fertility of biological humans, and to create "good enough but kinda hollow" relationships that people drift towards on the margin because they're locally convenient, even among people whom I wouldn't normally consider "suffering-ly single"
  • Yes, there are real sufferin
... (read more)
0Roman Leventov9mo
Why do you think so? Even if open-source LLMs and other necessary AI models develop so quickly that real-time interaction and image generation become possible on desktops, and there are open-source waifu projects, mobile is still controlled by app stores (Apple and Google Play), where age-based restrictions could easily be imposed. Jailbreaking/unlocking the phone (or installing an APK from GitHub), connecting it to a local server with AI models, and maintaining the server is so burdensome that I expect 90+% of potential users of easy-to-access AI partner apps would fall off.

This is not to mention that open-source projects developed by hardcore enthusiasts will probably cater to their specific edgy preferences and not appeal to a wide audience. E.g., an open-source project of anime waifus may optimise for triggering particular fetishes rather than for the overall believability of the AI partner and the long-term appeal of the "relationship". Open-source developers won't be motivated to optimise the latter, unlike AI partner startups, whose lifetime customer value directly depends on it.

Other than fertility rate, what other harms are there, and to whom, such that it's any of society's business at all? Are you thinking of it as being like addiction, with people choosing something they (initially) think is good for them but isn't?

First, though, I don't think the scenario you're proposing is anywhere near as bad for fertility as suggested. Plenty of real-world people and partners incapable of having biological children together want them strongly enough to go to great lengths to make it happen. Plenty of others want to do so b... (read more)

4Roman Leventov9mo
Yes, I discussed in this comment how people could perceive settling for an AI partner as a less wholesome life because they didn't fulfil their duty to society (regardless of whether this actually matters from your theoretical-ethics point of view, if people have a deeply held, culturally embedded idea about this in their heads, they could be genuinely sad or unsettled). I don't venture to estimate how prevalent this will be, and therefore how it will weigh against the net good for the personal satisfaction of people who would have no issue with settling for AI partners whatsoever. Kaj Sotala suggested in this comment that this "duty to society" could be satisfied through platonic co-parenting. I think this is definitely interesting, could work for some people, and is laudable when people do it, but I have doubts about how widespread this practice could become. It might be that parenting and romantic involvement with the co-parent are tied too strongly to each other in many people's minds.

This is the same type of statement that many other people have made here ("people won't be that addicted to this", "people will still seek human partners even while using this thing", etc.), to all of which I reply: it should be the AI romance startups' responsibility to demonstrate that the negative effect will be small, not my responsibility to prove that the effect will be huge (which I obviously couldn't do). Currently, it's all opinion versus opinion. At least the maximum conceivable potential is huge: AI romance startups obviously would like nearly everyone to use their products (just as, currently, nearly everyone watches porn, and soon nearly everyone will use general-purpose chatbots like ChatGPT). If AI partners become so attractive that about 20% of men fall for them so hard that they don't even want to date any women anymore for the rest of their lives, we are talking about a 7-10% drop in fertility (less than 20% because not all of these men would counterfactually ha
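The arithmetic behind that last estimate can be made explicit with a back-of-envelope sketch (both numbers are assumptions, not data; the counterfactual-parenthood share in particular is only illustrative):

```python
# Back-of-envelope sketch of the ~7-10% figure above; both inputs are assumed.
opt_out_share = 0.20                # men who stop seeking human partners entirely
counterfactual_parent_share = 0.35  # share of those men who would otherwise have had children (0.35-0.5 assumed)
fertility_drop = opt_out_share * counterfactual_parent_share
print(f"~{fertility_drop:.0%} drop in fertility")  # 0.35 -> ~7%; using 0.5 instead gives ~10%
```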
4AnthonyC9mo
I don't think I agree. That might be cheap financially, yes. But unless there's a strong argument that AI partners cause harm to the humans using them, I don't think society has a sufficiently compelling reason to justify a ban. In particular, I don't think (and I assume most agree?) that it's a good idea to coerce people into having children they don't want, so the relevant questions for me are: can everyone who wants children have the number of children they want? And relatedly, will AI partners cause more people who want children to become unable to have them? From which the societal intervention should be: how do we help ensure that those who want children can have them? Maybe trying to address that still leads to consistently below-replacement fertility, in which case, sure, we should consider other paths. But we're not actually doing that.
4Roman Leventov9mo
I think an adequate social and tech policy for the 21st century should:

  1. Recognise that needs, wants, desires, beliefs, and new social constructs can be manufactured, and discuss this phenomenon explicitly; and
  2. Deal with this social engineering consistently, either by really going out of its way to protect people's agency and self-determination (today, people's wants, needs, beliefs, and personalities are sculpted by different actors from the time they are toddlers and start watching videos on iPads, and this only strengthens later), or by allowing a "free market of influences" while also participating in it, by subsidising the projects that will benefit society itself.

The USA seems to be much closer to the latter option, but when people discuss policy in the US, it's conventional not to acknowledge (see The Elephant in The Brain) the real social engineering that is already carried out by both state and non-state actors (from the pledge of allegiance to church to Instagram to Coca-Cola), and to presume that social engineering done by the state itself is a taboo or at least a tool of last resort. This is just not what is already happening: apart from the pledge of allegiance, there are many other ways in which the state (and other state-adjacent institutions and structures) is, or was, proactive in manufacturing people's beliefs or wants in a certain way, or in preventing people's beliefs or wants from being manufactured in a certain way: the Red Scare, various forms of official and unofficial (yet institutionalised) censorship, and the regulation of nicotine marketing are a few examples that come to mind first.

Now, treating personal relationships as a "sacred libertarian realm" and keeping the state from exerting any influence on how people's wants and needs around personal relationships are formed (even if only through the recommended school curriculum, which is admittedly a very ineffective approach to social engineering), yet allowing any corporate actors (such as AI partn

About 80% of this could be (and was) said about the development of the printing press. Smart people trying to ban new technology because it might be bad for the proles is a long-standing tradition.

7Roman Leventov9mo
Sorry, but this is empty rhetoric and a failure to engage with the post on the object level. How did the printing press jeopardise the reproduction of people in society? And not indirectly, through a long causal chain (such as: without the printing press we wouldn't have the computer, without the computer we wouldn't have the internet, without the internet we wouldn't have online dating and porn and AI partners, etc.), but directly? Whereas AI partners will reduce the total fertility rate directly. The implication of your comment is that technology can't be "bad" (for people, society, or any other subject), which is absurd. Technology is not ethics-neutral and can be "bad".
4Martin Randall9mo
Are you referring to the concerns of Conrad Gessner, as described in Why Did First Printed Books Scare Ancient Scholars In Europe? If so, I don't understand the parallel you are trying to draw. Prior to the printing press, elites had access to hundreds of books, and the average person had access to none; whereas prior to AI romantic partners, elites and "proles" both have access to human romantic partners at similar levels. Also, I don't think Gessner was arguing that the book surplus would reduce the rate of participation in human relationships and thus the fertility rate. If you're referring to other "smart people" of the time, who are they?

Perhaps a better analogy would be with romance novels? I understand that concerns about romance novels impacting romantic relationships arose during the 18th and 19th centuries, much later.

Aside: I was unable to find a readable copy of Conrad Gessner's argument (apparently from the preface of the Bibliotheca Universalis), so I am basing my understanding of it on various other sources.

Epistemic status: I'm not sure if these assumptions are really in the post; I am pretty sure what my current opinion about them is, but admit that this opinion can change.

Assumptions I don't buy:

  • Having kids when we could have AGI in 10-25 years is good and not, actually, very evil (OMG what are you doing).
  • The right social incentives can't make A LOT of people poly pretty fast.
0Roman Leventov9mo
Is this comment asking whether these assumptions are in my post?

You don't buy that having kids [when we can have AGI soon...] is good, right? OK, I disagree with that strongly, population-wise. Do you imply that the whole planet should stop having kids because we are approaching AGI? That seems like a surefire way to wreck civilisation even if the AI alignment problem turns out to be simpler than we think, or is miraculously solved. Will MacAskill also argues against this position in "What We Owe The Future". Specifically for people who directly work on AI safety (or have the intellectual capacity to meaningfully contribute to AI safety and other urgent x-risk priorities, and consider doing so), this is a less clear-cut case, I agree. This is one of the reasons I'm personally unsure whether I should have kids.

There was no such assumption. The young man in my "mainline scenario" doesn't have this choice, alas: he has no romantic relationships with humans at all. Also, I'm afraid that AI partners will soon become sufficiently compelling that after prolonged relationships with them, it will be hard for people to summon the motivation to court average-looking (at best), shallow, and dull humans who don't seem interested in the relationship themselves.