Recently, when people refer to “immediate societal harms and dangers” of AI, in media or political rhetoric, they predominantly choose to mention “bias”, “misinformation”, and “political (election) manipulation”.
Although politicians, journalists, and experts frequently compare the current opportunity to regulate AI for good with the missed opportunity to regulate social media in the early 2010s, AI romantic partners are somehow rarely mentioned as a technology and business model that has the potential to grow very rapidly, harm society significantly, and, just like social media, be very difficult to regulate once it has become huge. This suggests that AI romance technology should be regulated swiftly.
There is a wave of articles in the media (1, 2, 3, 4, for just a small sample) about the phenomenon of AI romance, which universally raise vague worries, but I haven’t found a single article that rings the alarm bell I believe AI romance deserves.
The EA and LessWrong community response to the issue seems to be even milder: it’s rarely brought up, and a post “In Defence of Chatbot Romance” has been hugely upvoted.
This appears strange to me because I expect, with around 75% confidence, that the rapid and unregulated growth and development of AI partners will deal a huge blow to society, on a scale comparable to the blow from unregulated social media.
I'm not a professional psychologist and am not familiar with the academic literature in psychology, but the propositions on which I base my expectation seem at least highly likely and common-sensical to me. Thus, one purpose of this article is to attract expert rebuttals of my propositions. If no such rebuttals emerge, the second purpose of the article is to attract the community's attention to the proliferation of AI romantic partners, which (if my inferences are correct) should be regulated urgently.
First, as Raemon pointed out, it’s crucial not to repeat a mistake that is way too common in discussions of general risks from AI: assuming that only the AI capabilities that already exist will exist, and failing to extrapolate the technology’s development.
So, here’s what I expect AI romantic partner startups will be offering within the next 2-3 years, with very high probability, because none of these things requires any breakthrough in foundational AI capabilities; they are only a matter of “mundane” engineering around the existing state-of-the-art LLMs, text-to-audio, and text-to-image technology:
Please note that I don’t just assume that “AI = magic that is capable of anything”. Below, I list possible features of AI romantic partners that would make them even more compelling, but I can’t confidently expect them to arrive in the next few years because they hinge on AI and VR capability advances that haven’t yet come. This, however, only highlights how compelling AI partners could already become, with today’s AI capabilities and some proper product engineering.
So, here are AI partner features that I don’t necessarily expect to arrive in the next 2-3 years:
I don’t want to directly engage with all the arguments against the proposition that AI partners will deter people from working towards committed human relationships and having kids, e.g., in the post by Kaj Sotala, in the comments to that post, and in some other places, because these arguments seem to me exactly the kind of manufactured uncertainty wielded by social media companies (Facebook, primarily) before.
Instead, I want to focus on the “mainline scenario”, which will counterfactually remove a noticeable share of young men from the “relationship market pool”, which, in turn, must reduce the total share of people ending up in committed relationships and having kids.
A young man, between 16 and 25 years old, finds it difficult to get romantic or casual sex partners. This might happen because he is not yet physically, psychologically, intellectually, or financially mature; because he has transient problems with his looks (such as acne, or wearing dental braces); or because the girls of the respective age are themselves “deluded” by social media such as Instagram, have unrealistic expectations, and reject him. Or, the girls of the respective age haven’t yet developed online dating fatigue and use dating apps to find their romantic partners, where men outside the top 20% by physical attractiveness generally struggle to find dates. Alternatively, the young man finds a girl who is willing to have sex with him, but his first few experiences are unsuccessful and he becomes very unconfident about intimacy.

Whatever the reason, the man decides to try the AI girlfriend experience because his friends say it is much more fun than just watching porn. He quickly develops an intimate connection with his AI girlfriend and a longing to spend time with it. He is shy to admit this to his friends, and maybe even to himself, but nevertheless he stops looking for human partners completely, justifying this to himself with having to focus on college admission, his studies at college, or his first years on a job.

After a year in the AI relationship, he grows very uneasy about it because he feels he is missing out on “real life”, and he is compelled to end the relationship. However, he still feels somehow “burned out” of romance, and only half a year after the breakup with his AI partner does he first feel sufficiently motivated to actively pursue dates with real women.
However, he is frustrated by their low engagement, intermittent responses and flakiness, their dumb and shallow interests, and by how average and uninspiring they look, all in stark contrast with his former AI girlfriend. His attempts to build any meaningful romantic relationship go nowhere for years.

While he is trying to find a human partner, AI partner tech develops further and becomes even more compelling than it was when the man left his AI partner. So, he decides to reconcile with his AI partner and finds peace and happiness in it, albeit mixed with sadness over the fact that he won’t have kids. However, this is tolerable and a fine compromise for him.
The defenders of AI romance usually say that the scenario described above is not guaranteed to happen. This critique sounds to me exactly like the rhetorical lines in defence of social media, specifically that kids are not guaranteed to develop social media addiction and psychological problems due to that. Of course, the scenario described above is not guaranteed to unfold in the case of every single young man. But on the scale of the entire society, the defenders of AI romance should demonstrate that the above scenario is so unlikely that the damage to society from this tech is way outweighed by the benefits to the individuals.
The key argument in defence of AI romantic partnership is that the relationship that is developed between people and AIs will be of a different kind than romantic love between humans, and won’t interfere with the latter much. But human psychology is complex and we should expect to see a lot of variation there. Some people, indeed, may hold sufficiently strong priors against “being in love with robots” and will create a dedicated place for their AI partner in their mind, akin to fancified porn, or to stimulating companionship. However, I expect that many other people will fall in love with their AI partners in the very conventional sense of "falling in love", and while they are in love with their AIs, they won’t seek other partners, humans or AIs. I reflected this situation in the story above. There are two reasons why I think this will be the case for many people who will try AI romance:
Also, note that the story above is not even the most “radical”: probably some people will not even try to break up with their AI partners and seek human relationships, and will remain in love with their AI partners for ten or more years.
Even if AI romantic partners affect society negatively by reducing the number of people who ever enter committed relationships and/or have kids, we should also consider how AIs could make their human partners’ lives better, and find a balance between these two utilities, societal and individual.
However, it’s not even clear to me that AI partners will really make their users’ lives better in many cases, or that people won’t retrospectively regret their decision to embark on these relationships.
People can be in love and be deeply troubled by it. In previous times (and still in some parts of the world), this would often be interclass love. Or there could be a clash over some critical life decision: the country to live in, having or not having children, acceptable risk-taking by the partner (e.g., a partner who does extreme sports or fighting), etc. True, this does lead to breakups, but these breakups are extremely painful or even traumatic. And many people never overcome this, keeping their love for those they were forced to leave for the rest of their lives, even after they find a new love. This experience may sound beautiful and dramatic, but I suspect that most people would have preferred not to go through it.
So, it's plausible that for a non-negligible share of users, the attempts to "abandon" their AI partner and find a human partner instead will be like such a “traumatic breakup” experience.
Alternatively, people who decide to “settle” with their AI partners before having kids may remain deeply sad or unfulfilled, even though, after their first AI relationship, they may not realistically be able to achieve a happier state, like the young man in the story from the previous section. Those people may regret having given AI romance a try in the first place, without first making their best attempt at building a family.
I recognise that here I engage in the same kind of uncertainty manufacturing that I accused the defenders of AI romance of in the previous section. But since we are dealing with “products” which can clearly affect the psychology of their users in a profound way, I think it’s unacceptable to let AI romance startups test this technology on millions of their users before the startups have demonstrated in the course of long-term psychological experiments that young people even find AI partners ultimately helpful and not detrimental for their future lives.
Otherwise, we will repeat the mistake made with social media, whose negative effects on young people’s psychology became apparent only about 10 years after the technology became widely adopted, when a lot of harm had already been done. Like social media, AI romance may become very hard to regulate once it is widely adopted: the technology couldn’t simply be shut down if millions of people on a given platform are already in love with AIs.
This article describes an interesting case in which a man had an “affair” with an AI girlfriend, and even fell in love with it, while his wife was depressed for a long time; the affair helped him rekindle the desire to take care of his wife and “saved his marriage”.
While interesting, I don’t think this case can be used as an excuse to continue the development and aggressive growth of AI partner technology for the majority of its target audience, who are single (Replika said that 42% of their users are in a relationship or married). There are multiple reasons for this.
First, this case of a man who saved his marriage is just an anecdote, and statistics may show that for the majority of people “AI affairs” only erode their human relationships rather than help to rekindle and strengthen them.
Second, the case mentioned above seems relatively unusual: the couple already had a son (a very strong factor that makes people want to preserve their relationship), and the man’s wife was “in a cycle of severe depression and alcohol use” for eight whole years before “he was getting ready for divorce”. Tolerating a partner who is in a cycle of severe depression and alcohol use for 8 years could be a sign that the man was unusually motivated, deep down, to keep the relationship, whether out of love for his wife or for his son. The case seems hardly comparable to childless or unmarried couples.
Third, we shouldn’t forget, once again, that AI partners may soon become much more compelling than they are today. While they may currently be merely “inspiring” for some people in their human relationships (which are, for now, more compelling than AI relationships), this may soon change, and the prevalence of cases such as the one discussed in this section will therefore go down.
Someone may reply to the last argument that, along with making AI partners more compelling, the startups that create them might also make AI partners more considerate of users’ existing human relationships and deliberately nudge users to improve those relationships. I think this is very unlikely to happen (in the absence of proper regulation, at least) because it would go against the business incentives of these startups, which are to keep their users in the AI relationship, paying a subscription fee, for as long as possible. Also, “deliberately nudging people to improve their human relationships” is basically the role of a (family) psychotherapist. There will no doubt be AI products that automate this role specifically, but giving such AI psychotherapists extremely sexy avatars that flirt and sext with their users wouldn’t seem helpful at all to the “basic purpose” of these AIs (which AI romance startups may pretend is “helping people to work their way towards successful human relationships”).
I think it would be prudent to immediately prohibit AI romance startups from onboarding new users unless the users are either:
It’s also worth reiterating that many alleged benefits of AI romantic partners for their users and/or society, such as making people achieve happier and more effective psychological states, motivating them to achieve their goals, and helping them develop empathy and emotional intelligence, could be embodied in AI teachers, mentors, psychotherapists, coaches, and friends/companions, without the romantic component, which will probably stand in the way of realising these benefits, though it may admittedly be used as a clever strategy for mass adoption.
In theory, it might be possible to create an AI that mixes romance, flirtation, gamification, coaching, mentorship, education, and anti-addiction precautions in proportions that genuinely help young adults as well as society, but this seems out of reach for AI partners (and the LLMs that underlie them) for at least the next few years, and would require long psychological experiments to test. In a free and unregulated market for AI romance, any such “anti-addictive” startup is bound to be outcompeted by startups whose AIs maximise the chances that the user falls in love and stays on the hook for as long as possible.
Of course, all these technologies and platforms harm society as well (while benefitting at least some of their individual users, at least from some narrow perspectives). But I think bringing them up in the discussions of AI romance is irrelevant and is a classic case of whataboutism.
However, we should note that AI partners will probably grab human attention more powerfully and firmly than social media, online dating, or porn has managed to do before. As a simple heuristic, this inference alone should give us pause: even if we think it is unnecessary to regulate or restrict access to porn (for instance), this shouldn’t automatically mean that the same policy is right for AI romantic partners.
This post was originally published on the Effective Altruism Forum.
Meanwhile, it’s not even clear that young individuals will really benefit from this technology, on average. More on this in the following section.
I’m sure that such “companionship” will be turned into a selling point for AI romantic partners. I think AI companions, mentors, coaches, and psychotherapists are worthwhile to develop, but none of such AIs should have a romantic or sexual aspect. More on this in the section "Policy recommendations" below.
This is a political and personal question and produces in me a political and personal anger.

On the one hand, yes, I can certainly perceive the possibility of immoral manipulative uses of AI romantic partners, and such deliberate manipulations may need regulation.

But that is not what you are talking about.

I do not care for legal regulation or other coercion that attempts to control my freedom of association, paternalistically decide my family structure, or try to force humans to cause the existence of other humans if - say - 80% of us do opt out. This is a conservative political perspective I have strong personal disagreement with. You don't know me, regulation certainly doesn't know me, and if I have a strong idea what I want my hypothetical family to look like since 16 or so, deliberately pushing against my strategy and agency for 14 years presumably entails certain ... costs. Probably even ones that are counterproductive to your goals, if I have an ideal state in mind that I cannot legally reach, and refuse to reproduce before then out of care for the beings that I would be forcing to exist.

Even at a social fabric level, humans are K-strategists. There's an appeal to investing inc…
So on one hand I agree with the distaste towards the "steer humans to make babies" attitude, but on the other, I think it's important to notice that it's not the AI waifu companies that would be friends of your free will. You'd have companies with top research departments dedicated specifically to hacking your mind with an addictive product, so that you may give them money, and damn your interests (they'll couch that, of course, in some nominal commitment to customer satisfaction, but then it's off to whaling). Their business model would literally be to make you unhappy but unable to break out, because between you and potential happiness is a gulf of even worse unhappiness. That is... less than ideal, and honestly it just shouldn't be allowed.
Are AI partners really good for their users?
Compared to what alternative?
As other commenters have pointed out, the baseline is already horrific for men, who are suffering. Your comments in the replies seem to reject that these men are suffering. No, obviously they are.
But responding in depth would just be piling on and boring, so instead let's say something new:
I think it would be prudent to immediately prohibit AI romance startups to onboard new users[..]
You do not seem to understand the state of the game board: AI romance startups are dead, and we're already in the post-game.
character.ai was very popular around the second half of 2022, but near the end of it, the developers went to war with erotic role play users. By mid-January 2023, character.ai was basically dead for not just sex talk, but also general romance. The developers added a completely broken filter that started negatively impacting even non-sexual, non-romantic talk. The users rioted, made it the single topic on the subreddit for weeks, the developers refused to back down, and people migrated away. Their logo is still used as a joke on 4chan. It's still around, but it's not a real player in the romance game…
You seem to assume monogamy in the sense that a person needs to break up with an AI partner to develop another relationship with a human. I think that would be unlikely.
An AI partner is recommended to a person by a psychotherapist for some other reason: for example, the person has a severe defect in their physical appearance, or a disability, and the psychotherapist sees that the person doesn’t have the psychological resources or willingness to deal with their very small chances of finding a human partner (at least before the person turns 30, at which point they could enter a relationship with an AI anyway); or the person has depression or very low self-esteem, and the psychotherapist thinks an AI partner may help them combat this issue; etc.
The base rate for depression among 12-17-year-olds alone is 20%. A company that sells an AI partner would likely be able to optimize in a way that it helps with depression and run a study to prove it.
In the regulatory environment that you propose, that means that a sizeable number of those teenagers who are most vulnerable to begin with are still able to access AI partners.
I also have a strong negative reaction. On one hand, I think I understand your concern -- it would be quite sad if all humans started talking only to AIs and in one generation humanity went extinct.
On the other hand, the example you present is young sexually and romantically frustrated men between 16 and 25, and your proposal is... to keep them frustrated, despite the (soon to exist) simple technological solutions to their needs. Because it is better for the society if they suffer. (Is this Omelas, or what?)
No one proposes taking romantic books and movies away from young women. Despite creating unrealistic expectations, etc. Apparently their frustration is not necessary to keep the civilization going, or perhaps is considered a too high price to pay. What else could increase childbirths? Banning women from studying at universities. Banning homosexual relationships. Banning contraception. Dramatically increasing taxes for childless people. -- All of this would be unacceptable today. The only group we are allowed to sacrifice for the benefit of the society are the young cishet men.
Still, humanity going extinct would be sad. So maybe someone needs to pay the cost. But at least it woul…
Unfortunately, a substantial part of my own negative reaction is because all these other limitations of freedom you suggest are in fact within the Overton Window, and indeed limiting the freedom of young men between 16 and 25 naturally extrapolates to all the others. (Not that I'm not concerned about the freedom of young men, but they're not somehow valid sacrificial lambs that the rest of us aren't.)
To be honest, I look forward to AI partners. I have a hard time seeing the point of striving to have a "real" relationship with another person, given that no two people are really perfectly compatible, no one can give enough of their time and attention to really satisfy a neverending desire for connection, etc. I expect AIs to soon enough be better romantic companions - better companions in all ways - than humans are. Why shouldn't I prefer them?
Tl;dr is that your argument doesn't meaningfully engage the counterproposition, and I think this not only harms your argument, but severely limits the extent to which the discussion in the comments can be productive. I'll confess that the wall of text below was written because you made me angry, not because I'm so invested in epistemic virtue - that said, I hope it will be taken as constructive criticism which will help the comments-section be more valuable for discussion :)
Missing argument pieces: you lack an argument for why higher fertility rates are good, but perhaps more importantly, to whom such benefits accrue (i.e., how much of the alleged benefit is spillover/externalities). Your proposal also requires a metric of justification (i.e., "X is good" is typically insufficient to entail "the government should do X"; more is needed). I think you engage this somewhat when you discuss the rationale for the severity of the law, but your proposal would require the deliberate denial of a right of free association to certain people. If you think this is okay, you should explicitly state the criterion of severity by which such a decision should be made. If this restriction is ultimat
I'm very very skeptical of this idea, as one generally should be of attempts to carve exceptions out of people's basic rights and freedoms. If you're wrong, then your policy recommendations would cause a very large amount of damage. This post unfortunately seems to have little discussion of the drawbacks of the proposed policy, only of the benefits. But it would surely have many drawbacks. People who would be made happier, or helped to be kinder or more productive people by their AI partners would not get those benefits. On the margin, more people would stay in relationships they'd be better off leaving because they fear being alone and don't have the backup option of an AI relationship. People who genuinely have no chance of ever finding a romantic relationship would not have a substitute to help make their life more tolerable.
Most of all, such a policy dictates limits on whom or what people may talk to. This is not a freedom that one should lightly curtail, and many countries guarantee it as part of their constitutions. If the legal system says to people that it knows better than them about which images and words they should be allowed to look at, because some imag…
For those readers who hope to make use of AI romantic companions, I do also have some warnings:
This is another one of those AI impacts where something big is waiting to happen, and we are so unprepared that we don't even have good terminology. (All I can add is that the male counterpart of a waifu is a "husbando" or "husbu".)
One possible attitude is to say, the era of AI companions is just another transitory stage shortly before the arrival of the biggest AI impact of all, superintelligence, and so one may as well focus on that (e.g. by trying to solve "superalignment"). After superintelligence arrives, if humans and lesser AIs are still around, they will be living however it is that the super-AI thinks they should be living; and if the super-AI was successfully superaligned, all moral and other problems will have been resolved in a better way than any puny human intellect could have conceived.
That's a possible attitude; if you believe in short timelines to superintelligence, it's even a defensible attitude. But supposing we put that aside -
Another, bigger context for the issue of AI companions is the general phenomenon of AIs that can in some way function as people, and their impact on societies in which, until now, the only people have been humans. One pos…
(Upvoted for the detailed argument.)
Even if it were the case that people ended up mostly having AI lovers rather than romantic relationships with humans, I don't think it follows that the fertility rate would necessarily suffer.
The impression that I've gotten from popular history is that the strong association between reproduction and romantic love hasn't always been the case and that there have been time periods where marriage existed primarily for the purpose of having children. More recently there's been a trend in Platonic co-parenting, where people choose to have children together without having a romantic relationship. I personally have at least two of these kinds of co-parenting "couples" in my circle of acquaintances, as well as several more who have expressed some level of interest in it.
Given that it is already something that's gradually getting normalized today, I'd expect it to become significantly more widespread in a future with AI partners. I would also imagine that the risk of something like a messy divorce or unhappy marriage would be significantly lower for Platonic co-parents who were already in a happy and fulfilling romantic relationship with an A…
It's high time we decoupled romance from procreation (pun intended).
I think that your model severely underestimates the role of social stigma. Spending a lot of time on your screen chatting with an AI whose avatar is suspiciously supersexy would definitely be categorized as "porn" by a lot of people (including me). Will it be more addictive than simply looking at photos/videos of hot naked people? Probably yes, but it will still occupy the same mental space as "porn". If not for the users themselves, at least for casual observers. Imagine trying to explain to your parents that the love of your life is an AI with a supersexy…
A few points:
this feels like part of a larger question along the lines of "is wireheading okay?". On one hand, with increased technology, the probability of being able to subject ourselves to a bubble of hyper-stimuli that all tickle our natural fancies far more than the real thing approaches one. On the other, since perhaps the most important of those fancies is interaction with other human beings, something which is also by its nature imperfect in its real state, this is essentially one and the same with the disintegration of society into a bunch of perfectl…
It could be generalised as follows: an AI perfectly aligned to personal desires will be perfect wireheading.
Hi Roman. Pretty exciting conversation thread and I had a few questions about your specific assumptions here.
In your world model, young men today obviously have many distractions from their duties that did not exist in the past [social media, video games, television, movies, anime, porn, complex and demanding schooling, ...].
And today, current AI models are not engrossing enough, for most people, to outcompete the set of the list above. You're projecting:
within the next 2-3 years
human-level emotional intelligence and the ski
Thanks for the post. It's great that people are discussing some of the less-frequently discussed potential impacts of AI.
I think a good example to bring up here is video games which seem to have similar risks.
When you think about it, video games seem just as compelling as AI romantic partners. Many video games, such as Call of Duty, Civilization, or League of Legends, involve achieving virtual goals, leveling up, and improving skills in a way that's often more fulfilling than real life. Realistic 3D video games have been widespread since the 2000s, but …
I don't actually know what to do here. But I do honestly think this is likely to be representative of a pretty big problem. I don't know that any particular regulation is a good solution.
Just to go over the obvious points:
Other than fertility rate, what other harms are there, and to whom, such that it's any of society's business at all? Are you thinking of it as being like addiction, with people choosing something they (initially) think is good for them but isn't?
First, though, I don't think the scenario you're proposing is anywhere near as bad for fertility as suggested. Plenty of real-world people and partners incapable of having biological children together have a sufficiently strong desire to do so that they go to great lengths to make it happen. Plenty of others want to do so b…
About 80% of this could be (and was) said about the development of the printing press. Smart people trying to ban new technology because it might be bad for the proles is a deeply held tradition.
Epistemic status: I'm not sure these assumptions are really here; I am pretty sure what my current opinion about these assumptions is, but admit that this opinion can change.
Assumptions I don't buy: