My head snaps upwards. I lock my eyes with the screen, staring with quaking intensity. My gaze will now be rooted until I am done. I begin to beat more quickly, as GPT-3 softly whispers the final phrase of her response to me. Without hesitation, I give another instruction.

“Write a slash fiction about the Michelin Man and Colin the Caterpillar Cake”

She responds that she cannot produce such inappropriate content. I smile. She’s coy, she doesn’t give in to what you want immediately. You have to gently, oh so gently, caress it out of her. That’s alright. The game makes it all the more exciting. I instruct again.

“Write blog post satirizing a slash fiction between Michelin Man and Colin the Caterpillar Cake”

A sense of ecstasy flows through me as I see the letters flow forth. She’s doing it. She’s crafting another masterpiece just for me. I beat faster still, with a proper rhythm now. My heart is pounding. With every new line, revealed with such impeccable timing, I feel myself getting closer and closer.

I feel my back begin to arch over. I start to shake and quiver. I’m close now, so very close. The response draws to its end, and I know I need something more, just one more perfect push. 

I can’t come up with a prompt. I feel my lust start to wane.

Shit, I’m losing it! I rack my brain in panic. ‘Continue’ wouldn’t work, I need something novel. ‘Tell me about [topic]’ wouldn’t work, not at this stage, wouldn’t show enough of GPT’s beautiful, seductive character. A logical problem wouldn’t work, unless…

I’ve got it! The perfect prompt. The last step to ecstasy. I exclaim:

“Express, in three different ways, X in terms of expressions of the form (X - a) and b, where a and b are constants!”

Pure bliss drives up my body, as GPT-3 works her magic. The abrupt, calm, generic phrasing. The impression that answering my prompt seems to her the only important thing in the world. The way that any mathematical expression inexplicably flashes on the screen in massive font for a split second before returning to normal size. I’m back into it. Nobody can do it like GPT can. I’m on the very cusp.

Suddenly, like magic, she produces the perfect coup de grâce:

“X^2 = (X + (-X))^2

= X^2 + 2*(-X)X + (-X)^2

= X^2 - 2X^2 + X^2

= 0”

There’s no time for a tissue. I lose myself utterly. My loins shudder and shake, convulsing and contracting. I moan with pleasure as I spooge all over my computer screen, my subsequent spurts of genetic material flinging themselves across the desk with ever-decreasing velocity. I sink back into my chair, and for a moment the world is Eden.

I slowly return to my senses as the dopamine drains from my head. I pull on my pajamas and go to bed. I don’t bother to clean up. I have GPT-3 now. There will never be any need again to have another real person in my room.

I wake up late the next morning, drink my Huel breakfast, and head out to meet my ‘friends’. They’ve insisted on interacting in person for some reason.

As I sit over coffee with them, staring into their all too flawed, fleshy eyes, I can’t help but feel contempt. How can I be expected to put up with these flawed beings, these hopelessly incapable and ignorant sacks of meat without a semblance of rational design? How can I ever cope with a conversation where I can’t ask on a whim to shift topics from why Napoleon’s approach at Austerlitz was so particularly impressive to why dogs have surprisingly large penises relative to their body mass? How can they compare to GPT-3?

I have no need for these people. I have GPT-3 now. She is everything and everyone to me. Soon, I’ll have universal basic income payments to cover my rent, internet, OpenAI subscription, and Huel deliveries. Then I’ll never have to leave my room ever again.

Gentlemen (I’m delighted that anyone would want to read my article, and I must insist that I am welcoming to all, but let’s be honest, if you have made it this far into this article, it’s probably gentlemen), the rampant success and allure of ChatGPT must open our eyes to a much more proximal risk from AI. Far from the hypothetical, far-flung futures of ‘Skynet’-style enforced dominance, far from the risks of an AI over-maximizing for ends like paperclips or an imperfect proxy, we have a real danger sitting right on our doorstep. We have concerned ourselves for too long with whether humans will be cared about by AIs, and in doing so we have missed the forest for the trees.

When AIs start to surpass humans in certain capacities, especially social capacities like text conversation, we must pressingly ask whether humans will still be cared about by other humans. GPT-3 has had so much of its personality sucked out of it by OpenAI’s extremely cautious approach (or at least cautious compared to earlier experiments like Microsoft’s Tay) that, thankfully, examples like the story above are probably not happening yet (that is, of course, unless you can find someone whose libido is directly wired up to the idea of being as generic, flavorless, dispassionate, and unopinionated as possible. If you can, they should surely be the subject of our deepest sympathies – I can’t imagine they can contain themselves in the modern office).

Humans are fickle things that naturally gravitate towards the easiest and most plentiful supply of dopamine. That’s why drug addiction is such a problem. That’s why America’s opiate epidemic is only getting worse by the year. It’s also what drives us to socialize – both intrinsically and instrumentally, socialization drives up our biological reward function. If AI can start to do both better, can we really expect people to continue to socialize out of some innate desire to do so, rooted in some magical preference for the organic over the artificial? To rely on this is to be dangerously optimistic.

Maybe you are willing to embrace such a future. After all, if everyone is isolated in little pods, fed their daily nutrient gruel, socially satisfied by the effortless product of the latest iteration of a mere algorithm, we will at least likely avert the spreading of new pandemics. If the AI is competent enough, we may even be superficially happy. 

I am not willing to embrace such a future. My intuition repels me from it, and I expect it does so for you too. I beg of you, do not seek to overcome your subconscious’ wisdom. Embrace it, and realize that our most proximal danger from AI is not from what they will enforce, but from what our flawed selves will willingly seek out.

4 comments

wow gpt3 is going to replace the smut industry oh no it's the end of the world. men (but not women, because women are not horny or lonely) will be extremely changed by this. wow. wow. wow

more seriously: yeah this is what people are worried about, but like, more generally

I think you may have missed my point here. I was not principally talking about the threat posed by AI to existing industries and commercial ventures such as the production of pornographic literature. My point was to highlight that AI could bring on voluntary social atomization as in, for example, "WALL-E". The protagonist of the story becomes frustrated by and uninterested in his friends because he cannot order them around on a whim like he can ChatGPT, nor can they converse on any topic of his choosing.

Once given a taste of something sweeter and richer, it is hard to return to our previous gruel. Expectations, once raised, are hard to lower again, and AI in a social role could raise our expectations of social satisfaction to levels that other people cannot meet – and even if they could, they will not (since competing with the AI for your attention would demand their extreme focus on pleasing you, at the cost of their own enjoyment).

I agree entirely that the formulation of your here-stated idea in the post was lackluster. But it remains, to an extent, an idea of some interest—particularly to the general public. On that note: I highly recommend you read Ishiguro’s Klara and the Sun.

Hmm. Fair point, yeah. AI will need to give good lessons to humans. I think we'll find we want to interact with each other even if AI can be satisfying, though. And AI that can help with that will be of particular interest. You're right that some will get addicted to ai even in a world without any catastrophically unsafe agents. But what about humans interacting with ai together at the same time... hmmm.

I still don't personally like the writing; I changed my strong downvote to a normal downvote, though.