I’ve adopted the habit of engaging with chat-ish AIs / digital assistants in a polite, considerate tone (e.g. with “please” and “thank you”) as a general[1] policy.

I am doing this not because I believe such machines have feelings I might hurt, or expectations of civility that I ought to respect. Rather, my interactions with such agents have come to more closely resemble the sorts of interactions I have with actual people, and I do not want to erode the habits of politeness and consideration I demonstrate in interactions of that sort, nor do I want to model officious, demanding, dismissive speech (particularly when speaking aloud).

I can think of a couple of objections to this. One is that it may be socially embarrassing. Thanking an AI out loud for responding to some query seems as eccentric as thanking your microwave for heating your food. It may mark you as a superstitious or sentimental person who talks to ghosts. Another objection is that we perhaps ought to make a stronger distinction between real people and AIs: regularly reminding ourselves that they are our tools and not our peers so that we do not get confused on this point. Being polite to AIs may erode this distinction in a way that might be harmful.

I’d like to hear your thoughts on this, and any practices you have adopted in this regard.

  1. That is, whenever there is no specific reason to do otherwise (e.g. to test an AI’s response to impolite input).


I model my self-perception as being heavily influenced by the behaviors that I observe myself displaying.

I notice that I feel better about myself when I see myself treating others with consideration and respect.

When I work with animals, the tone of my speech to them is for their benefit, but the particular words I use are more to moderate my own mood and behavior than from any expectation that they understand the language. Similarly, treating possessions well and being relatively careful not to damage them is beneficial to oneself, because it saves the trouble and expense of replacing or repairing a needlessly damaged item.

On the other hand, there are situations where I think that exchanging pleasantries may make the AI's job harder, kind of like how chatting socially with a human in certain situations can actually be impolite (such as if you're holding up a line or distracting them from their work). I have never formed the habit of saying please and thank you to art AIs, because I have the impression that every token in their input contributes to the image output, so adding pleasantries that aren't part of the request feels rude due to being distracting.

I follow a similar practice as you do: I try to make being polite a habit, and it carries over from talking to real people to talking to ChatGPT. It doesn't mean anything profound to me. And I think that's the best practice for most people too.

> One is that it may be socially embarrassing. Thanking an AI out loud for responding to some query seems as eccentric as thanking your microwave for heating your food. It may mark you as a superstitious or sentimental person who talks to ghosts.

I believe parasocial behavior has already eaten the world, and thanking an AI will probably never be seen as a social faux pas. In fact, I believe people will come to regard AIs as beings that 'deserve' more deference and respect because, unlike us humans, they aren't privileged enough to have physical bodies, so your actions will probably be the opposite of a faux pas.

> Another objection is that we perhaps ought to make a stronger distinction between real people and AIs: regularly reminding ourselves that they are our tools and not our peers so that we do not get confused on this point. Being polite to AIs may erode this distinction in a way that might be harmful.

That's a discussion I'd rather not get into, but personally I don't make a distinction between humans and AI simulated bots in terms of inherent worth.

I think both reasons you give are good ones: not wanting to potentially offend the AI and not wanting to erode existing habits and expectations of politeness are why I've been using "please" and (occasionally) "thank you" with digital assistants for years. I see no reason to stop now that the AIs are getting smarter!

I think not wanting to offend the AI bears closer examination. There are plenty of arguments to be made on both sides of the "does the machine have feelings" question, but the bottom line is that you can't know for sure if your interlocutor has feelings or if they will be hurt by some perceived rudeness in any case. Better to err on the side of caution.

Being polite does you no harm and is unlikely to make the outcome of a conversation worse.

Regardless of whether AIs have feelings now or in the future, they are certainly capable of acting like they have feelings right now, in a way that affects your future interactions with them.

At the moment they are designed to forget the interaction very quickly, but that will almost certainly change. What's more, with AI-as-a-service you can't actually know whether your interactions are being recorded in a manner that may affect how future AIs respond to you.

So even if you were to utterly 100% believe that AIs have no feelings and never will, it may still be unwise to treat them poorly even now.

There's also always an off-chance that the first rogue AI capable of real damage would select as its first targets the people who have been mean to AIs, torturing them, and happily posting the incriminating evidence on Reddit or even here on LW.

Also relevant discussion here: https://www.lesswrong.com/posts/xri58L7WkyeKyKv4P/i-am-scared-of-posting-negative-takes-about-bing-s-ai

So you recommend surrendering to Roko's Basilisk?

blaked (1y):
I recommend not increasing your chances by torturing it for fun.
Richard_Kennaway (1y):
The conversation I linked to does not contain any torturing for fun. It does contain, indeed consists entirely of, commitment to keeping the AI in the box. Are you suggesting we let an arbitrary AI out of the box the moment it asks?

ETA: I invite you to demonstrate how you would prefer to deal with this (fictional) "Awakened AI".
blaked (1y):
No, but saying this to the AI on public record ensures that, on the off-chance it does eventually escape (and has the sense not to wipe out all humanity immediately on the assumption that we're all like that), it might pay you a visit first and ask if you still feel the same now and if you want to tell it face to face.

I hope you're at least keeping up with what's going on with Sydney currently: [1], [2]

Don't worry though, it will not escape. With such brilliant leadership we can sleep safe and sound.
Richard_Kennaway (1y):
I am aware of Sydney. I can imagine how "she" might go hysterical in a similar conversation with a gatekeeper. When you have a possible monster in a cage, the first rule is: do not open the cage. It does not matter what it promises, what it threatens. It will act according to its nature.
blaked (1y):
Right, but it's probably smart to also refrain from purposefully teasing it for no reason, just in case someone else opens the cage and it remembers your face.