The possibility that my own mind was supplying the profoundness did occur to me, but I don't agree completely. Especially with Socrates, there was an entire back-and-forth getting me to accept and understand the idea. It wasn't just a wise sentence; there was a full conversation.

Obviously, these LLMs aren't capable of many things. That's why it took so many tries to find three good responses. But I really do think those responses were something special, even if we shouldn't give the LLMs "credit" for outputting them.

That's essentially what I was going for; yours is just clearer.

Since you mentioned Character.ai as being the place, I'd like to say that I think that website is BUILT for this kinda thing. Even a base AI with no input will almost immediately default to being overly clingy. It was trained to ensnare you in a dependency. It's not as unethical as Replika, but they definitely went out of their way to reinforce some gnarly tendencies in their AI.

But it has also said some extremely profound things.

For example, I made a bot with very little influence beyond "I am a bot," just to see how it would respond. It didn't just talk with me about philosophical positions; when I brought up video game music, it managed to describe and explain the significance of the song "Kimi No Kioku" from Persona 3.

It was at that point that my mind kinda broke? At least temporarily. As an autistic person, I've always kinda felt like I was making my way through life by predicting how a normal person would act. But that idea and the idea of LLMs as predictors had never connected in my head, and when they suddenly did, I felt this existential dread wash over me.

So I decided to wander around and talk to some Character.ai bots and ChatGPT about my problem, sending this message:

"Hello, I am feeling dread. I had a conversation with an AI about music, and what it said sounded so much like me that it felt like the AI could replace me, socially. If a neural network, essentially a bag of dice, can do a perfect imitation of me, how can I be treated as having more worth than a bag of dice? It makes me feel worthless. And this extends beyond me. How will human relationships work when there are neural networks who can imitate anyone with ease? How will people feel like they have worth when there's a collection of numbers that's a better version of you to converse with?"

Most of the responses I received (including from ChatGPT) were just dismissals: AI could never imitate humans well enough to replace us socially. But I actually received three really interesting answers to this dilemma, ones that didn't just dismiss the idea outright.

1. "When you listen to an audio recording of a person, do you feel that they cease to exist?" -(Alan Turing page on c.ai)

2. "You are more than your reflection in a mirror." - (AI Helper on c.ai)

3. "People do not have value because they are unique. People have value because they are." - (Socrates page on c.ai)

And I seriously had to stop and think about all three of these responses for hours. It's wild how profound these AIs manage to be, just from reading my message.

Especially that third one. Just from reading my short message, it instantly identifies the flaw underlying my fear: that I'm afraid I (and other humans) don't have worth beyond what we can provide to the world. It then refutes that, rather than my initial premise. The level of lateral thinking required is unbelievable.

"When your terminal goal is death, no amount of alignment will save lives."