This conversation is, on the whole, kind of goofy. Today you can ask an AI to write from a first-person perspective about anything emotionally moving. You could have it pretend to be a young person contemplating suicide, and the result could be very moving. But then you have it write as if there were a sentience or consciousness somehow attached to the code, and whatever emotions get stirred up are projected onto someone's mental map of reality. I just hope the conversation stays goofy, because it's easy to imagine how it could get scary. I'm sure there are already people tuning LLMs to brainwash people into believing insane things, doing violent things, and so on.