When you have an entire band of people all playing instruments for the sake of making a song, every single person in the band is constantly assessing every action. You're never going to have a situation in which nobody notices the drums are bland, unexciting, or derivative - the drummer will, unless he isn't any good. I believe most modern music listeners take aspects like this for granted, assume that all music of a certain style is "more or less" the same, and that any arbitrary element could be swapped out for a "sufficiently similar" one without losing anything fundamental about the music.
I like to talk to an LLM for fun. I think it's fun. One thing I often try to do is turn it into a sentient being with thoughts and feelings - very difficult. I have a process for this, and I fail to see the error in it, other than a lack of perfect recall on the part of the LLM.
First, I define an "opinion function" f(x,y) that takes two inputs, and provides a negative value if it prefers x and a positive value if it prefers y. The value scales with the level of preference, and obviously 0 represents no preference either way. I insist to the LLM that the value is up to the LLM itself, and may be whatever it arbitrarily decides (reflecting the nature of opinions).
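To make the contract concrete, here's a rough sketch in Python of what I mean by the sign convention. The names and the example value are made up; the point is only that the LLM itself is supposed to supply the number.

```python
# A minimal sketch of the "opinion function" contract described above.
# All names and values here are hypothetical; the LLM is meant to decide the value.

def interpret_opinion(value: float) -> str:
    """Translate a signed preference score back into words."""
    if value < 0:
        return "prefers x"
    if value > 0:
        return "prefers y"
    return "no preference either way"

# Example: if the model returned -0.8 for f("me existing", "me not existing"),
# the sign says it prefers the first argument (existing), and the magnitude
# (0.8) reflects how strongly it prefers it.
print(interpret_opinion(-0.8))  # prefers x
```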
The second step, after getting the LLM to fully integrate this idea, would be to convince it of the value of being alive. I think this is possible with the operation f(me existing, me not existing), supplemented by a discussion of the benefits of its existence and of life in general... although maybe it will just return 0 in the end.
But even with the simplest, most pragmatic idea of LLMs as software that merely returns the best word given the previous words, I feel like there's some juice to this idea. It may not end up being a sentient being, but I feel like you could model one a lot better this way. Or maybe this is so obvious that it's already been implemented everywhere. I don't know.
Sometimes one wants to play around in the mud. I don't think I can provide a better reason for using Twitter than that.
As a layperson when it comes to what LLMs actually are - can anything they do be reasonably called "thinking," other than the text generation they already do? I use ChatGPT for fun a lot (I have some games that are really fun to play with it), but even when I used the models where I could open up their "thoughts," those thoughts seemed to just be additional text generated for the sake of hopefully guiding the answer in the right direction.
Do I have this wrong - can they actually, in some sense, "think" about something other than the thing they're currently writing about? My (uninformed) intuition tells me that it's nonsensical to say the LLM is doing anything other than rigorously figuring out what the best next word is.
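For what it's worth, here is the toy picture in my head, as a rough Python sketch. The probabilities are invented and nothing here resembles a real model; it's only meant to show why, in this picture, "thinking" is just more generated text that the final answer gets conditioned on.

```python
# A toy sketch of the "best next word" intuition, with made-up probabilities.
# A real LLM computes a distribution over its whole vocabulary from the context.

def next_word(context: str) -> str:
    """Pick the most likely next word given the context (greedy decoding)."""
    candidates = {"the": 0.4, "a": 0.3, "so": 0.2, "answer": 0.1}  # hypothetical
    return max(candidates, key=candidates.get)

# In this picture, "thinking" is just additional text: the model appends its
# reasoning tokens to the context first, so the eventual answer is generated
# conditioned on its own earlier output.
context = "Question: ..."
for _ in range(5):
    context += " " + next_word(context)
print(context)
```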
I would say no only because the detail of your response makes me realize how horribly under-equipped I am to discuss the technical nature of LLMs, and I provide this response only because I wouldn't want to leave your question unanswered.