Just asking for some feedback.
Hi everyone! I have an ontological framework (at the highest level) that I use in my everyday philosophical thinking, but I've been too lazy to write an actual article about it. I eventually decided to introduce it here on LessWrong, with a focus on AI and AI alignment, since that is a popular and potentially important topic. Why now? I've been thinking about mental agency being powered by language, and LLMs hit very close to that, so I want to share my thoughts.
Some context
After discovering the mind/body problem 15+ years ago, I have been thinking about it a lot. There is something very deep about the physical and the mental not being the same. But...
Are you asking for a p-zombie test? It should be theoretically possible, for any complex system and with appropriate tools, to tell what pattern recognition and word prediction are happening underneath, but I'm not sure it's possible to go beyond that.