Hello, I’m Mayowa.
I work on Cross-Disciplinary Research at Aditu (https://aditu.tech).
This is a really interesting post. I appreciate the outline of the general relationship between self-reporting and sentience.
The idea that "Training an LLM to develop a model of its internal operations which enables it answer non-trivial questions about its mental states" could be a straightforward way to optimize models for Sentience - I think that’s very thought-provoking.
I'm curious to hear your thoughts on these.
The general topic of sentience in machines is one I'm interested in thinking about and discussing with people.
I wrote an article exploring sentience in machines by studying neural activations in artificial neural networks and applying insights from neuroscience's analysis of neuronal activations in humans. I posted an intro here on LessWrong (it didn't get much traction on this site), so feel free to take a look.