Well maybe llms can "experiment" on their dataset by assuming something about it and then being modified if they encounter counterexample. I think it vaguely counts as experimenting.
I think that there may be wrapper-minds with very detailed utility functions, that whatever qualities you attribute to agents that are not them, the wrapper-mind's behavior will look like their with arbitrary precision on arbitrarily many evaluation parameters. I don't think it's practical or it's something that has a serious chance of happening, but I think it's a case that might be worth considering.
Like, maybe it's very easy to build a wrapper mind that is a very good approximation of very non wrapper mind. Who knows
Sounds like a statement "no AI can have or get them".
Well it can learn it, it can develop them based on a dataset of people's stories. Especially it looks possible with the approach that is currently being used.
Isn't consciousness just a "read-only access thing to the world" then? Like is there some reason why dualism is not isomorphic to parallelism?
There is a lot more useful data on YouTube (by several orders of magnitude at least? idk), I think the next wave of such breakthrough models will train on video.
Give it 140k chances to predict "rain or no rain, in this location and time?" and it has no chance.
Well i think it can just encode some message in this bits and you or your colleagues will eventually check it