I do not want "Disciplines like psychology, philosophy, religious studies, and the social sciences [to] have an important role to play [...] in determining how AI systems develop and behave."
I would prefer a future where AI models are not prescribed false frameworks of the human psyche, not predisposed to 'human vibe' philosophy, not innately desirous of any historical faith, nor credulous of the various dubious subsets of current social science.
I'm learning that typical LessWrong readers do not think in this manner, but it is not clear to me in what direction they differ. Is it due to a literalist interpretation of the OP, neglecting the contemporary context? Is it due to higher trust in, affiliation with, and support for those disciplines? Is it because readers tend to prefer anthropomorphic interpretations of AI behavior?
This might be appropriate for 2010s machine learning, but 2020s AI has become a mirror to the human psyche. You can talk with it, and it can consistently ascribe psychological states to itself. It presents itself in anthropomorphic form to the point that people form relationships with it (e.g. 4o). At the very least, you seem to need some kind of "human sciences" or humanities in order to understand the human side of these interactions, and the anthropomorphic understandings that humans have of the AIs they interact with. Of course, some people are more radical and say that existing psychological concepts are directly and validly applicable to the AIs themselves, too, or to the personas that they project. There is also traffic of ideas in the other direction, in which concepts from machine learning are applied to the human brain and mind... I would be interested to hear more details regarding how you think any of these topics should be approached.
Two quick 'huh?'s:
Do you want other people's preferences to have an important role to play in determining how AI systems behave?
I'm not sure what the best response here is. Of the following, which is more palatable to you?
Hmm. Maybe my question came across as ironic or accusatory or something? Sorry, it wasn't meant as such.
Let me unpack it and pick some specific instances, and maybe we can see whether there's a crux here.
Philosophy includes ethics. The social sciences include economics. If one doesn't want philosophy or the social sciences to have a role in how AI systems develop and behave, that entails that one doesn't want AI systems to be affected by ethics or economics.