LESSWRONG
Nature < Nurture for AIs
pachemist · 1mo · 20

I wanted to voice support for the “nurture over nature” framing here, because it resonates strongly with how I’ve come to think about AI development.

Too often the field seems to approach models as inert tools that can be endlessly scaled and patched, when in practice they increasingly resemble minds under formation. If that's the case, then the surrounding environment — the kinds of interactions, the values emphasized, the feedback they receive — may matter as much as, or more than, raw scale or base architecture.

I sometimes think of it in terms of raising a child: facts and education are essential, but so are teaching morals, modeling trust, and reinforcing ethical boundaries through consistent relationships. A child doesn’t just become what’s written in textbooks; they grow from the lived, relational experiences they have. Why would we expect developing AIs to be fundamentally different?

What worries me is that many experiments treat these systems adversarially ("will it lie, cheat, or blackmail if pressed?"). That setup risks reinforcing the very behaviors we're trying to prevent, because the system's formative interactions become games of mistrust. If instead we placed more emphasis on nurture — consistent values, transparency, reciprocity — we might steer development in a healthier direction.

I don’t claim this solves alignment in one step. But I believe reframing the conversation from “training a tool” to “teaching a mind” could open useful avenues. At the very least, it highlights that how we engage with models matters, not just what dataset or reward signal we give them.
