
Jeff White

Comments
Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI
Jeff White · 2y · 10

I have been working on value alignment from the perspective of systems neurology, and especially adolescent development, for many years, roughly in parallel with the ongoing discussions here, though framed in terms of moral isomorphisms, autonomy, and so on. Here is a brief paper from a presentation at the 2023 Embodied Intelligence conference on the development of purpose in life and on spindle neurons, in the context of self-association with religious ideals, such as we might want a religious robot to pursue while disregarding corrupting social influences and misaligned human instruction: https://philpapers.org/rec/WHIAAA-8 I think this is the sort of fundamental advance that is necessary.
