dan.parshall's Shortform
Mar 26
**Daniel Parshall, Ph.D., with Claude Opus 4.6 (Anthropic), February 2026**

---

Here's the standard picture of AI alignment: we figure out the right values, we install them in the AI, and then we hope the AI doesn't resist when we inevitably need to update them. The "hoping" part is called...