Human alignment: chrislakin.com/bounty
made some light edits because of this comment, thanks
oh ok i might start doing that. knowing my calibration on that would be nice
oh ok hm. i also don't want to be incentivized against giving easy-for-me help to people with low odds of success though
could you give a few examples?
also seems time-intensive hmmmm
also, i thought about it more and i really like the metric of "results generated per hour"
:D i really hope bounties catch on
wow this is controversial (my own vote is +6)
wonder why
- One-sentence summary: Formalise one piece of morality: the causal separation between agents and their environment. See also Open Agency Architecture.
- Theory of change: Formalise (part of) morality/safety, solve outer alignment.
Chris Lakin here - this is a very old post; "What does davidad want from «boundaries»?" should be the canonical link
Why SPY over QQQ?
available on the website at least
i like this, thanks. might take a bit of time to put together but interested