I absolutely hate bureaucracy, dumb forms, stupid websites, etc. Like, I almost
had a literal breakdown trying to install Minecraft recently (and eventually
failed). God.
Quinn · 8d
"EV is measure times value" is a sufficiently load-bearing part of my worldview
that if measure and value were correlated or at least one was a function of the
other I would be very distressed.
Like, in a sense, is John
[https://www.lesswrong.com/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization]
threatening to second-guess hundreds of years of consensus on is-ought?
Stephen Fowler · 8d
Are humans aligned?
Bear with me!
Of course, I do not expect there is a single person browsing Short Forms who
doesn't already have a well-thought-out answer to that question.
The straightforward (boring) interpretation of this question is "Are humans
acting in a way that is moral, or otherwise behaving like they obey a useful
utility function?" I don't think this question is particularly relevant to
alignment. (But I do enjoy whipping out my best Rust Cohle impression
[https://www.youtube.com/watch?v=Z5vwDfg3JNQ])
Sure, humans do bad stuff but almost every human manages to stumble along in a
(mostly) coherent fashion. In this loose sense we are "aligned" to some higher
level target, it just involves eating trash and reading your phone in bed.
But I don't think this is a useful kind of alignment to build off of, and I
don't think this is something we would want to replicate in an AGI.
Human "alignment" is only being observed in an incredibly narrow domain. We
notably don't have the ability to self-modify, and of course we are susceptible
to wire-heading. Nothing about current humans should indicate to you that we
would handle this extremely out-of-distribution shift well.
kuira · 8d
It's interesting that an intelligence in the 'original'/'top-level' universe
also might [if simulation theory is valid] have evidence to assume it's
close-to-certainly simulated.
Maybe it would do acausal trade and precommit to not shutting down simulated
intelligences.
Omega. · 7d
Quick updates:
* Our next critique (on Conjecture) will be published in 10 days.
* The critique after that will be on Anthropic. If you'd like to be a reviewer,
  or have critiques you'd like to share, please message us or email
  anonymouseaomega@gmail.com.
* If you'd like to help edit our posts (incl. copy-editing - basic grammar etc.,
  but also tone & structure suggestions and fact-checking/steel-manning),
  please email us! We'd like to improve the pace of our publishing and think
  this is an area where external perspectives could help us:
  * Make sure our content & tone is neutral & fair
  * Save us time so we can focus more on research and data gathering