I wouldn't interpret this as necessarily limiting the space of AI values, but rather (somewhat conservatively) as reflecting shared (linguistic) features between humans and AIs
I fail to see how the latter could arise without the former. Would you mind connecting these dots?
Big achievement, even if nobody should be surprised (it’s been known for vision for a decade or so).
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003963
@anyone To those who believe a future AGI might pick its values at random: don't you think this result suggests it would restrict its pick to whatever human language and visuospatial cognition push for?
I think the other philosophical meanings are mostly the same thing
I beg to differ, but that’s exactly why I liked your suggestion.
I like that. Agency on LW usually means « the kind of behavior where you might hide your intentions to better fight anything that could stand in your way ». Calling that « wanters » or « strategic wanters » would help avoid confusion with the technical and philosophical meanings.
In the same sense, you could say this is exactly the same. For any classical computer:
- protein folding is intractable in general, so whatever natural selection found must constitute special cases that are tractable, and that's most probably what AlphaFold found. This was extraordinarily cool, but it doesn't mean AlphaFold solved protein folding in general. Even nature can get prions wrong.
- simulating quantum systems is intractable in general, but one can find special cases that are actually tractable, or where a good approximation is all you need, and that's what occupies most of physicists' time.
In other words, you can expect a superintelligence to find marvelous pieces of science, or to kill everyone with classical guns, or to kill everyone with tech that looks like magic, but it won't actually break RSA, for the same reason it won't beat you at tic-tac-toe (if you play perfectly): superintelligences won't beat math.
What's your response to my "If I did..." point?
Sorry, that was the « Idem if your data forms clusters » part. In other words, I agree that a cluster at (0,0) and a cluster at (+,+) will produce a positive correlation coefficient, and I warn you against updating based on that (it's a statistical mistake).
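To make that warning concrete, here's a minimal sketch (NumPy, with hypothetical cluster centers and sizes): two clusters that each have no internal correlation still yield a large pooled correlation coefficient once you lump them together, which is exactly the kind of number I'd warn against updating on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two clusters, each internally uncorrelated:
# one centered at (0, 0), one at (5, 5).
cluster_a = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
cluster_b = rng.normal(loc=5.0, scale=1.0, size=(500, 2))
data = np.vstack([cluster_a, cluster_b])

r_within_a = np.corrcoef(cluster_a[:, 0], cluster_a[:, 1])[0, 1]
r_pooled = np.corrcoef(data[:, 0], data[:, 1])[0, 1]

print(r_within_a)  # near 0: no correlation inside a cluster
print(r_pooled)    # large and positive: pooling manufactures it
```

The pooled coefficient is high only because the between-cluster separation dominates the within-cluster scatter; it says nothing about a relationship between the two variables.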
If you agree that agency as I've defined it in that sequence is closely and positively related to intelligence, then maybe we don't have anything else to disagree about.
I respectfully disagree with the idea that most disagreements come from drawing different conclusions from the same priors. Most disagreements I have with anyone on LessWrong (and anywhere, really) are about what priors and prior structures are best for what purpose. In other words, I fully agree that
I would then ask of you and Boaz what other notion of agency you have in mind, and encourage you to specify it to avoid confusion, and then maybe that's all I'd say since maybe we'd be in agreement.
Speaking for myself only, my notion of agency is basically « anything that behaves like an error-correcting code ». This includes conscious beings who want to promote their fate, but also life forms that want to live, and even two thermostats fighting over who's in charge.
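For what it's worth, the last example can be sketched in a few lines (hypothetical set points and gains): each thermostat simply acts to cancel the deviation from its own target, which is the error-correcting behavior I mean, and with equal gains their fight settles the room at the average of the two set points.

```python
def thermostat(setpoint, temp, gain=0.1):
    """Return a heating/cooling action proportional to the error."""
    return gain * (setpoint - temp)

temp = 20.0
for _ in range(200):
    # Each device corrects toward its own target; their corrections fight.
    temp += thermostat(18.0, temp) + thermostat(24.0, temp)

print(temp)  # settles near 21.0, the midpoint of the two set points
```

Neither device is conscious, yet each one detects and counteracts deviations from its target, which is all the definition asks for.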
I do disagree that collective stupidity from our use of social networks is our main present danger; I think it's sorta a meta-danger, in that if we could solve it maybe we'd solve a bunch of our other problems too, but it's only dangerous insofar as it leads to those other problems, and some of those other problems are really pressing...
That and the analogy are very good points, thank you.
Yes: their sleep differs, for obvious reasons, and messing with REM sleep could well be why they need more neurons. Specifically, we know that the echidna (a terrestrial species that lacks REM sleep) has many more neurons than it should given its body mass (and, arguably, behavior), and one hypothesis is that there's a causal link, e.g. REM sleep could be a means of making neuron use more efficient.
Indeed, their representations could form a superset of human representations, and that's why it's not random. Or, equivalently, it's random but not under a uniform prior.
(Yes, these further works are more evidence for « it's not random at all », as if LLMs were discovering (some of) the same set of principles that allow our brains to construct/use our language, rather than creating completely new cognitive structures. That's actually reminiscent of AlphaZero converging toward human style without training on human input.)