I think it bears repeating here:
Influence is only one aspect of the moral formula; the other aspect is the particular context of values being promoted.
These can be quite independent, as with a tribal chief, with substantial influence, acting to promote the perceived values of his tribe, vs. the chief acting to promote his narrower personal values. [Note that the difference is not one of fitness but of perceived morality. Fitness is assessed only indirectly within an open context.]
Excellent advice Eliezer!
I have a game I play every few months or so. I get on my motorcycle, usually on a Friday, pack spare clothes and toiletries, and head out in a random direction. At almost every branch in the road I choose randomly, taking my time exploring and enjoying the journey. After a couple of days, I return hugely refreshed, creative potential flowing.
But we already live in a world, right now, where people are less in control of their social destinies than they would be in a hunter-gatherer band...
If you lived in a world the size of a hunter-gatherer band, then it would be easier to find something important at which to be the best - or do something that genuinely struck you as important, without becoming lost in a vast crowd of others with similar ideas.
Can you see the contradiction: bemoaning that people are now "less in control" while they exercise ever-increasing freedom of expression? That it is harder to "find something important" even as so many more opportunities become available? Can you see the confusion over a context that is increasingly not ours to control?
Eliezer, here again you demonstrate your bias in favor of the context of the individual. Dunbar's (and others') observations on organizational dynamics apply generally, while your interpretation appears to speak quite specifically of your experience of Western culture and your own perceived place in the scheme of things.
Plentiful contrary views support a sense of meaning, purpose, and pride implicit in the recognition of competent contribution to community, without the (assumed) need to be seen as extraordinary. Especially in modern Japan and elsewhere in Asia, the norm is still to bask in recognition of competent contribution and to recoil from any suggestion that one might substantially stand out. This is not false modesty. In Western society too, examples of fulfillment and recognition through service run deep, although this is belied by the (entertainment) media.
Within any society, recognition confers added fitness, but to satisfice one need not be extraordinary.
But if people keep getting smarter and learning more - expanding the number of relationships they can track, maintaining them more efficiently...[relative to the size of the interacting population]...then eventually there could be a single community of sentients, and it really would be a single community.
But as the cultural matrix keeps getting smarter—supporting increasing degrees of freedom with increasing probability—then eventually you could see self-similarity of agency over increasing scale, and it really would be a fractal agency.
Well, regardless of present point of view—wishing all a rewarding New Year!
Ironic, such passion directed toward bringing about a desirable singularity,
rooted in an impenetrable singularity of faith in X.
X yet to be defined, but believed to be [meaningful|definable|implementable] independent of future context.
It would be nice to see an essay attempting to explain an information- or systems-theoretic basis supporting such an apparent contradiction (definition independent of context).
Or, if one is arguing for a (meta)invariant under a stable future context, an essay on the extended implications of such stability, if one would attempt to make sense of "stability, extended."
Or, a further essay on the wisdom of isshoukenmei, distinguishing between its standard meaning of giving one's all within a given context, and your adopted meaning of giving one's all within an unknowable context.
Eliezer, I recall that as a child you used to play with infinities. You know better now.
Coming from a background in scientific instruments, I always find this kind of analysis a bit jarring with its infinite regress involving the rational, self-interested actor at the core.
Of course two instruments will agree if they share the same nature, within the same environment, measuring the same object. You can map onto that a model of priors, likelihood function and observed evidence if you wish. Translated to agreement between two agents, the only thing remaining is an effective model of the relationship of the observer to the observed.
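The instrument analogy can be sketched in a few lines (a hypothetical beta-binomial toy model, not anything from the original discussion): two agents sharing the same prior, the same likelihood function, and the same observed evidence necessarily compute the same posterior, so their agreement is structural rather than coincidental.

```python
# Hypothetical sketch: two "instruments" sharing prior, likelihood, and
# evidence must report the same reading (posterior).
def posterior(prior_a, prior_b, heads, tails):
    # A Beta(a, b) prior with a Bernoulli likelihood is conjugate,
    # so the posterior is simply Beta(a + heads, b + tails).
    return (prior_a + heads, prior_b + tails)

# Same nature (prior + likelihood), same environment, same observed object:
agent_1 = posterior(1, 1, heads=7, tails=3)
agent_2 = posterior(1, 1, heads=7, tails=3)
# Any disagreement would have to enter through differing priors, likelihoods,
# or evidence, i.e. through the relationship of observer to observed.
```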
I'll second jb's request for denser, more highly structured representations of Eliezer's insights. I read all this material and find it entertaining and sometimes edifying, but disappointing in that it is not converging on either a central thesis or central questions (preferably both).
Crap. Will the moderator delete posts like that one, which appear to be so off the Mark?
…but the self-taught will simply extend their knowledge when a lack appears to them.
Yes, this point is key to the topic at hand, as well as to the problem of meaningful growth of any intelligent agent, regardless of its substrate and facility for (recursive) improvement. But in this particular forum, given the biases that tend to predominate among those whose very nature enforces a relatively narrow (albeit deep) scope of interaction, the emphasis should be not on "will simply extend" but on "when a lack appears."
In this forum, and others like it, we characteristically fail to distinguish between the relative ease of learning from already-abstracted explicit and latent regularities in our environment, and the fundamentally hard (and increasingly harder) problem of extracting novelty of pragmatic value from an exponentially expanding space of possibilities.
Therein lies the problem—and the opportunity—of increasingly effective agency within an environment of even more rapidly increasing uncertainty. There never was or will be safety or certainty in any ultimate sense, from the point of view of any (necessarily subjective) agent. So let us each embrace this aspect of reality and strive, not for safety but for meaningful growth.
A few posters might want to read up on Stochastic Resonance, which was surprisingly surprising a few decades ago. I'm getting a similar impression now from recent research in the field of Compressive Sensing, which ostensibly violates the Nyquist sampling limit, highlighting the immaturity of the general understanding of information theory.
In my opinion, there's nothing especially remarkable here other than the propensity to conflate the addition of noise to data with the addition of "noise" (a stochastic element) to the search for data.
This confusion appears to map very well onto the cybernetic distinction between intelligently knowing the answer and intelligently controlling for the answer.
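To make the stochastic-resonance point concrete, here is a minimal, self-contained sketch (all parameters chosen purely for illustration): a sine wave too weak to cross a detector's threshold on its own becomes detectable when a moderate amount of noise is added, while too little noise yields nothing and too much drowns the signal.

```python
import math
import random

def detection_correlation(noise_sigma, threshold=1.0, amplitude=0.5,
                          n=20000, seed=0):
    """Correlate a subthreshold sine with a noisy threshold detector's output."""
    rng = random.Random(seed)
    sig = [amplitude * math.sin(2 * math.pi * t / 100) for t in range(n)]
    # The detector fires (1.0) only when signal + noise exceeds the threshold.
    out = [1.0 if s + rng.gauss(0, noise_sigma) > threshold else 0.0
           for s in sig]
    # Pearson correlation between the clean signal and the detector output.
    ms, mo = sum(sig) / n, sum(out) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sig, out))
    vs = sum((s - ms) ** 2 for s in sig)
    vo = sum((o - mo) ** 2 for o in out)
    return cov / math.sqrt(vs * vo) if vs * vo > 0 else 0.0

low = detection_correlation(0.05)   # almost no noise: threshold never crossed
mid = detection_correlation(0.5)    # moderate noise: crossings track the signal
high = detection_correlation(5.0)   # heavy noise: crossings nearly random
```

The moderate-noise case yields the strongest correlation, which is the resonance: the noise is added to the detection process, not merely to the data.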