Like many people, I felt compelled to distinguish myself by solving your problem while playing by your rules (rules which aren't completely clear). But in the end (and I guess I should apologize in advance if this doesn't help), why should any of that change anything? Picture someone who for his whole life thought he had free will, then discovered that the universe is deterministic, with all that entails for "free will" as most people envision it. That sounds pretty similar to your situation: you discovered that you may at any point "become", or "jump" to, another conscious being whose memories are consistent with your own, but whose life and universe are vastly different from your current ones.

Then what? What are your goals anyway? How does that change of perspective affect them? How can you best act, and adjust yourself, to still pursue those goals? What else should matter to you? The waters may be a little muddier than you believed them to be before, but not so muddy that it becomes impossible to move forward. Seriously, aside from the vague existential angst, spell out how this change of perspective affects your beliefs, and what actions you think you should take to reach your goals in life (assuming you have a good grasp of your goals; if you don't, then you should solve that first).

Why exactly does selecting and testing work better than grooming (and breeding)?

Assuming it does:

Several factors may come into play; selecting may not be the only thing that differs between our current society and, say, a medieval one. Quantitatively, how much of our current economic success does this one factor account for?

That being said, we also have a pretty large pool of people to select from nowadays (stemming, for instance, from our total population being larger, which yields more outliers in capability/skills, and from better communication, transportation, etc., which let us search far and wide for appropriate candidates).

Also, maybe our ability to select has grown faster and better than our ability to groom/breed (at least in part coincidentally rather than through active pursuit - see above about population size), while our capacity to groom/breed may have stagnated. The quality and efficiency of education may be better than it used to be for the groomed elites of that era - though I don't know that for sure (what science knows, and what can be taught, does appear to be better now than then). Our ability to "breed" doesn't seem to have improved much (eugenics is a dead idea). I'd actually expect that eugenics and genetic engineering could fill an arbitrarily large part of that gap if they were actively pursued (which they may yet be: the debate about CRISPR-Cas is still hot, and places like China may well push forward with such ideas).

There was something of this in "Twelve Virtues of Rationality" too, for instance:

Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion.

Also, check Go Forth and Create the Art!

How could you measure health in absolute terms anyway? Where exactly do you set the cutoff between healthy and non-healthy? Does it vary with current medical technology? Does your income or socio-cultural group matter, or do you average this over everyone? Why average over the US? Why not over the world, over developed countries, or over particular states?

can only occur if for some reason we care about some people's opinion more than others in some situations

Isn't that the description of a utility maximizer (or optimizer) taking into account the preferences of a utility monster?
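
To make that concrete, here is a toy sketch with invented numbers (nothing from the post itself): a plain sum-of-utilities maximizer facing one agent whose marginal utility dwarfs everyone else's ends up allocating essentially everything to that agent.

```python
# Toy sketch (invented numbers): a plain sum-of-utilities maximizer
# facing one agent whose marginal utility per unit of resource dwarfs
# everyone else's, i.e. the textbook "utility monster" setup.

def total_utility(allocation, marginal_utilities):
    """Sum of each agent's utility, assuming utility is linear in the
    amount of resource that agent receives."""
    return sum(r * m for r, m in zip(allocation, marginal_utilities))

# Agent 0 is the "monster": 100 utils per unit; agents 1-3 get 1 util per unit.
marginal_utilities = [100, 1, 1, 1]

equal_split      = [3, 3, 3, 3]    # split 12 units evenly
feed_the_monster = [12, 0, 0, 0]   # give everything to agent 0

print(total_utility(equal_split, marginal_utilities))       # 309
print(total_utility(feed_the_monster, marginal_utilities))  # 1200

# The aggregate maximizer picks the second allocation: in effect it
# "cares about" the monster's preferences far more than anyone else's.
```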

There's something a little ridiculous about claiming that every member of a group prefers A to B, but that the group in aggregate does not prefer A to B.

That would look a bit like Simpson's paradox, actually.
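
To spell out the analogy, here is the standard kidney-stone illustration of Simpson's paradox, with the usual textbook numbers rather than anything from this thread: a treatment can win inside every subgroup and still lose once the subgroups are pooled.

```python
# Standard kidney-stone illustration of Simpson's paradox (textbook
# numbers): treatment A beats treatment B inside *both* subgroups,
# yet B looks better once the subgroups are pooled.

data = {
    # group: {treatment: (successes, trials)}
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

for group, treatments in data.items():
    a, b = rate(*treatments["A"]), rate(*treatments["B"])
    print(f"{group}: A={a:.0%}, B={b:.0%} -> A wins: {a > b}")

# Pool the two groups and the comparison reverses.
pooled = {t: tuple(sum(x) for x in zip(data["small stones"][t],
                                       data["large stones"][t]))
          for t in ("A", "B")}
a_all, b_all = rate(*pooled["A"]), rate(*pooled["B"])
print(f"pooled: A={a_all:.0%}, B={b_all:.0%} -> A wins: {a_all > b_all}")

# Output:
#   small stones: A=93%, B=87% -> A wins: True
#   large stones: A=73%, B=69% -> A wins: True
#   pooled: A=78%, B=83% -> A wins: False
```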

I don't see how I could agree with this conclusion:

But many people don't like this, usually for reasons involving utility monsters. If you are one of these people, then you better learn to like it, because according to Harsanyi's Social Aggregation Theorem, any alternative can result in the supposedly Friendly AI making a choice that is bad for every member of the population.
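
To be clear about what that quoted claim is pointing at, here is a toy sketch with invented numbers; the min-based aggregator below is just one example of a rule that is not a weighted sum of expected utilities, and it ends up preferring an option that every agent expects to be worse.

```python
# Toy sketch (invented numbers) of the failure mode the quoted claim
# points at. The aggregator here is "expected utility of the worst-off
# agent", one example of a rule that is NOT a weighted sum of
# individual expected utilities.

# An option is a lottery: a list of (utility-profile, probability) pairs.
# Option A: a fair coin decides which of two agents gets utility 1.
# Option B: both agents get utility 0.4 for sure.
option_a = [((1.0, 0.0), 0.5), ((0.0, 1.0), 0.5)]
option_b = [((0.4, 0.4), 1.0)]

def expected_individual_utilities(option):
    n = len(option[0][0])
    return tuple(sum(p * utils[i] for utils, p in option) for i in range(n))

def expected_min_welfare(option):
    # Non-linear aggregation: expected value of the minimum utility.
    return sum(p * min(utils) for utils, p in option)

print(expected_individual_utilities(option_a))  # (0.5, 0.5)
print(expected_individual_utilities(option_b))  # (0.4, 0.4)
print(expected_min_welfare(option_a))           # 0.0
print(expected_min_welfare(option_b))           # 0.4

# The min-based aggregator picks B, even though each agent's expected
# utility is strictly higher under A, i.e. the choice is worse for
# every member of the population in expectation.
```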

If both ways are wrong, then you haven't tried hard enough yet.

Well explained though.