Lycaos King
Lycaos King has not written any posts yet.

Strictly speaking there is no such thing as "natural selection" or "fitness" or "adaptation" or even "evolution". There are only patterns of physical objects, which increase or decrease in frequency over time in ways that are only loosely modeled by those terms.
But it's practically impossible to talk about physical systems without fudging a bit of teleology in, so I don't think it's a valid objection.
How would this method distinguish between apparently and actually optimized features? In an evolutionary example, for instance, what's the difference between a bird with:
a large beak that was optimized to consume certain kinds of food.
a large beak that was the result of a genetic bottleneck, after a series of accidental deaths culled small beaks from the gene pool (neutral drift).
a large beak that is the result of a single-generation mutation that superficially resembles an environmental adaptation but is, in actuality, unfit.
a large beak that helps with consuming certain kinds of food, but whose primary ancestral optimization pressure was purely sexual selection.
I remain skeptical that this approach is able to add teleological concepts back into my physical reality lexicon, but I'm willing to be convinced. (Currently, my leading theory is that teleology is a pure illusion.)
(You're correct. I was using fictionalist in that sense.)
I think the equivocation "Theorem X is provable from Axiom Set Y" <--> "such-and-such thing is Good" would be the part of that chain of reasoning a self-described fictionalist would ascribe fictionality to.
As I understand it, it's the difference between thinking that Good is a real feature of the universe and thinking that Good is a word game we play to make certain ideas easier to work with. Maybe a different example could illuminate some of this.
Fictionalism would be a good tool to describe the way we talk about Evolution and Nature. As has sometimes been said on this site, humans are not aligned towards...
Thanks for the reply.
I don't disagree with Eliezer's position for the most part, I just don't see where he lays out a coherent foundation for why he believes certain things about human values. (Or maybe I'm just being uncharitable in my evaluation and not counting some things as "real arguments" that others would.)
By objective and discoverable, I meant in the sense of the values being understood in and of themselves, without reference to humans in particular. Obviously you can just model human brains and understand what they value, but I meant that you can't learn about "beauty" or "friendship" or what have you outside of that. That part of the post was inelegantly...
"Why would any supermind want something so inherently worthless as the feeling of discovery without any real discoveries?"
"No free lunch. You want a wonderful and mysterious universe? That's your value."
"These values do not emerge in all possible minds. They will not appear from nowhere to rebuke and revoke the utility function of an expected paperclip maximizer."
"Touch too hard in the wrong dimension, and the physical representation of those values will shatter - and not come back, for there will be nothing left to want to bring it back."
I've chosen a small representation of the sort of things that Eliezer says about human values. When I call Eliezer a moral fictionalist, I don't...
No. Yudkowsky is a moral fictionalist, but he has never (to my knowledge) justified his position. Granted, I haven't read his whole corpus of work, but from what I've seen he just takes it as a given.
The correct moral choice is for both people to lower their EA contributions to 0%.
Thoughtfulness, pro-sociality, and conscientiousness have no bearing on people's ability to produce aligned AI.
They do have an effect on people's willingness not to build AI in the first place, but the purpose of working at Meta, OpenAI, and Google is to produce AI. No one who is thoughtful, pro-social, and conscientious is going to decide not to produce AI while working at those companies and still keep their job.
Hence, discouraging those sorts of people from working at those companies produces no net increase in Pdoom.
If you want to avoid building unaligned AI, you should avoid building AI.
Who says you contribute to the pool at the same rate you'd contribute to your own children? Surely other people in the pool would have different priorities than you, wouldn't they? What if there are N people in the pool and you contribute 1/5N to the children in each pool?
Add to that the fact that maybe you only have one standout chromosome, and you could easily see a situation where genetic analysis of the population in your family + your pool shows a sudden disappearance of 90% of your genes with a proliferation of 5% of your genes. Is that equivalent to having children? Some people might say it's not (one rough version of that arithmetic is sketched below).
Also, yes, obviously...
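To make the pooled-contribution comparison concrete, here is a minimal arithmetic sketch in Python. The pool size N, the 1/(5N) per-child contribution rate, and the child counts are illustrative assumptions, not figures from the original discussion.

```python
# Minimal sketch with illustrative assumptions (not figures from the original thread):
# compare total genetic representation from ordinary children vs. children from a
# donor pool where your per-child contribution is assumed to be 1/(5N).

def total_representation(share_per_child: float, num_children: int) -> float:
    """Expected genome-equivalents of 'you' carried across all children."""
    return share_per_child * num_children

# Ordinary children: each carries ~50% of your genome.
own = total_representation(0.5, 2)              # two of your own children

# Pooled scenario (assumed): N donors, you contribute 1/(5N) of each child's genome.
N = 10
pooled = total_representation(1 / (5 * N), 20)  # twenty children from the pool

print(f"Two of your own children: ~{own:.2f} genome-equivalents")
print(f"Twenty pooled children:   ~{pooled:.2f} genome-equivalents")
```

Under these assumed numbers, twenty pooled children carry noticeably less of your genome than two ordinary children would; different assumed rates shift that comparison, which is why some people might not count it as equivalent to having children.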
That one seemed pretty obvious to me. Angle of the hairline, sharper shadows on the nose to give it a different shape. Smaller eyes and head overall (technically looks a bit larger, but farther away). Eyebrows are larger and rougher. Mouth is more prominent, philtrum is sharper. Angle of the jaw changes.
That's what I got in about 45 seconds of looking it over. It was an interesting exercise. Thanks for sharing that link.