Radford Neal


Comments

Second-order selection against the immortal

There are a lot of unstated assumptions involved here.  Let's assume that the tendency to take the anti-aging drug is hereditary - so we're really discussing whether or not selection will favour the gene for doing this.  If the mortals and "immortals" (who actually die after ca. 1000 years) are not reproductively isolated, then it seems quite clear that the gene for taking the drug will be favoured by selection.  If one assumes reproductive isolation (as the post seems to), perhaps for social reasons, but that the two groups compete for resources, then the immortal group loses out only if their higher reproductive capacity is outweighed by worse adaptation to changing circumstances.  Whether the "immortals" would be less well adapted will depend on selection effects within that group - if the young immortals out-compete the older immortals, then they adapt just as fast as the mortals.  I think you would have a difficult (but maybe not impossible) time finding values for the various within-group and between-group selection effects that would produce a rapid-adaptation advantage for the mortals large enough to outweigh the huge reproductive advantage of the immortals.

The whole scenario seems rather far from reality to me - talking about evolution implies selection - ie, death or infertility. Assuming the anti-aging drug does not directly affect fertility, I think one can assume that any behavioural trait of low fertility among the immortals will be strongly selected against, after which we're in the Malthusian state in which the "immortals" often die early from starvation.  So the average age when they have children may not be so high after all...

Plus, of course, the scenario assumes a world-changing innovation of an anti-aging drug, but no other world-changing innovations (AI, space travel, ...?) that would render the whole discussion irrelevant.

Second-order selection against the immortal

I'm still not following you.  If the "immortals" start having children at age 20, and have one every 5 years or so for about the next 1000 years (until they get hit by a bus, or whatever, at an average age of around 1000), for a total of about 200 children, why isn't this much better from an evolutionary point of view than being a mortal, who has maybe 10 children?
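
To make the arithmetic behind those numbers explicit (a rough sketch using the figures above, not serious demography):

```python
# Rough arithmetic for the comparison above: one child every 5 years from
# age 20 until death at an average age of about 1000, versus a mortal's
# "maybe 10" children.  (Numbers taken from the comment, purely illustrative.)
immortal_children = (1000 - 20) // 5   # about 196, i.e. "about 200"
mortal_children = 10
print(immortal_children, immortal_children / mortal_children)  # ~196, roughly a 20-fold advantage
```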

Sure, the average age at which the immortals have children is much higher. But why does that matter, when they have children at a young age, just like the mortals?  They have everything the mortals have, plus more.

Of course, this isn't sustainable - something will put a stop to it, such as famine.  But then, any species that isn't a total failure cannot sustain its maximum ("when times are good") reproductive rate.  (If its maximum reproduction rate is bare replacement, it won't be able to recover from some setback, a hurricane or whatever.) If one assumes that "immortality" is cost-free (eg, it doesn't lead to reduced muscle mass, hence reduced strength, and greater chance of losing a fight), it seems like a definite evolutionary advantage.

Second-order selection against the immortal

See the story, "Good-bye, Robinson Crusoe", by John Varley.

Second-order selection against the immortal

Several things about this post are unclear or don't make much sense to me.

What do you mean by "immortal"? 

You seem to be responding to the question, "Why didn’t the body just evolve to secrete that [ immortality ] drug itself?" That would imply that "immortality" doesn't really mean that death is impossible, just that you don't age.  (It's obvious why absolute immortality can't evolve.) You could still die from starvation, predation, homicide, war, some sufficiently aggressive disease, some sufficiently severe accident, etc.  But much of what you write seems to make no sense once one acknowledges that everyone is still going to die sooner or later, even if later is much later.

I'm also not clear on why the not-really-immortals wouldn't have children.  Indeed, why not just as many children as those who choose not to be immortal? And actually, they could have many, many more children.

You also seem to assume that the not-really-immortals live separately from the mortals.  But why?  The scenario would seem to be that a drug that prevents aging is discovered.  Some people then choose to take it.  Some don't.  I don't see how they end up in different societies.

Considering all this, I don't see why the decision to take the anti-aging drug is not strictly better from an inclusive fitness point of view.  (You can still decide to give your share of food to your children if there's not enough to go around...)

AI Risk for Epistemic Minimalists

I'm not saying that Michael Faraday's work in the earlier part of the 19th century didn't actually contribute to existential risk, by being part of the developments ultimately enabling unfriendly AI hundreds of years after he lived.  Perhaps it did.  What I'm saying is that you can't cite the huge progress Faraday made as evidence that rapid technological progress leads to existential risk, and then use that as an argument that AI poses an existential risk - because the only people who believe that Faraday's work contributed to existential risk are the ones who already think that AI poses an existential risk.  Your argument won't convince anyone who isn't already convinced.

AI Risk for Epistemic Minimalists

Well, what I'm saying is that you're invoking historical experience of existential risk arising from rapid growth in power, when there is no such historical experience, up until at least 1945 (or a few years earlier, for those in the know). Until then, nobody thought that there was any existential risk arising from technological progress. And they were right - unless you take the rather strange viewpoint that (say) Michael Faraday's work increased existential risk because it was part of the lead up to risk from unfriendly AI hundreds of years in the future...

The Validity of Self-Locating Probabilities (Pt. 2)

I think we've had this discussion before, but let me try one more time...

You say, "And the "probability I am the Orignal" is not a valid concept. "I" is an identification not based on anything but my first-person perspective. Whether "I" am the Orignal or the Clone is something primitive, not analyzable. Any attempt to justify this probability requires additional postulates such as equating "I" to a random sample of some sort."

But to me, this means throwing out the whole notion of probability as a subjective measure of uncertainty.  Perhaps you're fine with that, but it also means throwing out all use of probability in scientific applications, such as evaluating the probability that a drug will work and/or have side effects - because the practical use of such evaluations is to conclude that "I" will probably be cured by that drug, which is a statement you have declared meaningless.  Maybe you're assuming that some "additional postulate" will fix that, but if so, I don't see why something similar wouldn't also render the probability that "I" am the Original in your problem meaningful.

I think an underlying problem here is an insistence on overly abstract thought experiments.  You're assuming that the subject of the experiment cannot simply walk out the door of the room they're in and see whether they're in the same place where they went to sleep (in which case they're the Original), or in a different place.  They can also do all sorts of other things, whose effects for good or bad may depend on whether or not they are the Original (before they figure this out). They will in general need some measure of uncertainty in making decisions of this sort - they can't simply say that self-locating probabilities are meaningless, when implicitly they will be using them to decide. This is all true even if they in fact decide to cooperate with the experimenter and do none of this.
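
As a concrete illustration (the payoffs here are entirely made up by me, not taken from the post), even the simple decision of whether to walk out the door already forces the subject to use some number playing the role of "probability that I am the Original":

```python
# Hypothetical decision for the awakened subject: walk out (great if you're
# the Original and know the neighbourhood, useless if you're the Clone and
# are somewhere unfamiliar) versus stay put (a safe middling payoff either way).
def expected_value(p_original, payoff_if_original, payoff_if_clone):
    """Expected payoff of an action under self-locating probability p_original."""
    return p_original * payoff_if_original + (1 - p_original) * payoff_if_clone

for p in (0.25, 0.5, 0.75):
    walk_out = expected_value(p, payoff_if_original=10, payoff_if_clone=0)
    stay_put = expected_value(p, payoff_if_original=4, payoff_if_clone=4)
    print(f"p = {p}: better to {'walk out' if walk_out > stay_put else 'stay put'}")
```

Which action is better depends on that probability, so the subject can't coherently treat it as meaningless while still having to choose.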

The assumption that the experiment must proceed in the manner as it is abstractly described severs all connection between the answers being proposed and the real world.  There is then nothing stopping anyone from proposing that the probability of Heads is 1/2, or 3/4, or 2/7 - since none of these have consequences - and similarly the probability of "I am the Original" can be anything you like, or be meaningless, if you treat the person making the judgement as an abstract entity constrained to do nothing but what they're supposed to do in the problem statement, rather than as a real person.

AI Risk for Epistemic Minimalists

Maybe I'm missing something in your argument, but it seems rather circular to me.  

You argue that rapid technological change produces existential risk, because it has in the past.  But it turns out that your argument for why technological change in the past produced existential risk is that it set the stage for later advances in bioweapons, AI, or whatever, that will produce existential risk only in the future.

But you can't argue that historical experience shows that we should be worried about rapid AI progress as an existential risk, if the historical experience is just that this past progress was a necessary lead up to progress in AI, which is an existential risk...

It's certainly plausible that technological progress today is producing levels of power that pose existential risks.  But I think it is rather strange to argue for that on the basis of historical experience, when historically technological progress did not in fact lead to existential risk at the time.  Rather, you need to argue that current progress could lead to levels of power that are qualitatively different from the past.

AI Risk for Epistemic Minimalists

"quick increases in human power have historically led to increases in existential risk"

You lost me here.  You seem to think that this statement is obviously true, and hence not necessary to argue for.  But it doesn't seem true to me.

I'll assume that by "existential risk" you mean extinction of humanity (or reduction of humanity to some terrible state that we would regard as at least as bad as extinction).  With that definition, the only increases in human power that might arguably have increased existential risk are the development of nuclear weapons and the development of biological warfare capabilities.  I think the first of these is not actually an existential risk.  So the historical record has one (possible) instance, which does not seem like a good basis for generalization.  And for neither of these capabilities does the quickness with which they were developed seem particularly relevant to whatever existential risk they may pose.

Most technological developments reduce existential risk, since they provide more ways of dealing with the consequences of something like a meteor impact.  The only exception I can think of is that new technologies may lead to old technologies being forgotten, and maybe the old technologies are the ones that would be useful after a disaster.  But this would be an issue only in the last hundred years or so (before that there were still many agricultural and hunter-gatherer societies using earlier technologies).  So there's not a long historical record here either.

Factors of mental and physical abilities - a statistical analysis

Assuming you're using "C" to denote Covariance ("Cov" is more common), that seems right.

It's typical that the noise covariance is diagonal, since a general covariance matrix for the noise would render use of a latent variable unnecessary (the whole covariance matrix for x could be explained by the covariance matrix of the "noise", which would actually include the signal as well).  (Though it could be that some people use a non-diagonal covariance matrix that is subject to some other sort of constraint that makes the procedure meaningful.)
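
For concreteness, here's a minimal numpy sketch of that point (the loadings and noise variances are made up by me):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-factor model: x = lam * f + eps, with Cov(eps) = diag(psi) (diagonal noise).
p = 5                                    # number of observed variables
lam = rng.normal(size=p)                 # factor loadings
psi = rng.uniform(0.5, 1.5, size=p)      # diagonal noise variances

implied_cov = np.outer(lam, lam) + np.diag(psi)   # Cov(x) = lam lam^T + diag(psi)

# Check against simulated data.
n = 200_000
f = rng.normal(size=n)
x = np.outer(f, lam) + rng.normal(size=(n, p)) * np.sqrt(psi)
print(np.round(np.cov(x, rowvar=False) - implied_cov, 2))   # approximately zero

# If Cov(eps) were instead left completely general, it could be set equal to
# Cov(x) itself with lam = 0, so the latent factor would explain nothing extra.
```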

Of course, it is very typical for people to use factor analysis models with more than one latent variable.  There's no a priori reason why "intelligence" couldn't have a two-dimensional latent variable.  In any real problem, we of course don't expect any model that doesn't produce a fully general covariance matrix to be exactly correct, but it's scientifically interesting if a restricted model (eg, just one latent variable) is close to being correct, since that points to possible underlying mechanisms.
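
And a quick illustration of comparing restricted models (one vs. two latent variables) on simulated "ability" data - again just a sketch with made-up parameters, using scikit-learn's FactorAnalysis:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Simulate 8 test scores driven by 2 latent abilities (hypothetical setup).
n, p, k_true = 2000, 8, 2
Lambda = rng.normal(size=(p, k_true))          # true loadings
psi = rng.uniform(0.5, 1.5, size=p)            # diagonal noise variances
F = rng.normal(size=(n, k_true))
X = F @ Lambda.T + rng.normal(size=(n, p)) * np.sqrt(psi)

# Fit models with 1, 2, 3 factors and compare average log-likelihood;
# the fit should improve sharply from 1 to 2 factors and then level off.
for k in (1, 2, 3):
    fa = FactorAnalysis(n_components=k).fit(X)
    print(k, round(fa.score(X), 3))
```

If a one-factor model came close to the two-factor fit on real data, that would be the "scientifically interesting" case pointing at a single underlying mechanism.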
