All of DonGeddis's Comments + Replies

I had the same reaction as Elizabeth.  The data I've seen suggests that the key variable is "time since last dose".  Vaccines protect against severe disease and death very well, possibly for years.  But protection against infection specifically appears to peak about a month after your last dose, and drop to (around) zero about six months after your last dose.

Are you sure you're not confusing a time sequence here with quantity or quality?  Your sentence suggests that there is something "different" about getting a booster (but it's the s... (read more)

"But raising nominal prices is economically useless" No, that's not true. Raising nominal prices helps with sticky wages & debts. Failing to raise nominal prices causes recession, unemployment, and bankruptcies.
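The sticky-contracts point can be made concrete with a toy calculation (the numbers below are invented for illustration, not real data): a borrower owes a fixed nominal payment, so when inflation comes in below what was expected at signing, the real burden of that payment rises.

```python
# Toy illustration (made-up numbers): why sticky nominal contracts make
# below-expectation inflation painful. A debt payment is fixed in nominal
# terms; its real cost depends on the actual price level.

def real_burden(nominal_payment, price_level):
    """Real cost of a fixed nominal payment at a given price level."""
    return nominal_payment / price_level

nominal_payment = 1000.0    # fixed by contract, written in nominal terms
expected_inflation = 0.02   # 2% inflation anticipated when the contract was signed
actual_inflation = -0.01    # instead, prices fell 1%

planned = real_burden(nominal_payment, 1 + expected_inflation)
actual = real_burden(nominal_payment, 1 + actual_inflation)

print(f"planned real burden: {planned:.2f}")  # ~980.39
print(f"actual real burden:  {actual:.2f}")   # ~1010.10
```

The borrower ends up repaying more in real terms than either party planned for; multiplied across an economy's wage and debt contracts, that is the channel from below-trend inflation to recession.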

"the healthy inflation" That phrase doesn't refer to anything. "Healthy" isn't a modifier that applies to "inflation". There is only one single thing: the change in the overall price level. There aren't "healthy" and "non-healthy" versions of that one thing.

&q... (read more)

By the "healthy level of inflation" I mean the level of inflation which is beneficial to the economy without destroying belief in your currency or creating asset bubbles. As I explained in another comment below, printing money destroys contracts, as people start to rewrite these contracts in a harder currency, as happened in Russia during its money-printing experiments. Contracts were rewritten in dollars, precisely because the Russian central bank could not print dollars. As a result, the central bank lost the ability to affect inflation in dollar contracts. It had to pay a lot later to restore people's belief in the Russian ruble, by constantly manipulating the exchange rate.

You mostly seem to be noticing that there is a difference between the nominal economy (the numbers on the prices) and the real economy (what resources you can buy with currency). That's certainly true - but actually beside the point. The point is that inflation (actually, NGDP) below trend causes "business-cycle" recessions. The reason is that many contracts (wages, debts) are written in nominal terms, and so when the value of the Unit of Account (one function of money) changes in an unexpected way, these fixed nominal contracts don... (read more)

As soon as agents realise that you have started to print money, they begin to rewrite their contracts in a harder currency. This happened in Russia in the 1990s, when contracts were all written in dollars because everybody was afraid that the government would print more money. The government fought back by banning the use of foreign currency names in contracts. People then created "artificial units", and everybody knew that any "artificial unit" in a contract = 1 USD. So you can raise prices by printing money only to a small extent, because doing so undermines agents' belief in your currency, and they will stop using it.

There is zero net evidence that IQ correlates with skin tone.

That's not true at all. There is overwhelming evidence that performance on IQ tests is hugely correlated with "race", which basically implies skin tone. Blacks, as a group, score 10-15 points below whites (almost a standard deviation), and (some) Asians and Jews are about half a deviation above whites.

The controversy is not whether there is correlation. The controversy is over the causal explanation. How much of this observed difference is due to genetics, how much due to environ... (read more)

When I say there is zero net evidence that IQ correlates with skin tone, I'm summarizing the findings of the skin-tone studies cited in the Nisbett article that was heavily discussed in this conversation. The studies examined IQ among blacks and found that whether the person was light-skinned or dark-skinned had more or less no bearing on that person's IQ (the assumption being that skin tone is a rough proxy for degree of African descent). I think this was obvious at the time from the context of the paragraph: I'm clearly summarizing findings, not making general conclusions (until the end). We had been going back and forth on these issues for a while, so by that point I was probably using more shorthand than usual. It may not be obvious that is what I was doing a month after the fact. Yes, I'm pretty sure the context is more than sufficient to establish that this is what I was talking about. The entire discussion was about the origin of the black-white IQ gap.

A "Jedi"? Obi-Wan Kenobi?

I wonder if you mean old Ben Kenobi. I don't know anyone named Obi-Wan, but old Ben lives out beyond the dune sea. He's kind of a strange old hermit.

Bostrom and Sandberg (in your linked paper) suggest three reasons why we might want to change the design that evolution gave us:

  • Changed tradeoffs. We no longer live in the ancestral environment.
  • Value discordance. Evolution's goal may not match our own.
  • Evolutionary restrictions. We might have tools that were not available to evolution.

On #2, I'll note that evolution designed humans as temporary vessels, for the goal of propagating genes. Not, for example, for the goal of making you happy. You may prefer to hijack evolution's design, in service of... (read more)

introspection can't be scientific by definition

What you observe via introspection, is not accessible to third parties, yes.

But you use those observations to build models of yourself. Those models can be made explicit and communicated to others. And they make predictions about your future behavior, so they can be tested.

Most people's moral gut reactions say that humans are very important, and everything else much less so. This argument is easier to make "objective" if humans are the only things with everlasting souls.

Once you get rid of souls, making the argument that humans have some special moral place in the world becomes much more difficult. It's probably an argument that is beyond the reach of the average person. After all, in the space of "things that one can construct out of atoms", humans and goldfish are very, very close.

I like what Hook wrote. If I believed that babies were valuable because they have souls and then was told, "no they don't have souls", I might for a while value them less. But it has been a very long time since I believed in souls and the value I assign to babies is no longer related at all to my belief about souls (if it ever was).

After all, in the space of "things that one can construct out of atoms", humans and goldfish are very, very close.

Sure, they just don't resemble each other in many morally significant ways (the exceptio... (read more)

I think "making the argument that humans have some special moral place in the world" in the absence of an eternal soul is very easy for someone intelligent enough to think about how close humans and goldfish are "in the space of 'things that one can construct out of atoms.'"

Is an abortion an "ok if regrettable practice?" You've just assumed the answer is always yes, under any circumstances.

Sorry, you have a point that my test won't apply to every rationalist.

The contrast I meant was: if you look at the world population, and ask how many people believe in atheism, materialism, and that abortion is not morally wrong, you'll find a significant minority. (Perhaps you yourself are not in that group.)

But if you then try to add "believes that infanticide is not morally wrong", your subpopulation will drop to ... (read more)

So your point is that anyone who feels there is a moral difference between infanticide and abortion is irrational? Because most pro-lifers already say that, in my experience.
Well, my comment from would probably be better here. I still dispute that argument, as I think this drop-off is justified, even for rationalists.

Your parenthetical comment is the funniest thing I've read all day! The contrast with the seriousness of subject matter is exquisite. (You're of course right about the marginal cases thing too.)

Proposed litmus test: infanticide.

General cultural norms label this practice as horrific, and most people's gut reactions concur. But a good chunk of rationality is separating emotions from logic. Once you've used atheism to eliminate a soul, and humans are "just" meat machines, and abortion is an ok if perhaps regrettable practice ... well, scientifically, there just isn't all that much difference between a fetus a couple months before birth, and an infant a couple of months after.

This doesn't argue that infants have zero value, but instead th... (read more)

You haven't taken account of discounted future value. A child is worth more than a chimpanzee of equal intelligence because a child can become an adult human. I agree that a newborn baby is not substantially more valuable than a close-to-term one and that there is no strong reason for caring about a euthanised baby over one that is never born, but I'm not convinced that assigning much lower value to young children is a net benefit for a society not composed of rationalists (which is not to say that it is not a net benefit, merely that I don't properly understand where people's actions and professed beliefs come from in this area and don't feel confident in my guesses about what would happen if they wised up on this issue alone). The proper question to ask is "If these resources are not spent on this child, what will they be spent on instead, and what are the expected values deriving from each option?" Thus contraception has been a huge benefit to society: it costs lots and lots of lives that never happen, but it has hugely boosted the quality of the lives that do. I do agree that willingness to consider infanticide and debate precisely how much babies and foetuses are worth is a strong indicator of rationality.
Are you allowed to use moral questions as litmus tests for rationality? Paperclippers are rational too. It isn't inconceivable that a human might just value babies intrinsically (rather than because they possess an amount of intellect, emotion, and growth potential). If anyone here has been reading this and trying to use more abstract values to justify why one should not harm babies, and is unable to come up with anything, and still feels a strong moral aversion to anyone harming babies anywhere ever, then maybe it means you just intrinsically value not harming babies? As in, you value babies for reasons that go beyond the baby's personhood or lack thereof?

(By the way, the abstract reason I managed to come up with was that current degree of personhood and future degree of personhood interact in additive ways. I'll react with appreciation to someone poking a hole in that, but I suspect I'll find another explanation rather than changing my mind. It's not that I necessarily value babies intrinsically - it's more that I don't fully understand my own preferences at an abstract level, but I do know that a moral system that allows gratuitous baby-killing must be one that does not match my preferences. So if you poke a hole in my abstract reasons, it merely means that my attempt to abstractly convey my preferences was wrong. It won't change the underlying preference.)

"But a good chunk of rationality is separating emotions from logic"

Even if I insert "epistemic", I find this only partially true.

Edit: Although, my preferences do agree with yours to the extent that harming a young child does seem worse than harming a baby (though both are terrible enough to be illegal and punishable crimes). So I might respect the idea of merciful killing (in times of famine, for example) at a young age to prevent future death-inducing suffering.

That's an amusing example because infanticide was extremely common among human cultures, so all good cultural relativists should be fine with this practice.

Usually there was a strong distinction between actually killing a baby (extremely wrong thing to do), and abandoning it to elements (acceptable). I'm not talking about any exotic cultures, ancient Greece and Rome and even large parts of Christian Medieval Europe practiced infant abandonment. There are even examples of Greek and Roman writers noting how strange it is that Egyptians and Jews never kill th... (read more)

Yes, I should also be allowed to kill adults. Especially if they have it coming. After all, the infant still has a chance to grow up to make a worthwhile contribution while there are many adults that are clearly a waste of good oxygen or worse!
Real world test of human value along similar lines: Ashley X.
I'd say the primary value of an infant is the future value of an adult human minus the conversion cost. Adult humans can be enormously valuable, but sometimes, the expected benefits just can't match the expected costs, in which case infanticide would be advisable. However, both costs and benefits can vary by many orders of magnitude depending on context, and there's no reliable, generally-applicable method to predict either. No matter how bad it looks, someone else might have a more optimistic estimate, so it's worth checking the market (that is, considering adoption).

Infanticide and abortion are okay, as long as doing so increases paperclip production.

However, infanticide and abortion are obviously not alone in that respect.

A key point is that they don't need to advocate the legalization of infanticide, they just need to be able to cogently address the arguments for and against it. Personally, I think that in the US at this time optimal law might restrict abortion significantly more than it currently does and also that in many past cultural contexts efforts to outlaw or seriously deter infanticide would have been harmful. Just disentangling morality from law competently gets a person props.

Despite some jokes I made earlier, things that could arguably depend on values don't make good litmus tests. Though I did at one point talk to someone who tried to convert me to vegetarianism by saying that if I was willing to eat pork, it ought to be okay to eat month-old infants too, since the pigs were much smarter. I'm pretty sure you can guess where that conversation went...

Time of birth serves as a bright line.

Once you've used atheism to eliminate a soul, and humans are "just" meat machines, and abortion is an ok if perhaps regrettable practice ...

Kudos to you for forthrightness. But em... no. Ok, first, it seems to me you've swept the ethics of infanticide under the rug of abortion, and left it there mostly unaddressed. Is an abortion an "ok if regrettable practice?" You've just assumed the answer is always yes, under any circumstances.

I personally say "definitely yes" before brain development (~12 weeks I think), "you need... (read more)

I'll be the first to disagree outright.

First, when a woman is pregnant but will be unable to raise her child, we do not force her to give birth and give up the baby for adoption. This is because bringing a child to term is a painful, expensive, and dangerous nine-month ordeal which we do not think women should be forced into. In what possible circumstances is infanticide ethically permissible when the baby is already born, the woman has already paid the cost of pregnancy and giving birth, and adoption is an option?

In general, I'm not sure it follows from the fact... (read more)

My mother made this argument to me probably when I was in high school. Given my position as past infanticide candidate, it was an odd conversation. For the record, she was willing to go up to two or six years old, I think. And let us not forget the Scrubs episode she also agreed with: "Having a baby is like getting a dog that slowly learns to talk."

Basically, this is a variant on the argument from marginal cases; infants don't differ from relatively intelligent nonhuman animals in capabilities, so they ought to have the same moral status. If it's okay to euthanize your dog, it should also be okay to euthanize your newborn.

(The most common use of the argument from marginal cases is to argue that animals deserve greater moral consideration, and not that some humans deserve less, but one man's modus ponens is another man's modus tollens.)

Aren't abortions unnecessarily painful? This is as strong an argument for pro-life as for pro-infanticide. I agree there is a continuum between conception and being, say, 2 years old, one that is only superficially punctuated by the date of birth. Yet our cultural norms are not so inconsistent... For example, many of these same people would find it horrific to kill a late-stage fetus. And they might still find it horrific to murder a younger fetus, but nevertheless respect the mother's choice in the matter.

I like this test, with the following cautions:

The regrettability of abortion is connected to the availability of birth control, and so similarly, the regrettability of infanticide should be connected to the availability of abortion. A key difference is that while birth control may fail, abortion basically doesn't. I can think of a handful of reasons for infanticide to make sense when abortion didn't, and they're all related to things like unexpected infant disability the parents aren't prepared to handle, or sudden, badly timed, unanticipated financial/f... (read more)

If I agreed with this logic, should I be reluctant to admit it here?
Voted up, but I think abortion shouldn't be legal once the fetus is old enough to have brain activity other than for medical reasons (life of the mother), and I'm an unrepentant speciesist.

Do you have the same opinion about gender-linked "genetically-based behavioral variation"?

Not to open a can of worms here, but the pickup-artist (PUA) community is all about how the innate behavior of (generally heterosexual) men and women differ, in dating scenarios. And, in particular, how those real behaviors differ from the behavior that is taught and reinforced by society and culture.

You can have an opinion that all behavior is changeable, and that it is shaped by society and culture. But that would lead you to one model of how men and wom... (read more)

That only follows if the societal pressures on men and women are mostly gender-neutral. This does not appear to be the case.

(generally heterosexual) men and women differ, in dating scenarios

True story: My lesbian roommate runs mad game with remarkable success.

You are correct, that not all activity recorded in GDP is welfare-enhancing. (Note that GDP also underreports some positive welfare activities.)

But that's not the important point. The important point is: does the difference between the GDP measure, and some more accurate measure, have any implications for economic policy? The answer seems to be no: attempts have been made to define more precise welfare-tracking measures of national welfare, and the result seems to be that they track GDP very closely, and that there is basically no implication for policy... (read more)

I'm skeptical. Can I get a link to one or more of the alternative welfare-tracking measures?
That's a fair point. I haven't studied proposed alternative measures. I like using immigration and emigration as rough measures for comparing how good places are to live, with an allowance for willingness to risk dying to move. Have the immigration and emigration stats ever been out of sync with GDP?

It's true that GDP is not identical to national welfare. And you can come up with anecdotes where some welfare measure isn't fully captured by GDP (both positive and negative).

But GDP is useful, because it is very hard to game. The examples in your "fetishism" link are very weak. Unlike the nails example, where we can all agree that the factory made the wrong choice for society, it is far from clear that the GDP examples resulted in the wrong policy, even if GDP is only an approximation for welfare.

GDP is not a good example of Goodhart's Law. It's nothing at all like the (broken) correlation between inflation and unemployment, which varies widely depending on policy choices.

I'm not sure whether it's that hard to game GDP, but I am sure that it just measures the money economy. If people need to spend more on repairing damage, or on something which is useless for them, then the GDP goes up just as if they were getting more of what they want.

Example of wheel-spinning: tax law becomes more complex. People need to spend more on help with their taxes, and possibly work longer hours to afford it. More economic activity, but are their lives better?

Alcor both stores your body and provides bedside "standby" service to immediately begin cooling. With CI, it's a good idea to contract a third party for standby, and SA is the company usually recommended for it.

"Pseudoscience" isn't the only possible criticism of cryonics. One could believe that it may be scientifically possible in theory, still without thinking that it's a good idea to sign up for cryonics in the present day. (Basically, by coming up with something like a Drake equation for the chance of it working out positively for a current-day human, and then estimating the probability of the terms to be very low.)

You're right, that most of the popular criticism of cryonics is mere non-technical mocking. Still, there's a place for reasoned objections as well.

Paul Crowley:
There certainly is. Please point me to them.
This article gives something like a Drake equation for cryonics.

With a straightforward interpretation of your question, I'd answer "95%".

But since you made special mention of being "sneaky", I'll assume you've attempted to trick me into misunderstanding the question, and so I'll lower my probability estimate to 75%, with the missing twenty points accounting for you tricking me by your phrasing of the question.

I, for one, am interested in hearing arguments against anti-realism.

If you don't have personal interest in writing up a sketch, that's fine. Might you have some links to other people who have already done so?

Toby already linked to the SEP articles on moral realism and anti-realism in another comment.
Elsewhere in the thread.

It's true that climate is too complex to predict well. Still, I haven't heard many global warming worriers warn about the threat of a new ice age. It's all about the world actually becoming warmer.

Given that, the real problem seems to be the speed. If it took 1000 years to raise 5 degrees, that might not be so bad. If it's 50 years, the necessary adjustments (to both humans and non-humans) might only happen with massive die-off.

But leaving aside the speed, it's not insane to notice that there is vastly more biodiversity in the tropics, than in the arct... (read more)

You are operating under the assumption that warmer implies more tropics. I categorize this as wishful thinking.
Then you aren't listening enough, I'm afraid. This is a routine concern.

There are (bad) interpretations of QM where they do mean a "conscious" observer. This objection is very close to saying that MWI (many-worlds) is "right", and the others are "wrong".

That may be the case, but it is far from universally acknowledged among practicing physicists. So, it's a bit unfair to suggest this "error", given that many prominent (but wrong) physicists would not agree that it is an error.

Let me see if I have this straight: I devise a double-slit experiment where my electronowhazzit collapses the wavefunction for an hour-and-a-half, before shutting off; thus resulting in no diffraction pattern during the first portion of the experiment, and a propagated wavefunction during the second. I set it up to begin the experiment at midnight, and stop at 3AM; a computer automatically records all the data, which I then store on a CD for 1 year's time, without looking at it. At the end of the year, I present this data to a group of these physicists. They declare that it's my conscious observation, going backwards in time, that creates the results; or that it's my conscious intent in setting up the apparatus, or something like that? I wish I were feeling incredulous right now, but to be honest I'm just kind of depressed.

But the "inside view" bias is not amenable to being repaired just by being aware of the bias. In other words, yes, the suggestion is that the direct arguments are optimistically biased. But no, that doesn't mean that anybody expects to be able to identify specific flaws in the direct arguments.

As to what those flaws are ... generally, they occur by failing to even imagine some event, which is in fact possible. So your question to identify the flaws is basically the same as, "what possible relevant events have you not yet thought of?"

Tough question to answer...

Roughly on the same topic, a few years ago I read Intelligence in War by John Keegan. I was expecting a glorification of that attribute which I believed to be so important; to read story after story of how proper intelligence made the critical difference during military battles.

Much to my surprise, Keegan spends the whole book basically shooting down that theory. Instead, he has example after example where one side clearly had a dominant intelligence advantage (admittedly, here we're talking about "information", not strictly "rationality&q... (read more)

Eliezer and Robin argue passionately for cryonics. Whatever you might think of the chances of some future civilization having the technical ability, the wealth, and the desire to revive each of us -- and how that compares to the current cost of signing up -- one thing that needs to be considered is whether your head will actually make it to that future time.

Ted Williams seems to be having a tough time of it.

Alcor has posted a response to Larry Johnson's allegations.
I'm not sure what to think of Larry Johnson. Some of his claims are normal parts of Alcor's cryopreservation process, but dressed up to sound bad to the layperson. Other parts just seem so outrageous. A monkey wrench? An empty tuna can? Really? He claims that conditions were terrible, which is also unlikely. Alcor is a business and gets inspected by OSHA, the fire department, etc. They even offer free tours to the public. If conditions were so terrible, you'd think they'd have some environmental or safety violations. At the very least, some people who toured the facility would speak up. The article also claims that Ted Williams was cryopreserved against his will, which is almost certainly not true. Alcor requires that you sign and notarize a last will and testament with two witnesses who are not relatives.

It's hard to discuss the subject with the debate becoming emotional, but let me just say that Roissy's goals are to be an entertaining writer, to succeed at picking up women, and to debunk false commonsense notions of dating, through real-life experience.

He's not trying to submit a peer-reviewed paper on evo psych to a rationality audience. To judge him on that basis is to kind of miss the point.

(Ethics is a whole separate question. But then, Stalin was an atheist too, wasn't he?)

Rather than using a PRNG (which, as you say, requires memory), you could use a source of actual randomness (e.g. quantum decay). Then you don't really have extra memory with the randomized algorithm, do you?

I thought of this as well, but it does not really matter, because it is the ability to produce a different output for each event that gives part of the functionality of memory - that is, the ability to distinguish between events. Granted, this is not as effective as deterministically understood memory, where you know in advance which output you get for each event. Essentially, it is memory with the drawback that you don't understand how it works, to the extent that you are uncertain how it correlates with what you wanted to remember.
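A minimal sketch of the "randomness as weak memory" idea (illustrative only; the event count and label width are arbitrary choices): a stateless process can tag each event with a fresh random label, so distinct events become distinguishable with high probability, even though nothing was deliberately stored.

```python
import secrets

# Sketch: no state is carried between calls; each event just draws a
# fresh random label from a true-randomness-style source.
def tag_event(bits=64):
    """Return a fresh random label for one event."""
    return secrets.randbits(bits)

labels = [tag_event() for _ in range(1000)]

# With 64-bit labels, a collision among 1000 events has probability on the
# order of 1000^2 / 2^65 (~3e-14), so the labels almost surely distinguish
# the events - a weak, uncontrolled substitute for remembered state.
print(len(set(labels)) == len(labels))
```

This matches the comment's caveat: the labels distinguish events, but unlike real memory they carry no controlled correlation with whatever you wanted to remember about each event.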

Forget about whether your sandbox is a realistic enough test. There are even questions about how much safety you're getting from a sandbox. So, we follow your advice, and put the AI in a box in order to test it. And then it escapes anyway, during the test.

That doesn't seem like a reliable plan.

The idea that society is smart enough to build machine intelligence, but not smart enough to build a box to test it in, does not seem credible to me. Humans build boxes to put other humans in - and have a high success rate of keeping them inside when they put their minds to it. The few rogue agents that do escape are typically hunted down and imprisoned again. Basically, the builders of the box are much stronger and more powerful than what it will contain. Machine intelligence testing seems unlikely to be significantly different from that situation. The cited "box" scenario discusses the case of weak gatekeepers and powerful escapees. That scenario isn't very relevant here - since we will have smart machines on both sides when restraining intelligent machines in order to test them.

Re: abiogenesis. You say:

we know of no mechanism under which creation of life seems even remotely plausible.

For a plausible mechanism, see this video. (It starts with anti-creationism stuff; skip to 2:45 to watch the science.)

There are plenty of ideas about how some part of the emergence of life might have happened. The problem is that each idea explains just a small part of it, they are not all compatible with each other, and many have serious problems. Yes, life emerged, so it must have emerged somehow, but I haven't seen any mechanism that seemed to make it likely.

Exactly! This is gambling, isn't it? A small expected loss, with a tiny chance of some huge gain.

If your utility for money really is so disproportionate to the actual dollar value, then you probably ought to take a trip to Las Vegas and lay down a few long-odds bets. You'll almost certainly lose your betting money (but you wouldn't "notice it in [your] monthly finances"), while there's some (small) chance that you get lucky and "change [your] month considerably".

It's not hypothetical! You can do this in the real world! Go to Vegas right now.

(If the plane flight is bothering you, I'm sure we could locate some similar online betting opportunities.)
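The arithmetic here can be sketched in a few lines (all numbers are invented for illustration; they are not actual casino odds): a long-odds bet has a small expected loss in dollar terms, but if losing the stake has negligible disutility while winning would "change your month", the bet can still come out positive in utility terms.

```python
# Toy expected-value comparison for a long-odds bet (assumed numbers).
stake = 20.0
payout = 7000.0      # assumed winnings if the bet hits
p_win = 1 / 400      # assumed true odds, slightly worse than the payout implies

# In dollars: a small expected loss, as with any house-edge gamble.
ev_dollars = p_win * payout - (1 - p_win) * stake
print(f"expected value in dollars: {ev_dollars:.2f}")  # -2.45

# In utility: if losing $20 barely registers but winning $7000 matters a lot
# (arbitrary utility units, assumed for illustration), the sign can flip.
u_loss, u_win = -0.1, 100.0
ev_utility = p_win * u_win + (1 - p_win) * u_loss
print(f"expected utility: {ev_utility:.4f}")  # positive despite dollar EV < 0
```

This is exactly the "small expected loss, tiny chance of huge gain" structure described above: whether the gamble is rational depends on the shape of your utility for money at these stakes, not on the dollar expectation alone.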

I think there's also a short-term/long-term thing going on with your examples. The drunk really wants to drink in the moment; they just don't enjoy living with the consequences later. Similarly, in the moment, you really do want to continue reading Reddit; it's only hours or days later that you wish you had also managed to complete that other project which was your responsibility.

I bet there's something going on here, about maximizing integrated lifetime happiness, vs. in-the-moment decision-making, possibly with great discounts to those future selves who will suffer the negative effects.

I'm curious if Eliezer (or anyone else) has anything more to say about where the Born Probabilities come from. In that post, Eliezer wrote:

But what does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? [...] I don't know. It's an open problem. Try not to go funny in the head about it.

Fair enough. But around the same time, Eliezer ... (read more)
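For reference, the "integral over squared moduli" in the quoted passage is the Born rule, which can be stated compactly:

```latex
% Born rule: for a state |\psi> expanded in an orthonormal outcome basis {|i>},
%   |\psi> = \sum_i c_i |i>,  with  c_i = <i|\psi>,
% the probability of observing outcome i is
P(i) = \lvert \langle i \mid \psi \rangle \rvert^2 = \lvert c_i \rvert^2,
\qquad \sum_i P(i) = 1 \quad \text{(normalization)}.
```

The rule itself is just this numerical recipe; the open problem Eliezer refers to is why these squared moduli behave as probabilities at all, i.e. what they are probabilities *of*.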

"Dust" has been used in SF for nanotech before. And especially runaway nanotech, that is trying to disassemble everything, like a doomsday war weapon that got out of control. I recalled the paperclip maximizer too. Oh, and the Polity/Cormac SF books by Neal Asher, with Jain nodes (made by super AIs) that seem to have roughly the same objective.

Is there anything that you consider proven beyond any possibility of doubt by both empirical evidence and pure logic, and yet saying it triggers automatic stream of rationalizations in other people?

  • Hitler had a number of top-level skills, and we could learn (some) positive lessons from his example(s).

  • Eugenics would improve the human race (genepool).

  • Human "racial" groups may have differing average attributes (like IQ), and these may contribute to the explanation of historical outcomes of those groups.

(Perhaps these aren't exactly topics... (read more)

1st one: Nope, don't think anyone here would dispute that, except on the grounds that it's rather nonspecific.

2nd one: Only if it were in the form of encouraging particularly valuable individuals to reproduce more. Removing even the bottom 50% would have fairly negligible effects compared to doubling the top 1%. Several countries already implement programs to encourage the most valuable members to reproduce more (with mixed success).

3rd one: I find it nearly impossible to find any good data on that either way. Pending evidence, it looks like most of the quality of life and education effects can basically be explained by looking at who got the industrial revolution first. Unless very large effect sizes were found, however, the policy implications would be minimal or nonexistent.
Surely few would argue with that. The more controversial issue is the claim that such differences are genetic.
First one's just plain true. Second one is probably true. The issue with eugenics isn't that it wouldn't work, it's that it would be unethical to try. Third one seems to fail the evidence test. It's proposing a significant deficit in a measurable quantity that has not been observed to exist (after correcting for socio-economic status).
From Paul Graham's essay: Maybe there is something I am missing, but I don't understand his last sentence. How do you take two people, and "subtract one from the other" ?
It struck me that "top-level" is ambiguous. Do you mean high quality or general-purpose? I don't think that it is taboo to say that Hitler was a good orator or that he was good at mass psychology. But people don't admit to desiring to manipulate crowds; I don't think Hitler has to do with that. I've heard it suggested that a lot of people have the skills to be cult leaders, but they just don't want to be. Film makers do study Leni Riefenstahl.

Those are excellent points, particularly the first. Adolf Hitler was one of the most effective rhetoricians in human history - his public speaking skills were simply astounding. Even the people who hated his message were stunned after attending rallies in which Hitler exercised his crowd-manipulation skills.

Related to: Mind-killer.
I'll grant you that they're all taboo, but they're not really useful, either. (I mean, some people claim these are true to justify their prejudices, but that's not what we're talking about.) In particular, the statement about Hitler is too vague to suggest what ought to be imitated, and the statement about racial groups focuses on an effect which is almost entirely obscured by historical facts about the distribution of resources. That said, regarding eugenics: have you read any of David Brin's Uplift books?

rlpowell, you are incorrect. You are spouting an untested theory that is repeated as fact by those with a vested interest in avoiding the harsh light of truth.

In actual fact, there is no problem with breaking someone's arm in an MMA fight (see Mir vs. Sylvia in the UFC, for example). It's also close to impossible to break someone's neck (deliberately), despite what you may see in movies.

The "we're too dangerous to fight" is an easy meme to propagate. But let me just ask you this: let's just say, hypothetically, that your theory ("maximum ... (read more)

Military application of the "maximum damage" martial art, and the restriction of certain training to military personnel, would be solid evidence that it goes beyond what is considered safe in sport. For instance, I know that certain weapon techniques in Krav Maga are generally taught only to policemen or combat soldiers.

(way after the fact)

You know what? You are absolutely right that I'm spouting an untested theory. I have since stopped.

The problem is that I see no way to test either side; either what I said or the converse, which you seem to be asserting, which is that whatever comes out of MMA is basically optimal fighting technique.

The only test I can think of is to load up fighters that assert opposite sides of this, and are both highly trained in their respective arts and so on, on lots of PCP, and see who lives.

There are ... some practical and ethical problems the... (read more)

There is a pretty simple way to test this; it's just somewhat dangerous and arguably unethical. Take a "maximum damage" fighter, send them into a number of no-rules fights where they can justify using maximum force, and then pit them against MMA fighters in sanctioned matches. I don't know of any style that does this, but I did train for a while in a style that does something similar. In Wun Hop Kuen Do (and possibly other branches of Kajukenbo), being an instructor-level practitioner is essentially a research position. You're required to test your skills in realistic situations, because as a teacher, the danger to your students if you instruct them poorly takes precedence over the danger to yourself. My sifu, Jason Goldsmith, would have collaborators attack him in earnest with a sharpened knife (he worked his way up from rubber ones), or fight against multiple opponents, in order to make sure his skills worked where it really mattered. Sifu Jason does prepare people for competitions, including MMA, if they request it, but he makes it very clear that that isn't really what the style is meant for, and is not the setting in which it's most effective.

I happened to have a young child about to enter elementary school when I read that, and it crystallized my concern about rote memorization. I forced many fellow parents to read the essay as well.

  • I realize you mostly care about #1, but just for more data: for #2, I'd probably put the Quantum Physics sequence, although that is a large number of posts, and the effect is hard to summarize in a few pages.

  • For #3 I liked (within evolution) that we are adaptation-executers, not fitness-maximizers.

I agree with Doug S. What most people think about, when they want to "try being female for awhile", is to keep their same mind (or perhaps they believe in a soul) while just trying out different clothing. Basically, be in The Matrix, but just get instantiated as the Woman in the Red Dress for a week. Or maybe more like the movie Strange Days, with a technology that's like TV (but better!), kind of like virtual reality. Like watching a movie, but using all your senses, and really getting immersed into it.

I don't think most men imagine actually thinking like a woman's brain thinks. As you say, that wouldn't really be them any longer.

@ John: can you really not see the difference between "this is guaranteed to succeed", vs. "this has only a tiny likelihood of failure"? Those aren't the same statements.

"If you play the game this way" -- but why would anyone want to play a game the way you're describing? Why is that an interesting game to play, an interesting way to compare algorithms? It's not about worst case in the real world, it's not about average case in the real world. It's about performance on a scenario that never occurs. Why judge algorithms on... (read more)

@ Will: Yes, you're right. You can make a randomized algorithm that has the same worst-case performance as the deterministic one. (It may have slightly impaired average case performance compared to the algorithm you folks had been discussing previously, but that's a tradeoff one can make.) My only point is that concluding that the randomized one is necessarily better, is far too strong a conclusion (given the evidence that has been presented in this thread so far).

But sure, you are correct that adding a random search is a cheap way to have good confiden... (read more)

To look at it one more time ... Scott originally said: "Suppose you're given an n-bit string, and you're promised that exactly n/4 of the bits are 1, and they're either all in the left half of the string or all in the right half."

So we have a whole set of deterministic algorithms for solving the problem over here, and a whole set of randomized algorithms for solving the same problem. Take the best deterministic algorithm, and the best randomized algorithm.

Some people want to claim that the best randomized algorithm is "provably better". Really? B... (read more)

@ John, @ Scott: You're still doing something odd here. As has been mentioned earlier in the comments, you've imagined a mind-reading superintelligence ... except that it doesn't get to see the internal random string.

Look, this should be pretty simple. The phrase "worst case" has a pretty clear layman's meaning, and there's no reason we need to depart from it.

You're going to get your string of N bits. You need to write an algorithm to find the 1s. If your algorithm ever gives the wrong answer, we're going to shoot you in the head with a gun a... (read more)

@ Will: You happen to have named a particular deterministic algorithm; that doesn't say much about every deterministic algorithm. Moreover, none of you folks seem to notice much that pseudo-random algorithms are actually deterministic too...

Finally, you say: "Only when the input is random will [the deterministic algorithm] on average take O(1) queries. The random one will take O(1) on average on every type of input."

I can't tell you how frustrating it is to see you just continue to toss in the "on average" phrase, as though it doesn't... (read more)

DonGeddis is missing the point. Randomness does buy power. The simplest example is sampling in statistics: to estimate the fraction of red-headed people in the world to within 5% precision and with confidence 99%, it is enough to draw a certain number of random samples and compute the fraction of red-headed people in the sample. The number of samples required is independent of the population size. No deterministic algorithm can do this, not even if you try to simulate samples by using good PRNGs, because there are "orderings" of the world's population that would fool your algorithm. But if you use random bits that are independent of the data, you cannot be fooled. (By the way, whether 'true randomness' is really possible in the real world is a totally different issue.)

Note that, for decision problems, randomness is not believed to give you a more-than-polynomial speedup (this is the P vs. BPP question). But neither sampling nor the promise problem Scott mentioned are decision problems (defined on all inputs).

Regarding your claim that you cannot compare 'worst-case time' for deterministic algorithms with 'worst-case expected time' for randomized ones: this is totally pointless and shows a total lack of understanding of randomization. No one claims that you can have a randomized algorithm with success probability one that outperforms the deterministic algorithm. So either we use the expected running time of the randomized algorithm, or we allow a small error probability. We can choose either one and use the same definition of the problem for both deterministic and randomized algorithms. Take Scott's example and suppose we want algorithms that succeed with probability at least 2/3, on every input. The goal is the same in both cases, so they are comparable. In the case of deterministic algorithms, this implies always getting the correct answer. DonGeddis seems not to realize this, and contrary to what he says, any deterministic algorithm, no matter how clever, needs to look
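The sampling claim above is easy to sketch in code. This is a minimal illustration (the function names are mine, and the sample-size bound used is the standard Hoeffding one), showing that the number of samples depends only on the precision and confidence targets, not on the population size:

```python
import math
import random

def samples_needed(epsilon, delta):
    """Hoeffding bound: n >= ln(2/delta) / (2 * epsilon^2) samples suffice
    so that |estimate - true fraction| <= epsilon with probability >= 1 - delta,
    regardless of how large the population is."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

def estimate_fraction(population, n_samples, rng):
    """Estimate the fraction of truthy entries by uniform random sampling
    with replacement, using random bits independent of the data."""
    hits = sum(bool(population[rng.randrange(len(population))])
               for _ in range(n_samples))
    return hits / n_samples
```

With epsilon = 0.05 and delta = 0.01 (the 5% precision / 99% confidence figures above), `samples_needed` returns 1060, whether the population is a thousand people or seven billion.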

Silas is right; Scott keeps changing the definition in the middle, which was exactly my original complaint.

For example, Scott says: "In the randomized case, just keep picking random bits and querying them. After O(1) queries, with high probability you'll have queried either a 1 in the left half or a 1 in the right half, at which point you're done."

And yet this is no different from a deterministic algorithm. It can also query O(1) bits, and "with high probability" have a certain answer.

I'm really astonished that Scott can't see the sleight-of-hand i... (read more)

@ Scott Aaronson. Re: your n-bits problem. You're moving the goalposts. Your deterministic algorithm determines with 100% accuracy which situation is true. Your randomized algorithm only determines with "high probability" which situation is true. These are not the same outputs.

You need to establish a goal with a fixed level of probability for the answer, and then compare a randomized algorithm to a deterministic algorithm that answers only to that same level of confidence.

That's the same mistake that everyone always makes, when they say that "randomness provably does help." It's a cheaper way to solve a different goal. Hence, not comparable.
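To make the back-and-forth in this thread concrete, here is a sketch of the two algorithms as I understand Scott's promise problem (the helper names are mine): the deterministic one pays n/4 + 1 queries in the worst case but is always certain of its answer, while the randomized one expects about 4 queries but has no fixed worst-case bound. Those are exactly the two different goals being argued about.

```python
import random

def make_instance(n, side, rng):
    """Promise instance: exactly n//4 ones, all in one half of the string."""
    bits = [0] * n
    half = range(0, n // 2) if side == "left" else range(n // 2, n)
    for i in rng.sample(list(half), n // 4):
        bits[i] = 1
    return bits

def deterministic(bits):
    """Query n//4 + 1 fixed left-half positions.  Worst case n//4 + 1 queries,
    but the answer is always correct: the left half has only n//4 zeros, so
    if the ones are on the left, any n//4 + 1 left positions include a 1."""
    n = len(bits)
    queries = 0
    for i in range(n // 4 + 1):
        queries += 1
        if bits[i] == 1:
            return "left", queries
    # All n//4 + 1 queried left-half positions were 0: ones must be on the right.
    return "right", queries

def randomized(bits, rng):
    """Query uniformly random positions until a 1 is found.  Each query hits
    a 1 with probability 1/4, so 4 queries in expectation -- but there is no
    fixed bound on the worst case (a Las Vegas algorithm)."""
    n = len(bits)
    queries = 0
    while True:
        queries += 1
        i = rng.randrange(n)
        if bits[i] == 1:
            return ("left" if i < n // 2 else "right"), queries
```

Comparing `deterministic`'s guaranteed query count against `randomized`'s expected query count is precisely the apples-to-oranges comparison the comment above objects to.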

@ Venu: Modern AI efforts are so far from human-level competence, that Friendly vs. Unfriendly doesn't really matter yet. Eliezer is concerned about having a Friendly foundation for the coming Singularity, which starts with human-level AIs. A fairly stupid program (compared to humans) that merely drives a car, just doesn't have the power to be a risk in the sense that Eliezer worries about.

It could still kill people if it's not programmed correctly. This seems like a good reason to understand the program well.

I agree with Psy-Kosh too. The key is, as Eliezer originally wrote, never. That word appears in Theorem 1 (about the deterministic algorithm), but it does not appear in Theorem 2 (the bound on the randomized algorithm).

Basically, this is the same insight Eliezer suggests, that the environment is being allowed to be a superintelligent entity with complete knowledge in the proof for the deterministic bound, but the environment is not allowed the same powers in the proof for the randomized one.

In other words, Eliezer's conclusion is correct, but I don't thi... (read more)

Oh, and Thomas says: "There is no way to choose one, except to make another experiment and see which theory - if any (still might be both well or both broken) - will prevail."

Which leads me to think he is constrained by the Scientific Method, and hasn't yet learned the Way of Bayes.

Peter de Blanc is right: Theories screen off the theorists. It doesn't matter what data they had, or what process they used to come up with the theory. At the end of the day, you've got twenty data points, and two theories, and you can use your priors in the domain (along with things like Occam's Razor) to compute the likelihoods of the two theories.

But that's not the puzzle. The puzzle doesn't give us the two theories. Hence, strictly speaking, there is no correct answer.

That said, we can start guessing likelihoods for what answer we would come up wi... (read more)

You know, Eliezer, I've seen you come up with lots of interesting analogies (like pebblesorters) to explain your concept of morality. Another one occurred to me that you might find useful: music. It seems to have the same "conflict" between reductionist "acoustic vibrations" vs. Beethoven, as morality. Not to mention the question of what aliens or AIs might consider to be music. Or, for that matter, the fact that there are somewhat different kinds of music in different human cultures, yet all sharing some elements but not necessarily ... (read more)

I don't practice what I preach because I'm not the kind of person I'm preaching to.

(by Bob Dobbes, in Newsweek ... long ago)

Eliezer seems to suggest that the only possible choices are morality-as-preference or morality-as-given, e.g. with reasoning like this:

[...] the morality-as-preference viewpoint is a lot easier to shoehorn into a universe of quarks. But I still think the morality-as-given viewpoint has the advantage [...]

But really, evolutionary psychology, plus some kind of social contract for group mutual gain, seems to account for the vast bulk of what people consider to be "moral" actions, as well as the conflict between private individual desires vs. a... (read more)

Load More