it would seem easier to build (or mutate into) something that keeps going forever than it is to build something that goes for a while then stops.
On reflection, I realize this point might be applied to repetitive drudgery. But I was applying it to the behavior "engage in just so much efficient exploration." My point is that it may be easier to mutate into something that explores and explores and explores, than it would be to mutate into something that explores for a while then stops.
the vast majority of possible expected utility maximizers would only engage in just so much efficient exploration, and spend most of their time exploiting the best alternative found so far, over and over and over.
I'm not convinced of that. First, "vast majority" needs an appropriate measure, one that is applicable to evolutionary outcomes. If, when two equally probable mutations compete in the same environment, one of those mutations wins, driving the other extinct, then the winner needs to be assigned the far greater weight. So, for example, if humans were to compete against a variant of human without the boredom instinct, who would win?
Second, it would seem easier to build (or mutate into) something that keeps going forever than it is to build something that goes for a while then stops. Cancer, for example, just keeps going and going, and it takes a lot of bodily tricks to put a stop to that.
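To make the contrast concrete, here is a toy model - entirely my own construction, with made-up arm counts, drift rate, and strategy parameters - comparing a bandit agent that keeps exploring forever against one that explores for a fixed stretch and then commits to the best arm found so far:

```python
# A toy sketch (my own construction, not from the original discussion):
# two agents pull arms whose payoffs drift over time. One explores
# forever (epsilon-greedy); the other explores briefly, then commits.
import random

N_ARMS = 5
STEPS = 10_000
DRIFT = 0.01  # per-step random walk in each arm's true payoff

def run(explore_forever: bool, seed: int = 1) -> float:
    rng = random.Random(seed)
    means = [rng.random() for _ in range(N_ARMS)]  # true (hidden) payoffs
    est = [0.0] * N_ARMS                           # running estimates
    n = [0] * N_ARMS
    total = 0.0
    for t in range(STEPS):
        exploring = (rng.random() < 0.1) if explore_forever else (t < 500)
        arm = rng.randrange(N_ARMS) if exploring else max(range(N_ARMS), key=lambda a: est[a])
        reward = means[arm] + rng.gauss(0, 0.1)
        total += reward
        n[arm] += 1
        est[arm] += (reward - est[arm]) / n[arm]
        # the environment keeps shifting under the agent's feet
        means = [m + rng.gauss(0, DRIFT) for m in means]
    return total

print("explores forever:       ", round(run(True)))
print("explores, then commits: ", round(run(False)))
```

Whether the forever-explorer actually wins depends on the drift: set DRIFT to zero and the committed agent, which stops paying the cost of exploration, comes out ahead. The point is only that "keep exploring" is the simpler rule to specify; "explore exactly this much, then stop" has a tuning parameter that must match the environment.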
Seconding the expectation of a "useless clusterf--k."
The hope is that the shared biases will be ones that the site owner considers valuable and useful.
The obvious way to do that is for the site owner to make some users more equal than others.
the Hacker News website seems to be doing fine.
Security through obscurity. Last I checked, Hacker News confirmed my impression, gathered from Digg and Reddit, that as long as a site remains sufficiently unpopular, it will not deteriorate.
Two separate issues:
1) Is it a good (legitimately persuasive) argument?
2) If not then after all the hairsplitting is done, what sort of bad argument is it?
The more important issue is (1). A few points:
a) Quibbling over the categorization of the fallacy is sometimes used to mask the fact that it's a bad argument.
b) There are plenty of people who can recognize bad arguments without knowing anything about the names of the fallacies, which leads to
c) We learn the names of the fallacies, not in order to learn to spot bad arguments, but as a convenience so that we don't have to explain at length to the other guy why the argument is bad.
d) Often perfectly legitimate arguments technically fall into one of the categories of fallacy. Technically being a classical fallacy is no guarantee that an argument is actually fallacious. Some counterexamples.
In short, the classical fallacies are a convenient timesaver. But you don't need to have learned them to avoid being an idiot, and learning them will not stop you from being an idiot, and taking them too seriously can make you into an idiot.
Morality is an aspect of custom. Custom requires certain preconditions: it is an adaptation to a certain environment. Great political power breaks a key component of that environment.
More specifically, morality is a spontaneously arising system for resolving conflict among people with approximately equal power, such that adherence to morality is an optimal strategy for a typical person. A person with great power has less need to compromise and so his optimal strategy is probably a mix of compromise and brute force - i.e., corruption.
This does not require very specific human psychology. It is likely to describe any set of agents that satisfy certain general conditions. Design two agents (entities with preferences and abilities), and in some areas those agents are likely to have conflicting desires, and therefore to come into conflict and to need a system for resolving it (a morality) - regardless of their psychology. But grant one of these agents sufficiently great power, and it can resolve conflict by pushing the other agent aside, thereby dispensing with morality, thereby being corrupted by power.
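A minimal sketch of that logic, with made-up numbers (the proportional contest rule, the prize, and the fighting cost are my assumptions, not anything from the original argument):

```python
def round_payoffs(power_a, power_b, fight, prize=10.0, cost=3.0):
    """One round: split the prize peacefully, or fight over it.

    In a fight, each side's expected share is proportional to its
    relative power, and both sides pay the cost of fighting.
    """
    if not fight:
        return prize / 2, prize / 2
    share_a = prize * power_a / (power_a + power_b)
    return share_a - cost, (prize - share_a) - cost

# Roughly equal power: fighting yields 2.0 each, splitting yields 5.0
# each, so adhering to the compromise (the "morality") is optimal.
print(round_payoffs(1, 1, fight=False))  # (5.0, 5.0)
print(round_payoffs(1, 1, fight=True))   # (2.0, 2.0)

# Great power asymmetry: the strong side nets 6.0 by force versus 5.0
# by splitting, so compromise stops being its optimal strategy.
print(round_payoffs(9, 1, fight=True))   # (6.0, -2.0)
```

In this toy model the weak agent cannot make fighting unprofitable for the strong one, which is the sense in which great power breaks the precondition of roughly equal bargaining positions.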
Someone had just asked a malformed version of an old probability puzzle [...] someone said to me, "Well, what you just gave is the Bayesian answer, but in orthodox statistics the answer is 1/3." [...] That was when I discovered that I was of the type called 'Bayesian'.
I think a more reasonable conclusion is: yes, it is indeed malformed, and the person I am speaking to is evidently not competent enough to notice how this necessarily affects the answer and invalidates the familiar answer, and so they may not be a reliable guide to probability, and in particular to what is or is not "orthodox" or "Bayesian." What I think you ought to have discovered was not that you were Bayesian, but that you had not blundered, whereas the person you were speaking to had blundered.
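For what it's worth, the way the formulation changes the answer is easy to check by brute enumeration. The sketch below is my own illustration; I am assuming the puzzle was the familiar two-children problem, which the quoted "1/3" suggests but which the exchange above does not actually specify:

```python
# My own illustration, assuming (it is not stated above) that the
# puzzle was the two-children problem.
from itertools import product

families = list(product("BG", repeat=2))  # (older child, younger child)

# Well-formed version: we learn, as a fact about the family, that at
# least one child is a boy. Condition by discarding GG.
have_boy = [f for f in families if "B" in f]
print(sum(f == ("B", "B") for f in have_boy), "/", len(have_boy))  # 1 / 3

# One natural reading of a malformed version: the parent picks one of
# the two children at random and mentions that child's sex. The four
# (family, mentioned-child) cases below turn out to be equiprobable.
cases = [(f, i) for f in families for i in (0, 1) if f[i] == "B"]
print(sum(f == ("B", "B") for f, i in cases), "/", len(cases))  # 2 / 4
```

So 1/3 is the answer to one precise question; under another natural reading of a loosely posed version, the answer is 1/2. That is exactly the sense in which a malformed statement of the puzzle invalidates the familiar answer.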
Aaron - yes, I know that. It's beside the point.
My point was that vampires were by definition not real - or at least, not understandable - because any time we found something real and understandable that met the definition of a vampire, we would change the definition to exclude it.
But the same exchange might have occurred with something entirely real. We are not in the habit of giving fully adequate definitions, so it is often possible to find counterexamples to the definitions we give, which might prompt the other person to add to the definition to exclude the counterexample. For example:
A: What is a dog?
B: A dog is a four-footed animal that is a popular pet.
A: So a cat is a dog.
B: Dogs bark.
A: So if I teach a cat to bark, it will become a dog.
etc.
Time - Phillip Johnson is not just a Christian but a creationist. Do you mean, "if there are smart creationists out there..."? I don't really pay much attention to the religious beliefs of the smartest mathematicians and scientists and I'm not especially keen on looking into it now, but I would be surprised if all top scientists without exception were atheists. This page seems to suggest that many of the best scientists are something other than atheist, many of them Christian.
I'm getting two things out of this.
1) Evolutionary cynicism produces different predictions from cognitive cynicism, e.g. because the current environment is not the ancestral environment.
2) Cognitive cynicism glooms up Eliezer's day but evolutionary cynicism does not.
(1) is worth keeping in mind. I'm not sure what significance (2) has.
However, we may want to develop a cynical account of a certain behavior while suspending judgment about whether the behavior was learned or evolved. Call such cynicism "agnostic cynicism", maybe. So we have three types of cynicism: evolutionary cynicism, cognitive cynicism, and agnostic cynicism.
A careful thinker will want to avoid jumping to conclusions, and because of this, he may lean toward agnostic cynicism.