neuromancer92


I understand the point you're raising, because it caught me for a while, but I think I also see the remaining downfall of science. It's not that science leads you to the wrong thing, but that it cannot lead you to the right one. You never know whether your experiments actually brought you to the right conclusion - it is entirely possible to be utterly wrong, and completely scientific, for generations and centuries.

Not only this, but you can be obviously wrong. We look at people trusting in spontaneous generation, or a spirit theory of disease, and mock them - rightfully. They took "reasonable" explanations of ideas, tested them as best they could, and ended up with unreasonable confidence in utterly illogical ideas. Science has no step in which you ask "and is this idea logically reasonable?", and that step is unattainable even if you add it. Science offers two things - gradual improvement, and safety from being wrong with certainty. The first is a weak reward - there is no schedule to science, and by practicing it there's a reasonable chance that you'll go your entire life with major problems in your worldview. The second is hollow - you are defended from taking a wrong idea and saying "this is true" only inasmuch as science deprives you of any certainty. You are offered a qualifier to say, not a change in your ideas.

I think this is a key point - given a list of choices, people compare each one to the original statement and ask "how well does this fit?" I certainly started that way before an instinct about multiple conditions kicked in. Given that, it's not that people are incorrectly finding the chance that A-F are true given the description, but that they are correctly finding the chance that the description is true, given one of A-F.

I think the other circumstances might display tweaked versions of the same forces, also. For example, answering the suspension-of-relations question not as P(X∧Y) vs. P(Y), but perceiving it as P(Y | X).
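In Bayesian terms, judging each option by fit computes a likelihood, while the question asks for a posterior, and the two can rank options differently once base rates diverge. A minimal worked sketch, with numbers invented purely for illustration:

```latex
% Toy example: two candidate options A and B, and an observed
% description d. All probabilities below are made up.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Fit (likelihood): $P(d \mid A) = 0.2$, $P(d \mid B) = 0.9$.
Base rates (priors): $P(A) = 0.30$, $P(B) = 0.02$.
\begin{align*}
  P(A \mid d) &\propto P(d \mid A)\,P(A) = 0.2 \times 0.30 = 0.060 \\
  P(B \mid d) &\propto P(d \mid B)\,P(B) = 0.9 \times 0.02 = 0.018
\end{align*}
Ranking by fit alone favors $B$; the posterior favors $A$. Asking
``how well does this fit?'' computes $P(d \mid \text{option})$,
not $P(\text{option} \mid d)$.
\end{document}
```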

I'm significantly torn on whether to enable this. I understand the downsides of seeing authors (and am confident that I'm engaging in at least some of them), but I have one issue with it. Knowing authors can improve my ability to rapidly and effectively process posts. There's at least one author who makes very good points, but sometimes glosses over issues that turn out to be either quite complicated or openings to criticism of the post. I've found these omissions both important and quite hard to find - at the moment, it's worth it to me to leave author names active just to be aware that I need to read these posts with a different style of criticism than I normally engage in.

In short, there are sometimes positive outcomes of knowing authors, if only for general efficiency increases.

Rather than being a sane view, this is a logical fallacy. I don't know of a specific name to give it, but survivorship bias and the anthropic principle are both relevant.

The fallacy is this: for anything a person tries to do, every relevant technology will be inadequate up to the one that succeeds. By definition, the first success at something ends the need for further steps toward it, so we will never see a new advance in an area where past advances have already been sufficient for the goal.

The weak anthropic principle says that we can only observe a universe whose conditions permit observers. Similarly, we can assume that if new developments are being made toward an aim, they are being made because past steps were inadequate. We cannot view new advances as having their chances of success lowered by past failures, since they come into existence only when past attempts have indeed failed.

(I am aware that technologies are improved on even after they achieve their aim, but in these cases new objectives like "faster" or "cheaper" are still unsatisfied, and drive the progress.)
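A minimal formal sketch of this selection effect, under the simplifying (and invented) assumption that attempts succeed or fail independently:

```latex
% Sketch: past failures are a precondition for observing a new
% attempt, not evidence against it. Independence is assumed purely
% for illustration.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let attempt $i$ succeed with probability $p_i$, independently of the
others, and write $F_i$ for its failure. A new attempt $n$ exists
only conditional on every prior attempt having failed, yet
\[
  P(\text{attempt $n$ succeeds} \mid F_1, \dots, F_{n-1}) = p_n
\]
by independence. Conditioning on $F_1, \dots, F_{n-1}$ selects which
attempts we get to observe; it does not lower $p_n$.
\end{document}
```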

This suggestion is certainly an interesting one - that clicks happen in places where pre-existing ideas are weak, and "clicky" people have fewer strongly-entrenched concepts.

I think the explanation goes somewhat beyond this, however, based on a personal observation that "clicks" seem to arise preferentially for ideas which are, to the best of our understanding, "right". I know people with very low thresholds of belief, and clicky people, and it seems to me that the correlation between the two is negative if it exists at all. Credulous people can't click onto an idea because it doesn't seem more right to them than any other - every point is neutral, so new ideas are simply accepted.

Clicky people, by contrast, can click in the positive or the negative. Just as an intelligence explosion can make "intrinsic" sense to someone, counterarguments to it are likely to throw a mental flag even before the person finds a clear source for the objection. The click seems to go beyond acceptance to rapid understanding and evaluation.

For me, the discovery that science is too slow was bound up with the realization that science is not safe. My private discovery of the slowness of science didn't come from looking at the process of scientific discovery and reflecting on the time it took - rather, it arose from realizing that the things I learned or discovered via science came slower and more painfully than those I learned from other methods. "Other methods" encompasses everything from pure mathematics to That Magical Click, the first inescapable and the second, initially, unsupported. Realizing that science was a fairly low-quality set of tools carried with it the realization that its inefficiency was a function of its precautions. Not trusting science as the ideal method for discovery, I ceased to trust it as ideal for reliability.

New to this site, Bayescraft, and rationalism as a whole, I still have a mentor left to distrust. Consciously, I know that these techniques are imperfect, but I have yet to understand them well enough to be failed by them.