Can we stop deleting Caledonian's references to the fact that his comments are being deleted/altered?
Censorship is a form of bias, after all.
Do you believe that books should not be published?
Is that a serious question, or is it rhetorical? I don't object to publishing, I object to the publishing industry, its orientation, and the treatment of authors. Of course I believe writers' work should be published. In fact, in a lot of cases it is the publishing industry which prevents this - because it is too often a game of politics and capital. Most books don't get published anyway, as I'm sure you know - making this objection a moot point. So really, if you support the publishing industry, I should b...
Why? People don't value what they get for free. Education was once valued very highly. . . that changed once education began to be provided as a right, and children were obliged
Nice try. I'm not advocating that we force other people to read Eliezer's writing (I would never advocate that), in the same manner that children are forced to undergo American indoctrination at a young age. By your reasoning, the Nordic countries should value education less than the US since higher education is free there - except that the Nordic people are some of the most educate...
H+ -> Brønsted-Lowry acid
I'm much less likely to try charging for access to my future writings. No promises. . . If my (future) popular book on rationality becomes a hit, I'll upgrade to big-name fees. And later in my life, if all goes as planned, I'll be just plain not available.
Why? That's really very elitist of you, in my opinion. Bear in mind that even if we "rationalize" the property-owning gentry (which may or may not be possible), the poor, uneducated, and irrational groups will still oppose your AI and H+ on the grounds that they are un...
I don't even know what this blog is supposed to be about anymore. Also, your popular book on rationality - has that come out yet?
Caledonian: What fundamental principles? As far as I can tell the only fundamental principle is that it has to work. But I'm open to counterexamples, if you are.
The recognition of what 'working' is, and the tools that have been found useful in reaching that state, is what constitutes the scientific method.
The scientific method is actually pretty specific - and it is not a set of tools. There is no systematic method of advancing science, no set of rules/tools which are exclusively the means to attaining scientific knowledge.
Scientists do not concern themsel...
Normative beliefs (beliefs about what should be) don't [require evidence], IMHO. What would count as evidence for or against a normative belief?
That's correct if you don't consider pure reason to be evidence - but I consider it to be so. So morality and ethics and all these normative things are, in fact, based on evidence - although it is a mix of abstract evidence (reason) with concrete evidence (empirical data). If you base your morality, or any normative theory (how the world should be), on anything other than how things actually are (including mathematics), you necessarily have to ascribe some supernatural property to it.
Isn't the scientific method a servant of the Light Side, even if it is occasionally a little misguided?
Too restrictive. Science is not synonymous with the hypothetico-deductive method, nor is there any sort of thing called the "scientific method" from which scientists draw their authority on a subject. Neither is it a historically accurate description of how science has done its work. Read up on Feyerabend.
Science is inherently structureless and chaotic. It's whatever works.
I'm looking for Dark Side epistemology itself - the Generic Defenses of Fail.
In that case - association, essentialism, popularity, the scientific method, magic, and what I'll call Past-ism.
Wait a second - the scientific method? How? It may not be the most efficient way to get the truth, and it may not take into account Bayes' theorem, which could speed it up, but I don't see how the scientific method is epistemologically (is that a word?) wrong.
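For what it's worth, the Bayesian update alluded to here is just Bayes' theorem, P(H|E) = P(E|H)P(H)/P(E). A minimal sketch with made-up numbers (the 50% prior and the likelihoods are purely illustrative, not from the comment):

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E),
    where P(E) is expanded by the law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: a 50% prior, and evidence four times
# likelier under H than under not-H.
print(round(posterior(0.5, 0.8, 0.2), 3))  # -> 0.8
```

The point of contrast: a Bayesian can update on any evidence in one step, rather than waiting for a full hypothesize-test-replicate cycle.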
In general, beliefs require evidence.
In general? Which beliefs don't?
Think of what it would take to deny evolution or heliocentrism
Or what it would take to prove that the Moon doesn't exist.
As for listing common memes that were spawned by the Dark Side - would you care to take a stab at it, dear readers?
Cultural relativity.
Such-and-such is unconstitutional.
The founding fathers never intended... (various appeals to stick to the founding fathers' original vision)
Be reasonable (moderate).
Show respect for your elders.
It's my private property.
_ is human natur...
"We need to switch to alternative energies such as wind, solar, and tidal. The poor are lazy ... Animal rights"
I don't think these fit. Regardless of whether you agree with them, they are specific assertions, not general claims about reasoning with consistently anti-epistemological effects.
"'In general, beliefs require evidence.' In general? Which beliefs don't?"
This is a language problem. "In general" or "generally" to a scientist/mathematician/engineer means "always," whereas in everyday speech it means "sometimes."
For example I could tell you that a fence with 2 sections has 3 posts ( I=I=I ), or I could tell you that "in general" a fence with N sections has N+1 posts.
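The fence-post rule above can be sketched in a couple of lines (a hypothetical helper, just to make the "in general = always" point concrete):

```python
def fence_posts(sections: int) -> int:
    """A fence with N sections has N + 1 posts - true for every N,
    which is what 'in general' means to a mathematician."""
    return sections + 1

# A 2-section fence ( I=I=I ) has 3 posts:
print(fence_posts(2))  # -> 3
```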
People (realistically) believe that being the Gatekeeper is easy, and that being the AI is terribly hard (or impossible, before it was shown to simply be terribly hard in most cases).
Imagine though that we've got a real transhuman/AI around to play with, or that we ourselves are transhuman. Would this paradigm then be inverted? Would everybody want to be the AI, with only the extremely crafty of us daring to be (or to pretend to be) Gatekeeper?
If Eliezer's claim is correct - that anyone can be convinced to let the AI out - then the true test of ability should be to play Gatekeeper. The AI's position would be trivially easy.
Just to make sure I'm getting this right... this is sort of along the same lines of reasoning as quantum suicide?
It depends on the type of "fail" - quenches are not uncommon. And also their timing - the LHC is so big, and it's the first time it's been operated. Expect malfunctions.
But if it were tested for a few months before, to make sure the mechanics were all engineered right, etc., I guess it would only take a few (less than 10) instances of the LHC failing shortly before it was about to go big for me to seriously consider an anthropic explan...
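The "fewer than 10 instances" intuition can be made concrete: assuming each run fails independently with some mundane probability p, the chance of n failures in a row is p^n. A rough sketch with hypothetical numbers (the 10% per-run failure rate is an assumption for illustration, not a claim about the LHC):

```python
def prob_consecutive_failures(p_fail: float, n: int) -> float:
    """Probability of n independent mundane failures in a row."""
    return p_fail ** n

# If each run fails 10% of the time for ordinary engineering reasons,
# then ten failures in a row has probability 0.1**10 = 1e-10,
# small enough that anthropic explanations start to compete.
print(prob_consecutive_failures(0.1, 10))
```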
"If the 'boring view' of reality is correct, then you can never predict anything irreducible because you are reducible."
Maybe I missed this yesterday, or in another reductionism post, but doesn't that imply that there is no fundamental level of reality - nothing which is not reducible to something else? It could also be that I'm just not understanding what you mean.
Why tell readers that their other selves in other worlds are dying of cancer, so they should really think about cryonics, and then go on and make a post like this?
If I can't even get a glimpse of these other worlds, and my decisions don't alter them, why would that make utilitarianism seem more valid (it isn't)?
I love these quasi-futuristic exchanges you've come up with. They're really helpful for putting your other posts into perspective - not just facts, but what facts mean for the way we should look at the world.
"There is no time. At most, this is true in the same sense that there is no flag, there is no wind, there is no mind."
It's true in the sense that time is a static dimension.
Abstract synthesis. There.