I think it would be more graceful of you to just admit that it is possible that there may be more than one reason for people to be in terror of the end of the world, and likewise to qualify your other claims to certainty and universality.
That's the main point of what gjm wrote. I'm sympathetic to the view you're trying to communicate, Valentine; but you used words that claim that what you say is absolute, immutable truth, and that's the worst mind-killer of all. Everything you wrote just above seems to me to be just equivocation trying to deny tha...
I say that knowing particular kinds of math, the kinds that let you model the world more precisely and that give you a theory of error, isn't like knowing another language. It's like knowing language at all. Learning these types of math gives you as much of an effective intelligence boost over people who don't know them as learning a spoken language gives you over people who don't know any language (e.g., many deaf-mutes in earlier times).
The kinds of math I mean include:
Agree. Though I don't think Turing ever intended that test to be used. I think what he wanted to accomplish with his paper was to operationalize "intelligence". When he published it, if you asked somebody "Could a computer be intelligent?", they'd have responded with a religious argument about it not having a soul, or free will, or consciousness. Turing sneakily got people to look past their metaphysics, and ask the question in terms of the computer program's behavior. THAT was what was significant about that paper.
It's a great question. I'm sure I've read something about that, possibly in a pop book like Thinking, Fast and Slow. What I read was an evaluation of the relationship of IQ to wealth, and the takeaway was that your economic success depends more on the average IQ in your country than on your personal IQ. It may have been an entire book rather than an article.
Google turns up this 2010 study from Science. The summaries you'll see there are sharply self-contradictory.
First comes an unexplained box called "The Meeting of Min...
This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
I have read (long ago, not sure where) a hypothesis that most people (in the educated professional bubble?) are good at cooperation, but one bad person ruins the entire team. Imagine that for each member of the group you roll a die, but you roll 1d6 for men, and 1d20 for wom...
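That dice model can be sketched as a quick calculation. This is a toy interpretation of the hypothesis, under the assumption (mine, not stated in the original) that a member "ruins" the team when their die comes up 1, and that the group only functions if nobody does:

```python
# Toy "one bad apple" model: men roll 1d6 (1/6 chance of ruining the
# team), women roll 1d20 (1/20 chance). The group functions only if
# no member rolls a 1. These die sizes come from the quoted hypothesis;
# the "ruin on a 1" reading is an assumption for illustration.

def p_group_ok(men: int, women: int) -> float:
    """Probability that nobody ruins a team of the given composition."""
    return (5 / 6) ** men * (19 / 20) ** women

for women in range(5):
    men = 4 - women
    print(f"{men} men, {women} women: P(ok) = {p_group_ok(men, women):.3f}")
```

Under this reading, a four-person all-male team functions only about 48% of the time, while an all-female team functions about 81% of the time, which would reproduce the correlation with the proportion of females without any difference in individual competence.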
But what makes you so confident that it's not possible for subject-matter experts to have correct intuitions that outpace their ability to articulate legible explanations to others?
That's irrelevant, because what Richard wrote was a truism. An Eliezer who understands his own confidence in his ideas will "always" be better at inspiring confidence in those ideas in others. Richard's statement leads to a conclusion of import (Eliezer should develop arguments to defend his intuitions) precisely because it's correct whether Eliezer's intuitions are correct or incorrect.
The way to dig the bottom deeper today is to get government bailouts, like bailing out companies or lenders, and like Biden's recent student-loan forgiveness plan. Bailouts are especially perverse because they give people who get into debt a competitive advantage over people who don't, in an unpredictable manner that encourages people to see taking out a loan as a lottery ticket.
Finding a way for people to make money by posting good ideas is a great idea.
Saying that it should be based on the goodness of the people and how much they care is a terrible idea. Privileging goodness and caring over reason is the most well-trodden path to unreason. This is LessWrong. I go to fimfiction for rainbows and unicorns.
I think that was part of the whole "haha goodhart's law doesn't exist, making value is really easy" joke. However, it's also possible that that's... actually one of the hard-to-fake things they're looking for (along with actual competence/intelligence). See PG's Mean People Fail or Earnestness. I agree that "just give good money to good people" is a terrible idea, but there's a steelman of that which is "along with intelligence, originality, and domain expertise, being a Good Person (whatever that means) and being earnest is a really good trait in EA/LW an...
No; most philosophers today do, I think, believe that the alleged humanity of 9-fingered instances of *Homo sapiens* is a serious philosophical problem. It comes up in many "intro to philosophy" or "philosophy of science" texts or courses. Post-modernist arguments rely heavily on the belief that any sort of categorization which has any exceptions is completely invalid.
I'm glad to see Eliezer addressed this point. This post doesn't get across how absolutely critical it is to understand that {categories always have exceptions, and that's okay}. Understanding this demolishes nearly all Western philosophy since Socrates (who, along with Parmenides, Heraclitus, Pythagoras, and a few others, corrupted Greek "philosophy" from the natural science of Thales and Anaximander, who studied the world to understand it, into a kind of theology, in which one dictates to the world what it must be like).
Many philosophers have ...
I theorize that you're experiencing at least two different common, related, yet almost opposed mental re-organizations.
One, which I approve of, accounts for many of the effects you describe under "Bemused exasperation here...". It sounds similar to what I've gotten from writing fiction.
Writing fiction is, mostly, thinking, with focus, persistence, and patience, about other people, often looking into yourself to try to find some point of connection that will enable you to understand them. This isn't quantifiable, at least not to me; but I would ...
To me, saying that someone is a better philosopher than Kant seems less crazy than saying that saying that someone is a better philosopher than Kant seems crazy.
Isn't the thing Rob is calling crazy that someone "believed he was learning from Kant himself live across time", rather than believing that e.g. Geoff Anders is a better philosopher than Kant?
An easy reason not to play quantum roulette is that, if your theory justifying it is right, you don't gain any expected utility; you just redistribute it, in a manner most people consider unjust, among different future yous. If your theory is wrong, the outcome is much worse. So it's at the very best a break even / lose proposition.
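The break-even claim can be made concrete with a toy expected-utility calculation (the stake, branch count, and the assumption that dead branches count as zero utility are all illustrative, not from the original):

```python
# Toy quantum-roulette accounting: in 1 of n branches "you" win the whole
# pot; in the other n-1 branches you die (utility 0, by the player's own
# accounting). Expected utility over branches equals the original stake,
# so nothing is gained in expectation -- it is only concentrated onto one
# future self. Numbers are made up for illustration.

def expected_utility(stake: float, n_branches: float) -> float:
    win = stake * n_branches  # the surviving self collects the whole pot
    return (1 / n_branches) * win + ((n_branches - 1) / n_branches) * 0.0

print(expected_utility(100.0, 16))  # equals the 100.0 you started with
```

So even granting the theory, the game at best redistributes utility among future selves; if the theory is wrong, you have simply shot yourself.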
See the 2nd-to-last paragraph of my revised comment above, and see if any of it jogs your memory.
Republic is the reference. I'm not going to take the hours it would take to give book-and-paragraph citations, because either you haven't read the entire Republic, or else you've read it, but you want to argue that each of the many terrible things he wrote doesn't actually represent Plato's opinion or desire.
(You know it's a big book, right? 89,000 words in the Greek. If you read it in a collection or anthology, it wasn't the whole Republic.)
The task of arguing over what in /Republic/ Plato approves or disapproves of is arduous and, I think, unnece...
The most important thing is to explicitly repudiate these wrong and evil parts of the traditional meaning of "progress":
Sorry; your example is interesting and potentially useful, but I don't follow your reasoning. This manner of fertilization would be evidence that kin selection should be strong in Chimaphila, but I don't see how this manner of fertilization is itself evidence that kin selection has taken place. Also, I have no good intuitions about what differences kin selection predicts in the variables you mentioned, except that maybe dispersion would be greater in Chimaphila because of the greater danger of inbreeding. Also, kin selection isn't controversial, so I don't know where you want to go with this comment.
Hi, see above for my email address. Email me a request at that address. I don't have your email. I just sent you a message.
ADDED in 2021: Some people tried to contact me thru LessWrong and Facebook. I check messages there like once a year. Nobody sent me an email at the email address I gave above. I've edited it to make it more clear what my email address is.
[Original first point deleted, on account of describing something that resembled Bayesian updating closely enough to make my point invalid.]
I don't think this approach applies to most actual bad arguments.
The things we argue about the most are ones over which the population is polarized, and polarization is usually caused by conflicts between different worldviews. Worldviews are constructed to be nearly self-consistent. So you're not going to be able to reconcile people of different worldviews by comparing proofs. Wrong beliefs come in se...
"Cynicism is a self-fulfilling prophecy; believing that an institution is bad makes the people within it stop trying, and the good people stop going there."
I think this is a key observation. Western academia has grown continually more cynical since the advent of Marxism, which assumes an almost absolute cynicism as a point of dogma: all actions are political actions motivated by class, except those of bourgeois Marxists who for mysterious reasons advocate the interests of the proletariat.
This cynicism became even worse with Foucault, who taught people to s...
"At its core, this is the main argument why the Solomonoff prior is malign: a lot of the programs will contain agents with preferences, these agents will seek to influence the Solomonoff prior, and they will be able to do so effectively."
First, this is irrelevant to most applications of the Solomonoff prior. If I'm using it to check the randomness of my random number generator, I'm going to be looking at 64-bit strings, and probably very few intelligent-life-producing universe-simulators output just 64 bits, and it's hard to imagine how an alien in a...
The S. prior is a general-purpose prior which we can apply to any problem. The output string has no meaning except in a particular application and representation, so it seems senseless to try to influence the prior for a string when you don't know how that string will be interpreted.
The claim is that consequentalists in simulated universes will model decisions based on the Solomonoff prior, so they will know how that string will be interpreted.
...Can you give an instance of an application of the S. prior in which, if everything you wrote were correct, i
Me: We could be more successful at increasing general human intelligence if we looked at low intelligence as something that people didn't have to be ashamed of, and that could be remedied, much as how we now try to look at depression and other mental illness as illness--a condition which can often be treated and which people don't need to be ashamed of.
You: YOU MONSTER! You want to call stupidity "mental illness", and mental illness is a bad and shameful thing!
That's technically true, but it doesn't help a lot. You're assuming one starts with fixation to non-SC in a species. But how does one get to that point of fixation, starting from fixation of SC, which is more advantageous to the individual? That's the problem.
It's not that I no longer endorse it; it's that I replied to a deleted comment instead of to the identical not-deleted comment.
Group selection, as I've heard it explained before, is the idea that genes spread because their effects are for the good of the species. The whole point of evolution is that genes do well because of what they do for the survival of the gene. The effect isn't on the group, or on the individual, the species, or any other unit other than the unit that gets copied and inherited.
Group selection is group selection: selection of groups. That means the phenotype is group behavior, and the effect of selection is spread equally among members of the group. ...
You're assuming that the benefits of an adaptation can only be linear in the fraction of group members with that adaptation. If the benefits are nonlinear, then they can't be modeled by individual selection, or by kin selection, or by the Haystack model, or by the Harpending & Rogers model, in all of which the total group benefit is a linear sum of the individual benefits.
For instance, the benefits of the Greek phalanx are tremendous if 100% of Greek soldiers will hold the line, but negligible if only 99% of them do. We can guess--though I ...
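The phalanx point can be sketched by contrasting a linear payoff, which any of those individual-level models can represent, with a threshold payoff, which none of them can. The numbers here are illustrative only, not drawn from any model in the literature:

```python
# Contrast: a linear group payoff (sum of individual contributions) vs. a
# threshold "phalanx" payoff, both as functions of the fraction of members
# carrying the "hold the line" trait. The 0.99 threshold and 0.05 floor
# are made-up illustrative values.

def linear_payoff(frac: float) -> float:
    """Group benefit as a simple sum of individual contributions."""
    return frac

def phalanx_payoff(frac: float, threshold: float = 0.99) -> float:
    """Tremendous benefit if nearly everyone holds the line; else negligible."""
    return 1.0 if frac >= threshold else 0.05

for frac in (0.5, 0.9, 0.99, 1.0):
    print(f"{frac:.2f}  linear={linear_payoff(frac):.2f}  "
          f"phalanx={phalanx_payoff(frac):.2f}")
```

In the threshold case, going from 99% to 100% carriers changes the group's fate discontinuously, so the total benefit cannot be decomposed into a linear sum of per-individual benefits.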
Thanks!
I have great difficulty finding any philosophy published after 1960 other than post-modern philosophy, probably because my starting point is literary theory, which is completely confined to--the words "dominated" and even "controlled" are too weak--Marxian post-modern identity politics, which views literature only as a political barometer and tool.
I think you're assuming that to give in to the mugging is the wrong answer in a one-shot game for a being that values all humans in existence equally, because it feels wrong to you, a being with a moral compass evolved in iterated multi-generational games.
Consider these possibilities, any one of which would create challenges for your reasoning:
1. Giving in is the right answer in a one-shot game, but the wrong answer in an iterated game. If you give in to the mugging, the outsider will keep mugging you and other rationalists until you're all brok...
Your overall point is right and important but most of your specific historical claims here are false - more mythical than real.
Free-market economic theory developed only after millennia during which everyone believed that top-down control was the best way of allocating resources.
Free market economic theory was developed during a period of rapid centralization of power, before which it was common sense that most resource allocation had to be done at the local level, letting peasants mostly alone to farm their own plots. To find a prior epoch of deliberate ce...
I scanned in Extropy 1, 3, 4, 5, 7, 16, and 17, which leaves only #2 missing. How can I send these to you? Contact me at [my LessWrong user name] at gmail.com.
I just now read that one post. It isn't clear how you think it's relevant. I'm guessing you think that it implies that positing free will is invalid.
You don't have to believe in free will to incorporate it into a model of how humans act. We're all nominalists here; we don't believe that the concepts in our theories actually exist somewhere in Form-space.
When someone asks the question, "Should you one-box?", they're using a model which uses the concept of free will. You can't object to that by saying "You don't really have free will."...
Yep, nice list. One I didn't see: defining a word in a way that is less useful (that conveys less information) and rejecting a definition that is more useful (that conveys more information). Always choose the definition that conveys more information; eliminate words that convey zero information. It's common for people to define words so that they convey zero information: if everything has the Buddha nature, then nothing empirical can be said about what having the Buddha nature means, and the phrase conveys no information.
Along similar lines, always define words so that no other word conveys...
But you're arguing against Eliezer: "God" and "miracle" were (and still are) commonly used words, so Eliezer is saying those are good, short words for those concepts.
Great post! There is also the non-discrete aspect of compression: information loss. English has, according to some dictionaries, over a million words. It's unlikely we store most of our information in English. Probably there is some sort of dimension reduction, like PCA. There is in any case probably lossy compression. This means people with different histories will use different frequency tables for their compression, and will throw out different information when encoding a verbal statement. I think you would almost certainly find that if you measu...
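The frequency-table point can be illustrated with a toy calculation: under an optimal code, a word's length is about -log2 of its probability, so two people with different corpora assign different costs to the same words, and a lossy encoder for each would discard different detail. The corpora below are invented for illustration:

```python
import math
from collections import Counter

def code_lengths(corpus: list[str]) -> dict[str, float]:
    """Optimal per-word code length in bits (-log2 p) for this corpus."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {w: -math.log2(c / total) for w, c in counts.items()}

# Two hypothetical speakers with different histories, hence different
# frequency tables. The same word can be cheap for one and expensive
# for the other, so lossy compression would drop different information.
corpus_a = "the market cleared and the market priced the risk".split()
corpus_b = "the commune shared and the commune pooled the risk".split()

print(code_lengths(corpus_a))
print(code_lengths(corpus_b))
```

Frequent words get short codes; rare words get long ones. Two people compressing the same statement against different tables will preserve different parts of it, which is one way to cash out the prediction at the end of the paragraph above.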
I don't think that what you need has any bearing on what reality has actually given you. Nor can we talk about different decision theories here: as long as we are talking about maximizing expected utility, we have our decision theory; that is enough specification to answer the Newcomb one-shot question. We can only arrive at a different outcome by stating the problem differently, or by sneaking in different metaphysics, or by just doing bad logic (in this case, usually by allowing contradictory beliefs about free will in different parts of the analysis).
You...
I can believe that it would make sense to commit ahead of time to one-box at such an event. Doing so would affect your behavior in a way that the predictor might pick up on.
Hmm. Thinking about this convinces me that there's a big problem here in how we talk about the problem, because if we allow people who already knew about Newcomb's Problem to play, there are really 4 possible actions, not 2:
I don't know if the usual statement o...
This was argued against in the Sequences and in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory - all the functional decision theory stuff is assuming a deterministic decision process, I think.
It is compatible to believe your actions follow deterministically and still talk about decision theory. It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view, as ...
I think that first you should elaborate on what you mean by "the goals of humanity". Do you mean majority opinion? In that case, one goal of humanity is to have a single world religious State, although there is disagreement on what that religion should be. Other goals of humanity include eliminating homosexuality and enforcing traditional patriarchal family structures.
Okay, I admit it--what I really think is that "goals of humanity" is a nonsensical phrase, especially when spoken by an American academic. It would be a little better ...
The part of physics that implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions is quantum mechanics. But I don't think invoking it is the best response to your question. Though it does make me wonder how Eliezer reconciles his thoughts on one-boxing with his many-worlds interpretation of QM. Doesn't many-worlds imply that every game with Omega creates worlds in which Omega is wrong?
If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless. If yo...
Sorry. I've been reading English literary journals and lit theory books for the past year, and the default assumption is always that the reader is a Marxist.
The rationalist virtue of empiricism...
I'm not disagreeing with any of the content above, but a note about terminology--
LessWrong keeps using the word "rationalism" to mean something like "reason" or possibly even "scientific methodology". In philosophy, however, "rationalism" is not allied to "empiricism", but diametrically opposed to it. What we call science was a gradual development, over a few centuries, of methodologies that harnessed the powers both of rationalism and empiricism, which had previo...
I was unfairly inserting in the parentheses my own presumption about why Christians saw the world as having been created perfect. The passage I was talking about from Aquinas did not talk about perfection of the environment.
I'd like to see what Aquinas did say. Have you got a citation? I'm pretty sure that the notion that the world was created imperfect has never been tolerated by the Catholic Church. Asserting that creation was imperfect might even be condemned as Manicheeism. Opinions vary on what happened after the Fall, but I find it unlikely that...
Yep, the argument to justify the imperfection of children, and thus the necessity of growth, is based on Aristotle's notion of perfect and imperfect actualities. Aquinas wrote:
...Everything is perfect inasmuch as it is in actuality; imperfect, inasmuch as it is in potentiality, with privation of actuality. ... It is impossible therefore for any effect that is brought into being by action to be of a nobler actuality than is the actuality of the agent. It is possible though for the actuality of the effect to be less perfect than the actuality of the acting c
I didn't mean to retract this, but to delete it and move the comment down below.
Historically, Christians objected strongly to fossil evidence that some species had gone extinct. They said God would not have created species and then let them go extinct.
Perfection is a crucial part of Christian ontology. God's creation was perfect. That means, in the Christian way of thinking, it is unchanging. Read Christian descriptions of God (who is perfect), and "unchanging" is always one of the adjectives. "Unchanging" is a necessary attribute of perfection in Christian theology, and God's creation is necessarily perfect. ...