Reply to Nate Soares on Dolphins

Such a category is called paraphyletic. It can be informationally useful if the excluded subgroup is far-divergent from the overarching group, such that it has gained characteristics not shared by the others, and lost characteristics otherwise shared. But the less divergence has taken place, the harder it is to justify a paraphyletic category. The category "reptile" (excluding birds) makes sense today, but it wouldn't have made sense in the Jurassic period. The mammal/cetacean distinction is somewhere in the middle.

Animal/human is different because the evolutionary divergence is so recent that it's difficult to justify the paraphyletic usage on biological grounds. Rather this is more of an ingroup/outgroup distinction, along the lines of βαρβαρος ("anybody who isn't Greek"). If humans learned to communicate with e.g. crows, the shared language probably wouldn't have a compact word for "non-human animal," although it might have one for "non-human non-crow animal."

Unrefined thoughts on some things rationalism is missing vs religions

I’m also not sure how far non-core and core-identity rationalism are mutually exclusive. (Just like a lot of people are vaguely Christian without belonging to a church, maybe a lot of people would be vaguely interested in rationalism without wanting to join their local temple.)

Agreed; finding a way for multiple levels of involvement to coexist would be helpful. Anecdotally, when I first tried attending LW meetups around 2010, I was turned off and didn't try again for many years, because the conversation was so advanced I couldn't follow it. When I did return, I enjoyed it much more, because the community had expanded to include a "casual meetup attendee and occasional commenter" tier, which I fitted into comfortably. We could now imagine adding a third tier, namely "people who come and listen to a talk, then make small talk and go for a picnic afterward" (or whatever).

Could this be considered a "temple"? Maybe, but I'd guess that most prospective members wouldn't think of it that way and would be embarrassed to hear such talk. "Philosophical society" might be closer to the mark. It's fun to imagine a Freemason-like society where people are formally allocated into "tiers" and then promoted to the next inner tier by a secret vote, perhaps involving black and white marbles. But at this point, such a level of ritual would probably be a waste of weirdness points.

If you believe as I do that rationalism makes people better human beings, is morally right and leads to more open, free, just and advanced societies, then creating and spreading it is good pretty much irrespective of social circumstances.

I'm uncertain about this, but there is something I suspect and fear may be true, which is that rationalism (as exemplified by current LW members) is not actually helpful for most people on an individual level (see e.g.). There are some people, like me, who are born in the Uncanny Valley and must study rationalism as part of a lifelong effort to climb up out of it. But for others, I would not want to pull them down into the Valley just so I can have company.

For example, I enjoy going to rationalist meetups and spending hours talking about philosophical esoterica, because it fills an intellectual void that I can't fill elsewhere. But most people wouldn't enjoy this, and it wouldn't be a good use of their time.

That's not to say that rationalism is totally inert in society. The ideas developed by rationalists can percolate into the wider population, even to those who are more passive consumers than active participants.

  • Rationalist content is mostly in English. Most people don’t speak or read English, and even those who do as a second language don’t primarily consume English-language sources.

You're probably right, although as a monolingual English speaker I wouldn't know firsthand. I have heard of efforts to translate some of the Sequences into Russian and Spanish. But for less widely spoken languages, it may be difficult to assemble enough people who both speak the language and are interested in rationalism. Rationalism also differs from Christianity in that there is no definitive text you can point to and say, "If you read and understand this, then you understand rationality." Rationality must be cultivated through active engagement in dialogue, which requires a critical mass of people.

  • Rationalism is niche and hard to stumble upon. It’s not like Christianity or left/right ideology in the West. Those ideologies are broadcast at you constantly, so you will know about them and roughly what they represent, whereas rationalism is something you find only if you happen to luck out and stumble on a weird internet trail of breadcrumbs.

This is a challenge I've faced when friends ask me what, exactly, rationalism is all about. I struggle to answer, because there is no single creed that rationalists believe. One could try to put together a soundbite-tier explanation, but doing so would risk distorting the very essence of rationality, which at its core is a process, not a conclusion. At best, we might draw up a list of 40 statements and say, "Rationalists all agree that at least 30 of these are true, but disagree vehemently about which."

Unrefined thoughts on some things rationalism is missing vs religions

A few thoughts on this.

First, I probably have a higher appetite for religion-ifying rationalism than others in the community, but I wouldn't want to push my preferences too hard lest it scare people off. This may stem from my personal background as a cradle atheist: religious people don't want rationality to become a rival to their religion, and ex-religionists don't want it to become the very thing they escaped. To the extent that it's good for rationality to become more religion-like, I think it'll happen on its own over the next few decades or centuries without any concerted effort. I'm not in a hurry.

Second, we should avoid treating "religion" as a fixed concept already optimized for a particular social niche, as if to say that if rationality has some attributes of a religion, then it would necessarily gain by taking on the rest as well. Some of the functions that a religion might manage are:

  1. Marriage and family life
  2. Non-familial social ties
  3. The relationship between people and the state
  4. Matters of interpersonal morality
  5. Matters of private morality
  6. Explaining the origin and fate of the universe
  7. Explaining consciousness and death
  8. Ethnic identification
  9. Etc.

Different societies allocate these responsibilities differently among their various institutions and philosophies. In Western cultures we use the word "religion" because it's common for most or all of these domains to be handled by the same thing, so we need a word for whatever category of thing that is. But the Western bias is revealed whenever we try to apply the concept to non-Western societies. E.g. a Chinese person may be a Confucian with respect to (1), (3), and (4), a Taoist for (2), (6), and (8), and a Buddhist for (5) and (7). Which of these is a "religion"? Does it matter?

Even within the West, these boundaries have shifted over time. (3) was forcibly purged from Christianity in the European Wars of Religion, leading ultimately to the First Amendment in the US. And (8) is common in the Middle East and Eastern Europe, while mainline Protestantism is indifferent or outright hostile toward it. We can expect the boundaries to keep shifting in the future, which leads into the third point.

Third, we should ask ourselves (and I'd be curious to hear your answer) what kind of future we're planning for in which the religion-ification of rationalism becomes relevant. I can think of three scenarios:

  • (A) A technological singularity happens within the next few decades.
  • (B) A major civilizational collapse delays the singularity by hundreds or thousands of years.
  • (C) Civilization doesn't collapse, but the singularity is nevertheless delayed by several centuries, due to technological stagnation (or something).

As for (A), I'm not qualified to weigh in on how likely that is; but if it does happen, then this whole question is pretty much irrelevant anyway, because there won't be any humans (as we know them) to practice any religion. The only possible relevance is that it would be bad for people to expend too much effort now in creating a rationalist religion if they could otherwise have been working on AI safety. But that probably doesn't apply to most people.

I don't think (B) is likely, but there's a compelling cultural narrative in its favor that we need to actively counterbalance in our estimates. We all like to imagine an apocalypse that wipes the slate clean so we can remake a "perfect" society. And everyone likes to look back to the Fall of Rome as an easy-to-apply historical template. If you imagine a rationalist religion in that context, you end up with something like "D&D magic + medieval Catholicism," where monks copy manuscripts to preserve knowledge that would otherwise be lost. But, again, I don't think loss of knowledge is a major concern for the future, so efforts to create such an order of monks would probably be wasted.

(C) is where the question becomes most relevant, but since this scenario has no historical precedent, we can't look to existing or past religions and assume we can change a few incidentals and slot them into the future world. Whatever rationality ends up becoming in this world, it won't be what we'd call a "religion" (though perhaps a word for it will be devised eventually).

For example, in the future, scientific knowledge may never again be lost, but people will nevertheless feel adrift in a flood of false information so vast and confusing that they can't figure out what to believe. What sort of institution could remedy this situation? Not monks copying manuscripts, to be sure.

Lastly, some disjointed thoughts on outreach. There's a certain personality type that feels drawn to rationalist ideas, for reasons that are probably innate or at least very difficult to change. You know you're one of these people if your reaction upon finding LessWrong was "All my life people have been talking nonsense, but finally I've found something that makes sense!" Even if you don't agree with most of it.

At some point (perhaps already past), all of those people who can be persuaded will be. This will only comprise a small fraction of the population, but they will cling to the "rationalist community" with a near-religious zeal. (I have friends who absolutely loathe "rationalists" but still participate in the community online because, in their view, literally no one else even tries to make convincing arguments.) This zeal is a valuable quality, but most normal people will not sympathize. The question then becomes: For that majority of people who are not rationalists-by-disposition, is there some way they can benefit by associating with the community?

I think the answer will involve addressing this:

We don’t have rituals. Hence meetups are awkward to organize, often stilted, and revolve around the discussion of readings or rationality problems, or even lack any structure at all. Contrast this with a church, where you show up every Sunday, listen to a service, and then make small talk or go to a picnic.

Maybe rationalists should give talks that are open to the public and geared towards a general audience, and encourage listeners to talk about it amongst themselves. That way there'd be less pressure to follow along with extremely esoteric conversations. But you don't have to think of it as a "religion" or a "ritual" - it's just a public lecture, which is a perfectly normal thing for someone of any religious views to attend. Putting it forward as a religion-substitute would probably turn people off.

Meetups as Institutions for Intellectual Progress

1-3 months doesn't seem so bad as a timeline. While it's important not to let the perfect be the enemy of the good (since projects like this can easily turn into a boondoggle where everyone quibbles endlessly about what the end-product should look like), I think it's also worth a little bit of up-front effort to create something that we can improve upon later, rather than getting stuck with a mediocre solution permanently. (I imagine it's difficult to migrate a social network to a new platform once it's already gotten off the ground, the more so the more people have joined.)

Meetups as Institutions for Intellectual Progress

I would also like to register my opposition to using Facebook. While it might seem convenient in the short term, it makes the community more fragile by adding a centralized point of failure that's unaccountable to any of its members. Communicating on LessWrong.com has the virtue that the platform is owned by the same community it serves.

Meetups as Institutions for Intellectual Progress

It seems to me that there's a tension at the heart of defining the "purpose" of meetups. On the one hand, the community aspect is one of the most valuable things one can get out of them - I love that I can visit dozens of cities across the US, go to a Less Wrong meetup, and instantly have stuff to talk about. On the other hand, a community cannot exist solely for its own sake. Any individual's interest in participating will naturally fluctuate over time, and if everyone quits the moment their interest touches zero, then nobody will ever feel like it's worth investing in the community's long-term health.

Personally, I do have a sense that going to meetups matters, in that it helps (however marginally) to raise the sanity waterline in one's local community, and to move important conversations about x-risk and the future of humanity into the mainstream. I myself was motivated to dive into Less Wrong again, after a hiatus of many years, by finding a lively meetup group that was discussing these ideas regularly.

In any case I think that the question of "why meetups matter" is something that we're all collectively trying to figure out over time. I don't claim to know the answer right now.

I do, however, have some concern about creating a "monoculture" among the various sub-groups. It's good that we have a wide variety of intellectual interests, ways-of-running-meetups, etc., because this allows mistakes to be corrected and innovations to be discovered. If we are all given a directive from on high[1] saying "We are going to mobilize all the resources of the Rationality Community towards goal X, which we will achieve by strategy Y," then it might at first seem like a lot of stuff is getting done. But what if strategy Y is ineffective, or goal X is a bad goal? Then we would have forfeited our chance to discover the mistake until it was too late. This is especially important when the goals of the community are so ill-defined, as is the case now.

Of course, in order to reap these benefits of having a diverse community, a prerequisite is that there be any communication at all between groups. So, the suggestion of having meetups write up blog posts for public consumption seems like a good one[2]. But I don't think the groups should be told which topics they must discuss, because they might be interested in something else that nobody else would've thought of. Perhaps it's enough to provide a list of topics that any meetup group can draw from if they can't think of something. And maybe, after one group publishes a writeup, another group might be inspired to discuss the same topic later and submit their own writeup in response.

[1] Or, more realistically, a persuasive message to the effect of "All the cool kids are doing Z and you're going to feel left out if you don't," which can feel like a compulsory directive because of Schelling points, etc.

[2] Caveat: The mood of a conversation is likely to change dramatically if it's known that someone is taking notes that will be posted later, since then one is not speaking merely to those in attendance, but effectively to an indefinitely large audience of all LessWrong readers. So, I would recommend that meetups have a mixture of on- and off-the-record conversations, with a clear signal of which norm is in effect at any given time.

2011 Survey Results

What's the relation between religion and morality? I drew up a table comparing the two, showing the absolute numbers and the percentages normalized in each of the two directions (by religion, and by morality). I also highlighted the cells with the greatest percentage along the direction that was not normalized (for example, 22.89% of agnostics said there's no such thing as morality, a higher percentage than for any other religious group).

Several pairs were highlighted both ways. In other words, these are pairs such that "Xs are disproportionately likely to be Ys" and vice versa:

  • [BLANK]; [BLANK]
  • Atheist and not spiritual; Consequentialist
  • Agnostic; No such thing
  • Deist/Pantheist/etc.; Virtue ethics
  • Committed theist; Deontology

(I didn't do any statistical analysis, so be careful with the low-population groups.)
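For anyone who wants to replicate this kind of analysis, here's a rough sketch in Python with pandas. The counts below are made up for illustration (the real survey data isn't reproduced here), and the row/column labels are abbreviated versions of the survey categories:

```python
import pandas as pd

# Hypothetical religion-by-morality contingency table (raw counts).
counts = pd.DataFrame(
    {"Consequentialism": [40, 10, 5],
     "Deontology":       [5, 3, 20],
     "No such thing":    [8, 19, 2]},
    index=["Atheist", "Agnostic", "Committed theist"],
)

# Normalize in each direction: by religion (each row sums to 1)
# and by morality (each column sums to 1).
by_religion = counts.div(counts.sum(axis=1), axis=0)
by_morality = counts.div(counts.sum(axis=0), axis=1)

# Highlight along the direction that was NOT normalized:
# in the religion-normalized table, the max within each column;
# in the morality-normalized table, the max within each row.
hl_religion = by_religion.eq(by_religion.max(axis=0), axis=1)
hl_morality = by_morality.eq(by_morality.max(axis=1), axis=0)

# A pair is "highlighted both ways" if both conditions hold.
both = hl_religion & hl_morality
pairs = [(r, c) for r in counts.index for c in counts.columns
         if both.loc[r, c]]
```

With these particular made-up numbers, `pairs` comes out as the atheist/consequentialist, agnostic/no-such-thing, and theist/deontology cells, mirroring the pattern in the list above.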

Deontological Decision Theory and The Solution to Morality

Would it be correct to say that, insofar as you would hope the one person would be willing to sacrifice his or her life for the cause of saving the 5*10^6 others, you yourself would pull the switch and then willingly submit to the death penalty (or whatever the penalty for murder is) for the same cause?

Open Thread, August 2010

I think I may have artificially induced an Ugh Field in myself.

A little over a week ago it occurred to me that perhaps I was thinking too much about X, and that this was distracting me from more important things. So I resolved to not think about X for the next week.

Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else."

Now that the week's over, I don't even want to think about X any more. It just feels too weird.

And maybe that's a good thing.

Open Thread, August 2010

I suppose, perhaps, an asteroid impact or nuclear holocaust? It's hard for me to imagine a disaster that wipes out 99.999999% of the population but doesn't just finish the job. The scenario is more a prompt to provoke examination of the amount of knowledge our civilization relies on.

(What first got me thinking about this was the idea that if you went up into space, you would find that the Earth was no longer protected by the anthropic principle, and so you would shortly see the LHC produce a black hole that devours the Earth. But you would be hard pressed to restart civilization from a space station, at least at current tech levels.)
