• If it’s worth saying, but not worth its own post, here's a place to put it.
  • And, if you are new to LessWrong, here's the place to introduce yourself.
    • Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.


Hi everyone. I've discovered the rationality community gradually over the last several years, starting with Slate Star Codex, at some point discovering Julia Galef on Twitter/Facebook, and then reading Inadequate Equilibria. I still have tons of material on this site to go through!

I'm also the author of a blog, The Roots of Progress (https://rootsofprogress.org), about the history of technology and industry, and more generally the story of human progress.

Oh wow, welcome. I've read many essays on your blog and I think they are great.

I believe you'll find a lot of content (and people) here that share the noble pursuit of your blog.

[-][anonymous]

I, like eigen, am also a fan of your blog! Welcome!

Would people be interested in a series of posts about category theory? There are a lot of great introductions to the subject out there, but I'd like to fill a particular niche: I don't want to assume my audience knows topology yet. I think you can still get a lot of value out of category theory at the high school senior level.

That sounds quite interesting to me.

I'd be interested! No knowledge of topology here. I've been annoyed several times by watching talks at programming conferences titled "Why Programmers Should Learn Category Theory"; they never explain why, they only define basic CT ideas and say "these are cool!". Still, I'm overall convinced that there are interesting things hiding in CT.

Not to steal countedblessings' thunder, but you may be interested in "Category Theory for Programmers".

I'm not actually convinced that "programmers" in general should learn category theory. (Though I don't know it well, myself.) I do think there's an analogy between programming and category theory which is interesting to think about and can lead to important insights in PL design; but when someone else has had those insights, other people can use them without knowing category theory.

https://bartoszmilewski.com/2014/10/28/category-theory-for-programmers-the-preface/

There are a bunch of really great introductions to category theory, and this is definitely one of them. There are also YouTube videos of his lectures covering the same material.
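As a concrete illustration of the kind of insight that crosses over from category theory to programming without requiring the theory itself: a functor is, in programming terms, any "mappable" container, and the pattern is usable on its own. A minimal Python sketch (my own example, not from any of the introductions mentioned):

```python
# A functor, in programming terms, is a "mappable" container: a way to
# apply a function inside a structure while preserving that structure.
# Here, Python lists and Optional-like values are two different functors.

def fmap_list(f, xs):
    """Map f over every element of a list, keeping the list structure."""
    return [f(x) for x in xs]

def fmap_optional(f, x):
    """Map f over an optional value: None stays None."""
    return None if x is None else f(x)

# The functor laws, checked on examples:
# 1. Mapping the identity function changes nothing.
identity = lambda x: x
assert fmap_list(identity, [1, 2, 3]) == [1, 2, 3]
assert fmap_optional(identity, None) is None

# 2. Mapping a composition equals composing two maps.
f = lambda x: x + 1
g = lambda x: x * 2
assert fmap_list(lambda x: g(f(x)), [1, 2, 3]) == fmap_list(g, fmap_list(f, [1, 2, 3]))
assert fmap_optional(lambda x: g(f(x)), 5) == fmap_optional(g, fmap_optional(f, 5))
```

This is the sense in which someone else's category-theoretic insight (here, the functor laws) can be used by programmers who never learn the theory behind it.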

My plan is to go very slowly, and to assume everything needs to be explained in as much detail as possible. This will make for a tediously long but hopefully very readable series.

For what it’s worth, I tried reading that (I’d seen it recommended elsewhere, and this latest mention reminded me to give it a try).

I haven’t quite given up yet, but it’s not looking good. I found the preface to be thoroughly unconvincing as an argument for why I (a “working programmer”, as Milewski puts it) would want to learn category theory; and the next chapter (the Introduction) seems to be packed with some of the most absurd analogies I have ever seen, anywhere. (One of which is outright insulting—what, so the reason we need static type systems is that programmers are nothing more than monkeys, hitting keys at random, and with static typing, that randomly generated code will not compile if it’s wrong? But I am not a monkey; how does this logic apply to a human being who is capable of thought, and who writes code with a purpose and according to a design? Answer: it doesn’t.)

I will report back when I’ve read more (or finally given up), I suppose…

[-]gjm

Immediately after the bit about monkeys there's this:

The usual goal in the typing monkeys thought experiment is the production of the complete works of Shakespeare. Having a spell checker and a grammar checker in the loop would drastically increase the odds. The analog of a type checker would go even further by making sure that, once Romeo is declared a human being, he doesn’t sprout leaves or trap photons in his powerful gravitational field.

which feels like a bit of an own goal to me, because I suspect the analogue of a type checker would actually make sure that once Romeo is declared a Montague it's a type error for him to have any friendly interactions with a Capulet, thus preventing the entire plot of the play.

That’s an interesting (and amusing) point—I didn’t even think of that when reading it! (I was too busy shaking my head at the basic absurdity of the analogy: what human playwright, when writing a play, would accidentally have one of their main characters turn into a plant or a stellar object or any such thing? If we take the analogy at face value, doesn’t it show that type checking is manifestly unnecessary if your code is being written by humans…?)

Furthermore, I don't get how type checking would help monkeys write code any better. They would just have less code compile (and the same is true of adding a spelling and grammar checker to their Shakespeare plays).
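For what it's worth, the "type error for Romeo the Montague" joke can be made literal. A toy sketch of my own (hypothetical names, with runtime checks standing in for a static type checker, which plain Python lacks):

```python
# Toy illustration: encode house membership in the types, and
# "befriending across houses" becomes a type error. A runtime
# isinstance-style check stands in here for a static checker.

class Montague:
    pass

class Capulet:
    pass

def befriend(a, b):
    """Only members of the same house may be friends, per this 'type system'."""
    if type(a) is not type(b):
        raise TypeError(f"{type(a).__name__} may not befriend {type(b).__name__}")
    return f"{type(a).__name__}s are now friends"

romeo = Montague()
benvolio = Montague()
juliet = Capulet()

# Same-house friendship type-checks fine.
assert befriend(romeo, benvolio) == "Montagues are now friends"

# Romeo befriending Juliet is rejected -- preventing the entire plot.
try:
    befriend(romeo, juliet)
    raise AssertionError("expected a TypeError")
except TypeError:
    pass
```

The same constraint could of course be expressed statically (e.g. with mypy and separate parameter types), which is closer to what a real type checker does.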

To be clear, I recommend the "teaching category theory to programmers" aspect of it, which I remember being effective at teaching me. I have no particular memories of the "convincing programmers to learn category theory" aspect.

I support this idea. Especially if it incorporates some motivation for topology, which weirdly seems to hang out by itself until it suddenly becomes critical.

[-]gjm

It's more usual for topology to motivate category theory than the other way around. (That's where category theory originally came from, historically.)

As someone who doesn't know topology yet, that sounds amazing!

Interested!

What are some good discussions of "ideology" from a rationalist perspective? E.g., what it is, what causes people to have them, what's the best way to fight harmful ideologies, how to prevent harmful ideologies from forming in one's own social movement, etc. From what I've been able to find myself, it seems to be a rather neglected topic on LW:

I'd also be interested in good discussions of it from outside the rationalist community.

I have always understood this to be a consequence of the Politics is the Mindkiller custom. The most relevant pieces outside the Craft and the Community on LessWrong are Raemon's The Relationship Between the Village and the Mission, and The Schelling Choice is Rabbit, not Stag.

I can think of a couple relevant-but-not-specific areas outside the rationalist community:

multivocality - the fact that single actions can be interpreted coherently from multiple perspectives simultaneously, the fact that single actions can be moves in many games at once, and the fact that public and private motivations cannot be parsed.

This leads to something they call robust action, which basically means "hard to interfere with." So my prior for successful movements is a morally multivocal ideology for hunting stag robustly.

Might be useful to taboo ideology.

It seems like a few sequence posts touch on this (Guardians of Ayn Rand, Guardians of Truth, and other pieces of the Craft and the Community sequence). I'm not sure if they seemed irrelevant to the question you meant to be orienting around, or you were looking for newer things, or just forgot.

I guess by ideology I mean a set of ideas or beliefs that are used to rally a social movement around, which tend to become unquestionable "truths" once the social movement succeeds in gaining power. So for example theism, Communism, Aryan "master race". The "Guardian" posts you cite do seem somewhat relevant but don't really address the main questions I have, which I list below. (Also I didn't find them because I was searching for "ideology" as the keyword.)

  1. Eliezer's posts don't seem to address the "rallying flag" function of ideology. Given that ideologies are useful as rallying flags for people to coordinate / build alliances around but can also become increasingly harmful as they become more embedded (into e.g. education and policy) and unquestionable, what should someone trying to build a social movement do?
  2. What to do if one observes some harmful ideology growing in influence? If you try to argue against it, you become an enemy of the movement and might suffer a lot of personal consequences. If you try to build a counter-movement, it will probably end up creating its own ideology, which might not be less harmful.
  3. What to do if the harmful ideology has already taken over a whole society?

Given that ideologies are useful as rallying flags for people to coordinate / build alliances around but can also become increasingly harmful as they become more embedded (into e.g. education and policy) and unquestionable, what should someone trying to build a social movement do?

One idea is to have some sort of timed auto-destruct mechanism for the ideology. For example, have the founders and other high-status members of the movement record a video asking people to question the ideology, and giving a bunch of reasons for why the ideology might be false or people shouldn't be so certain about it, to be released after the movement succeeds in gaining power. People concerned about ideologies could try to privately talk the leaders into doing this. But with deepfakes being possible, this might not work so well in the future (and also the timing mechanism seems tricky to get right) so I wonder what else can be done.

My guess is that there are fragments of things addressing at least part of this, just not oriented around ideology as a keyword (Belief as Attire, Professing and Cheering, The Fable of Science and Politics). I guess one thing is that much of the sequences is focused on "here is a way for beliefs to be wrong" rather than examining more closely why having this way-of-treating-beliefs might be useful. (Although Robin Hanson's work, I think, often explores that more directly.)

What to do if you spot a harmful ideology is a political question, and in some cases the answer might be pretty orthogonal to rationality. (although you might mean the more specific subquestion of "how to stop harmful ideologies while maintaining/raising the sanity waterline." i.e. many people fight harmful ideologies with counter ideologies).

some random additional thoughts (this might also be part of what you were already thinking of, it's just what my brain had easily available)

I think I see the word ideology as a bit more neutral than you're phrasing it here. Or at least, your examples are 'generally accepted around here as false/bad'. But LessWrong has an overall ideology of beliefs-that-we-coordinate-around, complete with "those beliefs being object-level useful" and "some people using those beliefs as attire, sometimes for reasons that are plausibly virtuous and sometimes for reasons that seem like exactly the sort of thing Eliezer wrote the sequences to complain about".

Science also has an ideology (similar to, but different from, Yudkowskianism). The sequences also cover "how to address wrongness in the science ideology", I think. For example, in Science or Bayes:

In physics, you can get absolutely clear-cut issues.  Not in the sense that the issues are trivial to explain.  But if you try to apply Bayes to healthcare, or economics, you may not be able to formally lay out what is the simplest hypothesis, or what the evidence supports.  But when I say "macroscopic decoherence is simpler than collapse" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code. Nor is the evidence itself in dispute.
I wanted a very clear example—Bayes says "zig", this is a zag—when it came time to break your allegiance to Science. [emphasis mine]
"Oh, sure," you say, "the physicists messed up the many-worlds thing, but give them a break, Eliezer!  No one ever claimed that the social process of science was perfect.  People are human; they make mistakes."
But the physicists who refuse to adopt many-worlds aren't disobeying the rules of Science.  They're obeying the rules of Science.
The tradition handed down through the generations says that a new physics theory comes up with new experimental predictions that distinguish it from the old theory.  You perform the test, and the new theory is confirmed or falsified.  If it's confirmed, you hold a huge celebration, call the newspapers, and hand out Nobel Prizes for everyone; any doddering old emeritus professors who refuse to convert are quietly humored.  If the theory is disconfirmed, the lead proponent publicly recants, and gains a reputation for honesty.

(Paul Graham's "What you can't say" is also relevant)

So, one way to fight bad/wrong/incomplete ideology is... well, to argue against it, if you're in an environment where that sort of thing works. If you're not in an environment conducive to clear argument, the obvious choices are "first try to make the environment conducive to argument" or, well, various dark-artsy rhetorical flourishes that work symmetrically whether your ideas are good or not.

It seems like you have more specific questions in mind (would be curious what your motivating examples are).

The way I'd have carved up your question space is less like "how to stop/fight ideologies" and more like "what to do about the general fact of some sets of beliefs becoming sticky over time?"

The sequences also touch on the claim "Death is good because it kills old scientists that are stuck in their ways, which allows science to march forward", to which Eliezer replies "Jesus Christ, sure, but you can just make scientists retire without killing them." But you do still need to implement the part where you actually make them retire as public figures.

What to do if you spot a harmful ideology is a political question, and in some cases the answer might be pretty orthogonal to rationality. (although you might mean the more specific subquestion of “how to stop harmful ideologies while maintaining/raising the sanity waterline.” i.e. many people fight harmful ideologies with counter ideologies).

Right, politics as usual seems to imply a sequence of ideologies replacing each other, and it might just be a random walk as far as how beneficial/harmful the ideologies are. My question is how to do better than that.

It seems like you have more specific questions in mind (would be curious what your motivating examples are).

My original motivating examples came from contemporary US politics, so it's probably better not to bring them up here, but I'm now also worried about the implications for the "long reflection" / "great deliberation".

first try to make the environment conducive to argument

By doing what? I mean it seems possible to build environments conducive to argument for a relatively small group of people, like LW, but I don't know what can be done to push a whole society in that direction, so that's part of my question.

The way I’d have carved up your question space is less like “how to stop/fight ideologies” and more like “what to do about the general fact of some sets of beliefs becoming sticky over time?”

I think I'm still more inclined to use the first framing, because if we make beliefs less sticky, it might just speed up the cycles of ideologies replacing each other, and it seems like the bigger problem is "beliefs as rallying flags" (i.e., beliefs can be selected for because they are good rallying flags instead of for epistemic reasons).

(btw, I think this comment would work well as a question, which might make it easier to reference in the future)

I'd have no problem with turning it into a top-level question post, if that's something you can do. (I posted it in Open Thread in case there was already some sequence of posts that directly addressed my questions, that I simply missed.) If not, I may write a question post after I do some more research and think/talk things over.

Hi there! I've stumbled upon this forum on and off while reading up on Effective Altruism, which I first got acquainted with in December 2018 (less than a year ago). I'm interested in learning how to think and act more rationally, both for my own personal development and within EA issues. I look forward to binging on all the interesting articles and discussions on here and possibly meeting up IRL with people in Stockholm, Sweden.

Question: does my username show up alright? No font break or weird symbols?

Edit: I had it changed from Siljamäki to Siljamaeki. Just in case.

Welcome!

Username looks good to me (Chrome + Mac OS).

Hey people!

I'm "new" here, having spent the last 2 years reading and following the "core" material. I have no clue how I got here. I remember following an idle path of exploration and suddenly finding this beautiful place where it seems like there are true adults.

I'm a young adult constantly amazed at the scope of this world, and a composition student who has somehow gained (mild) success (read: gotten paid anything at all). I've used basic rationality tools when deciding if my (then) relationship was worth it, found out that it wasn't, didn't accept the answer, tried again when everything in that relationship was on fire, and managed to leave relatively unharmed. I hope to gain some friendships from this community, and I'm looking for people willing to do some betting to help train a beginner rationalist mind.

I think it's time for me to engage with the community, now that I'm young and able to change my habits "easily" for the better. I've been thinking of starting some sort of rationalist hangout/meetup in Malmö, Sweden (where I currently live). I'm slightly unsure where I could check if there's any interest at all; pointers would be welcome :)

Adding yourself to the map is the first thing I would do, and then most likely I would see whether there have been any meetups historically anywhere close to your area. It appears there is a meetup in 10 days in Copenhagen, which seems pretty close to Malmo.

https://lesswrong.com/events/A7LAzFXD79pgFtsdA/cph-meetup-10-10-19

At that meetup you might also be able to figure out whether there are any people directly from your area interested in a meetup.

Rationality is basically therapy [citation needed]. A common type of therapy is couples therapy. As such, you'd think that 'couples rationality' would exist. I guess it partially does (Double Crux, Againstness, "group rationality" when n=2, polyamory advocacy), but it seems less prevalent than you'd naively think. Maybe because rationalists tend to be young unmarried people? Still, it seems like a shame that it's not more of a thing.

Aumann's agreement theorem: two rational agents with a common prior, whose posteriors are common knowledge, cannot agree to disagree.

Share Models, Not Beliefs

For bigger groups: Voting Theory Primer for Rationalists

Aumann's agreement theorem is an extremely bad basis for any kind of couples therapy...

...if less than two of them are rationalists?

Literally no one is rational enough to actually reach Aumann agreement on anything but a simple toy problem. See https://www.lesswrong.com/posts/JdK3kr4ug9kJvKzGy/probability-space-and-aumann-agreement
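For concreteness, here is roughly the kind of toy problem that post works with, as a minimal Python sketch (my own construction, not taken from the linked post): two agents with a common uniform prior over four worlds and different information partitions alternately announce their posteriors for an event; each announcement shrinks the public set of worlds consistent with everything said so far, until the posteriors agree.

```python
from fractions import Fraction

def posterior(event, info):
    """P(event | info) under a uniform prior over worlds."""
    return Fraction(len(event & info), len(info))

def cell(partition, w):
    """The partition cell containing world w."""
    return next(c for c in partition if w in c)

def aumann_dialogue(omega, event, part_a, part_b, world, rounds=5):
    """Agents alternately announce posteriors. Each announcement tells the
    other which worlds are consistent with it, shrinking the public set."""
    public = set(omega)
    announcements = []
    for _ in range(rounds):
        for part in (part_a, part_b):
            p = posterior(event, cell(part, world) & public)
            announcements.append(p)
            # The true world must be one in which this agent would have
            # announced exactly p, given the current public information.
            public = {w for w in public
                      if posterior(event, cell(part, w) & public) == p}
    return announcements

# Toy setup: four equally likely worlds; the event is "world 0 happened".
omega = {0, 1, 2, 3}
event = {0}
part_a = [{0, 1}, {2, 3}]   # A learns whether the world is "low" or "high"
part_b = [{0, 2}, {1, 3}]   # B learns the world's parity
result = aumann_dialogue(omega, event, part_a, part_b, world=0)

# A opens at 1/2; after one exchange both agents' posteriors settle at 1.
assert result[0] == Fraction(1, 2)
assert result[-1] == result[-2] == 1
```

Even in this four-world case, reaching agreement requires each agent to model exactly which worlds are consistent with the other's announcements, which is the part that doesn't scale to humans.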

Therapy is a specific setting: you have a therapist and you have a client (or two). Most rationality techniques, on the other hand, seem to be designed to be done by a single person.

I found out about LessWrong via the community session at the 35th Chaos Communication Congress. It was by far the best conversation I had at the congress. And that says something, because during congress I usually have lots and lots of good conversations.

Personally I feel like there are rather-emotional and rather-rational people. Personally I'm far into the rather-rational territory and I look forward to meeting new people, learning about new ideas and generally advancing my decision making.

I study computer science, and I've read one or another grand philosophical book so far... I'd personally consider myself "GIT/GP/GO", which is Geek Code v3 for "Geek of Information Technology / Geek of Philosophy / Geek of Other".

I've noticed I navigate my entertainment largely by things to avoid. I hate coming of age tales in general and anything involving a school in particular. I despise children-of-destiny stories, which is weird because I've always liked prophecies. I avoid books when people talk about the worldbuilding.

This strikes me as strange considering how much of my reading when I was young consisted of a child of destiny who comes of age amid crappy worldbuilding. Maybe it is an acquired sensitivity or something.



What do you like?

Lately short stories, action, and good prose. Short stories are an excellent antidote to the glut of long book series; they don't allow enough space for fluff, so I find they are consistently better reads. Also lower investment, which is nice. And good prose is good prose, like always.

A year or so ago I read some of Ursula K. Le Guin's short stories, and that was when I really noticed that there were levels to the whole business. I don't recall the story, but the scene which struck me was walking down a road in the autumn. I now suspect that depicting banal events well is a mark of craft in the same way as drawing a circle or squaring an edge.

Has anyone written a summary of all organizations that work on AI alignment? If not, what is the best way to keep track of that?

This post has a discussion of every major alignment organization, and summarizes their mission to some extent.

Question/feature request: does cross-posting automatically add a canonical URL element pointing to the original content? If not, would it be possible to do so? (Google doesn't necessarily penalise duplicate content, but it does affect search rankings, etc.)

We already implemented this!

When we set up crossposting we can set a flag on whether to have the canonical URL point towards its original source (this doesn't always make sense, for example for things like the AI Alignment Newsletter), but if you want to automatically crosspost while preserving the canonical URL we can set that up for you.
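For anyone unfamiliar, the canonical URL element being discussed is a single tag in the crosspost's `<head>` (the URL below is a placeholder):

```html
<!-- On the crossposted copy: tells search engines to credit the original. -->
<link rel="canonical" href="https://example.com/original-post" />
```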

neat! thanks.