If it’s worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

  1. What accomplishments are you celebrating from the last month?
  2. What are you reading?
  3. What reflections do you have for yourself or others from the last month?
  4. What have you tried out this month?
  5. (Teaser for my next post) What is your relationship with yourself?
Vaniver (moderator, 6y):

Mod notice: There's a discussion going on in the Bay Area rationality community involving multiple users of LW that includes allegations of serious misconduct. We don't think LW is a good venue to discuss the issue or conduct investigations, but we think it's important for the safety and health of the LW community that we host links to a summary of findings once the discussion has concluded. If you'd like to discuss this policy, please send a private message to me and I'll talk it over with the mod team. [Comments on this comment are disabled.]

The discussion concluded without a summary of findings, in part because ialdabaoth went into exile. Moving forward, ialdabaoth is banned from LessWrong, for reasons discussed there (related to, but not the same as, the allegations).

I've been thinking a lot lately about where I want to live long-term. I'm currently in Madison, WI, which is really nice, but kinda small and has an unfortunately hot/humid summer. Financially I can live pretty much anywhere I want, except maybe Monaco.

Things I want, not in order of importance:

1. A nice house. In an ideal world, the house would house several of my closest friends, be walkable to parks, shops, and restaurants, and be close enough to other friends that they drop by regularly. I am also very interested in running a public space or a semi-public space adjacent or close to the house, possibly a makerspace, possibly a cafe, or something else. This is one of the reasons it's not instantly obvious that I should move to Berkeley or Manhattan or something. I'm financially well-off but there's like an order-of-magnitude difference in the cost of having a nice big place to live. On the other hand, I'm also pretty flexible about living in an apartment or something, but for the long term I much prefer having a space I own and can modify and build up to become better and better over the years.

2. People. My best friend and one of my partners lives in Ma...

A major consideration / uncertainty here seems to be "is a hub in Madison something remotely practical?", and you might want to specifically test that with something kickstarter-esque (i.e. "I will try this if and only if at least X people commit to moving here, each of whom commits if and only if X others do, etc.")

(Testing this unfortunately is a fair bit of work, but relatively small compared to the work involved in the actual project, so maybe it's also a good test of "can drethelin pull this off?")

Something I've been thinking of doing is asking a lot of specific people I like who else would have to move somewhere before they would move, and seeing if there's a smallish cluster.
Makes sense.

Seems like some other people want to bring back the Sabbath.

Folk values -- the qualities of the "I love science" crowd as contrasted to the qualities of actual, exceptional scientists -- matter too. The common folk outnumber the epic heroes.

This holds true even if you believe that everyone can become an epic hero! People need to know, rather than guess and hope, that walking the path to becoming an epic hero might look and feel rather different than doing active epic heroing. In theory one ought to be able to derive the appropriate instrumental goals from the terminal goal, but in practice people very frequently mess this up.

The general crowd has a different job than the inner circle, and treating this difference as orthogonal propagates fewer errors than treating it as a matter of degree.

Folk rationality needs to strongly protect against infohazards until one gets a chance to develop less vulnerable internal habits. Folk rationality needs to celebrate successfully satisficing goals and identifying picas rather than going for hard optimization because amateur min-maxing just spawns Goodhart demons every which way. Folk rationality needs to prize keeping social commitments and good conflict mediation tools; it needs to honor social...


I think it would be beneficial to always link the last open thread in the main text of a new open thread.

The EU seems to be getting rid of the habit of changing the clocks twice a year, in an exercise of listening to public feedback.

He said that the decision was taken after a vast majority of EU citizens — primarily from Germany — who took part in a survey on the issue called for an end to biannual clock changes.
Massive support for halting daylight saving time
Over 80 percent of respondents supported abolishing changing the clocks in summer and winter in a survey that ran between July 4 and August 16, according to media reports on the results.

It's interesting that the EU seems to be able to coordinate on an issue like this, where the right answer is more or less obvious but the coordination problem is massive.

Do we have other similar problems with obvious answers that are just a matter of getting enough people coordinated?

Rafael Harth (6y):
I wouldn't call the answer obvious. I'm not even sure if I could have guessed the majority view on this beforehand. Why do you think it's obvious? Are there no upsides to changing or are the downsides too significant?

The main argument in favor of changing the clocks, that it supposedly saves energy, doesn't seem to hold up these days.

Two examples of issues: people seem to work 16 minutes less on the Monday after the switch to daylight saving time. It also increases heart attacks.

We can detect that when we switch between "normal" and "daylight saving" time, bad things happen at the transitions. But that doesn't mean that switching is worse than not switching: we don't know what bad things would happen if we didn't switch. (E.g., one reason for the bad outcomes is that people's sleep pattern is disturbed, which has bad health effects. But it might also be bad for sleep to have dawn as early, relative to the hours people want to sleep, as it would be in the middle of summer without daylight saving time.)

I remember reading on LessWrong about a study a while back that compared trained psychologists to laypeople and found that the trained psychologists didn't do any better. Does anybody know the study or LessWrong post?

Eliezer made this attempt at naming a large number computable by a small Turing machine. What I'm wondering is exactly what axioms we need to use in order to prove that this Turing machine does indeed halt. The description of the Turing machine uses a large cardinal axiom ("there exists an I0 rank-into-rank cardinal"), but I don't think that assuming this cardinal is enough to prove that the machine halts. Is it enough to assume that this axiom is consistent? Or is something stronger needed?

I used to be quite good at math at high school, but I haven't studied it afterwards. This seems like a good opportunity to ask: Which book(s) should I read in order to fully understand that post?

Assume great knowledge of high-school math, but almost nothing beyond that. I want to get from there to... understanding the cardinals and ordinals. I have a vague impression of what they likely are, but I'd like to have a solid foundation, i.e. to know the definitions and to understand the proofs (in the ideal case, to be able to prove some things independently).

Bonus points if the books you mention are available at Library Genesis. ;)

As well as ordinals and cardinals, Eliezer's construction also needs concepts from the areas of computability and formal logic. A good book to get introduced to these areas is Boolos' "Computability and Logic".

Thank you!

Two good first books on set theory (with a similar scope) are

  • H. B. Enderton, Elements of Set Theory
  • Karel Hrbacek, Thomas Jech, Introduction to Set Theory

(Though they might be insufficient to parse the post.)

Keep in mind that set theory has a very different character from most math, so it might be better to turn to something else first if "studying math" is more of a motivation.

Thank you!
The only step in which his machine can fail to halt is "Run all programs such that a halting proof exists, until they halt." A program would have to have a halting proof, yet not halt. ~~Therefore, beyond what we need to talk about Turing machines at all, the only extra axiom needed is "T is consistent."~~

Consistency of T isn't enough, is it? For example, the theory (PA + "the program that searches for a contradiction in PA halts") is consistent, even though that program doesn't halt.

I don't follow. I agree that (PA + "PA is inconsistent") is consistent. How does it follow that consistency of T isn't enough? The way I use consistency there is: "If T proves that a program halts, then that program does halt and we can safely run it."

I'm arguing that, for a theory T and Turing machine P, "T is consistent" and "T proves that P halts" aren't together enough to deduce that P halts. As a counterexample, I suggested T = PA + "PA is inconsistent" and P = "search for an inconsistency in PA". This P doesn't halt even though T is consistent and proves it halts. So if it doesn't work for that T and P, I don't see why it would work for the original T and P.

Right. Perhaps the axiom schema "If T proves φ, then φ"?

Yeah, I think that's probably right. I thought of that before, but I was a bit worried about it because Löb's Theorem says that a theory can never prove this axiom schema about itself. But I think we're safe here because we're assuming "If T proves φ, then φ" while not actually working in T.
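The counterexample in this subthread can be written out explicitly. This is a sketch, and the closing schema is the standard (local) reflection principle, which is what "If T proves φ, then φ" amounts to:

```latex
% Let $T = \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})$, and let $P$ be the machine
% that searches for a PA-proof of $0 = 1$.
% By G\"odel's second incompleteness theorem, $T$ is consistent (assuming PA is),
% and $T \vdash \neg\mathrm{Con}(\mathrm{PA})$, i.e.\ $T$ proves that $P$ halts --
% yet $P$ in fact never halts.
% So $\mathrm{Con}(T)$ alone does not license running $T$-provably-halting programs.
% What does suffice is the reflection schema over $T$, one instance per sentence $\varphi$:
\[
  \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi
\]
```

As noted above, by Löb's theorem T cannot prove its own reflection schema, so this must be assumed in the metatheory rather than inside T.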

Happy Petrov Day!

I'm a bit confused by RationalWiki. Is it maintained by anyone? I saw the page for EY, and it seemed to be either genuinely harsh/scathing/dismissive, or a poorly executed inside joke.

RationalWiki is maintained by people who really dislike Less Wrong in general and Eliezer personally.

My own view is that RationalWiki is a terrible, terrible source for anything.

Thanks, will take that into account. Did it start as a LW project and then shift, or was it that way from the beginning?

RationalWiki is older than LW, and their definition of "rationality" is quite different from the one used here.

To put it simply, their "rationality" means science as taught at universities + politically correct ideas; and their "irrationality" means pseudoscience + religion + politically incorrect ideas + everything that feels weird (such as the many-worlds hypothesis).

Also, their idea of rational discussion is that after you have decided that something is irrational, you should describe it in a snarky way, and feel free to exaggerate if it makes the page more funny. So when later anyone points out a factual error in your article, you can defend it by saying "it was obviously a joke, moron".

In my understanding, this is how they most likely got obsessed with Eliezer and LessWrong:

1) How does a high-school dropout dare to write a series of articles about quantum physics? Only university professors are allowed to have opinions on such a topic. Obviously, he must be a crackpot. And he even identifies as a libertarian, which makes him a perfect target for RationalWiki: attack pseudoscience and right-wing politics in the same strike!

2) Oops, a debate at S...

It's worth noting that David Gerard did contribute a lot on LessWrong in its early days as well, so he's not really someone who's simply an outsider.

The Wikipedia page on LW doesn't seem particularly awful at the moment. (And in particular it does in fact mention effective altruism.)

Slightly better than the last time I saw it.

Still, the "Neoreaction" section is 3x longer than the "Effective Altruism" section. Does anyone other than David Gerard believe this impartially describes Less Wrong? (And where are the sections for the other political alignments mentioned in LW surveys? Oh, we are cherry picking, of course.)

No mention of the Sequences, other than "seed material to create the community blog". I guess truly no one reads them anymore. :(

I guess truly no one reads them anymore. :(

Not true!

ReadTheSequences.com has gotten a steady ~20k–25k monthly page views (edit: excluding bots/crawlers, of course!) for 11 months and counting now, and I am aware of a half-dozen rationality reading groups around the world which are doing Sequence readings (and that’s just those using my site).

(And that doesn’t, of course, count people who are reading the Sequences via LW/GW, or by downloading and reading the e-book.)

We are getting about 20k page hits per month on the /rationality page on LessWrong, and something in the 100k range on all sequences posts combined.
Cherry-picking indeed! The NRx section is about 2.5x the length of the EA section (less if you ignore the citations) and about 1/4 of it is the statement "Eliezer Yudkowsky has strongly repudiated neoreaction". Neoreaction is more interesting because in most places there would be (to a good approximation) zero support for it, rather than the rather little found on LW. I mean, I don't want to claim that the WP page is good, and I too would shed no tears if the section on neoreaction vanished, but it's markedly less terrible than suggested in this thread.

If Jehovah's Witnesses come to my door, I spend a few minutes talking with them, and then ask them to leave and never return, will I also get a subsection "Jehovah's Witnesses" on Wikipedia? I wouldn't consider that okay even if the subsection contained the words "then Viliam told them to go away". Like, why mention it at all, if that's not what I am about?

I suppose if there was a longer article about LW, I wouldn't mind spending a sentence or two on NR. It's just that in the current version, the mention is disproportionately long -- and it has its own subsection to make it even more salient. Compare with how much space the Sequences get; actually, they're not mentioned at all. But there is a whole paragraph about the purpose of Less Wrong. One paragraph about everything LW is about, and one paragraph mentioning that NR was here. Fair and balanced.

What if a bunch of JWs camped out in your garden for a month, and that was one of the places where more JWs congregated than anywhere else nearby? I think then you'd be in danger of being known as "that guy who had the JWs in his garden", and if you had a Wikipedia page then it might well mention that. It would suck, it would doubtless give a wrong impression of you, but I don't think you'd have much grounds for complaint about the WP page.

LW had neoreactionaries camped out in its garden for a while. It kinda sucked (though some of them were fairly smart and interesting when they weren't explaining smartly and interestingly all about how black people are stupid and we ought to bring back slavery; it's not like there was no reason at all why they weren't all just downvoted to oblivion and banned from the site) and the perception of LW as a hive of neoreaction is a shame -- and yes, there are people maliciously promoting that perception and I wish they wouldn't -- but I'm not convinced that that WP article is either unfair or harmful.

It says "neoreactionaries have taken an interest in LW" rather than "LW has taken an interest in neoreaction", and the only specific LW attitude to neoreaction mentioned is that the guy who founded the site thinks NRx is terrible. I don't think anyone is going to be persuaded by the WP article that LW is full of neoreactionaries, and if someone who has that impression reads the article they might even be persuaded that they're wrong.

Again, for the avoidance of doubt, I'm not claiming that the WP article is good. But it's hardly "as bad as possible" either. That's all.

I mostly agree, except for:

I don't think anyone is going to be persuaded by the WP article that LW is full of neoreactionaries, and if someone who has that impression reads the article they might even be persuaded that they're wrong.

I believe this is not how most people think. The default human mode is thinking in associations. Most people will read the article and remember that LW is associated with something weird right-wing. Especially when "neoreaction" is a section header, which makes it hard to miss. The details about who took interest in whom, if they notice them at all, will be quickly forgotten. (Just like when you publicly debunk some myths, it can actually make people believe them more, because they will later remember they heard it, and forget it was in the context of debunking.)

If the article instead had a section called "politics on LW" mentioning the 'politics is the mindkiller' slogan, how Eliezer is a libertarian, and then the complete results of a political poll (including the NR)... most people would not remember that NR was mentioned there.

Similarly, the length of sections is instinctively perceived as a degree of...

Fair point about association versus actual thinking. (Though at least some versions of the backfire effect are doubtful...)

I don't think this is all David Gerard's fault (at least, not the fault of his activities on Wikipedia). Wikipedia is explicitly meant to be a summary of information available in "reliable sources" elsewhere, and unfortunately I think it really is true that most of the stuff about LW in such sources is about things one can point at and laugh or sneer, like Roko's basilisk and neoreaction. That may be a state of affairs that David Gerard and RationalWiki have deliberately fostered -- it certainly doesn't seem to be one they've discouraged! -- but I think the Wikipedia article might well look just the way it does now if there were some entirely impartial but Wikipedia-rules-lawyering third party watching it closely instead of DG.

E.g., however informative the LW poll results might be, it's true that they're not found in a "reliable source" in the Wikipedia sense. And however marginal Roko's basilisk might be, it's true that it's attracted outside attention and been written about by "reliable sources".
This is a good point. The Wikipedia pages for other sites, like Reddit, also focus unduly on controversy.
So there seems to be an upstream problem that the line between "reliable sources" and "clickbait" is quite blurred these days. This is probably not true for things that are typically written about in textbooks; but true for things that are typically written about in mainstream press.
Have you noticed that most writings by laypeople on QM actually are crackpottery? RW's priors are in the right place, at least.
RW's priors are in the right place, at least.

I fully agree (about the priors on QM). The problem is somewhere else. I see two major flaws:

First, the "rationality" of RW lacks self-reflection. They sternly judge others, but consider themselves flawless. To explain what I mean, imagine that I would know nothing about QM other than the fact that 99% of online writings about QM are crackpottery; and then I would find an article about QM that sounds weird. -- Would I trust the article? No. That's what the priors are for. Would I write my own article denouncing the author of the other article as a crackpot? No. Because I would be aware that I know nothing about QM, and that despite the 99% probability of crackpottery, there is also the 1% probability it is correct; and that my lack of knowledge does not allow me to update after reading the article itself, so I am stuck with my priors. I would try to leave writing the denunciation to someone who actually understands the topic; to someone who can say "X is wrong, because it is actually Y", instead of merely "X is wrong, because, uhm, my priors" or even "X is wrong, trust me, I am the expert"... (read more)

Are you quite sure "they" are a cohesive group? Are you quite sure "they" couldn't possibly include any actual physicists? So LW never makes sweeping denunciations?
I suppose that people who disagree with the snarky way of looking at political opponents will not stick around for long. There is also a difference between a forum and a wiki. (Medium is the message, kind of.) In a forum, you can write an article expressing your opinions, and then I can write an article about why I disagree with your opinions. In a wiki, I will simply revert your edit. Thus, wikis are more likely to converge to a unified view.
Said Achmiz (6y):
No, RationalWiki never had anything to do with Less Wrong.
It started as the leftist alternative to Conservapedia.
Ben Pace (6y):
Really? Do you have any links on that? I wasn’t aware.
The Wikipedia article on it does.
Ben Pace (6y):
You're right, it literally says so in the second line.

I think given that we seem to have settled on Open Threads being stickied, we can get rid of the first bullet point.


How do people organize their long ongoing research projects (academic or otherwise)? I do a lot of these but think I would benefit from more of a system than I have right now.

I write notes in a single plain text file, using the dates they are made to cite them in newer notes. There are two types of notes, brainstorming throw-away ones that maintain the process of thinking about a problem or of learning something (such as carefully reading a paper), and more lucid ones, with some re-reading value, which are marked differently and have a one-sentence summary. The notes are intended to never be made public, so that I feel free to use them to resolve any silly confusions.

Just finished reading Yuval Noah Harari's new book 21 Lessons for the 21st Century. Primary reaction: even if you already know all the things being presented in the book, it is worth a read just because of the clarity into the discussion the book offers.

Without saying anything about the content, I don't find this comment valuable.
Here is a review: https://www.economist.com/books-and-arts/2018/09/01/big-data-is-reshaping-humanity-says-yuval-noah-harari
Interesting because I do. Slightly updated towards reading this.

This article seems to have some bearing on decision theory, but I don't know enough about it or quantum mechanics to say what that bearing might be.

I'd be interested to know others' take on the article.

It's a minor new quantum thought experiment which, as often happens, is being used to promote dumb sensational views about the meaning or implications of quantum mechanics. There's a kind of two-observer entangled system (as in "Hardy's paradox"), and then they say, let's also quantum-erase or recohere one of the observers so that there is no trace of their measurement ever having occurred, and then they get some kind of contradictory expectations with respect to the measurements of the two observers.

Undoing a quantum measurement in the way they propose is akin to squirting perfume from a bottle, then smelling it, and then having all the molecules in the air happening to knock all the perfume molecules back into the bottle, and fluctuations in your brain erasing the memory of the smell. Classically that's possible but utterly unlikely, and exactly the same may be said of undoing a macroscopic quantum measurement, which requires the decohered branches of the wavefunction (corresponding to different measurement outcomes) to then separately evolve so as to converge on the same state and recohere.

Without even analyzing anything in detail, it is hardly surprising that if an observer is subjected to such a highly artificial process, designed to undo a physical event in its totality, then the observer's inferences are going to be skewed somehow. So, you do all this and the observers differ in their quantum predictions somehow. In their first interpretation (2016), Frauchiger and Renner said that this proves many worlds. Now (2018), they say it proves that quantum mechanics can't describe itself. Maybe if they try a third time, they'll hit on the idea that one of the observers is just wrong.
Someone made a post on it.

Should the mind projection fallacy actually be considered a fallacy? It seems like being unable to imagine a scenario where something is possible is in fact Bayesian evidence that it is impossible, but only weak Bayesian evidence. Being unable to imagine a scenario where 2+2=5, for instance, could be considered evidence that 2+2 ever equaling 5 is impossible.

This isn't an accurate description of the mind projection fallacy. The mind projection fallacy happens when someone thinks that some phenomenon occurs in the real world but in fact the phenomenon is a part of the way their mind works. But yes, it's common to almost all fallacies that they are in fact weak Bayesian evidence for whatever they were supposed to support.
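The point that a fallacious argument still carries weak Bayesian evidence can be made concrete with a one-line Bayes update. This is a minimal sketch; the likelihood numbers are made up purely for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from prior P(H) and the likelihoods of E under H and not-H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# H = "the thing is impossible"; E = "I can't imagine a scenario where it's possible".
# Hypothetical likelihoods: failure of imagination is somewhat more likely when the
# thing really is impossible (0.9), but still common when it isn't (0.6).
posterior = bayes_update(prior=0.5, p_e_given_h=0.9, p_e_given_not_h=0.6)
print(round(posterior, 2))  # 0.6: evidence for impossibility, but only a modest update
```

With a likelihood ratio of 0.9/0.6 = 1.5, the prior of 50% moves only to 60%: real evidence, but far short of proof, which is the sense in which the "fallacy" is merely weak evidence.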
Accusations that something or other is a mind-projection generally lack rigorous criteria. The conclusion of a mind-projection argument generally ends up supporting the intuitions of the person making it.