Linch — LessWrong profile

Comments (sorted by newest)

Richard Ngo's Shortform
Linch · 2d

The main reason I disagree with both this comment and the OP is that both rest on the underlying assumption that we are at a nadir (a local nadir?) of connectedness-with-reality, whereas from my read of history I see no evidence of this, and indeed plenty of evidence against it.

People used to be confused about all sorts of things, including, but not limited to, the supernatural, the causes of disease, causality itself, the capabilities of women, whether children can have conscious experiences, and so forth. 

I think we've gotten more reasonable about almost everything, with a few minor exceptions that people seem to like highlighting (I assume in part because they're so rare). 

The past is a foreign place, and mostly not a pleasant one.

Linch's Shortform
Linch · 5d

In both programming and mathematics, there's a sense that only 3 numbers need no justification (0, 1, infinity). Everything else is messier.
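
(As a rough illustration of the programming version of this "zero, one, infinity" idea; the names and numbers below are made up:)

```python
# "Zero, one, infinity": limits of none, exactly one, or unbounded rarely
# need a defense, but any other specific constant invites a "why that number?"
# The names and values here are illustrative only.

MAX_TAGS = 7          # an arbitrary cap -- this number demands a justification
ALLOW_TAG = True      # "zero or one" -- easy to defend without argument
tags: list[str] = []  # unbounded -- also needs no special justification
```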

Unfortunately something similar is true for arguments as well. This creates a problem.

Much of the time, you want to argue that people underrate X (or overrate X). Or that people should be more Y (or less Y).

For example, people might underrate human rationality. Or overrate credentials. Or underrate near-term AI risks. Or overrate vegan food. Or underrate the case for moral realism. Or overrate Palestine’s claims. Or underrate Kendrick Lamar. (These are all real discussions I’ve had).

Much of the time, if a writer thinks their readers are underrating X, they’ll make an argument in favor of X. (Sounds obvious, I know).

But X and Y are usually not precise things that you can measure, never mind assign a specific value to.

So if a writer argues for X, usually they don’t have a good sense of what value the reader assigns X (in part because of a lack of good statistics, and in part because a specific reader is a specific person with their own idiosyncratic views). Nor does a writer have a precise sense of what the optimal value of X ought to be, just that it’s higher (or lower) than what others think.

This creates major problems for both communication and clarity of thought!

One solution of course is to be an extremist. But this is a bad solution unless you actually think maximal (or minimal) X is good.

Sometimes the structure of reality, or the structure of our disagreements, creates natural midpoints around which we can explicate our disagreements. For example, in my debate with BB, a natural midpoint is (we believe[1]) whether bees have net positive or net negative welfare: "0" is a natural midpoint. In my second post on the "rising premium of life", I can naturally contrast my preferred hypothesis (the premium of life is rising) against the null hypothesis that it is mostly unchanged, or against the alternate hypothesis that it is falling.

But reality often doesn’t give us such shortcuts! What are natural midpoints to argue for in terms of appropriate levels of credentialism? Or appropriate faith in human rationality? Or how much we should like Kendrick Lamar?

I don’t want to give people the illusion of an answer here; I’m just presenting the problem as-is.

[1] This is disputed; see here.

henryaj's Shortform
Linch · 5d

Sounds right to me too, but it's an empirical experiment that I'd be keen on people trying!

Linch's Shortform
Linch · 9d*

https://linch.substack.com/p/the-puzzle-of-war

I wrote about Fearon (1995)'s puzzle: reasonable countries, under most realistic circumstances, always have better options than to go to war. Yet wars still happen. Why?
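
(To make the logic concrete, here's a minimal numerical sketch of the bargaining-range argument behind the puzzle, with the disputed prize normalized to size 1; the parameter names and values are mine, not Fearon's notation:)

```python
# Minimal sketch of the bargaining logic behind Fearon's puzzle, assuming a
# disputed prize normalized to size 1. Parameter names and values are mine.

def bargaining_range(p_win_a: float, cost_a: float, cost_b: float) -> tuple[float, float]:
    """Splits of the prize (A's share) that both sides weakly prefer to war.

    A's expected value of fighting is p_win_a - cost_a;
    B's is (1 - p_win_a) - cost_b.
    Any split x with p_win_a - cost_a <= x <= p_win_a + cost_b
    leaves both sides at least as well off as fighting.
    """
    return (p_win_a - cost_a, p_win_a + cost_b)

low, high = bargaining_range(p_win_a=0.6, cost_a=0.1, cost_b=0.15)
print(f"Any split giving A between {low:.2f} and {high:.2f} beats war for both sides.")
# Because fighting is costly (cost_a + cost_b > 0), this range is never empty --
# which is exactly why war looks puzzling for rational, fully informed states.
```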

I discuss 4 different explanations, including 2 of Fearon's (private information with incentives to mislead, and commitment problems) and 2 others (irrational decisionmakers, and decisionmakers who are game-theoretically rational but have unreasonable and/or destructive preferences).

Before LLM Psychosis, There Was Yes-Man Psychosis
Linch · 18d

I disagree with a lot of John's sociological theories, but this is one I independently have fairly high credence in. I think it elegantly explains poor decisions by seemingly smart people like Putin, SBF, etc., as well as why dictators often perform poorly (outside of a few exceptions like LKY).

Banning Said Achmiz (and broader thoughts on moderation)
Linch · 19d*

The other complaint I had about that segment is that I do not believe the microeconomics-informed reading of criminal punishment (as exemplified by Gary Becker's work) has held up well.

I think it's often given as an example of where microeconomics-informed reasoning has led policymakers astray (criminals are often bad at expected value calculations, even intuitively), and certainty of punishment seems to matter far more than the expected cost of punishment. I don't have a direct source for this, but I think it's a common position among economists.
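
(A toy illustration of the distinction, with made-up numbers: a Becker-style expected-cost model treats the two regimes below as identical deterrents, whereas the certainty-matters view predicts the first deters far more.)

```python
# Toy numbers (mine) showing how "expected cost of punishment" can be held
# fixed while certainty of punishment varies enormously.

def expected_sentence(p_caught: float, sentence_years: float) -> float:
    """The quantity a Becker-style rational offender is assumed to weigh."""
    return p_caught * sentence_years

high_certainty = expected_sentence(p_caught=0.50, sentence_years=2)   # 1.0 expected years
low_certainty  = expected_sentence(p_caught=0.05, sentence_years=20)  # 1.0 expected years

print(high_certainty == low_certainty)  # True -- identical on paper,
# but the common empirical claim is that the high-certainty regime deters far more.
```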

Linch's Shortform
Linch · 19d*

Beneath Psychology sequence -- too long. Sorry!

Religion for Rationalists -- very low, maybe 1/10? It just doesn't seem like the type of thing that has an easy truth-value to it, which is frustrating. I definitely buy the theoretical argument that religion is instrumentally rational for many people[1]; what's lacking is empirics and/or models. But nothing in the post itself is woo-y.

Symmetry Theory of Valence -- 5-6/10? I dunno; I've looked into it a bit over the years, but it's far from any of the things I've personally deeply studied. They trigger a bunch of red flags; however, I'd be surprised but not shocked if it turns out I'm completely wrong here. I know Scott (whose epistemics I broadly trust) and somebody else I know endorse them.

But tbc I'm not the arbiter of what is and is not woo lol.

  1. ^

    And I'm open to the instrumental rationality being large enough that it even increases epistemic rationality. Analogy: if you're a scientist who's asked to believe a false thing to retain your funding, it might well be worth it even from a purely truth-seeking perspective, though of course it's a dangerous path.

Linch's Shortform
Linch · 19d*

I don't feel strongly about what the specific solutions are. I think it's easier to diagnose a problem than to propose a fix. 

In particular, I worry about biases in proposing solutions that favor my background and things I'm good at.

I think the way modern physics is taught probably gives people an overly clean/neat understanding of how most of the world works, and of how to figure out problems in the world, but this might be ameliorated by studying the history of physics and how people came to certain conclusions. Though again, this could easily be because I didn't invest the points to learn physics that deeply myself, so there might be major holes in what I don't know and in my own epistemics.

I think looking at relevant statistics (including Our World In Data) is often good, though it depends on the specific questions you're interested in investigating. I think the questions you should often ask yourself for any interesting discovery or theory you want to propose are something like "how can I cheaply gather more data?" and "is the data already out there?" Some questions you might be interested in are OWID-shaped, though most probably will not be.

I found forecasting edifying for my own education and for improving my own epistemics, but I don't know what percentage of LessWrongers currently forecast, and I don't have a good sense of whether its absence is limiting LessWrongers. Forecasting/reading textbooks/reading papers/reading high-quality blogposts all seem like plausible contenders for good uses of time.

Linch's Shortform
Linch · 19d*

My problems with the EA forum (or really EA-style reasoning as I've seen it) are the overuse of overcomplicated modeling tools which are claimed to be based on hard data and statistics, when the amount and quality of that data is far too small and weak to "buy" such a complicated tool. So in some sense, perhaps, they move too far in the opposite direction.

Interestingly, I think this mirrors the debates over "hard" vs "soft" obscurantism in the social sciences: hard obscurantism (as is common in old-school economics) relies on an over-focus on mathematical modeling and complicated equations built on scant data and debatable theory, while soft obscurantism (as is common in most of the social sciences and humanities, outside of econ and maybe modern psychology) relies on complicated verbal debates and dense jargon. I think my complaints about LW (outside of AI) mirror complaints about soft obscurantism, and your complaints about EA-forum-style math modeling mirror complaints about hard obscurantism.

To be clear I don't think our critiques are at odds with each other.

In economics, the main solution over the last few decades appears mostly to have been to limit scope and turn to greater empiricism ("better data beats better theory"), though that of course has its own downsides (streetlight bias, less investigation into the more important issues, replication crises). I think my suggestion to focus more on data is helpful in that regard.

Linch's Shortform
Linch · 20d

My own thoughts on LessWrong culture, specifically focused on things I personally don't like about it (while acknowledging it does many things well). I say this as someone who cares a lot about epistemic rationality in my own thinking, and who aspires to be more rational and calibrated in a number of ways.

Broadly, I tend not to like many of the posts here that are not about AI. The main exceptions are posts focused on objective reality, with specific, tightly focused arguments (eg).

  • I think many of the posts here tend to be overtheorized, with not enough effort spent on studying facts and categorizing empirical regularities about the world (in science, the difference between a "Theory" and a "Law").
    • My premium of life post is an example of the type of post I wish other people would write more of.
  • Many of the commentators also seem to have background theories about the world that to me seem implausibly neat (eg a lot of folk evolutionary psychology, or a common belief that regulation drives everything).
  • Good epistemics is built on a scaffolding of facts, and I do not believe that many people on LessWrong spend enough effort checking whether their load-bearing facts are true.
  • Many of the posts here have a high verbal tilt; I think verbal explanations are good for explaining concepts you already understand very well to normal people, but verbal reasoning is not a reliable guide to discovering truth. Meanwhile, the posts tend to be lighter on statistics, data, and simple mathematical reasoning.
  • The community overall seems more tolerant of post-rationality and "woo" than I would've expected the standard-bearers of rationality to be.
  • The comments I like the most tend to be ones that are a) tightly focused, b) built on easy-to-understand factual or logical claims, c) focused on important load-bearing elements of the original argument and/or easy to address, and d) crafted so as not to elicit strong emotional reactions from either the debate partner or any onlookers.
    • Here are some comments of mine I like in this vein.
    • I think the EA Forum is epistemically better than LessWrong in some key ways, especially outside of highly politicized topics. Notably, there is a higher appreciation of facts and factual corrections.
      • Relatedly, this is why I don't agree with broad generalizations that LW in general has "better epistemics".
  • Finally, I dislike the arrogant, brash, confident tone of many posts on LessWrong. I think a lot of this is plausibly inherited from Eliezer, who is used to communicating complex ideas to people less intelligent and/or rational than he is. This is not the experience of a typical poster on LessWrong, and I think it's maladaptive for people to adopt Eliezer's style and epistemic confidence in their own writing and thinking.

I realize that this quick take is quite hypocritical in that it displays the same flaws I criticized, as are some of my recent posts. I'm also drafting a post arguing against hypocrisy being a major anti-desideratum, so at least I'm not meta-hypocritical about the whole thing.

Posts

9 · Against Epistemic Democracy: A Epistemic Tier List of What Actually Works · 1mo
23 · A Precocious Baby's Guide to Anthropics · 2mo
42 · Why Reality Has A Well-Known Math Bias · 2mo
19 · The Rising Premium of Life, Part 2 · 2mo
10 · The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything · 2mo
35 · Eating Honey is (Probably) Fine, Actually · 2mo
55 · My "infohazards small working group" Signal Chat may have encountered minor leaks · 5mo
36 · Announcing the Q1 2025 Long-Term Future Fund grant round · 9mo
64 · A Qualitative Case for LTFF: Filling Critical Ecosystem Gaps · 9mo
40 · Long-Term Future Fund: May 2023 to March 2024 Payout recommendations · 1y