This is a special post for quick takes by Sherrinford. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

1.) Conflict theory in practice: you see conflicts of interest, explain them to your ingroup, and if they don't agree, they are corrupted by the enemy.

2.) Mistake theory in practice: you identify behavior as bad, explain that to everybody, and if they don't agree, either move to 1.) or note that people are very stupid.

It would be great if people first did some literature research before presenting their theory of life, the universe and everything. If they did not find any literature, they should say so.

I considered looking for any studies or documentation about whether blog and website posts are improved by prior research or references.  But then I got distracted, so I just wrote this comment instead.

3Sherrinford3y
At least you didn't write a long longform post :)

German Jokes translated by ChatGPT

Me:

Here's a German joke, could you please translate it to English such that the joke is still funny? Zwei Kerzen unterhalten sich. Sagt die eine zur anderen: "Ist Wasser eigentlich gefährlich?" Die andere antwortet: "Ja, davon kannst du ausgehen!" [For non-German readers: "davon kannst du ausgehen" idiomatically means "you can count on it", but "ausgehen" also means "to go out", i.e. to be extinguished - which is why the candle joke works in German.]

ChatGPT: 

Here's a translation of the German joke into English:

Two candles are talking to each other. One says to the other, "Is water dangerous?" The other answers, "Yes, you can count on it!"

The joke still retains its humor in English, as it relies on a simple play on words ...

3ChristianKl1y
This seems pretty useless for people who don't know both languages to judge. For those who don't know German, basically all the jokes lose their original humor, but ChatGPT pretends that they keep it.
1Sherrinford1y
Thanks, Christian! Of course you are right! In this case, the shortform entry was meant to be funny for people who would understand the languages, like you. But I see that it can also be misleading for people who do not understand them.

Saying that "the control system" does something is about as informative as saying that "emergence" is the cause of something.

5Measure3y
"Control system" means something a bit more specific than "whatever it is that causes the behavior of the system". A control system has an input, an output, and a set point. It varies the output based on the difference (error) between the input and the desired set point (sometimes it has terms for the derivative or for the accumulation of the error, but not always). In practice this means that it is hard to move the output away from the set point since the control system will respond by pushing in the opposite direction.
1Sherrinford3y
In the context in which I have been seeing the statement that "the" control system moves a certain behavior, there is nothing but the claim that the control system does exactly what it is claimed to do. No precise explanation. No precise prediction (sure, the claim is that the output moves towards the set point, but nothing about the time dimension). If anything, the term is always used to "explain" behavior ex post.
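
For concreteness, here is a minimal sketch in Python of the kind of controller Measure describes. It is only an illustration: the class name, gains, set point, and the toy "plant" with its constant disturbance are all invented, not taken from anything in this thread.

```python
# Minimal PID-style controller: output is driven by the error between a
# measured input and a set point (plus optional integral/derivative terms).

class PIDController:
    def __init__(self, set_point, kp=1.0, ki=0.0, kd=0.0):
        self.set_point = set_point
        self.kp, self.ki, self.kd = kp, ki, kd  # proportional/integral/derivative gains
        self.integral = 0.0
        self.prev_error = 0.0

    def output(self, measured, dt=1.0):
        error = self.set_point - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy "plant" whose state is pushed down by a constant disturbance;
# the controller keeps pushing the state back toward the set point.
controller = PIDController(set_point=20.0, kp=0.5)
state = 15.0
for step in range(10):
    state += controller.output(state) - 0.2  # -0.2 = disturbance
    print(f"step {step}: state = {state:.2f}")
```

Note that the sketch also illustrates the complaint above: it predicts movement toward the set point, but says nothing about how fast unless the gains and the plant are actually specified.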

Currently reading Fooled by Randomness, almost 20 years after it was published. By now I have read about a third of it. So far, it seems neither very insightful nor dense; all the insights (or observations) seem to be what you can read in the (relatively short) Wikipedia article. It is also not extremely entertaining.

I wonder whether it was a revealing, revolutionary book back in the day, or whether it reads differently to people with a certain background (or lack thereof), such that my impression is, in some sense, biased. I also wonder whether the other b...

2Rudi C4y
I read Black Swan early in my introduction to heuristics and biases, in my teens. I remember that the book was quite illuminating for me, though I disliked Taleb's narcissism and his disrespect for the truth. I don't think it was so much "insightful" as helping me internalize a few big insights. The book's content definitely overlaps a lot with beginner rationality, so you might not find it worthwhile after all. I read a bit of FbR and about half of Antifragile as well, but I found those much less interesting. An aside: Taleb talks about general topics. It's hard to say new things in that market (it's saturated), and the best parts of his new insights have already become part of the common lexicon.

New results published in Cell suggest that SARS-CoV-2 enters the body via the nasal mucosa, reaches deep parts of the lung via body fluids, and possibly gets into the brain. A second part of the same study suggests that people who had SARS or MERS may have partial immunity against SARS-CoV-2. (Disclaimer: I only read a newspaper summary.)

People worried about AI alignment: At some point, an AI might succeed in convincing people to give it access to the internet!

Big internet company seeing an AI: Hm, we could just build the AI into our search engine.

A list of some beliefs of which I changed my subjective probabilities because of FTX (in one direction or the other):

  • "When rich EA community people give money to political candidates, fear that they just want to buy influence so as to avoid regulation is misplaced."
  • "The EA community is very good at understanding and managing risks, including low-probability high-effect risks"
  • "when success stories are too good to be true, they aren't true."
  • "crypto is a good thing."
  • "People in general understand risk diversification."

Seeing a blog post from 2019 that called the UK and Australia fascistic because melatonin is a prescription medicine makes me update slightly in the direction that discourse norms in the EA/LW communities did not decline due to corona as much as I had thought.

2ChristianKl3y
Link?
1Sherrinford3y
https://medium.com/@robertwiblin/things-i-recommend-you-buy-and-use-rob-edition-1d7b2ce27d68

Having put some thought into the 80,000 Hours career planning document, I think it is time for the next "some weeks of thinking" projects.

Either it's gonna be similar planning processes:

  • the life of my kids, 20 years in the future
  • where should I spend the next 5 years? And where the next 20?
  • a plan for personal finances
  • a health plan
  • a sports plan
  • whom to spend time with
  • personal volunteering/politics/ea
  • writing projects

Or it's gonna be concrete learning projects:

  • GTD
  • a language
  • Python or R
  • project management
  • ... (some of them more like refreshers)

Y...

6Zac Hatfield-Dodds3y
Kids, location, finances, and health are all extraordinarily high-leverage to think about - at least if you act on your plans. Personally I'd start with personal finance, mostly because it should be pretty quick and simple to sort out (not always easy to stick to, but simple). The r/personalfinance subreddit has good flowcharts to follow, and I wrote a list of investing resources here if you want more detail than "buy index funds and get on with the rest of your life".
1Jan Czechowski3y
Did you consider looking at it from an "options" rather than a "goals" perspective? Rather than defining goals and looking for the optimal path to get there, you can brainstorm exploitable options that you have available and that seem to have high returns, and then prioritize them. I recently spent half a day writing down cool ideas for things to do, then collected them in Todoist, and since then, whenever I have time, I go through them. And add something new.
1Sherrinford3y
That seems like an interesting approach that may make good use of personal energy.

If lesswrong.com had to delete itself for some reason, where would you go instead?

5Viliam3y
I suppose many people would then move to astralcodexten.substack.com, and there I would ask the same question in an Open Thread.
3Sherrinford3y
Do you think that commenting in Open Threads is very similar to posting and commenting here?
4Viliam3y
More like, I would ask in the ACX Open Thread "what is the place you go to now that LW is gone?" And then I would follow the crowd. Posting in ACX Open Threads as such... well, you get a smart audience, but there is simply too much content. Reading those threads is a full-time job.
1Sherrinford3y
Interesting focal point, though I wonder how strong the overlap is.

The results of Bob Jacobs's LessWrong survey are quite interesting. It's a pity the sample is so small.

The visualized results (link in his post) are univariate, but I would like to highlight some things:

  • 49 out of 56 respondents identify as "White"
  • 53 out of 59 respondents were born male, and 46 out of 58 identify as male cisgender
  • 47 out of 59 identify as heterosexual (comparison: https://en.wikipedia.org/wiki/Demographics_of_sexual_orientation)
  • 1 out of 55 works in a "blue collar" profession
  • Most people identify as "left of c...

4Viliam4y
If you have 9 people who identify as left-wing and 1 person who identifies as right-wing, many people will hysterically denounce the entire group as "extreme right", based on the fact that the 1 person wasn't banned. Furthermore, if you have people who identify as left-wing, but don't fully buy the current Twitter left-wing orthodoxy, they too will be denounced by some as "extreme right". This skews the perception.
1Sherrinford4y
I don't think that fits what I am talking about:

1. The survey was non-binary. Your first claim does not distinguish extremes and moderates.
2. The survey was anonymous. You cannot ban anonymous people.
3. I see no reason why people should have overstated their leftishness.
4. If your statement is meant to explain why my perception differs from the result, it does not fit. My perception based on posts and comments would have been relatively more rightwing, less liberal / social democratic / green etc.
5. I don't see where leftwing lesswrongers are denounced as rightwing extremists. In particular, I don't see where this explains people identifying as leftwing in the survey.

My model is that in the USA most intelligent people are left-wing. Especially when you define "left-wing" to mean the left 50% of the political spectrum, not just the extreme. And there seem to be many Americans on Less Wrong, just like on most English-speaking websites.

(Note that I am not discussing here why this is so. Maybe the left-wing is inherently correct. Or maybe the intelligent people are just more likely to attend universities where they get brainwashed by the establishment. I am not discussing the cause here, merely observing the outcome.)

So, I would expect Less Wrong to be mostly left-wing (in the 50% sense). My question is, why were you surprised by this outcome?

I don't see where leftwing lesswrongers are denounced as rightwing extremists.

For example, "neoreaction" is the only flavor of politics that is mentioned in the Wikipedia article about LessWrong. It does not claim that it is the predominant political belief, and it even says that Yudkowsky disagrees with them. Nonetheless, it is the only political opinion mentioned in connection with Less Wrong. (This is about making associations rather than making arguments.) So a reader who does not know how ... (read more)

3Sherrinford4y
I agree. (I assume that by political spectrum you refer to something "objective"?) Given the whole Bay-area thing, I would have expected a higher share. In the survey, 37 out of 60 say they are residing in the US. Having been in this forum for a while, my impressions based on posts and comments led me to believe that less than 50% of people on LessWrong would say of themselves that they are on values 1-5 of a 1-10 scale from left-wing to right-wing. In fact, 41/56 did so. The Wikipedia article, as far as I can see, explains in that paragraph where the neoreactionary movement originated. I don't agree on the "intended outcome", or rather, I do not see why I should believe that.

The wikipedia article, as far as I can see, explains in that paragraph where the neoreactionary movement originated.

It's not true, though! The article claims: "The neoreactionary movement first grew on LessWrong, attracted by discussions on the site of eugenics and evolutionary psychology".

I mean, okay, it's true that we've had discussions on eugenics and evolutionary psychology, and it's true that a few of the contrarian nerds who enthusiastically read Overcoming Bias back in the late 'aughts were also a few of the contrarian nerds who enthusiastically read Unqualified Reservations. But "first grew" (Wikipedia) and "originated" (your comment) really doesn't seem like a fair summary of that kind of minor overlap in readership. No one was doing neoreactionary political theorizing on this website. Okay, I don't have an exact formalization of what I mean by "no one" in the previous sentence because I haven't personally read and remembered every post in our archives; maybe there are nonzero posts with nonnegative karma that could be construed to match this description. Still, in essence, you can only make the claim "true" by gerrymandering the construal of those words.

And yet the cha...

7jimrandomh3y
Now might be a good time to revisit that belief. He just got a topic ban on "editing about Scott Siskind, broadly construed". I make the case on the LessWrong article talk page that this topic ban could be construed as extending to the LessWrong article, and also that there is a separate case against him editing that article based on similar behavior to the behavior that he got a topic ban for.
3Sherrinford3y
It would probably really take a lot of time to even understand what is and what is not considered to be in accordance with Wikipedia rules. I note that, as in any other volunteer organization, a lot depends on who wants to put more time and effort into fighting for his/her convictions.
8Viliam3y
The Wikipedia rules are... meaningful, if you interpret them in good faith. But if you are using arguments as soldiers, then pretty much in every situation, for any side, you can find a rule that can be used in its favor. The key is to find it, and express it using words familiar to other admins.

For example, if a person edits an article they are somehow related to, it is either a good thing (we want to encourage experts to edit Wikipedia) or a bad thing (conflict of interest). Depending on whether you agree with the person or not, you choose the relevant rule, and insist that it applies. Similarly, most content can be removed as not important (Wikipedia is an encyclopedia, not a collection of everything) or kept as important to some people (Wikipedia is not on paper, we do not need to worry about the number of pages). Short articles can be removed (as useless) or kept (because even a short article encourages people to extend it). Then there is a debate about what sources are considered "reliable" scientifically (demand higher rigor if you disagree with the conclusion, anything goes if you agree) and politically (conservative or neutral).

Shortly, the rules do not enforce themselves; they need people to enforce their interpretation. As you said, if you visibly volunteer a lot, you gain status within the community. When a conflict escalates, the higher-status person has a much better chance to win. More experienced people can use more sophisticated techniques, for example if you are high status and you break enough rules that there is a realistic chance you might get banned, say: "guys, I honestly believe I did nothing wrong, but I value our friendship and peace so much that I decided to stop editing the article, because I love this community so much". Then everyone rejoices that the problem was resolved without having to ban a high-status person. Two weeks later you change your mind and start editing the article again. If anyone proposes a ban again, your friends will di
1Sherrinford3y
I had to sigh when I read "it can be hard to find editors who don't have a strong opinion about the person. But this is very far from that, likely one reason why the NYT actually used David Gerard as a source".
3Sherrinford4y
Interesting. I had maybe read the Wikipedia article a long time ago, but it did not leave any impression in my memory. Now rereading it, I did not find it dramatic, but I see your point. Tbh, I still do not fully understand how Wikipedia works (that is, I do not have a model of who determines how an article develops). And the "originated" (ok, maybe that is only almost and not fully identical to "first grew") is just what I got from the article. The problem with the association is that it is hard to definitively determine what even makes things mentionable, but once somebody publicly has to distance himself from something, this indicates a public kind of association. Further reading the article, my impression is that it indeed cites things that in Wikipedia count as sources for its claims. If the impression of LessWrong is distorted, then this may be a problem of what kinds of things on LessWrong are covered by media publications? Or maybe it is all just selective citing, but then it should be possible to cite other things.
3Viliam4y
In theory, Wikipedia strives to be impartial. In practice, the rules are always only as good as the judges who uphold them. (All legal systems involve some degree of human judgment somewhere in the loop, because it is impossible to write a set of rules that covers everything and doesn't allow some clever abuse. That's why we talk about the letter and the spirit of the law.)

How to become a Wikipedia admin? You need to spend a lot of time editing Wikipedia in a way other admins consider helpful, and you need to be interested in getting the role. (Probably a few more technical details I forgot.) The good thing is that by doing a lot of useful work you send a costly signal that you care about Wikipedia. The bad thing is that if a certain political opinion becomes dominant among the existing admins, there is no mechanism to fix this bias; it's actually the other way round, because edits disagreeing with the consensus would be judged as harmful, and would probably disqualify their author from becoming an admin in the future.

I don't assume bad faith from most Wikipedia editors. Being wrong about something feels the same from inside as being right; and if other people agree with you, that is usually a good sign. But if you have a few bad actors who can play it smart, who can pretend that their personal grudges are how they actually see the world... considering that other admins already see them as part of the same team, and the same political bias means they already roughly agree on who are the good guys and who are the bad guys... it is not difficult to defend their decisions in front of a jury of their peers. An outsider has no chance in this fight, because the insider is fluent with the local lingo. Whatever they want to argue, they can find a wiki-rule pointing in that direction; of course it would be just as easy for them to find a wiki-rule pointing in the opposite direction (e.g. if you want to edit an article about something you are personally involved with, you have
1Sherrinford4y
Thanks for the history overview! Very interesting. Concerning the wikipedia dynamics, I agree that this is plausible, as it is a plausible development of nearly every volunteer organization, in particular if they try to be grassroots-democratic. The wikipedia-media problem is known (https://xkcd.com/978/) though in this particular case I was a bit surprised about the "original research" and "reliable source" distinction. Many articles there did not seem very "serious". On the other hand, during this whole "lost in hyperspace", I also found "A frequent poster to LessWrong was Michael Anissimov, who was MIRI’s media director until 2013." (https://splinternews.com/the-strange-and-conflicting-world-views-of-silicon-vall-1793857715) which was news to me. In internet years, all this is so long ago that I did not have any such associations. (I would rather have expected lesswrong to be notable for demanding the dissolution of the WHO, but probably that is not yet clickbaity enough.)
2Dirichlet-to-Neumann3y
My model is that what is called "left of center" in the USA is "far right, at least economically"* in Europe (and what the USA calls "socialism" is "what everyone agrees with").

*"Economically" does a fair bit of work here - on issues like immigration, for example, the left-right divide is the same as in the US.

You would hope that people actually saw steelmanning as an ideal to follow. If that was ever true, the corona pandemic and the policy response seem to have killed the demand for it. It seems to have become acceptable to attribute just any kind of seemingly-wrong behavior to either incredible stupidity or incredible malice, both proving that all institutions are completely broken.

4Dagon4y
I like the word "institurions". Some mix of institutions, intuitions, and centurions, and I agree that they're completely broken.
1Sherrinford4y
:-) Thanks. But I corrected it.

I remember reading a post about how the US Navy went down in number of ships while the average age of the ships went up a lot over time. It was about how the US military in general spends a lot of its money very inefficiently.

Unfortunately, it might have been a decade since I read it and I can't find it. Does anybody here know the post I'm referring to?

1Sherrinford3y
I assume you put that as a comment on my shortform by accident? Or am I misunderstanding something? Anyway, maybe you mean this: https://www.defensenews.com/naval/2018/06/07/the-us-navys-ships-are-getting-old-they-might-be-getting-a-lot-older/
2ChristianKl3y
Yes, it was on your shortform by accident. I wasn't even aware that it's possible to make a top-level post on others' shortforms.

It's funny that in the interview episode "Rob Wiblin on how he ended up the way he is" of the 80,000 Hours podcast, Misha Saul says that parents don't have much of an influence on the development of their own children (biodeterminism), while at the same time the whole interview is about important, formative experiences.

3Elizabeth3y
The thing the available evidence can show us is that variations in parenting choices don't explain much variation in outcomes, among the populations studied, which tend to be WEIRD and in the case of adoption studies have passed an explicit bar for estimated parenting quality. This is not the same as "parents don't have much of an influence on the development of their own children", which is easily disproven by looking at the 5th percentile parents.
1Sherrinford3y
I am not sure I exactly understand what that says. Something like "parenting choices are only important if they are really bad"?
7Elizabeth3y
Not quite. The point is that the studies are mostly done on parents with pretty similar styles, so there isn't enough variation in parenting style to lead to detectable variation in outcomes. It's the equivalent of studying how water affects plant growth, but giving every plant an amount of water within 5% of what you think its ideal is, while varying other factors (light, nutrients, plant genetics) tremendously. Your results don't show that water doesn't matter for plants; they show that the impact of that small variation in water is lost amongst the noise.
1Sherrinford3y
Very interesting, thanks.
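
A toy simulation of Elizabeth's point, with all numbers invented for illustration: when one input (water) is restricted to a narrow band around its ideal, its measured correlation with the outcome nearly vanishes, even though the input genuinely matters.

```python
# Restriction of range: the same true effect of water looks negligible
# when water only varies within ±5%, and large when it varies widely.
import random

random.seed(0)

def growth(water, light):
    return 10 * water + 5 * light  # water genuinely matters here

narrow, wide = [], []
for _ in range(1000):
    light = random.uniform(0.0, 1.0)        # other factor varies a lot
    w_narrow = random.uniform(0.95, 1.05)   # water restricted to ±5% of ideal
    w_wide = random.uniform(0.0, 2.0)       # water allowed to vary
    narrow.append((w_narrow, growth(w_narrow, light)))
    wide.append((w_wide, growth(w_wide, light)))

def corr(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(f"corr(water, growth), narrow range: {corr(narrow):.2f}")  # close to 0
print(f"corr(water, growth), wide range:   {corr(wide):.2f}")    # close to 1
```
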
2Jan Czechowski3y
The best book I have ever only read the review of: Selfish Reasons to Have More Kids. It seems to have a good summary of the evidence on upbringing, if you're interested in the subject.
1Sherrinford3y
Thanks. Which evidence on upbringing do you mean in this case? I don't doubt that everybody is in some sense "biodetermined", but it's ironic that the podcast episode then mainly talks about certain formative experiences the two speakers had when they met each other in high school, and how Rob was influenced by his mother and his father as role models.

The "Snake cult of consciousness" theory sounds extremely fascinating. Qt the same time, it also sounds like the explanations why the pyramids were built by aliens. For laypeople, it is hard to distinguish between Important insights and clever nonsense.

2Mitchell_Porter2mo
What's your p(Thulsa Doom)?

Why do nuclear-energy fan articles often mention France as a positive example without discussing the drastic reduction of French nuclear power generation in 2022?

In the recent EconTalk podcast with Tyler Cowen, Cowen explicitly and strictly demands a mathematical model of AI risk, claiming that nothing like that exists.

At the same time, he sees "Hayekian" arguments as a kind of benchmark. As far as I know, there is no mathematical benchmark model of the classical Hayek argument.

The same is true for Cowen's demand for loyalty to the US constitution. There is no mathematical model for that.

All of Cowen's claims and demands are asymmetrical. The doomers are emotional, he says. The non-doomers are just reasonable.

One thing it might be helpful for powerful beings to learn: "It's in general not okay to enforce your wishes on others."

However, ethics is complicated and you will probably find many cases where enforcing your wishes on others is actually okay.

Moreover, if the learning dataset is humanity's behavior, then it's probably a problem that enforcing takes place all the time.

Typical fiction has probably framed thinking about the development of intelligence in non-humans in bad ways.

  • C3PO from Star Wars seems like a nervous language nerd.
  • Data from Star Trek mostly seems like a human with a built-in calculator.
  • The Planet of the Apes suggests that getting intelligence means getting a draw from a human-population IQ distribution.
  • All other artificial intelligences can be outsmarted. Otherwise the movie would not work.
  • Data, C3PO, etc. never have wifi.
2Dagon11mo
I'm not sure the causality is that clear - typical fiction pulls from cultural norms at least as much as it pushes them.  I fully agree with your underlying belief that common thinking about intelligence is massively oversimplified.  Note that this applies to HUMAN intelligence as well.

I sometimes read claims like "doing strength training more than [insert small number] times per week is useless, see scientific evidence". Another impression is: people doing lots of exercise per week get big muscles. How do these observations relate to reality? What is the fitness production function, or the optimal schedule, if one really wanted to become something like a Schwarzenegger? (I don't. This is just curiosity.)
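
A toy sketch of one way the two observations could be reconciled: if the "fitness production function" is concave, the marginal return of each extra session shrinks quickly, yet cumulative totals still differ a lot between light and heavy training. The logarithmic form and all numbers below are invented, not empirical.

```python
# Hypothetical concave "fitness production function": each extra weekly
# session adds less, but totals still grow with volume.
import math

def weekly_gain(sessions):
    return math.log1p(sessions)  # concave: diminishing returns per session

for s in [1, 2, 3, 5, 10]:
    marginal = weekly_gain(s) - weekly_gain(s - 1)
    print(f"{s:2d} sessions/week: total {weekly_gain(s):.2f}, marginal {marginal:.2f}")
```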

Several posts in this or the EA forum seem to suggest that it's kind of standard among people in the community to have invested a large share of their personal wealth into crypto, and into FTX in particular.

Why should this be assumed? Are there statistics on that?

I used to believe that most people have a diversified portfolio, both inside and outside of the communities. Isn't that the case?

3Dagon1y
I think it's ... not common nor standard, but not uncommon that especially younger people in techie communities have an outsized share of their net worth in crypto. I doubt they INVESTED the majority, but if they invested a small-ish amount very early, it will have become the lion's share in the explosion of a few years ago. Depending on the population/segment you're talking about, most people have no significant portfolio, just savings and debt. Many will add real estate before financial investments. Then there is a fair subset of (again, mostly younger) people who are Robinhood-style traders. Buy for fun, maybe make a lot, maybe lose most of what you put in. It's a minority who have enough assets and long-term focus to actually design and implement a portfolio. Of that minority, a majority is diversified via mutual funds, often in retirement accounts.
1Sherrinford1y
So if I understand you correctly, some techies in the LW and EA communities have put some money in there, and due to the strong increase of crypto prices, the worth of their crypto assets had increased a lot. Now that a lot of it is dust in the wind, these people have lost a lot, but if it had only grown from a small-ish amount, they are not really worse off than they would have been with conservative investments, right? I am asking because I still don't fully understand posts like this. I think I understand the "job security" part and would also understand if it was about the monetary loss to important EA causes, because SBF gave money to EA. But the text explicitly mentions "personal savings" even before job security, and does not explicitly mention EA causes.
2Dagon1y
Ah, misunderstood "people in the community", and answered for a much larger community of rationalist and adjacent techies. I don't really follow the EA side of things that closely (I support the stated mission behind the name, but it seems to have become rather cultish on many topics). I presume the post you reference is mostly about Sam Bankman-Fried and his circle - he's the CEO/founder of FTX, and also a fairly vocal/visible EA ... Participant? Donor? Unsure of the exact descriptor, but I can imagine that many in that group are afraid for him and his loss of wealth (and loss of status due to seemingly bad behavior), as well as many of them being personally invested in that particular branch of crypto.

What are the properties that make Substack so successful? At first glance, Substack blogs seem less structured than e.g. wordpress.com blogs. In Substack, the "Archive" of a blog is just a long list. Distributing new articles via email does not seem like a spectacular feature, and in any case it should be possible on other blog platforms as well. What am I missing?

4Gordon Seidoh Worley2y
I think it's less about tech and more about the social aspect. Why is Substack succeeding? Because they paid a bunch of people to use it, and so far some of those people are continuing to make good money writing on the platform. Then others want to use it because maybe they can make the same kind of money, or at least be on the same platform as all the other cool writers they like.
1Sherrinford2y
Interesting perspective, and a bit disappointing.
2Gordon Seidoh Worley2y
Maybe? Succeeding in business is about more than the tech. I've worked on stuff that was definitely "better" than the competition but lost because we didn't get other stuff right: sales, marketing, etc. I guess it'd be cool if the best tech always won, but the world doesn't usually work that way.

There may be a certain risk that downvoting culture replaces "comment and discussion" culture (at least at the margins). A reason for that may be that there is no clear idea of what a downvote (or an upvote) actually means, such that possibly some people just upvote if the content of a comment / post confirms their worldview (and vice versa).

3Vladimir_Nesov3y
A lot of things people talk about are not at all about "their worldview" in the sense of beliefs and values; this characterization is often enough noncentral. I'm arguing about the use of words in this comment; is that an element of my worldview? Perhaps I value accurate use of words, and can't find a suitable counterexample.
1Sherrinford3y
Maybe worldview is a word that comes along with too many associations? What about "prior belief"?
3Vladimir_Nesov3y
Some hypotheses are not beliefs (they are beliefs-in-development that aren't yet ready for making predictions), and many constructions are not even hypotheses in this sense (they are not about the real world). I don't believe there is a unifying concept behind the things people talk about, different concepts are salient for different statements.
1Sherrinford3y
I don't think that contradicts my original statement strongly. The statement is itself a hypothesis, but I wrote it down because I find it likely that it describes behavior. However, I don't have a strong degree of confidence about it. Some comments may not be in the worldview / belief category, and in this case the people I hypothesized about may just neither upvote nor downvote. It is also possible that in this case voting on posts or comments is motivated by different things.
2Vladimir_Nesov3y
I don't think it contradicts it at all; it's unrelated to your original statement, only to the use of a word in it that can be steelmanned away in the obvious manner.

I guess this is a really bad time to write book reviews for lesswrong.

When people write articles containing wrong statements and statements without evidence or source, you can use your knowledge of the wrong statements to update the probability that the statements without evidence or source are true.
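
A toy Bayes-rule version of this point, with all probabilities invented for illustration: spotting a verifiably wrong statement shifts weight toward the "sloppy author" hypothesis, which in turn lowers the credence you give the unsourced statements.

```python
# Update P(author is careful) after seeing one wrong statement,
# then recompute the credence in an unsourced claim by that author.

p_careful = 0.5            # prior P(author is careful)
p_error_if_careful = 0.1   # P(wrong statement | careful author)
p_error_if_sloppy = 0.6    # P(wrong statement | sloppy author)

posterior_careful = (p_error_if_careful * p_careful) / (
    p_error_if_careful * p_careful + p_error_if_sloppy * (1 - p_careful)
)

# Assume a careful author's unsourced claims are true 80% of the time,
# a sloppy author's only 40% of the time (both numbers made up).
p_true = posterior_careful * 0.8 + (1 - posterior_careful) * 0.4
print(f"P(author careful | error seen) = {posterior_careful:.2f}")  # ~0.14
print(f"P(unsourced claim true)        = {p_true:.2f}")             # ~0.46
```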

3Dagon3y
Kind of the reverse of Gell-Mann Amnesia (https://en.wikipedia.org/wiki/Michael_Crichton#GellMannAmnesiaEffect).  Arguably, it should be applied to editorial units (sites, publications, etc.), not just to individual authors.
1Sherrinford3y
Yes. I hope certain forums and sites I regularly read don't continue developing into a direction of not demanding evidence and sources for claims. By the way, there is also the danger that someone at some point just exploits his/her own reputation to push an agenda.

More articles on the supposed AstraZeneca blood-clot mechanism, adding to this:

(All in German, but I think that in general, automated translation has become really good.)

I would love to see examples of contributions with actual steelmanning, instead of just seeing people pay lip service to it.

1niplav3y
I believe that steelmanning has mostly been deprecated and replaced with ideological turing tests.
6Kaj_Sotala3y
ITTs and steelmanning feel like they serve different (though overlapping) purposes to me. For example, if I am talking with people who are not X (libertarians, socialists, transhumanists, car-owners...), we can try to steelman an argument in favor of X together. But we can't do an ITT of X, since that would require us to talk to someone who is X.
1Sherrinford3y
Yes, though I assume the best test of whether you really steelman someone would be whether you can take a break and ask her whether your representation fits.
4Matt Goldenberg3y
I don't think an ITT is a good test of a steelman. Often you're coming from a different frame from someone else, and strong arguments to you will be framed completely differently from strong arguments for someone else.
1Sherrinford3y
Yes maybe an ITT tests a fleshman instead of a steelman or a strawman...
4Sherrinford3y
What I mean is: I would like people who write articles about the supposed actions or motivations of other people - or government agencies, firms, or whatever - to actually try to present those actions and motivations in a way that at least assumes the actors are not completely dumb or evil or pathetic. It seems to be fashionable that when people do not see the sense behind actions, they do not try hard but jump to the conclusion that it must be due to some despicable, stupid, or at least equilibrium-inefficient behavior (e.g. some claims about "signalling"; no proper analysis of whether the claim makes sense in a given situation required). This may feel very insightful; after all, the writer seemingly has a deeper insight into social structures than the social agents. But supposed insights that feel too good can be dangerous. And that a model is plausible does not mean that it applies to every situation.

Among EA-minded people interested in preventing climate change, it seems Clean Air Task Force (CATF) is seen very favorably. Why? The "Climate Change Cause Area Report" by Founders Pledge (PDF) gives an overview.

CATF's work is introduced as follows:

"It was founded in 1996 with the aim of enacting federal policy reducing the air pollution caused by American coal-fired power plants. This campaign has been highly successful and has been a contributing factor to the retirement of a large portion of the US coal fleet." (p. 5)

On p. 88, you will read:

"Do th... (read more)

In the latest episode of the Bayesian Conspiracy podcast, the hosts discuss moral behavior in the context of FTX, and in that context they treat the claim that the FTX affair is evidence that you shouldn't trust people who look kind of weird as a totally normal part of a conversation. Could we maybe just slightly slow down all these discussions?

Still waiting for a change of the general attitude in certain rationality etc. circles concerning Elon Musk, a change that would also take into account what kind of news sources Musk promotes on X.

3Dagon6mo
Interesting.  Would you share your current measurement, and prediction of change over what timescale?  In my circles (which are not mostly self-identified as "rationality etc"), most of the change happened already - a shift from "annoying crazy person, with a genius for getting cool stuff into production" to "complete maniac who had lucky results when his lunacy was constrained, but that's no longer the case".  This change happened mostly before and during the Twitter/X runup, not since the actual sale and changes.  
-1Sherrinford6mo
Measurement would be an exaggeration. I see fandom. I hardly see people discussing that Elon Musk retweets and promotes very problematic things, which suggests that he builds his political worldview on very low-quality sources. And I hardly see people noticing that such concentration of power can in itself be a problem.
2ChristianKl6mo
Concentration of power is problematic, but it's also necessary for things getting done. If you take Robert Moses, the concentration of power around him was problematic, but he got things built. Fear of concentration of power in individual people is one core feature of the Great Stagnation. As far as Musk's retweeting goes, its impact is not very large compared to the effects of projects like Starship.
1Sherrinford6mo
"Concentration of power is problematic but it's also necessary for things getting done." Sure some amount of power may be productive, but very high concentration of power can be problematic, for example as it puts people on the powerful person's Mercy. "Fear of concentration of power on individual people is one core feature of the Great Stagnation." I assume "feature" in this cases means correlate, not cause.  "As far as Musk's retweeting goes, it's impact is not very large compared to the the effects of projects like Starship." This is hard to compare. Influencing opinions is relevant.  I would also say that Musk's tweets are informative in forming expectations about what he might use control of strategically important technologies for in the future. In general, I would prefer if a person did not have infinite power to determine the ability of societies to act, and I guess there is an amount of power lower than infinite at which this becomes problematic.
2Dagon6mo
It's always surprising (which means it should never be surprising) how much of a bubble I (and you, it seems) am in. I have multiple partially-overlapping groups, NONE of which view Musk as overall admirable, though some are more balanced in evaluating his successes and offenses.