Blog posts as epistemic trust builders

by adamzerner · 1 min read · 27th Sep 2020 · 7 comments



I've really been enjoying Zvi's weekly posts on the coronavirus. Keeping up with what's going on is something I want to do, but not badly enough to put in the time myself. I'm not even sure how capable I would be if I did want to put in the time; it seems difficult to sift through all of the information out there.

Reading Zvi's posts works out perfectly for me though. 20-30 minutes a week and I get just what I need.

But all of this only works because I trust Zvi.


Like most people nowadays, I spend a lot of time online. In particular, reading blog posts. LessWrong, Overcoming Bias, Slate Star Codex, Hacker News, FiveThirtyEight, etc. When I need a break, I have my little routine of websites that I click through.

Sometimes I reflect on how much value I get out of reading all of these blog posts. Nothing against the authors, but when I finish an article, I'm usually not left with the feeling that I've gained much. I see it as a numbers game: most of the time I don't gain much, but once in a while I come across something that really influences me.

But even when I'm not left feeling particularly inspired by a post, I think that there is something more subtle that I gain by reading it: epistemic trust.

By reading the same authors over and over again, I start to get a feel for how much I can trust their reasoning ability. The more I trust them, the more I update in response to what they say. And when I reflect on the updates I perform, a surprisingly large proportion of them are of the (rough) form "I'll take your word for it".


The ultimate example of this is probably AI safety. I think AI safety is a huge deal, but the reason why I think so largely comes from me saying "I'll take your word for it". I have a very amateurish understanding of it all and wouldn't really be able to come to the conclusion "this is by far the most important thing in the world" via gears-level reasoning.

But fortunately, I have a very high level of epistemic trust for the rationalist community. I've been able to cash in on this trust and update my beliefs about something that is very, very important.


I want to be careful about what I'm implying here. I'm not trying to imply that epistemic trust building is the main purpose of blog posts. I'm not even trying to be at all precise about how important I think that function is. My point is just that I think it's a function important enough to take note of.


7 comments

But fortunately, I have a very high level of epistemic trust for the rationalist community.

No! Not fortunately! Speaking from personal experience, succumbing to the delusion that there is any such thing as "the rationalist community" worthy of anyone's trust has caused me an enormous amount of psychological damage, such that I'm still (still?!) not done recovering from the betrayal trauma more than three years later—and I'm probably not the only one.

(Uh, can't say I wasn't warned.)

(I thought I was done recovering as of (specifically) 13 September, but the fact that I still felt motivated to write a "boo 'rationalists'" comment on Friday and then went into an anxiety spiral for the next 36 hours—and the fact that I'm drafting this comment in a paper notebook when I should be spending a relaxing network-free Sunday studying math—suggest that I'm still (somehow—still?!) not done grieving. I think I'm really close, though!)

There is no authority regulating who's allowed to use the "rationalist" brand name. Trusting "the rationalist community" leaves you open to getting fucked over[1] by any bad idea that can successfully market itself to high-Openness compsci nerds living in Berkeley, California in the current year. The craft is not the community. The ideology is not the movement. Don't revere the bearer of good info. Every cause—every cause—wants to be a cult. At this point, as a guard against my earlier mistakes, I've made a habit of using the pejorative "robot cult" to refer to the social cluster, reserving "rationalist" to describe the methodology set forth in the Sequences—and really, I should probably phase out "rationalist", too. Plain rationality is already a fine word for cognitive algorithms that create and exploit map–territory correspondences—maybe it doesn't have to be an -ism.

Real trust—trust that won't predictably blow up in your face and take three and a half years to recover from—needs to be to something finer-grained than some amorphous self-recommending "community." You need to model the specific competencies of specific people and institutions, and model their incentives to tell you the truth—or to fuck with you.

(Note: people don't have to be consciously fucking with you in order for modeling them as fucking with you to be useful for compressing the length of the message needed to describe your observations. I can't speak to what the algorithms of deception feel from the inside—just that the systematic production of maps that don't reflect the territory for any reason, even mere "bias", should be enough for you to mark them as hostile.)

COVID-19 is an unusually easy case, where people's interests are, for the most part, aligned. People may use the virus as a prop in their existing political games, but at least no one is actually pro-virus. Under those circumstances, sure, trusting an amorphous cluster of smart people who read each other's blogs can legitimately be a better bet than alternative aggregators of information. As soon as you step away from the unusually easy cases—watch your step!

If you learned a lot from the Sequences, I think that's a good reason to trust what Eliezer Yudkowsky in particular says about AI in particular, even if you can't immediately follow the argument. (There's a prior that any given nonprofit claiming you should give it money in order to prevent the destruction of all value in the universe is going to just be a scam—but you see, the Sequences are very good.) That trust does not bleed over (except at a very heavy quantitative discount) to an alleged "community" of people who also trust Yudkowsky—and I don't think it bleeds over to Yudkowsky's off-the-cuff opinions on (let's say, picking an arbitrary example) the relative merits of polyamory, unless you have some more specific reason to trust that he actually thought it through and actually made sure to get that specific question right, rather than happening to pick up that answer through random cultural diffusion from his robot cult. (Most people get most of their beliefs from random cultural diffusion; we can't think fast enough to do otherwise.) Constant vigilance!


  1. I (again) feel bad about cussing in a Less Wrong comment, but I want to be very emphatic here! ↩︎

Eh, it's pretty obvious that there is a thing that corresponds to "beliefs of the rationality community" or "broad consensus of the rationality community", and also pretty obvious that those broadly get a lot of things more right than many other sources of ideas one could listen to. Of course, it might still be fine advice to try really hard to think through things for yourself, but like, calling the existence of such a thing as something that one could even hypothetically assign trust to a "delusion" just seems straightforwardly wrong.

While I agree that it’s a part of shared mapmaking that ‘exists’ (i.e. is a common referent people coordinate around), I do think that the process that determines what’s publicly considered “the beliefs of the rationality community” is fairly different from the actual consensus positions of those LessWrongers and MIRI/CFAR/LW staff (and others) who have shown themselves to be the most surprisingly correct thinkers, and it seems accurate for Zack to make the point that you’ll be subject to systematic error if you make the two things identical in your map of the world.

Oh, yeah, totally. I had understood Zack to make an ontological argument in the first paragraph that such an entity cannot coherently exist, or alternatively that "it is not deserving of anyone's trust", both of which seem like statements that are too strong to me, and I think neither correspond to the thing you are saying here. The rest of the comment seems pretty good and I agree with most of it. 

Trusting "the rationalist community" leaves you open to getting fucked over[1] by any bad idea that can successfully market itself to high-Openness compsci nerds living in Berkeley, California in the current year. 

That seems to be true to the extent that you see "the rationalist community" as being the cluster of people in Berkeley. It's my impression that it becomes less true when you speak about the more global community on LessWrong. The particular idea towards which you point as having caused you huge damage doesn't seem to have strong expressed support on LessWrong.

Building reputation by repeated interaction.

But it needs to be the type of interaction where you notice and remember the author. For example, if you go to LessWrong, you are more likely to associate "I read this on LessWrong" with the information than if you just visited LessWrong articles from links shared on social networks. (And it is probably easier to remember Zvi than an average author at LessWrong, because Zvi recently posted a sequence of articles, which is easier to remember than an equal number of articles on unrelated topics.) You need to notice "articles by Zvi" as a separate category first, and only then can your brain decide to associate trust with that category.

(Slate Star Codex takes this a bit further, because for my brain it is easier to remember "I read this on SSC" than to remember the set of articles written by Scott on LessWrong. This is branding. If your quality is consistently high, making the fact "this was written by me" more noticeable increases your reputation.)

The flip side of the coin is that the culture of sharing hyperlinks on social networks destroys trust. If you read a hundred articles from a hundred different sources every day, your brain has a hard time keeping tabs. Before the internet, when you regularly read maybe 10 different journals, you gradually noticed that X is reliable and Y is unreliable, because sometimes you read ten reliable stories on one day and ten unreliable stories on a different day, and it felt different. But on the internet, there are a hundred websites, and you switch between them, so even if a few of them are notoriously bad, it is hard to notice. It is even harder because the same website can have multiple authors of wildly different quality: a scientist and a crackpot can have a blog on the same domain. With paper sources, the authors within one source were more balanced. (LessWrong is also kinda balanced, especially if you only consider the upvoted articles.)

Interesting points about social networks and link aggregators. I think you're right.

But at the same time, after years of reading Hacker News, I've started to notice the same authors coming up, and I find myself going "Oh, I remember you" when I browse HN. It's possible that this experience is rare, but my impression is that I'm a pretty "middle of the pack" reader, so I expect that others have similar experiences. It seems to me, then, that the effect is still large enough to be worth noting.