While I think LW’s epistemic culture is better than most, one thing that seems pretty bad is that occasionally mediocre/shitty posts get lots of upvotes simply because they’re written by [insert popular rationalist thinker].
Of course, if LW were truly meritocratic (which it should be), this shouldn’t matter — but in my experience, it descriptively does.
Without naming anyone (since that would be unproductive), I wanted to ask: do others notice this too? And aside from simply trying not to upvote something because it’s written by a popular author, does anyone have good ideas for preventing this?
The effect seems natural and hard to prevent. Basically, certain authors get reputations for being high (quality * writing), and then it makes more sense for people to read their posts because both the floor and the ceiling are higher in expectation. Then their worse posts get more readers (who vote) than posts of similar quality by another author, whose floor and ceiling are probably lower.
I'm not sure of the magnitude of the cost, or that one can realistically expect to ever prevent this effect. For instance, ~all Scott Alexander blogposts get more readership than the best post by many other authors who haven't built a reputation and readership, and this just seems to be part of how the reading landscape works.
Of course, it can be frustrating as an author to sometimes see similar quality posts on LW get different karma. I think part of the answer here is to do more to celebrate the best posts by new authors. The main thing that comes to mind here is curation, where we celebrate and get more readership on the best posts. Perhaps I should also have a term here for "and this is a new author, so I want to bias toward curating them for the first time so that they're more invested in writing more good content".
Yes, but you'd naively hope this wouldn't apply to shitty posts, just to mediocre posts. Like, maybe more people would read, but if the post is actually bad, people would downvote etc.
That's right. One exception: sometimes I upvote posts/comments written to low standards in order to reward the discussion happening at all. As an example I initially upvoted Gary Marcus's first LW post in order to be welcoming to him participating in the dialogue, even though I think the post is very low quality for LW.
(150+ karma is high enough and I've since removed the vote. Or some chance I am misremembering and I never upvoted because it was already doing well, in which case this serves as a hypothetical that I endorse.)
One thing you could do is give users relatively more voting power if they vote without seeing the author of the post. I.e., you can enable a mode which hides post authors until you give a vote on the anonymized content. After that, you can still vote like normal.
Obviously there are ways author identity can leak through this, but it seems better than nothing.
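A minimal sketch of how such weighting could work (the weights, data structures, and function names here are invented for illustration, not anything from LW's actual codebase):

```python
from dataclasses import dataclass

# Hypothetical weights, purely for illustration; the real karma system works differently.
BLIND_WEIGHT = 1.5   # vote cast in the author-hidden mode
NORMAL_WEIGHT = 1.0  # vote cast with the author visible

@dataclass
class Vote:
    value: int        # +1 for upvote, -1 for downvote
    cast_blind: bool  # True if the voter hadn't seen the author yet

def post_score(votes: list[Vote]) -> float:
    """Sum votes, giving extra weight to those cast on anonymized content."""
    return sum(
        v.value * (BLIND_WEIGHT if v.cast_blind else NORMAL_WEIGHT)
        for v in votes
    )

if __name__ == "__main__":
    votes = [Vote(+1, True), Vote(+1, False), Vote(-1, False)]
    print(post_score(votes))  # 1.5 + 1.0 - 1.0 = 1.5
```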
You should probably link some posts, it's hard to discuss this so abstractly. And popular rationalist thinkers should be able to handle their posts being called mediocre (especially highly-upvoted ones).
This was also on my mind after seeing Jesse’s shortform yesterday. Ryan’s “this is good” comment was above Louis’ thorough explanation of an alternative formal motivation for IFs. That would still be the case if I hadn’t strong-upvoted and weak-downvoted.
I personally cast my comment up/downvotes as an expression of my preference ordering for visibility, and I would encourage others to do the same. For instance, I suggest Ryan’s comment should’ve been agreement-voted rather than upvoted by others. A corollary of this stance: don’t vote if you haven’t read the other comments whose rankings you are affecting, or rather, vote using one of the other markers, of which LW has many.
This ‘upvotes as visibility preferences’ policy isn’t tractable for posts, so I suspect the solution there—if one is needed—would have to be done on the backend by normalization. Not sure whether this is worth attempting.
Link here since I don’t particularly want to call out Ryan, his comment was fine. https://www.lesswrong.com/posts/7X9BatdaevHEaHres/jesse-hoogland-s-shortform
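Regarding the backend-normalization idea above, here's a minimal sketch (the baseline statistic and its use here are my own invented illustration, not anything LW actually does) of comparing a post's karma against the author's own historical average:

```python
from statistics import mean

def normalized_score(raw_karma: float, authors_past_karma: list[float]) -> float:
    """Express a post's karma relative to the author's own historical baseline.

    A new author (no history) just keeps their raw score; an established author's
    posts are judged against what their reputation alone would predict.
    """
    if not authors_past_karma:
        return raw_karma
    baseline = mean(authors_past_karma)
    return raw_karma - baseline

if __name__ == "__main__":
    print(normalized_score(40, []))              # new author: 40
    print(normalized_score(40, [120, 90, 150]))  # established author: 40 - 120 = -80
```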
A salient example to me: This post essentially consists of Paul briefly remarking on some mildly interesting distinctions about different kinds of x-risks, and listing his precise credences without any justification for them. It's well-written for what it aims to be (a quick take on personal views), but I don't understand why this post was so strongly celebrated.
idea: popular authors should occasionally write preregistered intentionally subtly bad posts as an epistemic health check
Duncan Sabien once ran the inverse experiment. He made a separate account to see how his posts would do without his reputation. The account only has one post still up, but iirc there used to be many more (tens). They performed similarly well to posts under his own name. Cool idea!
[plausibly I'm getting parts of the story wrong and someone who was around then will correct me]
Of course, if LW were truly meritocratic (which it should be), this shouldn’t matter — but in my experience, it descriptively does.
That's not really true if people are popular rationalist thinkers because of their skill at rationalist writing. Meritocracy does not imply that people get judged on individual pieces of their work; a meritocracy where people are primarily judged on their total output would still be a meritocracy.
I think the problem is more that posts which make points that are popular and fit neatly into the reader's world view are more likely to get upvotes than posts that challenge the reader's world view and would require them to update it. Jimmy's introductory post to his sequence is, as I'm writing this, at 12 karma.
While the writing quality might be improved, it's a post that sets out to challenge the reader's conception of how human reasoning works in practical contexts, and that's why it's at low karma. I would love to see more posts like Jimmy's, which are about the core of how rationality works, over posts that feel good to read and get a lot of upvotes without really changing minds much.
I think if someone is very well-known their making a particular statement can be informative in itself, which is probably part of the reason it is upvoted.
I had similar thoughts. And indeed I found it funny that a forum of rationalists falls for this.
But this is human nature, and we have finite time and energy to focus on things.
You will find the same with the attention that papers written by people at famous institutions get, even when they are low quality.
To name names: recently I had a similar feeling when looking at "AI as Normal Technology". I did not know the authors, and I only skimmed it because someone had talked about it.
It is also one reason why celebrities are sometimes recruited to do propaganda.
One strategy would be to make posts anonymous for a limited amount of time (perhaps with some text randomization that would slightly obscure the author's style, even though this might affect quality).
People: “Ah, yes. We should trust OpenAI with AGI.” OpenAI: https://www.nytimes.com/2024/07/04/technology/openai-hack.html “But the executives decided not to share the news publicly because no information about customers or partners had been stolen, the two people said. The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government. The company did not inform the F.B.I. or anyone else in law enforcement.”
US AISI will be 'gutted,' Axios reports: https://t.co/blQY9fGL1v. This should have been expected, I think, but it still seems worth sharing.
Someone I trust on this says:
AFAICT what's going on here is just that AISI and CHIPS are getting hit especially hard by the decision to fire probationary staff across USG, since they're new and therefore have lots of probationary staff - it's not an indication (yet) that either office is being targeted to be killed
To be frank though, that basically guts any chance of institutional capability, so it's probably going to be at best a hollow organization.
You do not randomly fire large swathes of people and expect most institutional knowledge to survive; after that, there's little reason for anyone to work there rather than in the private sector.
True, but that's a different problem than them specifically targeting the AISI (which, based on Vance's comments, wouldn't be too surprising). Accidentally targeting the AISI means it's an easier decision to revert than if the government actively wanted to shut down AISI-like efforts.
I think people should know that this exists (Sam Harris arguing on the Big Think YouTube channel that misaligned AI is an x-risk concern):
Thanks. For people who aren't likely to watch, it might be worth summarizing: he reports his view as being that we're in an arms race we can't opt out of, due to insufficient political sanity (and that he's changed his mind regarding, I think, the overall appropriateness of such a race, though from what to what I'm not sure). Part of what constitutes sanity, he says, would be the US and China being able to create a climate whereby we don't fear each other. It's not totally obvious whether he thinks a sufficient condition for the race to ASI being abandoned would be people no longer predicting that their rivals might develop it first, or whether he thinks some other fears need to be allayed as well (I get some sense that it's very much the latter, but it definitely wasn't clear).
I'm no expert on Albanian politics, but I think it's pretty obvious this is just a gimmick with minimal broader significance.
Tyler Cowen often has really good takes (even some good stuff against AI as an x-risk!), but this was not one of them: https://marginalrevolution.com/marginalrevolution/2024/10/a-funny-feature-of-the-ai-doomster-argument.html
Title: A funny feature of the AI doomster argument
If you ask them whether they are short the market, many will say there is no way to short the apocalypse. But of course you can benefit from pending signs of deterioration in advance. At the very least, you can short some markets, or go long volatility, and then send those profits to Somalia to mitigate suffering for a few years before the whole world ends.
Still, in a recent informal debate at the wonderful Roots of Progress conference in Berkeley, many of the doomsters insisted to me that “the end” will come as a complete surprise, given the (supposed) deceptive abilities of AGI.
But note what they are saying. If markets will not fall at least partially in advance, they are saying the passage of time, and the events along the way, will not persuade anyone. They are saying that further contemplation of their arguments will not persuade any marginal investors, whether directly or indirectly. They are predicting that their own ideas will not spread any further.
I take those as signs of a pretty weak argument. “It will never get more persuasive than it is right now!” “There’s only so much evidence for my argument, and never any more!” Of course, by now most intelligent North Americans with an interest in these issues have heard these arguments and they are most decidedly not persuaded.
There is also a funny epistemic angle here. If the next say twenty years of evidence and argumentation are not going to persuade anyone else at the margin, why should you be holding this view right now? What is it that you know, that is so resistant to spread and persuasion over the course of the next twenty years?
I would say that to ask such questions is to answer them.
Yes, that was a pretty terrible take. Markets quite clearly do not price externalities well, and never have done. So long as any given investor rates their specific investment as being unlikely to tip the balance into doom, they get the upside of directly financially benefiting from major economic growth due to AI, and essentially the same downside risk as if they didn't invest. Arguments like "short some markets, or go long volatility, and then send those profits to Somalia to mitigate suffering for a few years before the whole world ends" are obviously not even trying to seriously reflect the widespread investment decisions that affect real markets.
I saw this good talk on the Manifest YouTube channel about using historical circumstances to calibrate predictions. This seems better for training than regular forecasting because you have a faster feedback loop between the prediction and the resolution.
I wanted to know if anyone had recommendations for software or a site where I can do more exercises like this (I already know about the Estimation Game). I would do this myself, but it seems like it would be pretty difficult to do the research on a situation without learning the outcome. I would also appreciate takes on why this might be a bad way to get better at forecasting.
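For what it's worth, whatever tool you end up using, the scoring side of this kind of practice is simple. Here's a minimal sketch (a plain Brier score, my own illustration rather than anything from the talk or the Estimation Game):

```python
def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and what actually happened.

    Lower is better; always guessing 50% scores 0.25.
    """
    return sum((p - float(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)

if __name__ == "__main__":
    # (probability you assigned, did the event happen?)
    history = [(0.9, True), (0.7, False), (0.2, False)]
    print(round(brier_score(history), 3))  # (0.01 + 0.49 + 0.04) / 3 = 0.18
```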
This is a really good debate on AI doom -- I thought the optimistic side presented a good model that I (and maybe others) should spend more time thinking about (mostly the mechanistic explanation vs. extrapolation of trends and induction vs. empiricism framings), even though I think I disagreed with a lot of it on an object level:
https://marginalrevolution.com/marginalrevolution/2024/11/austrian-economics-and-ai-scaling.html
A good short post by Tyler Cowen on anti-AI Doomerism.
I recommend taking a minute to steelman the position before you decide to upvote or downvote this. Even if you disagree with the position object level, there is still value to knowing the models where you may be most mistaken.