LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur. Currently working on auto-applying tags to LW posts with a language model.
Don't jump to conclusions—maintain at least two hypotheses consistent with the available information.
... or be ready and willing to generate a real alternative to your main hypothesis, if asked or if it seems like it would help another user.
Most of these seem straightforwardly correct to me. But of the 10 things in this list, this is the one I'd be most hesitant to present as a discourse norm, and the one I'd be most worried about doing damage if it were. The problem is that it takes an epistemic norm and translates it into a discourse norm, in a way that accidentally assumes the participants in a conversation are roughly matched in their knowledge of a subject. Whereas in my experience, it's fairly common to have conversations where one person has an enormous amount of unshared history with the question at hand. In the best case, where this history is highly legible, the conversation might go something like:
A: I think [proposition P] because of [argument A]
B: [A] is wrong; I previously wrote a long thing about it. [Link]
In which case B isn't currently maintaining two hypotheses, and is firmly set in a conclusion, but there's enough of a legible history to see that the conclusion was reached via a full process and wasn't jumped to.
But often what happens is that B has previously engaged with the topic in depth, but in an illegible way; eg, they spent a bunch of hours thinking about the topic and maybe discussed it in person, but never produced a writeup, or they wrote long blog-comments but forgot about them and didn't keep track of the link. So the conversation winds up looking like:
A: I think [proposition P] because of [argument A]
B: No, [A] is wrong because [shallow summary of argument for not-A]. I'm super confident in this.
A: You seem a lot more confident about that than [shallow summary of argument] can justify, I think you've jumped to the conclusion.
A misparses B as having a lot less context and prior thinking about [P] than they really do. In this situation, emphasizing the virtue of not-jumping-to-conclusions as a discourse norm (rather than an epistemic norm) encourages A to treat the situation as a norm violation by B, rather than as a mismodeling by A. And, sure, at a slightly higher meta-level this would be an epistemic failure on A's part, under the same standard, and if A applied that standard to themself reliably this could keep them out of that trap. But I think the overall effect of promoting this as a norm, in this situation, is likely to be that A gets nudged in the wrong direction.
A dynamic which I think is somewhat common, and which explains some of what's going on in general, is conversations which go like this (exaggerated):
Person: What do you think about [controversial thing X]?
Rationalist: I don't really care about it, but pedantically speaking, X, with lots of caveats.
Person: Huh? Look at this study which proves not-X. [Link]
Rationalist: The methodology of that study is bad. Real bad. While it is certainly possible to make bad arguments for true conclusions, my pedantry doesn't quite let me agree with that conclusion. More importantly, my hatred for the methodological error in that paper, which is slightly too technical for you to understand, burns with the fire of a thousand suns. You fucker. Here are five thousand words about how an honorable person could never let a methodological error like that slide. By linking to that shoddy paper, you have brought dishonor upon your name and your house and your dog.
Person: Whoa. I argued [not-X] to a rationalist and they disagreed with me and got super worked up about it. I guess rationalists believe [X] really strongly. How awful!
TAI = Transformative AI
I think you're missing too many prerequisites to follow this post, and that you're looking for something more introductory.
Not sure why you're linking to that comment here, but: the reason that link was broken for niplav is that your shortform-container post is marked as a draft, which makes it (and your shortform comments) inaccessible to non-admins. You can fix it by editing the shortform container post and clicking Publish, which will make it accessible again.
We're looking into it.
It's only an argument against the EMH if you take the exploitability of AI timeline prediction as axiomatic. If you unpack the EMH a little, traders not analyzing transformative AI can also be interpreted as evidence of inexploitability.
Lots of the comments here are pointing at details of the markets and whether it's possible to profit off of knowing that transformative AI is coming. Which is all fine and good, but I think there's a simple way to look at it that's very illuminating.
The stock market is good at predicting company success because there are a lot of people trading in it who think hard about which companies will succeed, doing things like writing documents about those companies' target markets, products, and leadership. Traders who do a good job at this sort of analysis get more funds to trade with, which makes their trading activity have a larger impact on the prices.
Now, when you say that:
the market is decisively rejecting – i.e., putting very low probability on – the development of transformative AI in the very near term, say within the next ten years.
I think what you're claiming is that market prices are substantially controlled by traders who have a probability like that in their heads. Or traders who are following an algorithm which had a probability like that in the spreadsheet. Or something like that. Some sort of serious cognition, serious in the way that traders treat company revenue forecasts.
And I think that this is false. I think their heads don't contain any probability for transformative AI at all. I think that if you could peer into the internal communications of trading firms, and you went looking for their thoughts about AI timelines affecting interest rates, you wouldn't find thoughts like that. And if you did find an occasional trader who had such thoughts, and quantified how much impact they would have on the prices if they went all-in on trading based on that theory, you would find their impact was infinitesimal.
Market prices aren't mystical, they're aggregations of traders' cognition. If the cognition isn't there, then the market price can't tell you anything. If the cognition is there but it doesn't control enough of the capital to move the price, then the price can't tell you anything.
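To make the "aggregation of cognition" point concrete, here's a toy sketch (my own illustration, not a model of any real market): treat a price as a capital-weighted average of the beliefs traders actually hold, where most traders simply have no belief about the question at all.

```python
def aggregate_belief(traders):
    """Capital-weighted average of the beliefs traders actually hold.

    traders: list of (capital, belief) pairs, where belief is a
    probability in [0, 1], or None if the trader has no belief
    about the question at all.
    """
    informed = [(c, b) for c, b in traders if b is not None]
    total_informed_capital = sum(c for c, _ in informed)
    if total_informed_capital == 0:
        return None  # no cognition behind the price on this question
    return sum(c * b for c, b in informed) / total_informed_capital

# Hypothetical numbers: almost all capital has no opinion on AI
# timelines; one small trader thinks P(transformative AI soon) = 0.3.
traders = [(1_000_000, None)] * 100 + [(10_000, 0.3)]
print(aggregate_belief(traders))
```

The aggregate here comes entirely from the one informed trader, who controls about 0.01% of the capital; in a real market that sliver would have a correspondingly negligible effect on the actual price, which is the sense in which the price "can't tell you anything."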
I think this post is a trap for people who think of market prices as a slightly mystical source of information, who don't have much of a model of what cognition is behind those prices.
(Comment cross-posted with the EA forum version of this post)
People who are keeping their prices up to date gain a little, while people whose prices were last negotiated a while ago lose a little. But this effect is small, and slightly misses the point.
What inflation means, fundamentally, is that there is more money chasing less stuff. Or rather, that the money-to-stuff ratio has shifted. That means either the amount of money going around has increased, or the amount of stuff to buy with it has decreased.
Usually the amount of stuff-to-buy is increasing, since we keep building more and better factories and generally increasing GDP. So in the typical case, inflation means the amount of money in circulation is increasing even more than that. You can think of this as the government levying a tax on dollar-denominated accounts, equal to the amount of money printed. In the case of the US government, there are a bunch of extra steps involved, which mostly just serve to obfuscate this fact.
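The arithmetic here is just quantity-theory back-of-the-envelope math. With made-up illustrative numbers (not data): if the money supply grows faster than real output, the difference shows up as inflation.

```python
# Hypothetical numbers for illustration only.
money_growth = 0.08   # money supply up 8%
stuff_growth = 0.03   # real output (stuff-to-buy) up 3%

# Prices scale with the money-to-stuff ratio.
inflation = (1 + money_growth) / (1 + stuff_growth) - 1
print(f"{inflation:.1%}")  # roughly 4.9%
```

So even though output grew, money grew by more, and the gap is the implicit "tax" on dollar-denominated accounts described above.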
In some cases, though, inflation reflects a collapse in supply. Eg at the start of the Russia/Ukraine war, a bunch of trade was disrupted in ways that meant actual total production fell. Eg Russian natural gas exports to Europe stopped and the gas was flared off instead. In that case, no one is profiting off the inflation; there is simply a loss, and the shifting prices are simply part of the mechanism that determines who bears that loss.
This historical incident report fails to mention the true root cause, which has since been addressed: Wolves were not yet locally driven to extinction.
I've seen it happen with Roko's Basilisk (in both directions: falsely inferring that the basilisk works as-described, and falsely inferring that the person is dumb for thinking that it works as-described). I've seen it happen with AGI architecture ideas (falsely inferring that someone is too credulous about AGI architecture ideas, which nearly always turn out to not work).