steven0461's Comments

Open & Welcome Thread - February 2020

I don't know what the reasons are off the top of my head. I'm not saying the probability rise caused most of the stock market fall, just that it has to be taken into account as a nonzero part of why Wei won his 1 in 8 bet.

Open & Welcome Thread - February 2020

If the market is genuinely this beatable, it seems important for the rationalist/EA/forecaster cluster to take advantage of future such opportunities in an organized way, even if it just means someone setting up a Facebook group or something.

(edit: I think the evidence, while impressive, is a little weaker than it seems at first glance, because my impression from Metaculus is that the probability of the virus becoming widespread has gotten higher in recent days for reasons that look unrelated to your point about what the economic implications of a widespread virus would be.)

Have epistemic conditions always been this bad?

Probably it makes more sense to prepare for scenarios where ideological fanaticism is widespread but isn't wielding government power.

Have epistemic conditions always been this bad?

I think it makes sense to take an "epistemic prepper" perspective. What precautions could one take in advance to make sure that, if the discourse became dominated by militant flat-earth fanatics, round-earthers could still reason together, coordinate, and trust each other? What kinds of institutions would have made it easier for a core of sanity to survive through, say, 1930s Germany or 1960s China? For example, would it make sense to have an agreed-upon epistemic "fire alarm"?

Preliminary thoughts on moral weight

As usual, this makes me wish for UberFact or some other way of tracking opinion clusters.

Are "superforecasters" a real phenomenon?

From participating on Metaculus I certainly don't get the sense that there are people who make uncannily good predictions. If you compare the community prediction to the Metaculus prediction, it looks like there's a 0.14 difference in average log score, which I guess means the combination of the best predictors tends to put e^(0.14), or about 1.15, times as much probability on the correct answer as the time-weighted community median. (The postdiction is better, but I guess subject to overfitting?) That's substantial, but presumably the combination of the best predictors is better than every individual predictor. The Metaculus prediction also seems to be doing a lot worse than the community prediction on recent questions, so I don't know what to make of that. I suspect that, while some people are obviously better at forecasting than others, the word "superforecasters" has no content outside of "the best forecasters" and is just there to make the field of research sound more exciting.
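
Spelling out that arithmetic (a sketch, assuming the log score is the natural logarithm of the probability assigned to the realized outcome, so that a difference in average log scores exponentiates to a geometric-mean ratio of probabilities):

```latex
% Gap in average log scores (Metaculus prediction vs. community prediction):
\[
\Delta \;=\; \overline{\ln p_{\text{Metaculus}}} \;-\; \overline{\ln p_{\text{community}}} \;=\; 0.14
\]
% Exponentiating recovers the (geometric-mean) factor by which the
% Metaculus prediction out-weights the correct answer:
\[
\frac{p_{\text{Metaculus}}}{p_{\text{community}}} \;\approx\; e^{\Delta} \;=\; e^{0.14} \;\approx\; 1.15
\]
```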

Less Wrong Poetry Corner: Walter Raleigh's "The Lie"

Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x?

Maybe not; probably; yes.

Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I've done field work.)

Most of the consequences I'm worried about are bad effects on the discourse. I don't know what experiment I'd do to figure those out. I agree you have more data than me, but you probably have 2x the personal data instead of 10x the personal data, and most relevant data is about other people, because there are more of them. Personal consequences are more amenable to experiment than discourse consequences, but I already have lots of low-risk data here, and high-risk data would carry high risk without being qualitatively more informative. (Doing an experiment here doesn't teach you qualitatively different things than watching the experiments that the world constantly runs.)

Can you be a little more specific? "Discredited" is a two-place function (discredited to whom).

Discredited to intellectual elites, who are not only imperfectly rational, but get their information via people who are imperfectly rational, who in turn etc.

"Speak the truth, even if your voice trembles" isn't a literal executable decision procedure—if you programmed your AI that way, it might get stabbed. But a culture that has "Speak the truth, even if your voice trembles" as a slogan might—just might be able to do science or better—to get the goddamned right answereven when the local analogue of the Pope doesn't like it.

It almost sounds like you're saying we should tell people they should always speak the truth even though it is not the case that people should always speak the truth, because telling people they should always speak the truth has good consequences. Hm!

I don't like the "speak the truth even if your voice trembles" formulation. It doesn't make clear that the alternative to speaking the truth is not lying but not speaking. It also suggests an ad hominem theory of why people aren't speaking (fear, presumably of personal consequences) that isn't always true. To me, this whole thing is about picking battles versus not picking battles rather than about truth versus falsehood. Even though picking your battles means a non-random set of falsehoods remains uncorrected, picking battles is still pro-truth.

If we should judge the Platonic math by how it would be interpreted in practice, then we should also judge "speak the truth even if your voice trembles" by how it would be interpreted in practice. I'm worried the outcome would be people saying "since we talk rationally about the Emperor here, let's admit that he's missing one shoe", regardless of whether the Emperor is missing one shoe, is fully dressed, or has no clothes at all. All else equal, being less wrong is good, but sometimes being less wrong means being more confident that you're not wrong at all, even though you are wrong.

(By the way, I think of my position here as having a lower burden of proof than yours, because the underlying issue is not just who is making the right tradeoffs, but whether making different tradeoffs than you is a good reason to give up on a community altogether.)

Less Wrong Poetry Corner: Walter Raleigh's "The Lie"

Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x? (If so, are you sure that's not why you don't think the truth is that offensive?) Immaterial souls are stabbed all the time in the sense that their opinions are discredited.

Since figuring out human values is hard, what about, say, monkey values?

Given that animals don't act like expected utility maximizers, what do you mean when you talk about their values? For humans, you can ground a definition of "true values" in philosophical reflection (and reflection about how that reflection relates to their true values, and so on), but non-human animals can't do philosophy.

Don't Double-Crux With Suicide Rock

Honest rational agents can still disagree if the fact that they're all honest and rational isn't common knowledge.
