AIS student, self-proclaimed aspiring rationalist, very fond of game theory.
"The only good description is a self-referential description, just like this one."
Much of your AI-risk case for supporting Harris seems to hinge on this, and I find it very shaky. How wide are your confidence intervals here?
My own guesses are much fuzzier. By your argument, if my intuition were .2 vs .5, that would be an overwhelming case for Harris, but I'm unfamiliar enough with the topic that it could easily be the reverse.
I would greatly appreciate more details on how you reached your numbers (and, if they're vibes, reasons to trust those vibes).
Alternatively, I feel like I should somehow discount the strength of the AI-risk argument by how likely I think these numbers are to roughly hold, but I don't know a principled way to do that.
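The closest I can get is a toy Monte Carlo sketch: treat each risk number as a fuzzy distribution rather than a point estimate and check how often the ordering that drives the argument actually holds. Everything below is hypothetical, assuming the .2 vs .5 intuitions and Beta distributions I made up to encode my own fuzziness:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical: encode each intuition as a low-concentration Beta
# distribution centered on the stated number, reflecting how fuzzy
# my guesses are.
p_harris = rng.beta(2, 8, n)  # mean 0.2
p_trump = rng.beta(5, 5, n)   # mean 0.5

# How often does the ordering that drives the argument actually hold?
print("P(Harris risk < Trump risk):", (p_harris < p_trump).mean())

# The expected gap, which the argument's strength should scale with.
print("E[risk gap]:", (p_trump - p_harris).mean())
```

Under this toy model the expected gap just equals the gap between the point estimates, so the entire discount comes from how often the ordering flips, which at least gives a number to argue about.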
Seems like you need to go beyond arguments from authority and stating your conclusions, and instead go down to the object-level disagreements. You could say instead "Your argument for ~X is invalid because blah blah", and if Jacob says "Your argument for the invalidity of my argument for ~X is invalid because blah blah", that's still progress, because it's easier to evaluate argument validity than ground truth.
(And if that process continues ad infinitum, consider that someone who cannot evaluate the validity of the simplest arguments is not worth arguing with.)
It's thought-provoking.
Many people here identify as Bayesians, but are as confused as Saundra by the troll's questions, which indicates that they're missing something important.
It wasn't mine. I did grow up in a religious family, but becoming a rationalist happened gradually, without a sharp break from my social network. I always figured the people around me were making all sorts of logical mistakes, though, and noticed deep flaws in what I was taught very early on.
It's not. The paper is hype; the authors don't actually show that this could replace MLPs.
This is very interesting!
I did not expect that Chinese respondents would be more optimistic about benefits than worried about risks, or that they would rank AI so low as an existential risk.
This contrasts with the posts I see on social media and with articles showcasing safety institutes and discussing doomer opinions, which gave me the impression that Chinese academia was generally more concerned about AI risk, and especially existential risk, than US academia.
I'm not sure how to reconcile this survey's results with my previous model. Was I just wrong and updating too much on anecdotal evidence?
How representative of policymakers and of influential scientists do you think these results are?
About the Christians around me: it is not explicitly considered rude, but it does signal that you want to challenge their worldview, and if you predictably ask that kind of question often, you won't be welcome in open discussions.
(You could do it once or twice for anecdotal evidence, but if you actually want to know whether many Christians believe in a literal snake, you'll have to do a survey.)
I disagree – I think that no such perturbations exist in general, rather than that we have simply not had any luck finding them.
I have seen one such perturbation. It was two images of two people, one clearly male and the other clearly female, yet in 15 seconds of looking I couldn't find any significant difference between the two images except for a slight difference in hue.
Unfortunately, I couldn't find the example again after a 10-minute search. It was shared on Discord; the people in the images were white and freckled. I'll save it if I come across it again.
The pyramids in Mexico and the pyramids in Egypt are related via architectural constraints and human psychology.
I agree with the broad idea, but I'm going to need a better implementation.
In particular, the five criteria you give are insufficient, because the example you give scores well on them and is still atrocious: if we decreed that "black people" was unacceptable and should be replaced by "black peoples", it would cause a lot of confusion, given how similar the two terms are and how little the change accomplishes.
A cascade happens for a specific reason, and the change aims to resolve that reason. For example, "Jap" is used as a slur, and not saying it shows you don't mean to use a slur. For "black people/s", I guess the reason would be something like not implying that there is a single black people, which only makes sense in the context of a specialized discussion.
I can't adhere to the criteria you proposed, because they don't work, and I don't want to bother thinking that deeply about every change of term on an everyday basis, so for now I'll keep using intuition to decide when to resolve respectability cascades.
As for deciding when to trigger a respectability cascade, your criteria are interesting as an attempt at a principled approach, but I'm still not sure they outperform unconstrained discussion of the subject (which I assume is the default alternative for anyone who cares enough about deliberately triggering respectability cascades to have read your post in the first place).