AIS student, self-proclaimed aspiring rationalist, very fond of game theory.
"The only good description is a self-referential description, just like this one."
It's thought-provoking.
Many people here identify as Bayesians, but are as confused as Saundra by the troll's questions, which indicates that they're missing something important.
It wasn't mine. I did grow up in a religious family, but becoming a rationalist came gradually, without a sharp break from my social network. I always figured the people around me were making all sorts of logical mistakes, though, and I noticed deep flaws in what I was taught very early on.
It's not. The paper is hype; the authors don't actually show that this could replace MLPs.
This is very interesting!
I did not expect that Chinese respondents would be more optimistic about benefits than worried about risks, or that they would rank it so low as an existential risk.
This contrasts with posts I see on social media and articles showcasing safety institutes and discussing doomer opinions, which gave me the impression that Chinese academia was generally more concerned about AI risk, and especially existential risk, than US academia.
I'm not sure how to reconcile this survey's results with my previous model. Was I just wrong and updating too much on anecdotal evidence?
How representative of policymakers and of influential scientists do you think these results are?
About the Christians around me: it is not explicitly considered rude, but it is a signal that you want to challenge their worldview, and if you predictably ask that kind of question often, you won't be welcome in open discussions.
(You could do it once or twice for anecdotal evidence, but if you actually want to know whether many Christians believe in a literal snake, you'll have to do a survey.)
I disagree – I think that no such perturbations exist in general, rather than that we have simply not had any luck finding them.
I have seen one such perturbation. It was two images of two people, one of which was clearly male and the other female, though in 15 seconds of looking I wasn't able to tell any significant difference between the two images except for a slight difference in hue.
Unfortunately, I can't find this example again after a 10-minute search. It was shared on Discord; the people in the images were white and freckled. I'll save it if I find it again.
The pyramids in Mexico and the pyramids in Egypt are related via architectural constraints and human psychology.
In practice, when people say "one in a million" in that kind of context, the real probability is much higher than that. I haven't watched Dumb and Dumber, but I'd be surprised if Lloyd did not, actually, have a decent chance of ending up with Mary.
On one hand, we claim [dumb stuff using made-up impossible numbers](https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument), and on the other hand, we dismiss those numbers and fall back on there's-a-chancism.
These two phenomena don't always perfectly compensate for one another (as examples in both posts show), but common sense is more reliable than it may seem at first. (I'm not saying it's the correct approach, though.)
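(To illustrate the first failure mode with made-up numbers, purely for the sake of the example: even if my inside-view argument outputs a one-in-a-million probability for some event $X$, my confidence in that conclusion is capped by my confidence in the argument itself. Say I give the argument a 99% chance of being sound, and would assign something like 10% to $X$ if it turned out to be flawed:

$$P(X) = P(\text{sound})\,P(X \mid \text{sound}) + P(\text{flawed})\,P(X \mid \text{flawed}) \approx 0.99 \times 10^{-6} + 0.01 \times 0.1 \approx 10^{-3}.$$

The flawed-argument term dominates, so quoting the raw $10^{-6}$ overstates my actual confidence by about three orders of magnitude.)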
Epistemic status: amateur, personal intuitions.
If this were the case, it would make sense to hold dogs (rather than their owners, or their breeding) responsible for aggressive or violent behaviour.
I'd consider whether punishing the dog would make the world better, or whether changing the system that led to its breeding, or providing incentives to the owner or any combination of other actions would be most effective.
Consequentialism is about considering the consequences of actions to judge them, but various people might wield this in various ways.
Implicitly, with this concept of responsibility, you're considering a deontological approach to bad behavior: punish the guilty (perhaps using consequentialism to determine who's guilty, though that's unclear from your argumentation afaict).
In an idealized case, I care about whether the environment I operate in (including other people's actions and other people's dogs' actions) is performing well only insofar as I can change it; put differently, I care only about how I can perform better.
(Then, because the world is messy, I also need to account for coordination with other people whose intuitions might not match mine, society's recommendations, my own human impulses, etc. My moral system is only an intuition pump, for lack of a satisfactory metaethics.)
Seems like you need to go beyond arguments from authority and stating your conclusions, and instead go down to the object-level disagreements. You could say instead "Your argument for ~X is invalid because blah blah", and if Jacob says "Your argument for the invalidity of my argument for ~X is invalid because blah blah", then it's better than before, because it's easier to evaluate argument validity than ground truth.
(And if that process continues ad infinitum, consider that someone who cannot evaluate the validity of the simplest arguments is not worth arguing with.)