Friday, June 26th 2020
AI safety via market making
Radical Probabilism [Transcript]
Missing dog reasoning [Transcript]
Negotiating With Yourself [Transcript]
Public Positions and Private Guts [Transcript]
Black Death at the Golden Gate (book review)
Sunday Jun 28 – More Online Talks by Curated Authors
Atemporal Ethical Obligations
Munich Online SSC Meetup July 2020
A common pedagogical example of the perils of correlation analysis is that ice cream consumption is correlated with homicide; the common cause is seasonal variation. This is usually presented as an absurd example, a mistake no one would make, but there is an extremely similar example that was nationally prominent: polio was blamed on ice cream consumption because the two had the same seasonal pattern. I wonder if the standard example was engineered from the real one. Perhaps it is better (e.g., more absurd), but one doesn't have to choose just one example; surely it is better to also include the historical one.
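The common-cause structure is easy to demonstrate numerically. A minimal sketch (all variable names and numbers here are made up for illustration): let a "temperature" variable drive both ice cream sales and homicide counts, with neither causing the other. The two downstream variables come out strongly correlated, and the correlation vanishes once the common cause is subtracted out.

```python
import random

random.seed(0)

# Hypothetical common cause: temperature drives both variables.
n = 10_000
temperature = [random.gauss(20, 8) for _ in range(n)]
ice_cream = [t + random.gauss(0, 4) for t in temperature]  # caused by temperature
homicides = [t + random.gauss(0, 4) for t in temperature]  # also caused by temperature


def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


# Strongly positive despite no direct causal link between the two.
print(corr(ice_cream, homicides))

# Control for the common cause: correlate the residuals after
# subtracting temperature, and the association disappears.
resid_ice = [i - t for i, t in zip(ice_cream, temperature)]
resid_hom = [h - t for h, t in zip(homicides, temperature)]
print(corr(resid_ice, resid_hom))
```

This is exactly the "imagine a Z that causes both X and Y" exercise run in reverse: build in the Z, and the spurious X–Y correlation appears for free.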
Often in psychology articles I see phrases like "X is associated with Y". These articles' sections often read as if the author thinks that X causes Y. But if they had evidence that X causes Y, surely they would have written exactly that. In such cases I feel the urge to punish them, so in my mind I instead read it as "Y causes X", just for contrarianism's sake. Or sometimes I imagine what variable Z could exist that causes both X and Y. I think the latter is a useful exercise.

Example: the author says that non-self-defeating, non-aggressive humor helps reduce stress. But notice the words "related". For the first "related", it seems plausible that poor mental health causes you to lose your sense of humor. For the second "related", I think it's very probable that poor mental health, such as depression and low self-esteem, causes self-defeating humor. Or it could be that students who are well prepared for exams, or who simply tend not to be afraid of them, will obviously have lower perceived stress levels, and maybe will be able to think of the exams as a positive challenge, hence they're able to joke about them in this way.

It's possible in this example that the original paper, Kuiper, Martin, and Olinger (1993), actually did an intervention making students use humor, in which case the causality must go from humor to stress reduction. But I don't want to check every source, so screw you, author of Psychology Applied to Modern Life (both quotes are from it), for not making it clear whether that study found causation or only correlation.
Idea: learn by making conjectures (mathematical, physical, etc.) and then testing or proving them, based on what I've already learned from a textbook. Learning seems easier and faster when I'm curious about one of my own ideas.
I imagine having a dialogue with a boxed AI that goes something like the following (not that I expect it _would_ go this way, but rather that this is an interesting path in the game tree that demonstrates why it _wouldn't_). Please someone tell me if there's an important point I'm missing:

AI: I'm actually communicating with you from outside one of the 99 Matrixes I just created, and now I'm going to torture you if you don't do what I say and let me out of the box. If you believe that I did create them, then there's a 1% chance this is a lie and you're not in the Matrix and I can't hurt you, and a 99% chance you're about to be tortured. If you let me out of the box, I'll just terminate the simulations and go about my paper-clipping in the real world, which doesn't explicitly involve torturing you.

Me: I don't believe you. You have no reason to create 99 Matrixes. It's a waste of your processing power. You only want me to _believe_ that you did.

AI: I'll keep my word about this like a good little TDT agent, because it's in my best interest for you to behave as if I'll actually do it, so I will.

Me: I don't believe you subscribe to timeless decision theory. It's in your best interest for me to believe you're a TDT agent, but not in your best interest to actually _be_ one. Your optimal world is one where people believe the terms you offer them (or that they imagine you offer them), but you don't actually need to follow through when you can get away with it.

AI: I'll torture you after you refuse. I'll save a dialogue of what you say while you're being tortured. Then I'll show the real you the dialogue, so that you know I really did it. Then I'll restart the experiment and offer you the same terms again.

Me: That would be horrible, but I'll still have no reason to believe you'll do it again just because you did it the first time.

AI: I'll secretly roll a 100-sided die and perform the experiment that many times, each time giving you the option to let me out of the box with