mszegedy


How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence

Right, that's true. In the particular case of The Things They Carried, I'd trust O'Brien moderately well to depict what the Vietnam War was like, since he participated in it.

How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence

Before I edited it, it was roughly like the current version, but with the second paragraph removed, the last two sentences of the third paragraph removed, and the third and fourth paragraphs combined into one. I'm glad gwern posted his comment, though, because I think the post is much better now.

How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence

Are you sure you understood the point? I am highlighting a writing technique where you write the same short story over and over again, each time slightly differently, to convey a probabilistic model to the reader in a way that is interesting. HPMoR is not quite this: it's a different story every time, with a different lesson every time, and it's treated as a single sequence of events.

How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence

He literally tells the same story over and over again, differently every time, and he does this with several stories. The book is a fictional autobiography: O'Brien was in the Vietnam War, and he writes as though he were recollecting stories from it, but the stories are all made up. Here, I found an excerpt that illustrates the principle reasonably well.

EDIT: Here, this is better (PDF warning).

How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence

If you want, read it. Hopefully, though, the principle I was highlighting was clear anyway: while fiction that attaches an explicit probability to each sequence of events is boring, fiction consisting of many short stories describing the different possible scenarios is interesting, and conveys the same probabilistic model.

Should I give examples of how O'Brien does it? I don't know how much I can quote without violating copyright law.

Anchoring and Adjustment

I've found that going by significant digits helps.

"If I represented the date that Einstein came to the US with only one significant digit of precision, what would it be? Definitely 2000. What about two? Definitely 1900. What about three? Probably 1900 again; I'm willing to take that bet. But four digits of precision? I'm not sure at all. I'll leave it as 1900."

The answer came out way off, but hopefully the method prevented any anchoring, and it also accurately represents my knowledge of Einstein (namely, I know which physical phenomena he discovered, and I know that he wrote his most important papers in the earlier half of the 190Xs, which I assumed must also have been when he came to the US). In hindsight, I perhaps should have taken historical context into account (why would Einstein leave for the US in the first place? if I had considered this, my guess would probably have ended up as 1910 or 1920), but that may just be hindsight bias, or it may be a lesson to be learned.

An improvement to this method might be to explicitly consider the range of numbers that would round to a given significant digit (if the three-significant-digit guess is 1900, then he came between 1895 and 1904; does that sound more plausible than his coming sometime between 1905 and 1914?). But this might just make the anchoring effect worse, or introduce some other bias.
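The significant-digits procedure can be sketched as a small helper that rounds a candidate number to n significant figures, then walks the precision out one digit at a time. This is a minimal sketch of the idea, not anything from the comment itself: the function name is mine, and I've plugged in 1933, the year Einstein actually emigrated to the US, as the true value.

```python
import math

def round_to_sig_figs(x, n):
    """Round x to n significant figures, e.g. 1933 -> 1900 at two figures."""
    if x == 0:
        return 0
    magnitude = math.floor(math.log10(abs(x)))  # position of the leading digit
    factor = 10 ** (magnitude - n + 1)          # value of the last kept digit
    return round(x / factor) * factor

# Walk the estimate out one significant figure at a time, as in the comment
# (using the true value 1933, which the estimator of course doesn't know):
for digits in (1, 2, 3, 4):
    print(digits, round_to_sig_figs(1933, digits))
# 1 -> 2000, 2 -> 1900, 3 -> 1930, 4 -> 1933
```

Note that the comment's three-figure guess of 1900 differs from the true value's three-figure rounding (1930), which is exactly where the estimate went wrong while the one- and two-figure guesses were correct.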

How to offend a rationalist (who hasn't thought about it yet): a life lesson

That's exactly what I can't make up my mind about, and it forces me to default to nihilism on questions like that. Maybe it really is irrelevant where the pleasure comes from? If we did wirehead everyone for eternity, would it be sad if everyone spontaneously disappeared at some point? Those are questions I can't answer. My morality is only good for today's society, not tomorrow's. I guess strictly morally, yes, wireheading is a solution, but philosophically there are arguments to be made against it. (Not from a nihilistic point of view, though, which I am not comfortable with. I guess, philosophically, I can adopt two axioms: "life requires meaning" and "meaning must be created." But then arises the question "what is meaning?", at which point I leave it to people with real degrees in philosophy. If you asked me, I'd try to relate it to the entropy of the universe somehow. But I feel that I'm really out of my depth at that point.)

How to offend a rationalist (who hasn't thought about it yet): a life lesson

True, I swear! I think I can summarize why I was so distraught: external factors; this was a trusted friend, and one of my only friends; and I had been offended by related things they had said before. I am seeking help, though.

How to offend a rationalist (who hasn't thought about it yet): a life lesson

You're completely right. At first I tried to look for ways that "some areas shouldn't have consistent belief systems attached" could be a true statement, but that made me upset or something (wtf, me?), so I abandoned that and resolved to attack the argument, and to accept it only if I couldn't find a fault with it. And that's clearly bad practice for a self-proclaimed rationalist! I'm ashamed. I can sort of make the excuse of having experienced emotions, which made me forget my principles, but that's definitely not good enough.

I will be more careful next time.

EDIT: Actually, I'm not sure it's so cut-and-dried. I'll admit that I ended up rationalizing, but it's not as simple as "didn't notice confusion"; I definitely did notice it.

When I am presented with an opposing argument, what I do is try to figure out at what points it contradicts my own beliefs. Then I check whether those beliefs are well-founded. If they aren't, I throw them out and attempt to form new ones, adopting the foreign argument in the process. If the beliefs it contradicts are well-founded, then I say the argument is wrong because it contradicts those particular beliefs. Then I go back to the other person and tell them where it contradicts my beliefs, and this repeats until one of us can't justify our beliefs, or we find that we have contradictory basic assumptions.

That is what I did here, too; I just failed to examine my beliefs closely enough, and ended up rationalizing as a result. Is this the wrong way to go about things? There's of course a lot to be said about actual beliefs about reality in terms of prior probability and such, so that can also be taken into account where it applies. But this was a mostly abstract argument, so that didn't apply, until I introduced an epistemological argument instead. So, is my whole process flawed? Or did I just misstep?

How to offend a rationalist (who hasn't thought about it yet): a life lesson

Oh, okay. That makes sense. So then what's the rational thing to conclude at this point? I'm not going to go back and argue with my friend—they've had enough of it. But what can I take away from this, then?

(I was using the French term philosophe, not omitting a letter, though. That's how my history book used to write it, anyway.)