NoisyEmpire
NoisyEmpire has not written any posts yet.

Imagine that you and I are sitting at a table. Hidden in my lap, I have a jar of beans. We are going to play the traditional game wherein you try to guess the number of beans in the jar. However, you don’t get to see the jar. The rule is that I remove beans from the jar and place them on the table for as long as I like, and then at an arbitrary point ask you how many beans there are in total. That’s all you get to see.
One by one, I remove a dozen beans. As I place the twelfth bean on the table in front of you, I... (read 543 more words →)
Surveyed.
An odd technique, which I'll rate at +5: whilst already locked into some mundane but necessary task (e.g. grocery shopping, dishes, wading through work e-mails), consciously forcing my brain to complete the "Man, I wish I could be doing [blank] instead" template with some other mundane task that I would normally procrastinate on - then immediately switching to that other task when the first task is done.
For example: "These dishes are taking so long - I really wish I could be... [hijack the train of thought by picking something else on my to-do list]... doing research for that article." I'll then make my brain, while still doing dishes, concretely imagine working on the... (read more)
This is a really excellent technique in a lot of contexts.
I offer a word of caution about actually using it with theists, even those less Biblically literate than Yvain's friend: the catch-all excuse that many (not all) theists make for Biblical atrocities is precisely that they were commanded by God, and thus on some version of Divine Command Theory are rendered okay - not that the atrocities are in some observable way actually less bad than those committed by other groups or religions.
Thanks! At the risk of falling prey to the planning fallacy, I should have some draft-worthy stuff next month.
I'm kind of thrilled to find this discussion occurring. I've just managed to actually start writing my long-planned, akrasia-blocked series of rationalist adventures for kids (say, smart 7-year-olds through 12-year-olds). It's a fantasy-adventure, a little bit zany, a little bit dark, and is intended to promote basic virtues like curiosity, empiricism, changing your mind, and admitting when you don't know.
If and when I have drafts of a few stories, would there be interest in me writing a post explaining the project in more depth and requesting criticism/feedback?
I will note that though consequentialism is a fine ideal theory, at some point you really do have to implement a procedure, which means that, in practice, all consequentialists will be deontologists.
Agreed. This is usually called “rule utilitarianism” – the idea that, in practice, it actually conserves utils to just make a set of basic rules and follow them, rather than recalculating from scratch the utility of any given action each time you make a decision. Like, “don’t murder” is a pretty safe one, because it seems like in the vast majority of situations taking a life will have negative utility. However, it’s still worth distinguishing this sharply from deontology, because if you ever did calculate and find a situation in which your rule resulted in less utility – like pushing the fat man in front of the train – you’d break the rule. The rule is an efficiency approximation rather than a fundamental posit.
For much the same reasons that people can be mistaken about their own desires, people can be mistaken about what they would actually consider awesome if they were to accurately model all the facts. E.g. people who post flashing images to epilepsy boards or suicide pictures to battered parents are either 1) failing to truly envision the potential results of their actions, and consequently overvaluing the immediate minor awesomeness of the irony of the post (or whatever) against the distant, unseen, major anti-awesomeness of seizures/suicides, or 2) actually socio- or psychopaths. Given the infrequency of real sociopathy, it’s safe to assume a lot of the former happens, especially in the impersonal, empathy-sapping environment of the Internet.
I’m Taylor Smith. I’ve been lurking since early 2011. I recently finished a bachelor’s in philosophy but got sort of fed up with it near the end. Discovering the article on belief in belief is what first hooked me on LessWrong, as I’d already had to independently invent this idea to explain a lot of the silly things people around me seemed to be espousing without it actually affecting their behavior. I then devoured the Sequences. Finding LessWrong was like finding all the students and teachers I had hoped to have in the course of a philosophy degree, all in one place. It was like a light switching on. And it made... (read more)
Good point; you're right that his reasoning would be correct if he knew that, e.g., I had used a random number generator to pick a number between 1 and (total # of beans) and resolved to ask him, only on that numbered bean, to guess the upper bound on the total.
Perhaps to make the bean-game more similar to the original problem, I ought to ask for a guess on the total after every bean is placed, since every bean represents an observer who could be fretting about the Doomsday Argument.
Analogously, it would be misleading to imagine that You the Observer were placed in the human timeline at a single randomly-chosen point by, say, Omega, since every bean (or human) is in fact an observer.
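To make that sampling assumption concrete, here's a minimal Python sketch (my own illustration; the trial count and jar sizes are arbitrary). It checks how often the Doomsday-style guess of "twice the current bean's index" turns out to be an upper bound on the true total, assuming the asker picks the questioning bean uniformly at random; it comes out to roughly half the time, which is the sense in which the reasoning would be "correct" under that assumption.

```python
import random

def simulate(trials=100_000, max_beans=1_000):
    """Estimate how often 'guess = 2 x (current bean index)' is an upper
    bound on the true total, when the question is asked on a uniformly
    random bean. (Illustrative numbers only.)"""
    hits = 0
    for _ in range(trials):
        total = random.randint(1, max_beans)  # true number of beans in the jar
        asked_at = random.randint(1, total)   # bean on which the question is asked
        if 2 * asked_at >= total:             # Doomsday-style guess brackets the truth
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print(f"2 x (current bean) was an upper bound in ~{simulate():.0%} of trials")
```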
Unfortunately I'm getting muddled and am not clear what consequences this has. Thoughts?