It seems we have agreed that open threads will continue but that they will go in the Discussion section, so here's this month's thread.


I propose we make a series of exercises to go along with the articles of the sequences. These exercises could help readers know how well they understood the material as well as help them internalize it better.

This looks like another of the good ideas people have on here that then doesn't get done. I'm sick of that happening.

If folkTheory creates one exercise as an example, I will make another. I hereby commit to this. If I don't follow through within 2 weeks of folkTheory posting his example, please downvote this comment to negative 10. FolkTheory, please PM me when you have yours so I make sure I see it. Thanks.

Everybody else who wants to see this succeed, feel free to post a similar comment.

EDIT: I'm doing an exercise for Words As Hidden Inferences, and will post the exercise as a discussion post no later than April 17, 2011. If it doesn't match what folkTheory was envisioning, I'll make edits but won't lose the karma.

EDIT2: It's up.

folkTheory (7 points, 13y)
I'm glad this is actually happening, and at great speed. I hereby commit to doing exercises for 'Belief in Belief' and 'Bayesian Judo' by what appears to be our standard commitment: deliver by April 17, 2011, or downvote to -10. Note: it'll be a combined exercise for those two articles, as they're very related.
JoshuaZ (7 points, 13y)
Same commitment.
RobinZ (6 points, 13y)
I propose we create posts in /discussion/ for each post in the sequence containing exercises for that post. I will create a Wiki page now where people can indicate that they have taken charge of creating exercises for any specific post. If I do not edit this comment with a link to said Wiki page within two days, downvote this comment to -10. Edit: Project page. If I do not take charge of creating exercises for at least one page within two days, downvote this comment to -5. Edit: Claimed "Making Beliefs Pay Rent (in Anticipated Experiences)". If I do not submit a page of exercises within two weeks, downvote this comment to -10.
folkTheory (3 points, 13y)
I claimed 'Belief in Belief' and 'Bayesian Judo', see here.
RobinZ (2 points, 13y)
Added to index. Edit: I'd like to beta for that one - I'll PM you an email address.
Giles (2 points, 13y)
Unrelated question: is the "if I don't do this, downvote to -10" meme new?
Sniffnoy (1 point, 13y)
It seems to have started with this thread.
RobinZ (1 point, 13y)
I've seen it before, but it's not a common thing.
Normal_Anomaly (3 points, 13y)
Would you be willing to beta read my exercise for "Words as Hidden Inferences"? If you say yes, I'll email you a Word document.
folkTheory (2 points, 13y)
Absolutely, I'd love to.
David_Gerard (2 points, 13y)
Now that this is happening, I suggest a post (maybe discussion, maybe main) noting that it is happening and with progress and commitments so far.
RobinZ (0 points, 13y)
Seconding. As the one who proposed it, I'd suggest folkTheory should make it.
folkTheory (2 points, 13y)
Done.
David_Gerard (2 points, 13y)
THIS. THIS. Read parent.

I was thinking of starting a sequence of articles summarizing Heuristics and Biases by Kahneman and Tversky for people who don't want to buy or read the book.

I bought it, and it seems like something like this would help me actually stick with reading it long enough to finish it. And make it more memorable.

Would people want that?

Edit: I guess the answer is Yes. I should make time for this.

[anonymous] (0 points, 13y)
.
Richard_Kennaway (2 points, 13y)
Wikipedia has a "Simple English" version; maybe there could be a similar parallel version of the LessWrong wiki? Although I find reading the Simple English Wikipedia a rather mind-numbing experience.
[anonymous] (2 points, 13y)
.
TheOtherDave (3 points, 13y)
Are you familiar with youarenotsosmart.com? It might be more what you're looking for.
[anonymous] (1 point, 13y)
.
atucker (0 points, 13y)
It would be fun, but I'm not sure how memorable it would be. Maybe do them as jokes? Couldn't hurt to do it as a recap, though.
[anonymous] (0 points, 13y)
.
atucker (1 point, 13y)
Like, weighty and burned into my brain in a way that makes it a part of my natural reaction to things. I guess if they were short enough to memorize, though, I could just memorize the list and go through it when I was worried about a bias.
[anonymous] (1 point, 13y)
.
atucker (1 point, 13y)
No, but I associate it with length. Like, I'm normally more affected by novels than blog posts.
[anonymous] (1 point, 13y)
.
David_Gerard (0 points, 13y)
By all means :-) Links to relevant Sequences articles should be achievable as well.
atucker (0 points, 13y)
Yeah. I intend to use existing material whenever appropriate. IIRC, there are quite a few articles on specific cognitive biases floating around here already; they're just not well indexed.
TheOtherDave (1 point, 13y)
You may find this site interesting as well.
atucker (0 points, 13y)
Thanks, this is really helpful. EDIT: Would it make sense to just try and get this guy to post on LW himself? Have we ever tried to do that before?
benelliott (0 points, 13y)
Please do this.

I've just read "Hell is the Absence of God" by Ted Chiang, and upon finishing it I was blown away to such an extent that I was making small inarticulate sounds and distressed facial expressions for about a minute. An instant 10/10 (in spite of its great ability to cause discomfort in the reader, but hey, art =/= entertainment all the time).

I'm compelled to link to an HTML mirror, but I suppose it doesn't have the author's permission. Anyone who'd like to read the story now may look at the first page brought up by googling the title. This is the book in question.

I'm curious as to the opinions of those who have read it.

chris_elliott (8 points, 13y)
I think people on Less Wrong might enjoy my personal favourite Ted Chiang story "Understand", about nootropics. It's also been made available in full on Infinity Plus with permission, here: http://www.infinityplus.co.uk/stories/under.htm
drethelin (5 points, 13y)
Ted Chiang is a master. If you haven't, I recommend reading at least the rest of the stories in the collection that contains that one. To me, it felt like an extrapolation of a lot of existing beliefs. IF you believe that god causes miracles and sends people to heaven or hell, and ALSO that god is unknowable to lesser beings, this is the kind of world that you get.
NancyLebovitz (4 points, 13y)
Emotionally very intense, but essentially an argument against a point of view that I don't have a connection to-- the idea that God is substantially inimical to people, but wants worship. I was raised Jewish (the ethnicity took, the religion didn't), so I fear malevolent versions of Christianity, but I don't exactly hate them in quite the way that people who expect Christianity to be good seem to. ETA: It may not be a coincidence that Chiang's "Seventy-Two Letters" is one of my favorites among his stories. James Morrow (another sf author who spends a lot of time poking at Christianity) doesn't do much for me, either. I seem to be jumping to conclusions about your reaction. What do you think made the story so affecting for you?
gwern (3 points, 13y)
I just read it because of this comment. I was pretty impressed by the few Chiang stories I've read before (Nancy mentions "Seventy-Two Letters" which I was amazed by). He has a very smooth prose style that reminds me of one of my favorite SF authors, Gene Wolfe, and seems to have an intellectual depth comparable to another favorite of mine, Jorge Luis Borges. I have no idea what to make of this one. I'm baffled. I'm horrified, I think. The final lines twist the dagger. Do I take it as a reductio of divine command theories of morality? Of an investigation of true love? Or what?

Do I take it as a reductio of divine command theories of morality? Of an investigation of true love? Or what?

There are small notes attached to each story in my book. The note to this one contains:

(…) For me one of the unsatisfying things about the Book of Job is that, in the end, God rewards Job. (…) One of the basic messages of the book is that virtue isn't always rewarded; bad things happen to good people. Job ultimately accepts this, demonstrating virtue, and is subsequently rewarded. Doesn't this undercut the message? It seems to me that the Book of Job lacks the courage of its convictions: if the author were really committed to the idea that virtue isn't always rewarded, shouldn't the book have ended with Job still bereft of everything?

The story immediately reminded me of the Book of Job, and the note confirmed my suspicion.

A primary role of the Book of Job in the Bible is the reconciliation of reality with a belief in God. It is a crucial point because the empirically experienced reality is that good and bad things happen to people without the apparent influence of some higher being. People may take (or historically have taken) the grandiose and fantastic... (read more)

Normal_Anomaly (2 points, 13y)
A whole book of his is available on Google Books. I've read the first 2.5 stories so far and they are all good, but varying shades of unpleasant.
cousin_it (2 points, 13y)
I've read it and had the same reaction. Most of Chiang's fiction is very good, but this story is my favorite.

There's a fresh Metafilter thread on John Baez's interview of Yudkowsky. It also mentions HP:MoR.

Noticed this comment:

I started reading Harry Potter and the Methods of Rationality once and it drove me crazy. The book's Harry Potter doesn't practice rationality, he practices empiricism.

So people actually do start thinking of the Enlightenment era school of philosophy, like some earlier commenters feared. I also remembered a couple of philosophy blog posts from a few years ago, The Remnants of Rationalism and A Lesson Forgotten, which seem to work from the assumption that 'rationalism' will be understood to mean an abandoned school of philosophy.

Redefining established terms is a crank indicator, so stuff like this might be worth paying attention to.

Wei Dai (8 points, 13y)
I think Eliezer can't be reasonably accused of trying to redefine "rationality" and the problem is on the part of the Metafilter commenter. It seems easy enough to fix though. Just point them to http://en.wikipedia.org/wiki/Rationality or http://books.google.com/books?id=PBftMFyTCR0C&lpg=PA3&dq=rationality&pg=PA3#v=onepage&q&f=false
Risto_Saarelma (7 points, 13y)
Good call. There being an Oxford Handbook of Rationality with a chapter on Bayesianism seems to show that the term is acquiring new connotations on a bit wider scope than just on LW.
Sniffnoy (7 points, 13y)
Tangentially, looking through this, I note that it appears to address the circularity of basing utility on probability and probability on utility. It claims there's a set of axioms that gets you both at once, and it's due to Leonard Savage, 1954. How has this gone unmentioned here? I'm going to have to look up the details of this.
David_Gerard (2 points, 13y)
We need a decent "Bayesian epistemology" article on LW. The SEP one may suck. And EY's "Intuitive Explanation" is, IME, nothing of the sort.
arundelo (3 points, 13y)
If the Metafilter commenter is saying that the book is mistitled because rationalism is the opposite of empiricism, his or her comment doesn't make sense considering that the book's title uses "rationality", not "rationalism". (Compare Google hits for rationality versus rationalism.)

I used to have a hobby of reading Christian apologetics to get a better understanding of how the other side lives. I got some useful insights from this, e.g. Donald Miller's Blue Like Jazz was eye-opening for me in that it helped me better understand the psychology of religious faith. However, most books were a slog and I eventually found more entertaining uses for my time.

Earlier today I saw that a workmate of mine was reading Lee Strobel's The Case for Faith. My policy is to not discuss politics or religion at work, so I didn't bring it up there.

I hadn't read that particular book before, so I was curious about its arguments. Reading over the summary, I remembered again why I quit reading Christian apologetics - they are really boring.

The subtitle of The Case for Faith is A Journalist Investigates the Toughest Objections to Christianity, and it is quite untrue. I can almost dismiss each chapter in the time it takes to yawn. Even if Strobel had good answers to the Problem of Evil, or proved that religious people historically have been less violent than non-religious people, or somehow found a gap in current understanding of evolution, he would still be leagues away from providing e... (read more)

NancyLebovitz (4 points, 13y)
Running Towards the Gunshots: A Few Words about Joan of Arc was the first thing which gave me a feeling of why anyone would want to be Catholic. However, that's the emotional side, not the arguments. tl;dr (and be warned, the piece is highly political): Joan of Arc is the patron saint of disaffected Catholics-- not only does the rant give a vivid picture of what it's like to love Catholicism, it's so large and so old that there's a reasonable chance that it will have something to suit a very wide range of people.

Per talk page - I have just updated the jargon file on the wiki, making it actually a list of jargon with definitions. I've also folded in the previous acronym file, as a jargon file should be a single page. Point your n00bs here. Since it's a wiki, feel free to fix any of my quick one-line definitions you don't like.

(I'm new here and don't have enough karma to create a thread, so I am posting this question here. Apologies in advance if this is inappropriate.)

Here is a topic I haven’t seen discussed on this forum: the philosophy of “Cosmicism”. If you’re not familiar with it check Wikipedia, but the quick summary is that it’s the philosophy invented by H. P. Lovecraft which posits that humanity’s values have no cosmic significance or absolute validity in our vast cosmos; to some alien species we might encounter or AI we might build, our values would be as meaningless... (read more)

cousin_it and Vladimir_Nesov's replies are good answers; at the risk of being redundant, I'll take this point by point.

to some alien species we might encounter or AI we might build, our values would be as meaningless as the values of insects are to us.

The above is factually correct.

humanity’s values have no cosmic significance or absolute validity in our vast cosmos

The phrases "cosmic significance" and "absolute validity" are confused notions. They don't actually refer to anything in the world. For more on this kind of thing you will want to read the Reductionism Sequence.

all our creations and efforts are ultimately futile in a universe of increasing entropy and astrophysical annihilation

Our efforts would be "ultimately futile" if we were doomed to never achieve our goals, to never satisfy any of our values. If the only things we valued were things like "living for an infinite amount of time", then yes, the heat death of the universe would make all our efforts futile. But if we value things that only require finite resources, like "getting a good night's sleep tonight", then no, our efforts are not a priori futile.

Onl... (read more)
jsalvatier (7 points, 13y)
Very well expressed. Especially since it links to the specific sequence that deals with this instead of generally advising to "read the sequences".
TheCosmist (1 point, 13y)
Wow fantastic thank you for this excellent reply. Just out of curiosity, is there any question this "cult of rationality" doesn't have a "sequence" or a ready answer for? ;)
benelliott (7 points, 13y)
The sequences are designed to dissolve common confusions. By dint of those confusions being common, almost everybody falls into them at one time or another, so it should not be surprising that the sequences come up often in response to new questions.
Nisan (5 points, 13y)
You're welcome. The FAQ says:
arundelo (1 point, 13y)
"[R]eality has a well-known [weird] bias."

The standard reply here is that duh, values are a property of agents. I'm allowed to have values of my own and strive for things, even if the huge burning blobs of hydrogen in the sky don't share the same goals as me. The prospect of increasing entropy and astrophysical annihilation isn't enough to make me melt and die right now. Obligatory quote from HP:MOR:

"There is no justice in the laws of Nature, Headmaster, no term for fairness in the equations of motion. The universe is neither evil, nor good, it simply does not care. The stars don't care, or the Sun, or the sky. But they don't have to! We care! There is light in the world, and it is us!"

TheCosmist (0 points, 13y)
So in other words you agree with Lovecraft that only egotism exists?

Wha? There's no law of nature forcing all my goals to be egotistical. If I saw a kitten about to get run over by a train, I'd try to save it. The fact that insectoid aliens may not adore kittens doesn't change my values one bit.

Vladimir_M (6 points, 13y)
That's certainly true, but from the regular human perspective, the real trouble is that in case of a conflict of values and interests, there is no "right," only naked power. (Which, of course, depending on the game-theoretic aspects of the concrete situation, may or may not escalate into warfare.) This does have some unpleasant implications not just when it comes to insectoid aliens, but also the regular human conflicts. In fact, I think there is a persistent thread of biased thinking on LW in this regard. People here often write as if sufficiently rational individuals would surely be able to achieve harmony among themselves (this often cited post, for example, seems to take this for granted). Whereas in reality, even if they are so rational to leave no possibility of factual disagreement, if their values and interests differ -- and they often will -- it must be either "good fences make good neighbors" or "who-whom." In fact, I find it quite plausible that a no-holds-barred dissolving of the socially important beliefs and concepts would in fact exacerbate conflict, since this would become only more obvious.

Negative-sum conflicts happen due to factual disagreements (mostly inaccurate assessments of relative power), not value disagreements. If two parties have accurate beliefs but different values, bargaining will be more beneficial to both than making war, because bargaining can avoid destroying wealth but still take into account the "correct" counterfactual outcome of war.

Though bargaining may still look like "who whom" if one party is much more powerful than the other.
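
A toy numeric illustration of this point, with all numbers invented for the example: suppose the parties contest a prize worth 100, fighting would cost each side 20, and both correctly believe A wins a war with probability 0.8.

    # A minimal sketch of why accurate beliefs make bargaining beat war.
    # All numbers are invented for illustration.
    prize, war_cost, p_a_wins = 100, 20, 0.8

    # Expected payoffs if they fight: the prize goes to the winner,
    # but both sides pay the cost of fighting.
    war_a = p_a_wins * prize - war_cost          # 60
    war_b = (1 - p_a_wins) * prize - war_cost    # 0

    # A bargain that mirrors the counterfactual war outcome but skips
    # the fighting costs: split the prize 80/20.
    bargain_a = p_a_wins * prize                 # 80
    bargain_b = (1 - p_a_wins) * prize           # 20

    assert bargain_a > war_a and bargain_b > war_b  # both prefer the deal

If the two sides instead disagree about the probability (each thinks it is likely to win), each can expect more from war than from any split the other is willing to offer, which is the "inaccurate assessments of relative power" failure mode mentioned above.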

Vladimir_M (9 points, 13y)
How strong perfect-information assumptions do you need to guarantee that rational decision-making can never lead both sides in a conflict to precommit to escalation, even in a situation where their behavior has signaling implications for other conflicts in the future? (I don't know the answer to this question, but my hunch is that even if this is possible, the assumptions would have to be unrealistic for anything conceivable in reality.) And of course, as you note, even if every conflict is resolved by perfect Coasian bargaining, if there is a significant asymmetry of power, the practical outcome can still be little different from defeat and subjugation (or even obliteration) in a war for the weaker side.
AlephNeil (0 points, 13y)
By 'negative-sum' do you really mean 'negative for all parties'? Because, taking 'negative-sum' literally, we can imagine a variant of the Prisoner's Dilemma where A defecting gains 1 and costs B 2, and where B defecting gains 3 and costs A 10.
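
Working out the four outcomes makes the distinction concrete. The payoff numbers below are taken straight from the comment, measured relative to mutual cooperation at zero; the enumeration is just a sketch:

    # AlephNeil's variant Prisoner's Dilemma: A's defection gains A 1
    # and costs B 2; B's defection gains B 3 and costs A 10.
    def payoffs(a_defects, b_defects):
        a = (1 if a_defects else 0) - (10 if b_defects else 0)
        b = (3 if b_defects else 0) - (2 if a_defects else 0)
        return a, b

    for a_d in (False, True):
        for b_d in (False, True):
            a, b = payoffs(a_d, b_d)
            print(f"A defects={a_d}, B defects={b_d}: A={a:+d}, B={b:+d}, sum={a+b:+d}")

    # Mutual defection sums to -8, so it is negative-sum, yet B still
    # ends up at +1, better off than under mutual cooperation.

Defecting is still dominant for each player (A gains 1 and B gains 3 whatever the other does), so both defect; the outcome is negative-sum overall but positive for B, which is the force of the question.
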
cousin_it (0 points, 13y)
I suppose I meant "Pareto-suboptimal". Sorry.
Vladimir_M (1 point, 13y)
How does that make sense? You are correct that under sufficiently generous Coasian assumptions, any attempt at predation will be negotiated into a zero-sum transfer, thus avoiding a negative-sum conflict. But that is still a violation of Pareto optimality, which requires that nobody ends up worse off.
[anonymous] (0 points, 13y)
I don't understand your comment. There can be many Pareto optimal outcomes. For example, "Alice gives Bob a million dollars" is Pareto optimal, even though it makes Alice worse off than the other Pareto optimal outcome where everyone keeps their money.
Vladimir_M (1 point, 13y)
Yes, this was a confusion on my part. You are right that starting from a Pareto-optimal state, a pure transfer results in another Pareto-optimal state.
David_Gerard (4 points, 13y)
As I commented on What Would You Do Without Morality?:

Without an intrinsic point to the universe, it seems likely to me that people would go on behaving with the same sort of observable morality they had before. I consider this supported by the observed phenomenon that Christians who turn atheist seem to still behave as ethically as they did before, without a perception of God to direct them.

This may or may not directly answer your question of what's the correct moral engine to have in one's mind (if there is a single correct moral engine to have in one's mind - and even assuming what's in one's mind has a tremendous effect on one's observed ethical behaviour, rather than said ethical behaviour largely being evolved behaviour going back millions of years before the mind), but I don't actually care about that except insofar as it affects the observed behaviour.
sark (2 points, 13y)
It's perhaps worthwhile pointing out that even as there is nothing to compel you to accept notions such as "cosmic significance" or "only egotism exists", by symmetry, there is also nothing to compel you to reject those notions (except for your actual values of course). So it really comes down to your values. For most humans, the concerns you have expressed are probably confusions, as we pretty much share the same values, and we also share the same cognitive flaws which let us elevate what should be mundane facts about the universe to something acquiring moral force. Also, it's worth pointing out that there is no need for your values to be "logically consistent". You use logic to figure out how to go about the world satisfying your values, and unless your values specify a need for a logically consistent value system, there is no need to logically systematize your values.
Vladimir_Nesov (2 points, 13y)
Read the sequences and you'll probably learn to not make the epistemic errors that generate this position, in which case I expect you'll change your mind. I believe it's a bad idea to argue about ideologies on object level, they tend to have too many anti-epistemic defenses to make it efficient or even productive, rather one should learn a load of good thinking skills that would add up to eventually fixing the problem. (On the other hand, the metaethics sequence, which is more directly relevant to your problem, is relatively hard to understand, so success is not guaranteed, and you can benefit from a targeted argument at that point.)
David_Gerard (1 point, 13y)
You know, I was hoping the gentle admonition to casually read a million words had faded away from the local memepool. Your usage here also happens to serve as an excellent demonstration of the meaning of the phrase as described on RW. I suggest you try not to do that. Pointing people to a particular post or at worst a particular sequence is much more helpful. (I realise it's also more work before you hit "comment", but I suggest that's a feature of such an approach rather than a bug.) Do please consider the possibility that to read the sequences is not, in fact, to cut'n'paste them into your thinking wholesale. TheCosmist: the sequences are in fact useful for working out what people here think, and for spotting when what appears to be an apposite comment by someone is in fact a callout. ciphergoth has described LW as "a fan site for the sequences"; it has grown into more than that, but the description is still useful to know as the viewpoint of many long-term readers. It took me a couple of months of casual internet-as-television-time reading to get through them, since I was actively participating here and all.
Vladimir_Nesov (-2 points, 13y)
The sequences are a specific method of addressing this situation, not a general reference. I don't believe individual references would be helpful; instead I suggest systematic training. As I wrote, you'd need to address this argument, not just state a deontological maxim that one shouldn't send people to read the sequences.
David_Gerard (-2 points, 13y)
I wasn't stating a deontological maxim - I was pointing out that you were being bloody rude in a highly unproductive manner that's bad for the site as a whole. "I suggest you try not to do that."
Vladimir_Nesov (5 points, 13y)
Again, you fail to address the actual argument. Maybe the right thing to do is to stay silent, you could argue that. But I don't believe that pointing out references to individual ideas would be helpful in this case. Also, consider "read the sequences" as a form of book recommendation. Book recommendations are generally not considered "bloody rude". If you never studied topology, and want to understand Smirnov metrization theorem, "study the textbook" is the right kind of advice. Actually changing your mind is an advanced exercise.

Friendly AI: A Dangerous Delusion?

By: Hugo de Garis - Published: April 15, 2011

http://hplusmagazine.com/2011/04/15/friendly-ai-a-dangerous-delusion/

timtyler (1 point, 13y)
Hugo presents 3 main arguments:

* The Evolutionary Engineering Argument
* The Cosmic Ray Argument
* The Naïve Asimov Argument

They all look hopeless to me.

The latest XKCD was brilliant. :)

I have only just discovered that Hacker News is worth following. Since the feed of stuff I read is Twitter, that would be @newsycombinator. I started going back through the Twitter feed a few hours ago and my brain is sizzling. Note that I am not a coder at all, I'm a Unix sysadmin. Work as any sort of computer person? You should have a look.

The YC/HN community was initially built on Paul Graham's essays, just like LW was built on Eliezer's sequences. Those essays are really, really good. If you haven't read them already, here's a linky, start from the bottom.

David_Gerard (0 points, 13y)
I have indeed :-) It's annoying that @newsycombinator links to the pages themselves, not to the Hacker News discussion.
atucker (4 points, 13y)
I actually got to OB/LW through Hacker News.
David_Gerard (2 points, 13y)
I have known about Hacker News for ages, mentally filing it away as yet another Internet news aggregation site. However, I just happened to look at @newsycombinator and was quite surprised at how much of it was gold.
Alexandros (0 points, 13y)
It is another news aggregation service, but it just happens to be the best :). There is a credible hypothesis that it's not as good as it used to be, as well. But it's still head and shoulders above everything else (minus LW). I also came to OB via HN, if I recall correctly.

Is it just me, or do you feel a certain respect for Harold Camping? He describes himself as "flabbergasted" that the world didn't end as he predicted. He actually noticed his confusion!

(I can't find the Open Thread for May 2011.)

jimrandomh (5 points, 13y)
He also predicted that the world would end on May 21, 1988 and September 7, 1994. I don't think respect is appropriate.
TobyBartels (2 points, 13y)
Too bad! I see that the latest reports have him updating to October, so he didn't attend to his confusion for very long this time either.

Via 538: How Feynman Thought on the Freakonomics blog.

Reposting from the latest HP:MoR discussion thread, since not everyone reads recent comments and I'm not sure this warrants a full post:

Fanfiction.net user Black Logician has announced Harry's Game, a spinoff of HP:MoR which branches off around Chapters 65-67 of the original fic. From his post at the HP:MoR review board:

...Hermione has already formed SPHEW. Quirrell, though, doesn't dismantle Harry's army, but goes for an alternative condition to make the army wars more of a challenge to Harry. ...

Please use ROT13 for spoilers when discussing Harry's Game.
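
For anyone unfamiliar with the convention: ROT13 rotates each letter 13 places through the alphabet, so applying it twice returns the original text. A minimal sketch in Python (the spoiler string here is made up):

    # Python's standard codecs module ships a built-in rot13 transform,
    # so no hand-rolled cipher is needed.
    import codecs

    spoiler = "Harry wins the army wars"        # invented example text
    encoded = codecs.encode(spoiler, "rot13")   # 'Uneel jvaf gur nezl jnef'
    assert codecs.decode(encoded, "rot13") == spoiler  # ROT13 is its own inverse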

The writing errors in this story are very distracting. I did not click past chapter 1. Is there something to recommend it so strongly that I should get over the bad grammar etc.?

bogus (5 points, 13y)
I also found the spellign and grammer mistakes to be distracting. The story itself does not quite compare to Yudkowsky or Rowling's work, but it's quite witty and makes some good rationalist points.
Dorikka (7 points, 13y)
I just can't leave this alone.
arundelo (5 points, 13y)
Intentional!
Lila (2 points, 13y)
If you can get someone to write you a fully-spoiling summary, that might be better.
Alicorn (1 point, 13y)
I'd read one of those. Any volunteers?
FAWS (0 points, 13y)
It's a lot more Ender's Game-like than MoR already was. The ideas are good to decent, the execution questionable, and the writing poor (by fanfiction-worth-reading standards; decent by average-fanfiction standards). I found it fairly enjoyable, but I mostly managed to tune out the quality of the writing. I'd recommend it to anyone who loves MoR for the clever plots, and anyone who enjoys the clever plots and can get over bad writing.

I just had a startling revelation. I had been glancing now and then at my karma for the last few days and noticed that it was staying mostly constant. Only going up now and then. This is despite a lot of my comments getting a whole bunch of upvotes. So naturally I figured I had offended one or more folks and they were downvoting me steadily to keep it constant. I don't exactly tiptoe around to avoid getting anyone offside and I don't really mind that much if people use karma hits as a way to get their vengeance on. It saves them taking it out via actual co... (read more)

My nomination for Worst Use of the word "Bayesian", April 2011. This may answer my earlier question as to whether creationists, birthers, etc adopting the notion of Bayes' theorem is a good idea or not. Remember: choose your prior based on your bottom line!

To anyone who knows: How active are the fortnightly Cambridge, MA meetups? There seem to be very few RSVPs on the meetup.com page, but I suppose it's possible that if there are any regular attendees they don't always bother RSVPing.

jimrandomh (0 points, 13y)
We generally just don't bother RSVPing. Median attendance is 4, occasionally much more.

Hypothetical situation: Let's say while studying rationality you happened across a technique that proved to give startlingly good results. It's not an effortless path to truth but the work is made systematic and straightforward. You've already achieved several novel breakthroughs in fields of interest where you've applied the technique (this has advanced your career and financial standing). However, you've told nobody and, since nobody is exploring this area, you find it unlikely anybody will independently discover the same technique. You have no reason to... (read more)

wedrifid (0 points, 13y)
Trusted group.

Screenshot from our ongoing intelligence explosion:

howtomacke arobotinstchrocshin


CronoDAS (0 points, 13y)
For some reason, that comic reminds me of a particular Isaac Asimov story.

Does anyone else have religiophobia? I get irrationally scared every time I see someone passing out pocket bibles or knocking on doors with pamphlets. I'm afraid of...well, of course there isn't much to be afraid of, or else it wouldn't be a phobia.

JoshuaZ (3 points, 13y)
Not really. I only have annoyance that whenever I see such people I'm always too busy to talk to them and find out more about what religion they are. I consider this to be evidence that there is a deity and that that deity treats me sort of how one might treat a cat when one has recently obtained a laser pointer.
Normal_Anomaly (2 points, 13y)
I don't get scared when I see people doing this, but I do have an irrational desire to go get into a long useless argument. I'm always too busy to have to fight it, though.
David_Gerard (2 points, 13y)
Fortunately, we have a defensive weapon (PDF) to hand.