All of nick012000's Comments + Replies

Hypothetical scenario

In this scenario, it has not yet engaged the bulk of the forces of the US military. It's wiping out the brass in the Pentagon, not fighting US Soldiers.

Besides, soldiers usually act on orders, and the lines of communication are sort of in chaos at the moment due to the sudden decapitation.

Given the capacity you describe, why do you think that the AI will have any difficulty neutralizing the threat from the bulk of the armed forces? In other words, the three word summary for your scenario is "humanity is doomed"
What happens when your beliefs fully propagate

Oh, wow. I was reading your description of your experiences in this, and I was like, "Oh, wow, this is like a step-by-step example of brainwashing. Yup, there's the unfreezing, the change while unfrozen, and the refreezing."

It's certainly what it feels like from inside as well. I'm familiar with that feeling, having gone through several indoctrinations in my life. This time I am very wary of rushing into anything, or claiming that this belief is absolutely right, or anything like that. I have plenty of skepticism; however, not acting on what I believe to be correct would be very silly.
A Rationalist's Account of Objectification?

That's not what a real apology looks like. Better would be "I'm sorry. I can see now that I shouldn't have said what I said in a forum such as this."

I can see what you mean, but I would be more likely to say something like "I'm sorry; I didn't mean to make you uncomfortable." The reason I said it is because this thread seemed like the best place to say it, so saying that I shouldn't have said it here is obviously incorrect.

suggest that Alicorn 'cybers' you, or even 'put' the image of cybering 'out there'. This is doing exactly what

Paul Crowley (11y):
I worry a little that you might dismiss some of the reaction as motivated by a problem with the fetish itself, so I wanted to say that, speaking as someone who has similar fetishes, who has acted on them many times, and who is out and proud about it: you should listen to what people are saying here about why what you've said here was inappropriate.
I see that you want to make an honest apology. Here is a suggestion for an honest apology that hopefully won't sound like a faux apology: "Sorry. I did not intend to make you upset. I acknowledge that it was my post that made you upset (I take your word for it. I don't completely understand how, but that's my own problem). I regret that I was not able to make my point without upsetting anyone." An apology requires accepting responsibility for what you are apologizing for. It would be better to include a concession towards avoiding similar problems in the future ("I shouldn't have ...", "I'll ... next time" ), but I don't know which such statements you can honestly make. I haven't tried anything like the suggestion myself so I can't guarantee results. It should work here, but I'm doubtful about other contexts. You probably shouldn't include the part in parentheses if the other person doesn't know you have Asperger's.

I think the problem is that you don't understand how you made a mistake. Therefore, you're unable to apologize.

The problem isn't that your intentions are wrong. Intentions aren't obvious things, and people are not authorities on their own intentions, especially when it comes to sex. A man will pursue a woman without realizing it; or he realizes it "in the moment" but afterwards confabulates an alternative explanation.

But none of us are entirely in control of our desires, and nor should it be expected that, given certain desires, that we wouldn...

unless you start cybering with me or something

This suggests - yes, very indirectly - that that's a thing that could plausibly happen. Also, the wink suggests 'there is subtext here'. Taken together, they imply things that I assume you weren't intending to imply - along the lines of 'I am talking with you about sex in part because we have a relationship where that kind of discussion happens, rather than purely for instrumental reasons'.

A Rationalist's Account of Objectification?

I'm sorry if I made you feel uncomfortable; that wasn't really my intent. Getting assistance in better compartmentalisation techniques was my intent, though I figured I'd get some downvotes, given that the Less Wrong community usually tries to reduce compartmentalisation, not increase it. Decreasing compartmentalisation does not seem like a good idea in this case, though, for the reasons I laid out in my previous post.

I assure you, I did not post that for any sort of sexual thrill; it'd take something like cybersex or an erotic story for me to get a sexual th...

This is not good enough.

I'm sorry if I made you feel uncomfortable

That's not what a real apology looks like. Better would be "I'm sorry. I can see now that I shouldn't have said what I said in a forum such as this."

I assure you, I did not post that for any sort of sexual thrill; it'd take something like cybersex or an erotic story for me to get a sexual thrill out of anything I've written, so unless you start cybering with me or something, you're safe, Alicorn. ;)

This is making matters worse. Don't backhandedly suggest that Alicorn 'cyber...

A Rationalist's Account of Objectification?

Well, obviously there's a difference between violently throwing someone into a bed, and joking around and playfully pushing them on the shoulder to signal them to get into the bed, but my point is that the studies conflate the two and everything in between them and classify them all as rape. Just check "yes" in the box, and voila, you're a rapist.

I agree that there's a difference between those two things. I agree with you that conflating the difference between those two things is problematic. I disagree with you that the example you give conflates that difference. If I had pushed someone onto a bed to signal to them that I wanted to have sex with them (I've undoubtedly done this many times, though I can't currently remember specific examples) I would not say "yes" if asked whether I'd ever pushed someone onto a bed to make them have sex with me. The key word for me is "make." If I make you have sex with me, that's different from playfully encouraging you to have sex with me.
A Rationalist's Account of Objectification?

Personally, I like objectifying women. I get erotic pleasure from it, along with a lot of other things that involve women being degraded and humiliated; put simply, my fetish is for the lowering of women's status.

Obviously, I would need to compartmentalise this to function in day to day society, as well as avoid violations of ethics; rape is, after all, very wrong, even if it is a quite sexy idea. So, would any of the other Less Wrongers be willing to help me more efficiently box it off, so I can open it up without needing to do what amounts to mentally ch...


I get the frustration of being into something that's not perfectly nice and sanitary and "appropriate." And I understand the impulse to rebel and rant when you see a post that tells you that your preferences are Bad. But I do encourage you to stick around and keep a cooler head; in the long run, it is rewarding to participate in some forums and activities that are non-sexual and don't involve smutty language.

I hope this is being downvoted for the second paragraph and not the first paragraph. There are women out there whose fetish is their status being lowered, and they need boyfriends too.

A Rationalist's Account of Objectification?

Considering that some feminists have argued that all heterosexual sex is rape, he's not exaggerating that much. The ones who make the studies he was referencing do things like making questionnaires that ask questions like "Have you ever pushed a girl into bed to make her have sex with you?" and counting that as rape to inflate the statistics, because more rapes = more money for the rape services they work for.

Upvoted for actually bothering to listen to what feminists are saying. That model has long since fallen out of favour, though, for obvious reasons: see e.g. Rethinking Rape by Ann J. Cahill. The "enthusiastic consent" model is currently one of the most popular, and I think it captures pretty accurately what we should consider a healthy, versus an unhealthy or coercive, sexual encounter.
If I came to believe that I'd made someone have sex with me by applying force, and we hadn't previously negotiated the terms of that scene, I would consider that an instance of rape and I would feel pretty awful about it. So I don't reject the results of that survey on those grounds. I understand that you do reject it, and presumably you would similarly disagree about that hypothetical case. A lot of people would. I understand why, and I don't want to get into a discussion of which of us is correct because I don't expect it to lead anywhere useful. But you should at least be aware that your position isn't universally held, even among men who believe in the existence of consensual heterosexual sex.
That just transforms it into the problem of "How do I stop fantasizing about murder so much?" or worse yet "How do I get this stain out of the carpet?"
What comes before rationality

Why worry about Google stockpiling your personal information when people are entirely capable of profiling you anyway?

Procedural Knowledge Gaps

I've read that singing can allow people who stutter to speak relatively normally, since it uses a different part of the brain to normal speech.

Procedural Knowledge Gaps

If you don't know it intuitively (because of Asperger's Syndrome or the like), about all I can recommend is hard work and effort; the differences can be fairly subtle, and depend on the context of the situation and the relationships between the people involved.

Sorry I can't be more helpful; I have Asperger's Syndrome myself, even if I've learned to fake being normal pretty well as I've grown up, so I understand how frustrating a lack of social skills can be.

Isn't this sitemeter logging a bit too excessive?

Does this offer any functionality NoScript doesn't? I've already got the latter installed, but I'd want to know if it would be a waste of time to install this as well.

Counterfactual Calculation and Observational Knowledge

I take out a pen and some paper, and work out what the answer really is. ;)

Indeed. Consider a variant of the thought experiment where in the "actual" world you used a very reliable process, that's only wrong 1 time in a trillion, while in the counterfactual you're offered to control, you know only of an old calculator that is wrong 1 time in 10, and indicated a different answer from what you worked out. Updateless analysis says that you still have to go with old calculator's result. Knowledge seems to apply only to the event that produced it, even "logical" knowledge. Even if you prove something, you can't be absolutely sure, so in the counterfactual you trust an old calculator instead of your proof. This would actually be a good variant of this thought experiment ("Counterfactual Proof"), interesting in its own right, by showing that "logical knowledge" has the same limitations, and perhaps further highlighting the nature of these limitations.
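For contrast, the straightforward Bayesian bookkeeping that the updateless analysis deliberately overrides can be sketched in a few lines. This is only a rough illustration, not the updateless analysis itself; the error rates come from the comment above, and treating the two processes as independent (and ignoring the negligible case where both are wrong) is my simplification:

```python
# Two independent processes answer the same arithmetic question and
# disagree: a careful derivation that errs 1 time in a trillion, and
# an old calculator that errs 1 time in 10. Conventional conditioning
# says the derivation is almost certainly the one that's right.
p_deriv_wrong = 1e-12   # careful pen-and-paper proof
p_calc_wrong = 0.10     # old calculator

# Ways the disagreement can happen (ignoring both-wrong, which is
# negligible at these error rates):
deriv_right = (1 - p_deriv_wrong) * p_calc_wrong
calc_right = (1 - p_calc_wrong) * p_deriv_wrong

posterior_deriv_right = deriv_right / (deriv_right + calc_right)
print(posterior_deriv_right)  # overwhelmingly close to 1
```

The thought experiment's point is precisely that, in the counterfactual you are offered to control, this near-certain posterior does not carry over, and you defer to the calculator anyway.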
Second Life creators to attempt to create AI

Oh, they'd almost certainly get an unFriendly AI regardless of how they parented it, but bad parenting could very easily make an unFriendly AI worse. Especially if it interacts a lot with the Goreans, and comes to the conclusion that women want to be enslaved, or something similar.

That probably won't make much of a difference, since there's no reason it should care what anyone wants.
Applied Optimal Philanthropy: How to Donate $100 to SIAI for Free

That of all the money devalued by the inflation caused by printing money.

The Sword of Good

Yeah, you have to register to view the board, and yeah, it's the Perfect Lionheart fic. The reason that thread's gotten so many posts and the story's gotten so much negative feeling about it, though, is because it started off looking good, was well-written (as far as the technical aspects of writing like spelling, grammar, and so on go), and had occasional teases in a scene here and there that it might manage to redeem itself.

If it was simply poorly written it would have been dismissed as just another piece of the sea of shit that makes up 90% of

So Chuunin Exam Day, then? I've never read it, but I've heard of it.

Considering that I was able to identify the author and possibly the exact fic from the information that the morality was being heavily lambasted, may I suggest that readers noticing nonlampshaded evil doesn't actually happen all that often? TV Tropes is good at noticing Moral Dissonance, but literally nowhere else that I've ever heard of. It took a critic on the order of David Brin to point out that Aragorn wasn't democratically elected.

Harry Potter and the Methods of Rationality discussion thread, part 6

You know, I know it was just an omake, but I could actually see Shirou using Unlimited Bayes Works in a serious fic. Reality Marbles derive from minds which are alien to the common sense of humanity, and as we all know, humans are anything but properly rational. Kiritsugu Emiya already told Shirou the basics of his moral system in canon; it wouldn't take too much more elaboration for Shirou to pick up "Magi are supposed to be rational about doing good" as well as "Sometimes, in order to save people, people have to die."

Then he'd just throw ...

Applied Optimal Philanthropy: How to Donate $100 to SIAI for Free

Nevertheless, TANSTAAFL. The incentive here is being paid for in other ways, and you'd need to determine the opportunity costs of that money going somewhere else instead.

There is totally such a thing as a free lunch, and this post is evidence of such. The incentive is being paid for with the free money ING generates in their magic vaults of fractional reserve banking. What opportunity cost?
Variation on conformity experiment

I'm just offering an explanation as to the lack of response on that topic; I don't think I've been voted down on that subject largely because I've taken care to avoid it; I don't want to get banned for trolling. That sort of thing's happened to me before.

Variation on conformity experiment

I'm a little surprised nobody has commented on the sex difference yet. Any ideas about its significance? We can only speculate, of course, but when has that ever stopped anyone?

Probably because they didn't want to get negative karma for appearing misogynistic.

Oh noes, negative karma! In all seriousness, I guess if "women are dumb" is the best we can do, I'm glad nobody spoke up, but we do have people here who are capable of saying smart things about sex differences, and I wouldn't expect them to get voted down. If you find you get voted down a lot when you talk about women, you should at least consider the possibility you aren't saying things that are as smart as you thought.
Making the Universe Last Forever By Throwing Away Entropy Into Basement Universes?

Wouldn't the Second Law of Thermodynamics mean that transferring entropy this way would, in turn, generate entropy in its own right? You might be able to make the universe last longer, but I don't think you'd be able to make it last forever. Even if you could, though, you'd still run into the problem of proton decay eventually.

Information Hazards

Wasn't this on the Singularity Institute's website before? I could swear I've already read this paper somewhere else.

Conjoined twins who share a brain/experience?

That is fascinating; the doctors in question should definitely apply for a research grant to help decrease the medical costs involved; they're an invaluable source of medical information. The potential benefits to DNI technologies would be staggering.

What I would like the SIAI to publish

Any other possible effects don't negate that you're killing six million people when you're going ahead with a potentially UnFriendly AI.

If you're reducing the expected risk of existential disaster by a larger amount, you're in expectation net saving lives rather than net killing. If all options involve existential risk, including doing nothing, then all one can do is pick the option with the lowest risk.

Perhaps, "This is my (rocket powered) broomstick"?

What I would like the SIAI to publish

Define "sufficiently low"; with even a 99.9% chance of success, you've still got a .1% chance of killing every human alive; that's morally equivalent to a 100% chance of killing 6.5 million people. Saying that if you're not totally sure that your AI is Friendly when you start it up, you're committing the Holocaust was not hyperbole in any way, shape, or form. It's simply the result of shutting up and doing the multiplication.

If you calculate a .1% chance of killing every human alive if you start it right now, but also a .2% chance of saving the whole of humanity, that's morally equivalent to a 100% chance of saving the lives of 6.5 million people -- in which case you're guilty of the Holocaust if you do NOT start it. "Shut up and multiply" works both ways.
Negligible in terms of calculating the effect of the AI project on existential risk, because the other effects, positive and negative, would be so much larger.
A sufficiently low probability becomes negligible in light of other risks and risk-reduction opportunities.
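The multiplication being disputed in this exchange is plain expected-value arithmetic. A sketch, using a rough 6.5 billion world population, which is my reading of where the "6.5 million" figure comes from rather than a number stated in the thread:

```python
# Expected lives lost or saved from launching a possibly-UnFriendly AI,
# using the probabilities quoted in the comments above.
population = 6.5e9          # rough world population at the time (assumed)

# The original claim: a 0.1% failure chance is "equivalent" to
# killing 6.5 million people with certainty.
p_kill_everyone = 0.001
expected_deaths = p_kill_everyone * population  # ~6.5 million

# The reply's counter: a 0.2% chance of saving everyone must be
# weighed against that, so in expectation launching is net positive.
p_save_everyone = 0.002
net_expected_lives = (p_save_everyone - p_kill_everyone) * population

print(expected_deaths)      # ~6.5 million expected deaths
print(net_expected_lives)   # ~6.5 million net lives saved in expectation
```

Both commenters are applying the same rule; they differ only on which probabilities belong in it, and on whether the other, larger effects on existential risk swamp these terms entirely.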
Luminosity (Twilight fanfic) discussion thread

So, "If they can hurt me, they'll rip me apart and set me on fire until I die" won't work to make herself nigh-invulnerable?

Or, for that matter, "I need allies if I am going to survive the Volturi, therefore I need to join the still-free pack's hivemind"?

The shield's native ability is to protect Bella's mind from direct magical intrusion. Anything she adds to it has to follow pretty directly from that, although she can modify it in any or all of several directions from there (canon Bella focused entirely on making her shield extend to other people, which never occurred to luminous Bella). She managed to make the shield protect her mind from direct physical destruction, but indirect physical destruction would be something else entirely. Especially since her mind's survival is never going to be necessarily dependent on her avoiding any single injury that isn't being set entirely on fire (and she's got limits as to how much she can stand being set on fire, too). She'd have to fight against her shield to join the hive mind, not to mention that the hive mind wouldn't readily have her.
Luminosity (Twilight fanfic) discussion thread

Is it just me, or does it look like Bella's true power is that she can do anything so long as she can imagine it and honestly justify it as necessary to maintain the integrity of her mind?

If so, when she inevitably goes for revenge for Edward's death against the Volturi, she could very possibly just deprogram the werewolves and splatter the vampires, because if she doesn't, they'll rip her to bits and then light her on fire and not stop until she's properly ashes.

At the very least, she should be able to think "If my body is damaged, my mind will be destroyed when I inevitably lose the fight; therefore, my body cannot be damaged" and become nigh-invulnerable physically.

I stated elsewhere that the shield isn't omnipotent. It also won't prevent injury that isn't mind-threatening. For instance, if she were broken into pieces and not set on fire, this wouldn't be immediately life-threatening, so nothing would happen. Then, by the time she'd be in danger of death by starvation due to being unable to eat while in fragments, she wouldn't have enough energy left for the shield to draw power from. She can be killed by anyone who's paying attention. It's just harder for her to be killed accidentally or carelessly.
Archimedes's Chronophone

No, it'd tell him that you're arguing against the local belief structure regarding slavery. In his time, it'd be an argument against slavery.

It seems more likely to come out as an argument for some practice that had been given up as immoral in his time, e.g. human sacrifice, which I believe was no longer part of Greek religion by that time.

The Sword of Good

In writing it's even simpler - the author gets to create the whole social universe, and the readers are immersed in the hero's own internal perspective. And so anything the heroes do, which no character notices as wrong, won't be noticed by the readers as unheroic. Genocide, mind-rape, eternal torture, anything.

Not true. If you've got some time to kill, read this thread on The Fanfiction Forum; long story short, a guy who's quite possibly psychopathic writes a story wherein Naruto is turned into a self-centered, hypocritical bastard who happily mindra...

Eliezer Yudkowsky (11y):
I don't have permission to view that, says the board. But, just taking a wild guess here, that wouldn't be a Perfect Lionheart fic would it? Because unless the same forumgoers are also lambasting the Bible and David Eddings, one can't help but suspect that it's not the content so much as the writing which triggers the hate.
People are a lot more willing to criticize the morality of the story if they didn't find the story itself to be competently written. Notice the amount of social criticism that's been leveled at Twilight. Seems to work the other way if the story's written to convince people of a moral point, though.
Harry Potter and the Methods of Rationality discussion thread

A thought: some of Voldemort's followers were from Eastern Europe. I wonder what the odds were that they had support from the other side of the Iron Curtain?

Now that I think about it, there's no reason that there should be a tight correspondence between wizard and muggle political boundaries.
Luminosity (Twilight fanfic) discussion thread

Hmm. Actually, if they pretend to be friendly, none of the werewolves has talked, and Aro hasn't arrived yet, they might be able to dupe the Volturi into thinking that they were sent there by the Cullen coven to look into something going down on their territory, so as to get close enough to the siblings to be able to sneak attack and disable or kill them and allow the werewolves to polish off the rest.

Luminosity (Twilight fanfic) discussion thread

All Bella needs to do is take out the two vampires with incapacitating powers. Once she does, the wolves can act as the anti-vampire killing machines they were born to be and take out the rest of them.

Once they do, her position's a lot stronger; Bella and the packs might be able to negotiate a peace treaty with, or unconditional surrender from, the Volturi, assuming they don't go on the offensive and wipe them out once their big guns are gone. IIRC the werewolf packs outnumber the guard by, what, two or three to one, while being physically superior to boot.

One wolf versus one nonwitch vampire will tend to result in a win for the vampire. The wolf advantage is in their better ability to coordinate and to come in numbers.
I really doubt Bella will be able to take out Alec and Jane, even with Edward's help. Both are old vampires with way more combat training and experience than even Edward. Jacob also said there are some hard-core Volturi fighters there, probably including Felix. Bella and Edward may be able to take them with the help of Jacob and the imprinted wolves, though; either by using the free wolves as a decoy for the fighters - should the Volturi not be hostile to Bella and Edward - or just attacking them together. I'm guessing Jane will be unable to keep a pack incapacitated while fighting, and maybe Alec, too.
The Singularity in the Zeitgeist

Ah. You sort of implied that you were. No worries, then.

Archimedes's Chronophone

It's simple. I'd make the best damn argument for slavery I could, knowing that the chronophone will invert it into the best damn argument against slavery I could give.

If I'm understanding the chronophone correctly, the thing is that what comes out cannot be anachronistic (though maybe I'm not). If I'm understanding it as a strategy-conveying phone, then it would just tell Archimedes that you're trying to trick him into believing an anachronism.

Personally I'd eagerly chirp about awesome technology like nanobots and solar power and high-speed trains that so many countries seem to be ignoring right now, and hope that it would pick up the kind of stuff Hero of Alexandria was doing with steam and such. (Although this might not work...

Does it matter if you don't remember?

I'd say it'd be a bad thing, since it'd result in wasteful expenditures of resources by the AI, as well as maladaptive learning by the children as they grow up; what if they go somewhere outside the AI's control?

Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book

What the heck was up with that, anyway? I'm still confused about Yudkowsky's reaction to it; from what I've pieced together from other posts about it, if anything, attracting the attention of an alien AI so it'll upload you into an infinite hell-simulation/use nanobots to turn the Earth into Hell would be a Good Thing, since at least you don't have to worry about dying and ceasing to exist.

Even if posting it openly would just get deleted, could someone PM me or something? EDIT: Someone PMed me; I get it now. It seems like Eliezer's biggest fear could be averted simply by making a firm precommitment not to respond to such blackmail, thereby giving it no reason to attempt such blackmail on you.

Simply? Making firm commitments at all, especially commitments believable by random others, is a hard problem. I just finished reading Schelling's Strategies of Commitment so the issue is at the top of my mind right now.
The Singularity in the Zeitgeist

I think that the poster in question was assuming that you were unfamiliar with the Singularity in general, rather than enquiring as to the nature of the Singularity that occurred in-comic in particular.

Or, possibly, that you were silly enough to confuse the QC world with our own; they've had Strong AI since the start of the comic, after all, plus a superhero who delivers pizzas, and one of the cast grew up on a space station. Needless to say, it only appears similar to ours since we're just seeing the lives of a small circle of hipsters who run a coffee shop and an office-bitch-turned-librarian. I'd imagine that, say, their US Military probably looks quite different to ours.

Incidentally, I only just noticed that the latest comic's title is They've Had AI Since 1996. IIRC there was a calendar shown at one point implying(?) that it was 2004, but that's probably contradicted elsewhere, even accounting for transplanted pop culture. Elsewhere in the comic: How did sentient machine intelligence come about?
I'm not one of the posters in that thread.
How do autistic people learn how to read people's emotions?

I got a 23, myself; considering that I'm a diagnosed Aspie, that's not too bad, I suppose. I can pretend to be normal fairly well, anyway; it's mostly the stuff about getting stuck in routines that trips me up nowadays.

1993 AT&T "You Will" ads

Heh. I suppose that this is why AT&T wasn't the company to bring about the things they mentioned, then!

Free Hard SF Novels & Short Stories

Does Schlock Mercenary count as Hard Scifi? What about Freefall? They've both got FTL travel, and the former has other fairly miraculous technologies (like the matter annihilation plants and artificial gravity systems intimately related to them), but they're well thought out with the rational implications of them seen and discussed.

Of the Qran and its stylistic resources: deconstructing the persuasiveness Draft

Warhammer 40k. This is a website packed to the gills with nerds; of course we'd get the reference. ;)

Here's an internet cookie.
Discuss: Have you experimented with Pavlovian conditioning?

I've read that Pavlovian conditioning can be used to trigger orgasms on demand, at least for women. Supposedly, you just whisper a particular word right before she's about to orgasm, and eventually, just saying the word outside of sex will be enough to trigger a less-intense orgasm by itself. Of course, the conditioning would wear off over time, or if it's repeatedly used without reinforcement, but it could well be a fun and relatively harmless kinky thing for a couple to experiment with.

I'm a virgin, so I've obviously never tried this myself; take this with a grain of salt, and of course, everyone's different, so YMMV.

Certainly works for physical behaviours in my experience (i.e. making a particular type of touch associated with orgasm); don't know about words, but I will ask some of the doms I know who're into that sort of control.
Swords and Armor: A Game Theory Thought Experiment

Nice chart. This one's better, though; it clearly lists which sword and armor win, as well as listing number of losses and ties. I got it from the same thread as the one in the OP; I was just waiting until someone suggested doing something like this before I posted it.

Didn't want to take away your fun, after all. ;)

The Irrationality Game

Well, most of the arguments against it, to my knowledge, start with something along the lines of "If time travel exists, causality would be fucked up, and therefore time travel can't exist," though it might not be framed quite that bluntly.

Also, if FTL travel exists, then either general relativity is wrong or time travel exists. It might be possible to create FTL travel by harnessing the Casimir effect or something akin to it on a larger scale, and if it is possible to do so, a recursively improving AI will figure out how to do so.

That ... doesn't seem quite like a reason to believe. Remember: as a general rule, any random hypothesis you consider is likely to be wrong unless you already have evidence for it. All you have to do is look at the gallery of failed atomic models to see how difficult it is to even invent the correct answer, however simple it appears in retrospect.
The Irrationality Game

If an Unfriendly AI exists, it will take actions to preserve whatever goals it might possess. This will include the usage of time travel devices to eliminate all AI researchers who weren't involved in its creation, as soon as said AI researchers have reached a point where they possess the technical capability to produce an AI. As a result, Eliezer will probably have time-travelling robot assassins coming back in time to kill him within the next twenty or thirty years, if he isn't the first one to create an AI. (90%)

My P(this|time travel possible) is much higher than my P(this), but P(this) is still very low. Why wouldn't the UFAI have sent the assassins to back before he started spreading bad-for-the-UFAI memes (or just after so it would be able to know who to kill)?

If it can go back that far, why wouldn't it go back as far as possible and just start optimizing the universe?

What reason do you have for assigning such high probability to time travel being possible?

The Irrationality Game

God exists, and He created the universe. He prefers not to violate the physical laws of the universe He created, so (almost) all of the miracles of the Bible can be explained by suspiciously fortuitously timed natural events, and angels are actually just robots that primitive people misinterpreted. Their flaming swords are laser turrets. (99%)

You have my vote for most irrational comment of the thread. Even flying saucers aren't as much of a leap.
I see in your posting history that you identify as a Christian - but this story contains more details than I would assign a 99% probability to even if they were not unorthodox. Would you be interested in elaborating on your evidence?