Your post prompted me to recall what I read in Military Nanotechnology: Potential Applications and Preventive Arms Control by Jürgen Altmann. It deals mostly with non-molecular nanotech we can expect to see in the next 5-20 years (or already, since it was published in 2006), but it does cover molecular nanotech, and the commonly mentioned x-risk of a universal molecular assembler is worth adding alongside AGI to the list of things for the elites to handle over the next 70 years.
I think as a small counter to the pessimistic outlook the parable gives, it's worth remembering that the Biological and Toxin Weapons Convention and especially the Chemical Weapons Convention have been fairly successful in their goals. The CWC lays out acceptable verification methods which aren't so demanding that a country accepting them slides into complete subjugation to the inspectors... If it could be extended to cover nanotech weapons, that would be a good thing.
On the other hand, maybe they're not so much cause for optimism. The BTWC has a noticeable lack of verification measures, and Altmann attributes that mainly to the US dragging its feet. The US can't even manage smaller threats at home where it has complete jurisdiction, like 3D printed guns, so it's hard for me to see it in its current form dealing with the bigger threat of a nanotech arms race (let alone x-risks), especially if that requires playing nice with the international community.
I was going to reply with something similar. Kevin Knuth in particular has an interesting paper deriving special relativity from causal sets: http://arxiv.org/abs/1005.4172
You are overreacting. (But I would also say the ops are being overzealous and inefficient in their goal of having fewer people suck at IRC, which seems like a fine goal.)
A person who does not want to suck at IRC should not want to participate in this behavior: http://pastebin.com/yBw1iX1C
(Times are Pacific, my client does not always log every channel event.)
Here's the follow-up log up until this moment, which includes various chatter and discussion on this "drama": http://pastebin.com/8Rz9PFv4
Edit: to the downvoter, I'll happily delete both these comments if you feel that context logs shouldn't be linked to so that anyone else on this site has a clue what this discussion is about.
It was my understanding that this is one of Kurzweil's eventual goals: reconstructing his father from DNA, memories of people who knew him, and just general human stuff.
This has been on my reading queue for ages, might as well join in!
I live in Seattle (technically on the border of Bellevue and Redmond), which makes me #3 for this area. Meetups would be great, though I'm unavailable weekdays until after 7 or so.
I've been lurking for a while, looks like. (My, how time flies.) I'll throw my name in the pot of wanting more communication channels like IRC (looks like a room's set up, time to check it out!), especially less formal ones to ease the transition to formal comments / top-level posts. The proportion of high-quality posts and comments around here seems awesomely high, but that unfortunately makes it uncomfortable to just dive in. I also feel like I need to read all the sequences, though admittedly I've made a pretty big dent in them, so there aren't many posts left. (Currently going through the quantum stuff; also picked up a copy of Feynman's QED.)
I've always thought it would be nice to have a "Frequentist-to-Bayesian" guide. Sort of a "Here are some example problems, here's how you might go about them using frequentist methods, here's how you might go about them using Bayesian techniques." My introduction to statistics began with an AP course in high school (and I used this HyperStat source to help out), and of course they teach hypothesis testing and barely give a nod to Bayes' Theorem.
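To make the comparison concrete, here's a minimal sketch of the kind of side-by-side entry such a guide might contain, using a toy coin-flip problem (the numbers and the uniform prior are my own choices for illustration):

```python
import math

# Toy problem: 60 heads in 100 flips -- is the coin biased?
n, k = 100, 60

# Frequentist: exact two-sided binomial test against the null p = 0.5.
# The p-value is the probability, under the null, of a count at least
# as far from 50 as the one observed.
p_value = sum(math.comb(n, i) * 0.5**n
              for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))

# Bayesian: start from a uniform Beta(1, 1) prior on the bias; after
# k heads and n - k tails the posterior is Beta(1 + k, 1 + n - k),
# whose mean is (1 + k) / (2 + n).
posterior_mean = (1 + k) / (2 + n)

print(f"two-sided p-value:   {p_value:.4f}")   # ~0.057: not significant at 0.05
print(f"posterior mean bias: {posterior_mean:.3f}")  # ~0.598
```

The contrast in outputs is the instructive part: the frequentist answer is a yes/no verdict about rejecting "the coin is fair," while the Bayesian answer is a full distribution over the bias that you can summarize however you like.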
While I'm not in any way an expert in simulation making, doesn't it seem a bit too convenient that, with all the monstrous computing power behind running the universe, the Overlords couldn't devise a clever and powerful algorithm that would have found us already? Maybe you can help me see why there would be only a crude algorithm that superintelligences should fear being caught by, and why they wouldn't consider themselves caught already.
Apart from this, I'm in agreement with other commenters that a stronger argument is the vastness of space.
Sorry for the necro -- the linked article is 404'd. I uploaded a backup here. I didn't find it on the author's site but did find a copy through Web Archive; still, maybe my link will save someone else the hassle.