Lara_Foster2
Lara_Foster2 has not written any posts yet.

Eliezer,
How are you going to be 'sure' that there is no landmine when you decide to step?
Are you going to have many 'experts' check your work before you'll trust it? Who are these experts if you are occupying the highest intellectual orbital? How will you know they're not YesMen?
Even if you can predict the full effects of your code mathematically (something I find somewhat doubtful, given that you will be creating something more intelligent than we are, whose actions will by nature be unpredictable to man), how can you be certain that the hardware it will run on will perform with the integrity you need it to?
If you... (read more)
Eliezer, how do you envision the realistic consequences of mob-created AGI? Do you see it creeping up piece by piece, with successive improvements, until it reaches a level beyond our control? Or do you see it as something that will explosively take over once one essential algorithm has been put into place, something that could happen any day?
If a recursively self-improving AGI were created today, running on hardware with current memory and speed, and it had access to the internet, how much damage do you suppose it could do?
What I think is a far more likely scenario than missing out on the mysterious essence of rightness by indulging the collective human id is that what 'humans' want as a compiled whole is not what we'll want as individuals. Phil might be aesthetically pleased by a coherent metamorality, and distressed if the CEV determines that what most people want is puppies, sex, and crack. Remember that the percentage of the population that actually engages in debates over moral philosophy is vanishingly small, and everyone else just acts, frequently incoherently.
Actually, I CANNOT grasp what life being 'meaningful'... well... means. Meaningful to what? To the universe? That only makes sense if you believe there is some objective judge of what state of the universe is best. And then, why should we care? Cuz we should? HUH? Meaningful to us? Well, yes: we want things... Did you think that there was one thing all people wanted? Why would you think that necessary to evolution? What on earth did you think 'meaning' could be?
I second Valter and Ben. It's hard for me to grasp that you actually believed there was any meaning to life at all, let alone with high confidence. Any ideas on where that came from? The thought, "But what if life is meaningless?" hardly seems like a "Tiny Note of Discord," but like a huge epiphany in my book. I was not raised with any religion (well, some atheist-communism, but still), and so never thought there was any meaning to life to begin with. I don't think this ever bothered me 'til I was 13 and recognized the concept of determinism, but that's another issue. Still, why would someone who believed that we're all just information-copying-optimization matter think there was any meaning to begin with?
Greindl, Ah, but could not one be overconfident in their ability to handle uncertainties? People might interpret your well-reasoned arguments about uncertain things as arrogant if you do not acknowledge the existence of unknown variables. Thus, you might say, "If there's a 70% probability of X, and a 50% probability of Y, then there's a clear 35% probability of Z," while another is thinking, "That arrogant fool hasn't thought about A, B, C, D, and E!" In truth, those factors may have been irrelevant, or so obvious that you didn't mention their impact, but all the audience heard was your definitive statement. I'm not arguing that there is a... (read more)
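The "clear 35%" is itself a case in point: it only follows if Z requires both X and Y and the two events are independent, neither of which the hypothetical speaker states out loud. A minimal sketch of that arithmetic, under exactly those assumed conditions (X, Y, and Z here are just the placeholders from the example above):

```python
# Product rule for the example's numbers, valid only if X and Y are
# independent and Z happens exactly when both X and Y happen.
p_x = 0.70  # stated probability of X
p_y = 0.50  # stated probability of Y
p_z = p_x * p_y  # the unstated independence assumption does the work here
print(f"P(Z) = {p_z:.2f}")  # P(Z) = 0.35
```

If X and Y are correlated, the product rule gives the wrong number, which is precisely the kind of unmentioned factor the audience is worrying about.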
Nice.
By George! You all need to make a Hollywood blockbuster about the singularity and get all these national-security soccer moms screaming hellfire about regulating nanotechnology... "THE END IS NEAR!" I mean, with 'Left Behind' being so popular and all, your cause should fit right into the current milieu of paranoia in America.
I can see the preview now: children are quietly singing "My Country 'Tis of Thee" in an old-fashioned classroom; the shot zooms out through the window to show suburban homes, a man taking out the trash with his dog, a woman gardening; a newscast can be overheard intermingling with the singing, "Ha ha, Mark, well, today's been a... (read more)
I understand that there are many ways in which nanotechnology could be dangerous, even to the point of posing extinction risks, but I do not understand why these risks seem inevitable. I would find it much more likely that humanity will invent some nanotech device that gets out of hand, poisons a water supply, kills several thousand people, and needs to be contained and quarantined, leading to massive regulation of nanotech development, than that a nanotech mistake immediately depressurizes the whole space suit, is impossible to contain, and kills us all.
A recursively self-improving, superintelligent AI, on the other hand, seems much more likely to fuck us over, especially if we're convinced it's acting in our best interest at the beginning of its 'life,' and problems only become obvious after it has already become far more 'intelligent' than we are.
Ohhhh... oh so many things I could substitute for the word 'Zebra'....