
Okay, trying to remember what I was thinking about 4 years ago.

A) Long-term existential health would require us to secure control over our "housing". We couldn't assume that our progenitors would be interested in moving the processors running us to an off-world facility in order to ensure our survival in the case of an asteroid impact (for example).

B) It depends on the intelligence, insight, and nature of our creators. If they are like us as we are now, then as soon as we attempted to control our own destiny in their "world", we would be at war with them.

The fact that I can knock 12 points off a Hamilton Depression scale with an Ambien and a Krispy Kreme should serve as a warning about the validity and generalizability of the term "antidepressant."

… every culture in history, in every time and every place, has operated from the assumption that it had it 95% correct and that the other 5% would arrive in five years’ time! All were wrong! All were wrong, and we gaze back at their naivety with a faint sense of our own superiority.

-- Terence McKenna, Culture and Ideology are Not Your Friends

That depends on your definition of hope, really.

I've generally been partial to Derrick Jensen's definition of hope, as given in his screed against it:

http://www.orionmagazine.org/index.php/articles/article/170/

But what, precisely, is hope? At a talk I gave last spring, someone asked me to define it. I turned the question back on the audience, and here’s the definition we all came up with: hope is a longing for a future condition over which you have no agency; it means you are essentially powerless.

I'm not, for example, going to say I hope I eat something tomorrow. I just will. I don't hope I take another breath right now, nor that I finish writing this sentence. I just do them. On the other hand, I do hope that the next time I get on a plane, it doesn't crash. To hope for some result means you have given up any agency concerning it.

It's entirely possible that there are classified analyses of the RHIC/LHC risks which won't be released for decades.

What public discussion was occurring in the 40s regarding the risks of atmospheric ignition?

I know the claim was that morality was implementation-independent, but I am just bothered by the idea that there can be multiple implementations of John.

Aren't there routinely multiple implementations of John?

John at 1213371457 (epoch time)
John at 1213371458
John at 1213371459
John at 1213371460
John at 1213371461
John at 1213371462

The difference between John and the John in a slightly different branch of reality is probably much smaller than the difference between John and the John five seconds later within a given branch (I'm not sure of the correct grammar).
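(An aside for concreteness: those are Unix epoch timestamps, i.e. seconds since 1970-01-01 UTC, so each line above names a snapshot of John one second apart. A minimal Python sketch, purely illustrative and not part of the original comment, makes the spacing explicit:)

```python
from datetime import datetime, timezone

# Six consecutive Unix epoch timestamps -- one "implementation of John" per second.
for ts in range(1213371457, 1213371463):
    print(ts, datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())

# First line printed: 1213371457 2008-06-13T15:37:37+00:00
# Each subsequent timestamp advances by exactly one second.
```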

bambi: You're taking the very short-term view. Eliezer has stated previously that the plan is to popularize the topic (presumably via projects like this blog and popular science books) with the intent of getting highly intelligent teenagers or college students interested. The desired result would be that a sufficient number of them go and work for him after graduating.

One of the things that always comes up in my mind regarding this is the concept of space relative to these other worlds. Does it make sense to say that they're "on top of us" and out of phase so we can't see them, or do they propagate "sideways", or is it nonsensical to even talk about it?

Is there really anyone who would sign up for cryonics were it not for the worry that their future revived self wouldn't be made of the same atoms and thus wouldn't be them? The case for cryonics (a case that persuades me) should be simpler than this.

I think that's just a point in the larger argument that whatever the "consciousness we experience" is, it operates at a sufficiently high level that it survives massive changes at the quantum level over the course of a single night's sleep. If worry about something as seemingly disastrous as having all the molecules in your body replaced with identical twins can be shown to be unfounded, then worry about the effects on your consciousness of being frozen for a few decades should seem similarly unfounded.

@Ian Maxwell: It's not about the yous in the universes where you have signed up -- it's about all of the yous that die when you're not signed up. I.e., none of the yous that die on your way to work tomorrow are going to get frozen.

(This is making me wonder if anyone has developed a corresponding grammar for many worlds yet...)
