
2012 Less Wrong Census/Survey

Good point. This website is dedicated to such an outcome, right?

If the future Agent fully revives dead people purely for selfish reasons, that might be worse than no revival at all.

Reconstructed 21st-C minds might be most valuable as stock non-player-characters in RPG games. Their afterlife might consist of endlessly driving a cab in a 3-block circle, occasionally interrupted when a PC hops in and says "follow that car!", death in a fiery crash, followed by amnesia and reset.

Is anyone working on legal rights for sentient software?

2012 Less Wrong Census/Survey

Re: cryonics, assume the following:

1) Any Agent that reconstructs my mind from a plasticized or frozen brain is very smart and well-informed. It is working its way through a whole warehouse of similar 21st century brains, and can reconstruct vast swathes of my mind with generic any-human or any-human-who-grew-up-watching-Sesame-Street boilerplate. This gets boring after the first few hundred.

2) I'm of no practical use in the post-Singularity world, with my obsolete work skills and mismatching social and moral behavior.

3) Frozen-brain reconstruction starts late enough that nobody remains alive who knows and loves me personally.

In this scenario, I expect the compressed mind reconstructions are just stored in an archive for research/entertainment purposes. Why bother ever running the reconstruction long enough for it to subjectively "wake up"?

I think that we need to let go of the idea of immortality as a continuation of our present self. The most we can hope for is that far in the future, some hyper-intelligent Agent has our memories. And probably the memories of thousands of other dead people as well.

Cryonics is most like writing a really detailed autobiography for future people to read after we're dead. This still seems worthwhile to me, but it's not the same thing as there being a living Charlie Davies in the 23rd century.

I took the survey.

Which Parts Are "Me"?

Robin Hanson comments that the "I" even in a reflective person's mind is an unstable coalition.

My guess is that Eliezer knows this, and is defining his "self" to mean something like "the shifting coalition within this brain that is trying to save the world". If this guess is wrong, I'd love to find out; this seems like the crucial bit to me.

Welcome to Less Wrong! (July 2012)

Never mind, I found the Group rationality diary which is exactly the right aggregation point for self-improvement schemes.

Beyond Bayesians and Frequentists

Trivia: "fairs well against" should be "fares well against".

Rationality Quotes November 2012

True. Also, the media will tend to exaggerate the tightness of any race to make their news more exciting. Who will stay up until the wee hours of the morning watching commercials and news if the outcome is certain?

Rationality Quotes November 2012

Many Republican pundits had elaborate theories about how polls were understating Romney's chances in the recent US presidential election, but the results turned out to match polls quite well.

Welcome to Less Wrong! (July 2012)

Hi, Charlie here.

I'm a middle-aged high-school dropout, married with several kids. Also a self-taught computer programmer working in industry for many years.

I have been reading Eliezer's posts since before the split from Overcoming Bias, but until recently I only lurked -- I'm shy.

I broke cover recently by joining a barbell forum to solve some technical problems with my low-bar back squat, then stayed to argue about random stuff. Few on the barbell forum argue well -- it's unsatisfying. Setting my sights higher, I'm now joining this forum.

I'll probably start by trying some of the self-improvement schemes and reporting results. Any recommendations re: where to start?