Richard_Hollerith
Richard_Hollerith has not written any posts yet.

Anna, it takes very little effort to rattle off a numerical probability -- and then most readers take away an impression (usually false) of precision of thought.
At the start of Causality, Judea Pearl explains why humans (should and usually do) use "causal" concepts rather than "statistical" ones. Although I do not recall whether he comes right out and says it, I definitely took away from Pearl the heuristic that stating your probability about some question is basically useless unless you also state the calculation that led to the number. I do recall that stating a number is clearly what Pearl defines as a statistical statement rather than a causal statement...
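To make the heuristic concrete (the numbers here are invented purely for illustration; they are not Pearl's): compare the bare assertion "89%" with a statement that exposes the calculation behind it, e.g.

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
            = \frac{0.8 \times 0.5}{0.8 \times 0.5 + 0.1 \times 0.5} \approx 0.89
\]

The second form lets a reader audit the likelihoods and the prior instead of taking the bare number on faith.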
Instead of describing my normative reasoning as guided by the criterion of non-arbitrariness, I prefer to describe it as guided by the criterion of minimizing or pessimizing algorithmic complexity. And that is a reply to steven's question right above: there is nothing unstable or logically inconsistent about my criterion for the same reason that there is nothing unstable about Occam's Razor.
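For concreteness (this is the standard reading of "algorithmic complexity," which the comment does not spell out): the quantity being minimized is Kolmogorov complexity,

\[
K(x) = \min \{\, \lvert p \rvert : U(p) = x \,\},
\]

the length of the shortest program p that makes a fixed universal machine U output x. A criterion that prefers low K is the same fixed rule no matter what it is applied to, including itself, which is the sense in which it is no more unstable than Occam's Razor.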
Roko BTW had a conversion experience and now praises CEV and the Fun Theory sequence.
Let me clarify that what horrifies me is the loss of potential. Once our space-time continuum becomes a bunch of supermassive black holes, it remains that way till the end of time. It is the condition of maximum physical entropy (according to Penrose). Suffering on the other hand is impermanent. Ever had a really bad cold or flu? One day you wake up and it is gone and the future is just as bright as it would have been if the cold had never been.
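The Penrose point rests on the Bekenstein-Hawking formula (standard physics, stated here for concreteness): a black hole's entropy is proportional to its horizon area, which grows as the square of its mass,

\[
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar}, \qquad A = 16\pi \left(\frac{G M}{c^2}\right)^{2},
\]

so a hole of mass 2M has four times the area, hence four times the entropy, of a hole of mass M. Every merger and every bit of accretion increases total entropy, which is why a universe that has swept its ordinary matter into supermassive black holes sits at the entropic end state.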
And pulling numbers (80%, 95%) out of the air on this question is absurd.
Richard, I'd take the black holes of course.
As I expected. Much of what you (Eliezer) have written entails it, but it still gives me a shock, because piling as much ordinary matter as possible into supermassive black holes is the most evil end I have been able to imagine. In contrast, suffering is merely subjective experience and consequently, according to my way of assigning value, unimportant.
Transforming ordinary matter into mass inside a black hole is a very potent means to create free energy, and I can imagine applying that free energy to ends that justify the means. But to put ordinary matter and radiation into black holes massive enough that the mass will never come back out as Hawking radiation as an end in itself -- horror!
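For scale (standard astrophysics figures, not from the comment itself): matter falling toward a black hole can radiate away a fraction \(\eta\) of its rest-mass energy before crossing the horizon,

\[
E_{\text{released}} = \eta\, m c^{2}, \qquad
\eta \approx 0.057 \ \text{(Schwarzschild)}, \qquad
\eta \approx 0.42 \ \text{(extremal Kerr)},
\]

compared with \(\eta \approx 0.007\) for hydrogen fusion. That is why accretion is such a potent source of free energy even though the mass that finally falls in is lost.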
Question for Eliezer. If the human race goes extinct without leaving any legacy, then according to you, any nonhuman intelligent agent that might come into existence will be unable to learn about morality?
If your answer is that the nonhuman agent might be able to learn about morality if it is sentient, then please define "sentient". What is it about a paperclip maximizer that makes it nonsentient? What is it about a human that makes it sentient?
Speaking of compressing down nicely, that is a nice and compressed description of humanism. Singularitarians, question humanism.
trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it.
I cannot recall ever trying to delete or even control a large part of myself, so no opinion there, but "distancing ourselves from ourselves" sounds a lot like developing what some have called an observing self, which is probably a very valuable thing for a person wishing to make a large contribution to the world, IMHO.
A person worried about not feeling alive enough would probably get more bang for his buck by avoiding exposure to mercury, which binds permanently to serotonin receptors, causing a kind of deadening.
s/werewolf/Easter bunny/ IMHO.
Did that make sense?
Yes, and I can see why you would rather say it that way.
My theory is that most of those who believe quantum suicide is effective assign negative utility to suffering and also assign a negative utility to death, but knowing that they will continue to live in one Everett branch removes the sting of knowing (and consequently the negative utility of the fact) that they will die in a different Everett branch. I am hoping Cameron Taylor or another commentator who thinks quantum suicide might be effective will let me know whether I have described his utility function.
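One way to write down the utility function I am attributing to them (my reconstruction; a proponent should feel free to correct it):

\[
U = \sum_i p_i\, u(\text{branch}_i), \qquad
u(\text{my death in branch } i) =
\begin{cases}
-d & \text{if no branch of me survives},\\
\;\;0 & \text{if some branch of me survives},
\end{cases}
\]

so a quantum-suicide gamble that kills the agent in measure \(1-p\) of branches incurs no death penalty at all as long as \(p > 0\), leaving only the suffering term to weigh.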
Eliezer's novella provides a vivid illustration of the danger of promoting what should have stayed an instrumental value to the status of a terminal value. Eliezer likes to refer to this all-too-common mistake as losing purpose. I like to refer to it as adding a false terminal value.
For example, eating babies was a valid instrumental goal when the Babyeaters were at an early stage of technological development. It is not IMHO evil to eat babies when the only alternative is chronic severe population pressure, which will eventually lead either to your extinction or to the disintegration of your agricultural civilization with a reversion to a more primitive existence in...