Your 'epiphenomena' are good old invariants. When you talk about exorcising epiphenomena, you are really talking about establishing invariants as laws that let you use fewer degrees of freedom. One can even say that consciousness depends only on the physical makeup of the universe, and hence is an invariant across universes with the same physical makeup. What is the point of reformulating it your way, exactly?

Caledonian, you are not helping by disagreeing without clarification. You don't need to be certain about anything, including your estimate of how uncertain you are about something, your estimate of how uncertain that estimate is, and so on.

Roland,

Probabilities allow graded beliefs. Just as Achilles's pursuit of the tortoise can be regarded as an infinite number of steps that nevertheless sum to a finite distance, because the steps get arbitrarily short, you can join infinitely many, increasingly unlikely events into a compound event of finite probability. This is a way to avoid the regress Caledonian was talking about. Evidence can shift probabilities on all metalevels, even if in some hapless formalism there are infinitely many of them, and still lead to reasonable finite conclusions (decisions).
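As a toy illustration of that arithmetic (the geometric weights are my own example, not anything from the original exchange): if each further level of justification is given half the remaining probability mass, the infinite regress still sums to a finite total.

```latex
% Toy example: infinitely many ever-smaller probabilities summing to a finite value.
% The specific halving weights are an illustrative assumption.
\sum_{n=1}^{\infty} \frac{1}{2^{n}} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = 1
```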

We could provide a warning, of course. But how would we then ensure that people understood and applied the warning? Warn them about the warning, perhaps? And then give them a warning about the warning warning?

That's the problem with discrete reasoning. When you have probabilities, this problem disappears. See http://www.ditext.com/carroll/tortoise.html

I started to think seriously about rationality only when I started to think about AI, trying to understand grounding. When I saw that meaning, communication, correctness and understanding are just particular ways to characterize probabilistic relations between "representation" and "represented", it all started to come together, and later carried over to human reasoning and beyond. So it was the enigma of AI that acted as a catalyst in my case, not a particular delusion (or misplaced trust). Most of the things I read on the subject were outright confused or in a state of paralyzed curiosity, not deluded in a particular technical way. But so is "Science". The problem is settling for the status quo, walking along the trodden track where it's possible to do better.

Thus, I see this post as a demonstration by example of how important it is to break your trust in all of your cherished curiosity stoppers.

HA: "Trying cryonics requires a leap of faith straight into the unknown for a benefit with an unestimable likelihood."

That's what probability is for, isn't it? If you don't know and don't have good prior hints, you just choose a prior at random, merely making sure that mutually exclusive outcomes sum to 1, and then adjust it with what little evidence you've got. In reality, you usually do have some prior predispositions, though. You don't raise your hands in awe and exclaim that this probability is too shaky to be estimated or even thought about, because even in doing so you make decisions and take actions which, given your goals, implicitly assume a certain assignment of probability.

In other words, if you decide not to take a bet, you implicitly assign a low probability to the outcome. That conflicts with saying "there are too many unknowns to make an estimation": you just made one. If you don't back it up, it's as good as any other.
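To make the implicit assignment concrete with hypothetical numbers (the bet terms below are my own assumption, not anything HA proposed): declining a bet places a bound on the probability you are effectively using.

```latex
% Hypothetical bet: pay c now, receive x if the outcome occurs.
% Declining it under expected-value reasoning means  p \, x < c,  i.e.
p < \frac{c}{x}
% e.g. refusing to stake 1 unit against a payoff of 100 implies p < 1/100.
```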

I assign a high probability to the success of cryonics (about 50%), given a benevolent singularity (which is a different issue entirely, and not necessarily a high-probability outcome, so it can shift the resulting absolute probability significantly). In other words, if information-theoretic death doesn't occur during cryopreservation (and I don't presently have noticeable reasons to believe that it does), singularity-grade AI should provide enough technological capability to revive patients "for free". Of course, for the decision it's the absolute probability that matters, but I have my own reasons to believe that a benevolent singularity is technically possible and reasonably likely relative to other outcomes, and I assign about 10% to it.
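Spelling out the arithmetic with the figures above (assuming the 50% is conditional on the 10% scenario, and simply multiplying):

```latex
% Absolute probability = P(revival | benevolent singularity) * P(benevolent singularity)
P(\text{revival}) \approx 0.5 \times 0.1 = 0.05
```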

One problem is that the 'you' that can be affected by things you expect to interact with in the future is in principle no different from those space colonists who are sent out. You can't interact with future-you. All the decisions we make shape a future with which we don't directly interact. Future-you is just the result of one more 'default' manufacturing process, in which the laws of physics ensure that there is a physical structure very similar to the one that existed in the past. Hunger is a drive that makes you 'manufacture' a fed-future-you, compassion is a drive that makes you 'manufacture' a good-feeling-other-person, and so on.

I don't see any essential difference between decisions that produce an 'observable' effect and those that produce an 'invisible' one. What makes you value some future states and not others is your makeup, the 'thousand shards of desire' as Eliezer put it, and among these there may well be some that assign value to physical states that don't interact with the decision-maker's body.

If I put a person in a black box, program it to torture that person for 50 years, and then automatically destroy all evidence, so that no tortured-person state can ever be observed, isn't that as 'invisible' as sending a photon away? I know that the person is being tortured, and likewise I know that the photon is flying away, but I can't interact with either of them. And yet I assign a distinct negative value to the invisible-torture box. It's one of the stronger drives built into me.

The joy of textbook-mediated personal discovery...

Eliezer,

What do specks have to do with circularity? Whereas in the last posts you explained that certain groups of decision problems are mathematically equivalent, independently of the actual decision, here you argue for a particular decision. Note that utility is not necessarily linear in the number of people.
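As one hypothetical illustration of that last point (the functional form and the parameter k are my own, not anything argued in the posts): under a bounded, concave aggregation, many small harms need not sum past one large harm.

```latex
% Hypothetical bounded aggregation of n equal small harms of individual disutility d:
U(n) = -d\,\bigl(1 - e^{-n/k}\bigr), \qquad k > 0
% |U(n)| never exceeds d, however large n gets, unlike the linear sum -d\,n.
```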

The discount rate takes care of the effect your effort can have on the future, relative to the effect it will have on the present; it has nothing to do with the 'intrinsic utility' of things in the future. The future doesn't exist in the present; you only have a model of the future when you make decisions in the present. Your current decisions are only as good as your ability to anticipate their effects in the future, and the process Robin described in his blog post reply is how it can proceed: it assumes that you know very little and will be better off just passing resources to future folk to take care of whatever they need themselves.
