# All of Kindly's Comments + Replies

Based on our rational approach we are at a disadvantage for discovering these truths.

Is that a bad thing?

Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don't see it as a problem.

The situation you're describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you're at a disadvant...

0TheAncientGeek6y
Whereas someone who understands advanced probability, particularly the value/utility distinction, might. So long as you can put a ceiling on possible benefits.
0Erfeyah6y
I propose that it is a bad thing. Your assessment makes the assumption that the knowledge that we are missing is "not that important". Since we do not know what the knowledge we are missing is, its significance could range from insignificant to essential. We are not at the point where we can make that distinction, so we had better start realising and working on the problem. That is my position. To my eyes your further analysis makes the assumption that the only strategy we can follow would be to randomly try out beliefs. Although I have not formulated a solution (I am currently just describing the problem), I can already see much more efficient ways of navigating the space. I will post when I have something more developed to say about this.

When lacking evidence, the testing process is difficult, weird and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.

And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.

From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my stand...

0Erfeyah6y
Exactly, that is why I am pointing towards the problem. Based on our rational approach we are at a disadvantage for discovering these truths. I want to use this post as a reference to the issue as it can become important in other subjects. Yes, that is the other way in. Trust and respect. Unfortunately, I feel we tend to surround ourselves with people that are similar to us and thus selecting our acquaintances in the same way we select ideas to focus on. In my experience (which is not necessarily indicative), people tend to just blank out unfamiliar information or consider it a bit of an eccentricity. In addition, as stated, if a subject requires substantial effort before you can confirm its validity it becomes exponentially harder to communicate even in these circumstances.

Rational assessment can be misleading when dealing with experiential knowledge that is not yet scientifically proven, has no obvious external function but is, nevertheless, experientially accessible.

So, uh, is the typical claim that has an equal lack of scientific evidence true, or false? (Maybe if we condition on how difficult it is to prove.)

If true - then the rational assessment would be to believe such claims, and not wait for them to be scientifically proven.

If false - then the rational assessment would be to disbelieve such claims. But for most su...

0Erfeyah6y
[5.1] As ProofOfLogic indicates with his example of shamanistic scammers the space of claims about subjective experiences is saturated with demonstrably false claims. [5.2] This actually causes us to adjust and have a rule of ignoring all strange sounding claims that require subjective evidence (except if it is trivial to test). You are right that if the claim is true an idealised rational assessment should be to believe the claim. But how do you make a rational assessment when you lack evidence? When lacking evidence, the testing process is difficult, weird and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.
0ProofOfLogic6y
We also have to take into account priors in an individual situation. So, for example, maybe I have found that shamanistic scammers who lie about things related to dreams are pretty common. Then it would make sense for me to apply a special-case rule to disbelieve strange-sounding dream-related claims, even if I tend to believe similarly surprising claims in other contexts (where my priors point to people's honesty).

I think that in the interests of being fair to the creators of the video, you should link to http://www.nottingham.ac.uk/~ppzap4/response.html, the explanation written by (at least one of) the creators of the video, which addresses some of the complaints.

In particular, let me quote the final paragraph:

There is an enduring debate about how far we should deviate from the rigorous academic approach in order to engage the wider public. From what I can tell, our video has engaged huge numbers of people, with and without mathematical backgrounds, and got them

...

No, I think I meant what I said. I think that this song lyric can in fact only make a difference given a large pre-existing weight, and I think the distribution of being weirded out by Solstices is bimodal: there are not people that are moderately weirded out but not enough to leave.

Extremely unlikely that people exist that aren't weirded out by Solstices in general but one song lyric is the straw that breaks the camel's back.

2itaibn06y
"Straw that breaks the camel's back" implies the existence of a large pre-existing weight, so your claim is a tautology.

Not quite. I outlined the things that have to be going on for me to be making a decision.

0Unknowns8y
You cannot assume that any of those things are irrelevant or that they are overridden just because you have a gene. Presumably the gene is arranged in coordination with those things.

In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.

If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's prob...

1Unknowns8y
This is like saying "if my brain determines my decision, then I am not making the decision at all."

Let's assume that every test has the same probability of returning the correct result, regardless of what it is (e.g., if + is correct, then Pr[A returns +] = 12/20, and if - is correct, then Pr[A returns +] = 8/20).

The key statistic for each test is the ratio Pr[X is positive|disease] : Pr[X is positive|healthy]. This ratio is 3:2 for test A, 4:1 for test B, and 5:3 for test C. If we assume independence, we can multiply these together, getting a ratio of 10:1.

If your prior is Pr[disease]=1/20, then Pr[disease] : Pr[healthy] = 1:19, so your posterior odds ...
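The truncated arithmetic can be finished mechanically. A minimal sketch in Python using exact fractions (assuming, as in the setup above, that all three tests came back positive):

```python
# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
from fractions import Fraction

prior_odds = Fraction(1, 19)  # Pr[disease] = 1/20, i.e. odds of 1:19

# Likelihood ratios from the text: A = 3:2, B = 4:1, C = 5:3.
# Under the independence assumption they simply multiply.
lr_total = Fraction(3, 2) * Fraction(4, 1) * Fraction(5, 3)  # = 10, i.e. 10:1

posterior_odds = prior_odds * lr_total                  # 10:19
posterior_prob = posterior_odds / (1 + posterior_odds)  # 10/29, about 34.5%
```

So three positive results move a 5% prior up to roughly a one-in-three chance of disease.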

2Bound_up8y
Kindly, indeed. Thank you. I believe I've got it down now.

Prior: 1/101. Test: correct positive 95%, false positive 20%.

1 of the 101 has the disease, with a 95% probability of receiving a positive reading, denoting 1 x .95 = .95. And 100 don't have the disease, each with a 20% probability of a positive reading, denoting 100 x .2 = 20. Then .95 + 20 = 20.95, and .95 / 20.95 = .045, denoting a 4.5% chance that someone receiving a positive reading has the disease.

Thank you again :)

If you're looking for high-risk activities that pay well, why are you limiting yourself to legal options?

0Ishaan8y
I'm not limiting myself to "high-risk activities that pay well", I'm limiting myself to "legally feasible high risk and helpful services that also pay really well" ;) The "helpful" is the goal, the rest are instrumental. I think most stuff leading to morally good outcomes is legal. Even illegal stuff which might be good if only it were legal turns out bad simply due to the practical realities of illegal operations.

On the subject of Arimaa, I've noted a general feeling of "This game is hard for computers to play -- and that makes it a much better game!"

Progress of AI research aside, why should I care if I choose a game in which the top computer beats the top human, or one in which the top human beats the top computer? (Presumably both the top human and the top computer can beat me, in either case.)

Is it that in go, you can aspire (unrealistically, perhaps) to be the top player in the world, while in chess, the highest you can ever go is a top human that wi...

This ought to be verified by someone to whom the ideas are genuinely unfamiliar.

I know that's what you're trying to say because I would like to be able to say that, too. But here's the problems we run into.

1. Try writing down "For all x, some number of subtract 1's cause it to equal 0". We can write "∀x. ∃y. F(x,y) = 0" but in place of F(x,y) we want "y iterations of subtract 1's from x". This is not something we could write down in first-order logic.

2. We could write down sub(x,y,0) (in your notation) in place of F(x,y)=0 on the grounds that it ought to mean the same thing as "y iterations of su

...
0Houshalter8y
I think I'm starting to get it. That there is no property that a natural number could be defined as having, that an infinite chain couldn't also satisfy in theory. That's really disappointing. I took a course on logic and the most inspiring moment was when the professor wrote down the axioms of Peano arithmetic. They are more or less formalizations of all the stuff we learned about numbers in grade school. It was cool that you could just write down what you are talking about formally and use pure logic to prove any theorem with them. It's sad that it's so limited you can't even express numbers.

Repeating S n times is not addition: addition is the thing defined by those axioms, no more, and no less. You can prove the statements:

∀x. plus(x, 1, S(x))

∀x. plus(x, 2, S(S(x)))

∀x. plus(x, 3, S(S(S(x))))

and so on, but you can't write "∀x. plus(x, n, S(S(...n...S(x))))" because that doesn't make any sense. Neither can you prove "For every x, x+n is reached from x by applying S to x some number of times" because we don't have a way to say that formally.

From outside the Peano Axioms, where we have our own notion of "number", we ...

0Houshalter8y
But that's what I'm trying to say. To say n number of times, you start with n and subtract 1 each time until it equals zero. So for addition, 2+3 is equal to 3+2, is equal to 4+1, is equal to 5+0. For subtraction you do the opposite and subtract one from the left number too each time. If no number of subtract 1's cause it to equal 0, then it can't be a number.
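The back-and-forth above can be sketched in code (Python; the `('S', n)` encoding and function names are mine): addition defined by nothing but the two axioms, and conversion to an ordinary integer by exactly the "subtract 1 until it equals zero" procedure described here.

```python
# Peano numerals: Z is zero, ('S', n) is the successor of n.
Z = 'Z'
def S(n):
    return ('S', n)

def plus(x, y):
    # The two defining axioms of addition, and nothing else:
    #   plus(x, Z)    = x
    #   plus(x, S(y)) = S(plus(x, y))
    if y == Z:
        return x
    return S(plus(x, y[1]))

def to_int(n):
    # "Subtract 1 until it equals zero", counting the steps.
    # On a genuine numeral this terminates; on an "infinite chain"
    # (a nonstandard element) it would loop forever, which is the point.
    count = 0
    while n != Z:
        n = n[1]
        count += 1
    return count

two = S(S(Z))
three = S(S(S(Z)))
assert to_int(plus(two, three)) == 5
```

The catch the thread is circling: `to_int`'s `while` loop lives outside the first-order language, so "some number of subtract 1's reaches zero" is exactly the thing the axioms cannot say.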

What makes you think that decision making in our brains is free of "regular certainty in physics"? Deterministic systems such as weather patterns can be unpredictable enough.

To be fair, if there's some butterfly-effect nonsense going on where the exact position of a single neuron ends up determining your decision, that's not too different from randomness in the mechanics of physics. But I hope that when I make important decisions, the outcome is stable enough that it wouldn't be influenced by either of those.

I'd say this is not needed: when people say "Snow is white" we know that it really means "Snow seems white to me", so saying it as "Snow seems white to me" adds length without adding information.

Ah, but imagine we're all-powerful reformists that can change absolutely anything! In that case, we can add a really simple verb that means "seems-to-me" (let's say "smee" for short) and then ask people to say "Snow smee white".

Of course, this doesn't make sense unless we provide alternatives. For inst...

0ChristianKl8y
A quarter of the world's languages mark evidentiality [http://en.wikipedia.org/wiki/Evidentiality] at the grammatical level. Indo-European languages like English don't do this, but other languages do.
2Jiro8y
It isn't possible for someone to consistently assert "X is true, but X doesn't seem true to me". And it isn't possible for someone to consistently assert "X seems true to me, but X is false". [1] So even though "seems to me" and "is" are not logically the same thing, no human being can separate them and we have no need for a special word to make it convenient to separate them. [1] Of course they can assert that if we use a secondary meaning for 'seems' such as "superficially appears to be", but that's not the meaning of 'seems' in question here.
3TheOtherDave8y
http://en.wikipedia.org/wiki/Evidentiality [http://en.wikipedia.org/wiki/Evidentiality]

Insurance makes a profit in expectation, but an insurance salesman does have some tiny chance of bankruptcy, though I agree that this is not important. What is important, however, is that an insurance buyer is not guaranteed a loss, which is what distinguishes it from other Dutch books for me.

Prospect theory and similar ideas are close to an explanation of why the Allais Paradox occurs. (That is, why humans pick gambles 1A and 2B, even though this is inconsistent.) But, to my knowledge, while utility theory is both a (bad) model of humans and a guide to ho...

Yyyyes and no. Our utility functions are nonlinear, especially with respect to infinitesimal risk, but this is not inherently bad. There's no reason for our utility to be everywhere linear with wealth: in fact, it would be very strange for someone to equally value "Having $1 million" and "Having $2 million with 50% probability, and having no money at all (and starving on the street) otherwise".
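A concrete sketch of that nonlinearity (log utility is a standard illustrative choice, not a claim about anyone's actual utility function; the +1 is my hack to keep the utility of zero wealth finite):

```python
import math

def log_utility(wealth):
    # Concave utility: each extra dollar matters less than the last.
    return math.log(wealth + 1)

eu_sure = log_utility(1_000_000)
eu_gamble = 0.5 * log_utility(2_000_000) + 0.5 * log_utility(0)
# eu_sure > eu_gamble: the risk-averse agent prefers the certain million,
# even though both options have the same expected *dollar* value.
```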

Insurance does take advantage of this, and it's weird in that both the insurance salesman and the buyers of insurance end up better off in expec...

4Xachariah8y
I didn't mean to imply nonlinear functions are bad. It's just how humans are. Prospect Theory describes this and even has a post here on lesswrong [http://lesswrong.com/lw/6kf/prospect_theory_a_framework_for_understanding/]. My understanding is that humans have both a non-linear utility function and a non-linear risk function. This seems like a useful safeguard against imperfect risk estimation. If you set up your books correctly, then it is guaranteed. A Dutch book doesn't need to work with only one participant; in fact, many Dutch books only work on populations rather than individuals, in the same way insurance only guarantees a profit when properly spread across groups.

When it comes to neutral geometry, nobody's ever defined "parallel lines" in any way other than "lines that don't intersect". You can talk about slopes in the context of the Cartesian model, but the assumptions you're making to get there are far too strong.

As a consequence, no mathematicians ever tried to "prove that parallel lines don't intersect". Instead, mathematicians tried to prove the parallel postulate in one of its equivalent forms, of which some of the more compelling or simple are:

• The sum of the angles in a trian

...
2Epictetus8y
Well, Euclid was the standard textbook in geometry for a long time. There was a movement in the 1800s to replace the Elements with a more modern textbook, and a number of authors used different definitions, which just ended up requiring them to introduce other axioms to get the result. Lewis Carroll ended up satirizing [http://www.amazon.com/Euclid-modern-rivals-Lewis-Carroll-ebook/dp/B00APRRSMG/ref=sr_1_1?s=books&ie=UTF8&qid=1430453557&sr=1-1] the affair. If it were elegant, mathematicians wouldn't have spent 2,000 years trying to prove it from the other four postulates. I very much doubt Euclid himself liked it. Intuition suggests that the result should follow from more elementary notions. It was a workaround to let Euclid get on with his book, and later mathematicians looked for a more elegant formulation. Is it obvious from the definition of parallel lines that this ought to be true? That equality should be transitive seems like so obvious an idea that it's barely worth writing down. EDIT: It's worth noting that classical mathematicians had very different ideas about what axioms should be. To them, axioms should be self-evident. Modern mathematics has no such requirements for its axioms. These are two very different attitudes about what axioms ought to be.

Understandable; perhaps. In mathematics, it is very easy to say understandable things that are simply false. In this case, those false things become nonsense when you realize that the meaning of "parallel lines" is "lines that do not intersect".

You might say that an explanation gets these facts completely wrong, then it is still a good explanation if it makes you think the right things. I say that such an explanation goes against the spirit of all mathematics. It is not enough that your argument is understandable, for many understandabl...

Only a single mile to the mile? I've seen maps in biology textbooks that were much larger than that.

Okay, then interpret my answer as "rape and murder are bad because they make others sad, and making others sad is bad by definition".

2dxu8y
Your second sentence does not imply your first. (Nor is it true--ignoring the misphrasing of the axiom, the rest of the discussion is perfectly understandable.)

You can always keep asking why. That's not particularly interesting.

0DanArmak8y
In morals, as in logic, you can't explain something by appealing to something else unless the chain terminates in an axiom. The question "why is it bad to rape and murder?" can be rephrased as, "how can we determine if a thing is bad, in the case of rape and murder?" The answer "rape and murder are bad by definition" may be unsatisfying, but at least it's a workable way: everything on the list is bad, everything else is not. But the answer "because they make others sad" assumes you can determine making others sad is bad. You substitute one question for another, and unless we keep asking why, we won't have answered the original question.

It occurs to me that we can express this problem in the following isomorphic way:

1. Omega makes an identical copy of you.

2. One copy exists for a week. You get to pick whether that week is torture or nirvana.

3. The other copy continues to exist as normal, or maybe is unconscious for a week first, and depending on what you picked for step 2, it may lose or receive lots of money.

I'm not sure how enlightening this is. But we can now tie this to the following questions, which we also don't have answers to: is an existence of torture better than no existence at all? And is an existence of nirvana good when it does not have any effect on the universe?

Yes, this, exactly.

I do nice things for myself not because I have deep-seated beliefs that doing nice things for myself is the right thing to do, but because I feel motivated to do nice things for myself.

I'm not sure that I could avoid doing those things for myself (it might require willpower I do not have) or that I should (it might make me less effective at doing other things), or that I would want to if I could and should (doing nice things for myself feels nice).

But if we invent a new nice thing to do for myself that I don't currently feel motivated to...

2James_Miller8y
You might be able to achieve significantly better life outcomes for yourself by becoming more strategic [http://lesswrong.com/lw/2p5/humans_are_not_automatically_strategic/].

I did say what I would do, given the premise that I know Omega is right with certainty. Perhaps I was insufficiently clear about this?

I am not trying to fight the hypothetical, I am trying to explain why one's intuition cannot resist fighting it. This makes the answer I give seem unintuitive.

So the standard formulation of a Newcomb-like paradox continues to work if you assume that Omega has a merely 99% accuracy.

Your formulation, however, doesn't work that way. If you precommit to suicide when Omega asks, but Omega is sometimes wrong, then you commit suicide with 1% probability (in exchange for having $990 expected winnings). If you don't precommit, then with a 1% chance you might get $1000 for free. In most cases, the second option is better.

Thus, the suicide strategy requires very strong faith in Omega, which is hard to imagine in practice....
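A sketch of that comparison (Python; `value_of_life` is my stand-in parameter for how bad the suicide branch is, measured in the same dollars as the prize):

```python
def precommit_ev(accuracy, prize, value_of_life):
    # Precommit to comply: collect the prize when Omega predicted correctly,
    # lose your life in the cases where Omega was wrong.
    return accuracy * prize - (1 - accuracy) * value_of_life

def refuse_ev(accuracy, prize):
    # Never comply: you profit only when Omega was wrong and the prize
    # arrives anyway.
    return (1 - accuracy) * prize

# At 99% accuracy, precommitting wins only if a life is "worth" less than
# $98,000 in this currency: 0.99*1000 - 0.01*V > 0.01*1000  <=>  V < 98,000.
```

At accuracy exactly 1 the wrong-prediction branch vanishes and precommitting dominates regardless of the stakes, which is why the strategy hinges on very strong faith in Omega.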

-2Houshalter8y
Please Don't Fight the Hypothetical [http://lesswrong.com/lw/bwp/please_dont_fight_the_hypothetical/]. I agree with you if you are only 99% sure, but the premise is that you know Omega is right with certainty. Obviously that is implausible, but so is the entire situation with an omniscient being asking people to commit suicide, or oracles that can predict if you will die. But if you like you can have a lesser cost, like Omega asking you to pay $10,000. Or some amount of money significant enough to seriously consider just giving away.

Result spoilers: Fb sne, yvxvat nypbuby nccrnef gb or yvaxrq gb yvxvat pbssrr be pnssrvar, naq gb yvxvat ovggre naq fbhe gnfgrf. (Fbzr artngvir pbeeryngvba orgjrra yvxvat nypbuby naq yvxvat gb qevax ybgf bs jngre.)

I haven't done the responsible thing and plotted these (or, indeed, done anything else besides take whatever correlation coefficient my software has seen fit to provide me with), so take with a grain of salt.

I believe editing polls resets them, so there's no reason to do it if it's just an aesthetically unpleasant mistake that doesn't hurt the accuracy of the results.

Absolutely. We're bad at anything that we can't easily imagine. Probably, for many people, intuition for "torture vs. dust specks" imagines a guy with a broken arm on one side, and a hundred people saying 'ow' on the other.

The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn't take the number of people saved by an intervention into account; we just picture the typical effect on a single person.

What, I wonder, are the ...

1TomStocker8y
My feeling is that situations like being caught doing something horrendous might or might not be subject to psychological adjustment - that many situations of suffering are subject to psychological adjustment and so might actually be not as bad as we thought. But chronic intense pain is literally unadjustable to some degree - you can adjust to being in intense suffering but that doesn't make the intense suffering go away. That's why I think it's a special class of states of being - one that invokes action. What do people think?
7dxu8y
My heuristic for dealing with such situations is somewhat reminiscent of Hofstadter's Law: however bad you imagine it to be, it's worse than that, even when you take the preceding statement into account. In principle, this recursion should go on forever and lead to you regarding any sufficiently unimaginably bad situation as infinitely bad, but in practice, I've yet to have it overflow, probably because your judgment spontaneously regresses back to your original (inaccurate) representation of the situation unless consciously corrected for.

That wasn't obvious to me. It's certainly false that "people who use the strategy of always paying have the same odds of losing $1000 as people who use the strategy of never paying". This means that the oracle's prediction takes its own effect into account. When asking about my future, the oracle doesn't ask "Will Kindly give me $1000 or die in the next week?" but "If hearing a prophecy about it, will Kindly give me $1000 or die in the next week?"

Hearing the prediction certainly changes the odds that the first clause will come...

You're saying that it's common knowledge that the oracle is, in fact, predicting the future; is this part of the thought experiment?

If so, there's another issue. Presumably I wouldn't be giving the oracle $1000 if the oracle hadn't approached me first; it's only a true prediction of the future because it was made. In a world where actual predictions of the future are common, there should be laws against this, similar to laws against blackmail (even though it's not blackmail).

(I obviously hand over the $1000 first, before trying to appeal to the law.)

2DanielLC8y
Why? People who use the strategy of always paying don't live any longer than people who use the strategy of never paying. They also save money and get to find out a week in advance if they'd die so they can get their affairs in order.

Given that I remember spending a year of AP statistics only doing calculations with things we assumed to be normally distributed, it's not an unreasonable objection to at least some forms of teaching statistics.

Hopefully people with statistics degrees move beyond that stage, though.

There are varieties of strawberries that are not sour at all, so I suppose it's possible that you simply have limited experience with strawberries. (Well, you probably must, since you don't like them, but maybe that's the reason you don't think they're sour, as opposed to some fundamental difference in how you taste things.)

I actually don't like the taste of purely-sweet strawberries; the slightly-sour ones are better. A very unripe strawberry would taste very sour, but not at all sweet, and its flesh would also be very hard.

Do you have access to the memory wiping mechanism prior to getting your memory wiped tomorrow?

If so, wipe your memory, leaving yourself a note: "Think of the most unlikely place where you can hide a message, and leave this envelope there." The envelope contains the information you want to pass on.

Then, before your memory is wiped tomorrow, leave yourself a note: "Think of the most unlikely place where you can hide a message, and open the envelope hidden there."

Hopefully, your two memory-wiped selves should be sufficiently similar that ...

Wouldn't you forget the password once your memories are wiped?

0DataPacRat8y
That depends on how many of your memories are wiped, or if it's episodic memory versus declarative memory, or if it's recent memories versus long-term memories. Remember, enough of your memories have to still exist for you to understand the message, for the problem to make any sense at all.

In an alternate universe, Peter and Sarah could have had the following conversation instead:

P: I don't know the numbers.

S: I knew you didn't know the numbers.

P: I knew that you knew that I didn't know the numbers.

S: I still don't know the numbers.

P: Now I know the numbers.

S: Now I also know the numbers.

But I'm worried that my version of the puzzle can no longer be solved without brute force.
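Brute force is at least cheap here. A sketch (assuming the classic setup, which the comment leaves implicit: two numbers x ≤ y, each in 2..99, with Peter told the product and Sarah the sum; some versions bound the sum instead, so treat the range as an assumption):

```python
from collections import defaultdict
from itertools import combinations_with_replacement

ALL = list(combinations_with_replacement(range(2, 100), 2))
prod = lambda p: p[0] * p[1]
tot = lambda p: p[0] + p[1]

def groups(pairs, key):
    g = defaultdict(list)
    for p in pairs:
        g[key(p)].append(p)
    return g

def statement(pairs, key, knows):
    # Pairs consistent with "I know / don't know the numbers",
    # judged among the current common-knowledge candidates.
    g = groups(pairs, key)
    return [p for p in pairs if (len(g[key(p)]) == 1) == knows]

def knew(key, condition):
    # Pairs whose entire key-group (over ALL pairs) satisfies `condition`,
    # so the speaker could have asserted it before the other spoke.
    cond = set(condition)
    g = groups(ALL, key)
    return [p for p in ALL if all(q in cond for q in g[key(p)])]

s1 = statement(ALL, prod, knows=False)   # P: I don't know.
c2 = set(knew(tot, s1))                  # sums whose owner knew that
s2 = [p for p in s1 if p in c2]          # S: I knew you didn't know.
c3 = set(knew(prod, c2))                 # products whose owner knew *that*
s3 = [p for p in s2 if p in c3]          # P: I knew that you knew.
s4 = statement(s3, tot, knows=False)     # S: I still don't know.
s5 = statement(s4, prod, knows=True)     # P: Now I know.
s6 = statement(s5, tot, knows=True)      # S: Now I also know.
# print(s6)  # candidate answer(s) to the alternate puzzle
```

The filtering scaffolding, not the particular bounds, is the point: each line of dialogue is one pass over the surviving candidates.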

I believe I have it. rot13:

Sbyq naq hasbyq gur cncre ubevmbagnyyl, gura qb gur fnzr iregvpnyyl, gb znex gur zvqcbvag bs rnpu fvqr. Arkg, sbyq naq hasbyq gb znex sbhe yvarf: vs gur pbearef bs n cncre ner N, O, P, Q va beqre nebhaq gur crevzrgre, gura gur yvarf tb sebz N gb gur zvqcbvag bs O naq P, sebz O gb gur zvqcbvag bs P naq Q, sebz P gb gur zvqcbvag bs N naq Q, naq sebz Q gb gur zvqcbvag bs N naq O.

Gurfr cnegvgvba gur erpgnatyr vagb avar cvrprf: sbhe gevnatyrf, sbhe gencrmbvqf, naq bar cnenyyrybtenz. Yrg gur cnenyyrybtenz or bar cneg, naq tebhc rnpu ge...

Desensitization training is great if it (a) works and (b) is less bad than the problem it's meant to solve.

(I'm now imagining Alice and Carol's conversation: "So, alright, I'll turn my music down this time, but there's this great program I can point you to that teaches you to be okay with loud noise. It really works, I swear! Um, I think if you did that, we'd both be happier.")

Treating thin-skinned people (in all senses of the word) as though they were already thick-skinned is not the same, I think. It fails criterion (a) horribly, and does not satisfy (b) by definition: it is the problem desensitization training ought to solve.

I wanted to upvote you for amusing me, but I changed my vote to one I think you would prefer.

What if we assume a finite universe instead? Contrary to what the post we're discussing might suggest, this actually makes recurrence more reasonable. To show that every state of a finite universe recurs infinitely often, we only need to know one thing: that every state of the universe can be eventually reached from every other state.

Is this plausible? I'm not sure. The first objection that comes to mind is entropy: if entropy always increases, then we can never get back to where we started. But I seem to recall a claim that entropy is a statistical law: i...

2ChaosMote8y
You are absolutely correct. If the number of states of the universe is finite, then as long as any state is reachable from any other state, every state will be reached arbitrarily often if you wait long enough.
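This is the standard recurrence property of finite Markov chains; a toy simulation (Python, three fully connected states of my own choosing) shows it in miniature:

```python
import random

# Toy finite "universe": three states, each reachable from every other
# in one step (uniform transitions), so every state recurs.
random.seed(0)  # reproducible run
STATES = [0, 1, 2]
visits = {s: 0 for s in STATES}

state = 0
for _ in range(30_000):
    state = random.choice(STATES)
    visits[state] += 1

# Every state keeps being revisited; run longer and every count grows
# without bound, which is the recurrence claim in miniature.
```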

To Bob, I would point out that:

1. Contrary to C, it is easy to prove that you have an ear or mental condition that makes you sensitive to noise; a note from a doctor or something suffices.

2. Contrary to D, in case such a condition exists, "toughening up and growing a thicker skin" is not actually a possible response. In some cases, it appears that loud noises make the condition worse. Even when this is not the case, random exposure to noises at the whim of the environment doesn't help.

I realize that you are appealing to a metaphor, but I think that these points often apply to the unmetaphored things as well.

Regarding my style: many philosophies have both a function and a form. In writing, some philosophies have a message to convey and a style that it is often conveyed in. There is a style to objectivist essays, Maoist essays, Buddhist essays, and often there is a style to less wrong essays. I wrote my egoist essay in the egoist style, in honor of those egoists who led to me, including Max Stirner, Dora Marsden, Apio Ludd and especially Malfew Seklew. Egoism - it's not for everybody.

The things that make your writing style unapproachable are not features of &...

That's true, but I think I agree with TheOtherDave that the things that should make you start reconsidering your strategy are not bad outcomes but surprising outcomes.

In many cases, of course, bad outcomes should be surprising. But not always: sometimes you choose options you expect to lose, because the payoff is sufficiently high. Plus, of course, you should reconsider your strategy when it succeeds for reasons you did not expect: if I make a bad move in chess, and my opponent does not notice, I still need to work on not making such a move again.

I also w...

0TheOtherDave8y
Right. And your point about reconsidering strategy on surprising good outcomes is an important one. (My go-to example of this is usually the stranger who keeps losing bets on games of skill, but is surprisingly willing to keep betting larger and larger sums on the game anyway.)

Part of it might just be the order. Compare that paragraph to the following alternative:

The rationality of Rationality: AI to Zombies isn't about using cold logic to choose what to care about. Reasoning well has little to do with what you're reasoning towards. If your goal is to annihilate as many puppies as possible, then this kind of rationality will help you annihilate more puppies. But if your goal is to enjoy life to the fullest and love without restraint, then better reasoning (while hot or cold, while rushed or relaxed) will also help you do so.

I'm not sure that regretting correct choices is a terrible downside, depending on how you think of regret and its effects.

If regret is just "feeling bad", then you should just not feel bad for no reason. So don't regret anything. Yeah.

If regret is "feeling bad as negative reinforcement", then regretting things that are mistakes in hindsight (as opposed to correct choices that turned out bad) teaches you not to make such mistakes. Regretting all choices that led to bad outcomes hopefully will also teach this, if you correctly identify mi...

1Quill_McGee8y
I was thinking of the "feeling bad and reconsider" meaning. That is, you don't want regret to occur, so if you are systematically regretting your actions it might be time to try something new. Now, perhaps you were acting optimally already and when you changed you got even /more/ regret, but in that case you just switch back.
5TheOtherDave8y
If we don't actually have a common understanding of what "regret" refers to, it's probably best to stop using the term altogether. If I'm always less likely to implement a given decision procedure D because implementing D in the past had a bad outcome, and always more likely to implement D because doing so had a good outcome (which is what I understand Quill_McGee to be endorsing, above), I run the risk of being less likely to implement a correct procedure as the result of a chance event. There are more optimal approaches. I endorse re-evaluating strategies in light of surprising outcomes. (It's not necessarily a bad thing to do in the absence of surprising outcomes, but there's usually something better to do with our time.) A bad outcome isn't necessarily surprising -- if I call "heads" and the coin lands tails, that's bad, but unsurprising. If it happens twice, that's bad and a little surprising. If it happens ten times, that's bad and very surprising.
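The coin example quantifies nicely: under the null hypothesis that outcomes are fair, a streak's probability halves with each repeat, so "surprising" arrives fast. A trivial sketch:

```python
def streak_probability(n, p_loss=0.5):
    # Probability of n bad outcomes in a row, if each one independently
    # has probability p_loss (the "fair coin" null hypothesis).
    return p_loss ** n
```

One tail is unsurprising (0.5); two is a little surprising (0.25); ten in a row has probability about 1/1024, which is when re-evaluating the strategy earns its keep.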

My proposal: the ideas that goodness or evil are substances and can be formed into magic objects, such as a sword made of pure evil.

Of course, some novels also subvert this delightfully. Patricia Wrede's The Seven Towers, for instance, is all about exactly what goes wrong when you try to make a magical object out of pure good.

(Edit: that is, Wrede does not literally spend the whole book talking about this problem. It is merely mentioned as backstory. But still.)

What changes is that I would like to have a million dollars as much as Joe would. Similarly, if I had to trade between Joe's desire to live and my own, the latter would win.

In another comment you claim that I do not believe my own argument. This is false. I know this because if we suppose that Joe would like to be killed, and Joe's friends would not be sad if he died, then I am okay with Joe's death. So there is no other hidden factor that moves me.

I'm not sure what the observation that I do not give all of my money away to charity has to do with anything.

-4seer8y
Um, what are you using to compare preferences across people? How about Joe's desire to live against your desire to not have him annoy you, or to have sex with his wife, or any number of other possible motives?

I don't think that's true in any important way.

I might say: "Killing Joe is bad because Joe would like not to be killed, and enjoys continuing to live. Also, Joe's friends would be sad if Joe died." This is not a sophisticated argument. If an atheist would have a hard time making it, it's only because one feels awkward making such an unsophisticated argument in a debate about morality.

0DanArmak8y
This doesn't answer the question. Why is doing things Joe doesn't like, or making his friends sad, bad? Consequentialism isn't a moral system by itself; you need axioms or goals.
-5seer8y