All of Dacyn's Comments + Replies

Doing "good"

And so, knowing that the true cumulative impact of a donation is never certain, why did I still choose to donate?

Why did you phrase it like this -- what does being certain about the cumulative impact have to do with the choice of whether to donate or not? Why isn't the expected impact more important?

This is a serious question, I have the same psychological issue where certainty of a positive outcome is sometimes more important than positive expected value. But I don't yet have a rule which allows me to predict when this psychological effect will come into play.

pchvykov (1 point, 2d): oooh, don't get me started on expectation values... I have heated opinions here, sorry. The two most obvious problems with expectations in this case are that to average something, you need to average over some known space, according to some chosen measure - neither of which will be by any means obvious in a real-world scenario. More subtly, with real-world distributions, expectation values can often be infinite or undefined, and the median might be more representative - but then should you look at the mean, the median, or something else?
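The point about infinite or undefined expectations can be made concrete with the standard Cauchy distribution, whose mean does not exist but whose median is 0. A minimal Python sketch (the sampling scheme and sample sizes are just illustrative choices):

```python
# The standard Cauchy distribution has no mean: sample means never settle
# down as n grows, while sample medians converge to the true median (0).
import math
import random
import statistics

random.seed(0)

def cauchy_sample(n):
    # Standard Cauchy via the inverse CDF: tan(pi * (U - 1/2))
    return [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

medians, means = [], []
for n in (1_000, 10_000, 100_000):
    xs = cauchy_sample(n)
    medians.append(statistics.median(xs))
    means.append(statistics.fmean(xs))

# The medians cluster near the true median (0); the means wander erratically.
print("medians:", medians)
print("means:", means)
```

The same instability shows up in any sufficiently heavy-tailed real-world distribution, which is the "mean vs. median" worry raised above.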
On Virtue

I want to distinguish this mental action from the behaviours that result from it. I’m trying not to make claims directly about who and what should be outwardly praised.

I totally agree that this is an important distinction (it is the distinction the post I linked to is about), and when I talked about Virtue Points in my comment I was meaning the mental action. But mental actions still have effects, which is why the first sentence of my comment isn't self-contradictory.

On the connection to involuntary suffering, fair enough, but then you shouldn't call it "Virtue Points". That already means something else.

Midnight_Analyst (1 point, 4d): Yeah so on the question of the effects of mentally assigning Virtue Points, I think the extent to which the ideas of my post should change behaviour, and whether that change would be good, is unclear. I wrote the post under the assumption that it'd be better for us to have this fairer understanding of how the amount of suffering involved in a task can be drastically different for different people depending on their existing abilities. I feel like it's important for society to realise this, and I feel like we've only partially realised it at the moment. But possibly this isn't the case, and I need to think about it more. I'm open to the idea that the way people currently assign Virtue Points actually shouldn't be meddled with (which is why my post is more of a 'starting point for discussion' than a 'thing I am completely sure about'). I think you're right to see the effects (rather than the mental action itself) as the thing that is actually important at the end of the day.

On involuntary suffering, having thought about this a bit more, I suppose the phrase 'something akin to Virtue Points' does imply that I think 'Virtue Points' would be an okay-ish name for the kind of thing I'm pointing to in the case of involuntary suffering, which is not the case. I do agree that Virtue Points is not a good name for that. I was trying to point out in the post that, as a very general statement, I feel like sufferers deserve compensation whether or not the suffering was voluntary.
On Virtue

From the perspective of the collective, the point of awarding Virtue Points is so that people know what traits to signal to remain in good graces with the community. From the perspective of the individual, a lot of the time that will feel like doing the Right Thing and not getting rewarded, due to phenomena discussed here.

Since involuntary suffering doesn't show up in this paradigm, I think it is irrelevant to the notion of Virtue Points.

Midnight_Analyst (1 point, 6d): I think with my post I'm pointing to something quite specific - a collection of ideas I expect to be somewhat useful in some not-particularly-well-thought-through way, by making sure that, to the extent that people think 'person X deserves recompense', they think so in a way that is fair. Basically, I think I'm trying to make sure people don't get Utility Points and Virtue Points muddled up. I'm not going into whether people should mentally assign others Virtue Points, but I'm saying that most people will mentally assign others Virtue Points whatever anyone says, and that it'd probably be good for those people to be fairer in the way they do so. I want to distinguish this mental action from the behaviours that result from it. I'm trying not to make claims directly about who and what should be outwardly praised. On the connection to involuntary suffering, I have written the following in response to another comment:
Signaling isn't about signaling, it's about Goodhart

Hmm, I am trying to see if it is really the same puzzle? The self-deception part I can see: if you get the opposite of whatever you choose, that motivates you to self-deceive so that you'll choose the opposite of whatever you want to get. But then why is the alternative death? Ah well, maybe it'll make sense to me later.

Radical openness - say things that others strongly dislike

I think having the confidence to express what you are actually thinking gives a big advantage. Such a person is able to see more. They can think about uncomfortable and inconvenient ideas that most people would instinctively self-censor.

The exact opposite is true. If you commit to not immediately taking every thought you have and shouting it from the rooftops, this gives you the mental space to think "uncomfortable and inconvenient ideas that most people would instinctively self-censor". It's just that once you have these ideas in your thought-space, you have to think more about how to express them than just "put them in my latest song". Good secrets take a while to express properly.

tomdekan (0 points, 11d): I agree with your main point that good ideas often take time and reflection. However, I think it is hard to know if a person, such as Kanye, has already done this reflection. Perhaps he has.
Signaling isn't about signaling, it's about Goodhart

This is all true.

And yet.

Very rarely, but I would guess at least once in your life, you will be faced with a decision whose outcome is so important that all of this is stripped away, like rationalization. At which point you are faced with the decision: to signal, or not to signal. But it is not clear which choice corresponds to which outcome: does treating it as a signal correspond to signaling more, or to signaling less, as suggested by Dagon's comment?

Would you rather be trustworthy, or trusted?

The OP suggests that maybe we can have both, but what if t... (read more)

ckai (1 point, 11d): If I understand the sort of thing you're talking about correctly, I like Miles Vorkosigan's solution (from Memory, by Lois McMaster Bujold): "The one thing you can't trade for your heart's desire is your heart."
Valentine (4 points, 12d): A different frame on what I see as the same puzzle: If faced with the choice, would you rather self-deceive, or die? It sure looks like the sane choice is self-deception. You might be able to unwind that over time, whereas death is hard to recover from.

Sadly, this means you can be manipulated and confused via the right kind of threat, and it'll be harder and harder for you over time to notice these confusions. You can even get so confused you don't actually recognize what is and isn't death — which means that malicious (to you) forces can have some sway over the process of your own self-deception.

It's a bit like the logic of "Don't negotiate with terrorists": The more scenarios in which you can precommit to choosing death over self-deception, the less incentive any force will have to try to present you with such a choice, and thus the more reliably clear your thinking will be (at least on this axis). It just means you sincerely have to be willing to choose to die.
Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

OK, fair enough. So you are asking something like "is it ever ethical to keep a secret?" I would argue yes, because different people are entitled to different parts of your psyche. E.g. what I am willing to share on the internet is different from what I am willing to share in real life. Or am I missing something again?

Said Achmiz (2 points, 18d): Perhaps. Consider this scenario: Your best friend Carol owns a pastry shop. One day you learn that her store manager, Dave, is embezzling large sums of money from the business. What do you do? Silly question, obvious answer: tell Carol at once! Indeed, failing to do so would be a betrayal—later, when Carol has to close the shop and file for bankruptcy, her beloved business ruined, and she learns that you knew of Dave’s treachery and said nothing—how can you face her? The friendship is over. It’s quite clear: if you know that the guy is stealing, you will tell Carol, period.

Now suppose that Dave, the pastry shop manager, is actually also a friend of yours, and your knowledge of his crime is not accidental but comes because he confides in you, having first sworn you to secrecy. Foolishly, you agreed—a poor decision in retrospect, but such is life. And now: (a) you have information (Dave is stealing from Carol); (b) ordinarily, having such information dictates your behavior in a clear way (you must take it at once to Carol); (c) yet you have sworn to keep said information secret.

Thus the question: are you obligated to behave as if you know this information (i.e., to inform Carol of Dave’s treachery)? Or, is it morally permissible for you to behave as if you know nothing (and thus to do nothing—and not only that, but to lie if Carol asks “do you know if Dave is stealing from the shop?”, etc.)?
Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

Is it ethically mandatory always to behave as if you know all information which you do, in fact, know?

Maybe I am missing the point, but since you do know all information which you do in fact, know, wouldn't behaving as if you do just mean behaving... the way in which you behave? In which case, isn't the puzzle meaningless?

On the other hand, if we understand the meaning of the puzzle to be illuminated by elriggs' first reply to it, we could rephrase it (or rather its negation) as follows:

Is it ever ethically acceptable to play a role, other than one of

... (read more)
Said Achmiz (2 points, 18d): This is true in the same technically-correct-but-useless sense that it’s true to say something like “choosing what to do is impossible, since you will in fact do whatever you end up doing”. Unless we believe in substance dualism, or magic, or what have you, we have to conclude that our actions are determined, right? So when you do something, it’s impossible for you to have done something different! Well, ok, but having declared that, we do still have to figure out what to have for dinner, and which outfit to wear to the party, and whether to accept that job offer or not. Neither do I think that talk of “playing roles” is very illuminating here. For a better treatment of the topic, see this recent comment by Viliam.
Taboo Truth

What if you feel the need to proclaim your taboo truth, but you aren't quite sure what it is yet?

Then you will find yourself drawn to those situations in which you can proclaim basically any truth without fear of undue repercussion, and once you are there you will find yourself pontificating.

Is this a good position to be in? Knowing what the taboo truth is in more detail might lead to you being forced to proclaim it earlier, or else give up on it, which is presumably why its detail has been occluded from you.

(incidentally, I at first read the title as "apply Rationalist Taboo to the word 'truth' " which could also be an interesting exercise)

The Martial Art of Rationality

-"Deliberately we decide that we want to seek only the truth; but our brains have hardwired support for rationalizing falsehoods."

Deciding that you want to seek only the truth will not give you the truth. This is because, as you say, our brains have hardwired support for rationalizing falsehoods. What I have found to be a better strategy is self-cooperation. Your mind makes its existence known on several different levels like the vocal, the subvocal but still audible, the purely silent but still internally audible, and eventually laughter and similar respo... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

-"Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they’re confused about, than for simply ignoring the problems."

I feel like "talking confusedly" here means "talking in a way that no one else can understand". If no one else can understand, they cannot give feedback on your ideas. That said, it is not clear that penalizing confused talk is a solution to this problem.

jessicata (3 points, 2mo): At least some people were able to understand, though. This led to a sort of social division where some people were much more willing/able to talk about certain social phenomena than other people were.
Meta-discussion from "Circling as Cousin to Rationality"

This is a great answer. I will have to incorporate concepts like "interlocutor" and "author" into my worldview.

If I may ask a somewhat metaphorical question, what determines who the interlocutor and author are in a context which is not so clear-cut as an online interaction? Like, if I ask a question in a talk, does that mean the presenter is the author and I am the interlocutor? Is DSL the author and JB the interlocutor? Or maybe the other way around? I may even go so far as to claim that in a context like this one, I am the author and my conversation partner was the interlocutor!

Meta-discussion from "Circling as Cousin to Rationality"

So, I like this comment (and strong-upvoted it) because you are placing your concept of "obligation" out in the open for scrutiny. I have a question though. Here someone responded to a request for information after I said I would be "surprised" by the information they now claim to have provided. Would you say that I have an obligation to react to their response, i.e. either admit that I lost an argument, or take the effort to see whether I agree with their interpretation of the information? Right now I am not motivated to do the latter.

If this doesn't fall... (read more)

Said Achmiz (7 points, 2mo): Well, first of all, my comment described an interaction between the author of a post or comment (i.e., someone who was putting forth some idea) and an interlocutor who was requesting a clarification (or noting an inconsistency, or asking for a term to be defined, etc.). As far as I can tell, based on a skim of the discussion thread you linked, in that case you were the one who was asking someone else a question about something they had posted, so you would be the interlocutor, and they the author.

They posted something, you asked a question, they gave an answer… Are you obligated to then respond to their response? Well… yes? I mean, what was the point of asking the question in the first place? You asked for some information, and received it. Presumably you had some reason for asking, right? You were going to do something with either the received information, or the fact that none could be provided? Well, go ahead and do it. Integrate it into your reasoning, and into the discussion. Otherwise, why ask?

I can’t easily find it at the moment, but Eliezer once wrote something to the effect that an argument isn’t really trustworthy unless it’s critiqued, and the author responds to the critics, and the critics respond to the response, and the author responds to the critics’ response to his response. But why? What motivates this requirement? As I wrote in the grandparent comment: nothing but normative epistemic principles, i.e. the fact that if we don’t conform to these requirements, we are far more likely to end up mistaken, believing nonsense, etc.

Similarly with your obligation to respond. Why are you thus obligated? Well, if you ask for information from your interlocutor, they provide it, and then you just ignore it… how exactly do you expect ever to become less wrong?
The Real Rules Have No Exceptions
the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.
The approach I describe above merely consists of making this fact explicit.

This would be true were it not for your meta-rule. But the criteria for deciding whether something is a legitimate exception may be hazy and intuitive, and not amenable to being stated in a simple form. This doesn't mean that the criteria are bad, though.

For example, I wouldn't dream of formulating a rule about cookies that covered the case "you c... (read more)

Said Achmiz (2 points, 2y): Why? This seems like a case that is entirely amenable to formalization (and without any great difficulty, either). “Judgment calls” are not irreducible. One of the great insights that comes from the informal canon of best practices for GMing TTRPGs is that “rules” and “judgment calls” need not be contrasted with each other; on the contrary, the former can, and often does, assist and improve the latter. In other words, it’s not that following explicitly stated rules is better than making judgment calls, but rather that following explicitly stated rules is how you do better at making judgment calls.
Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces

The OP didn't give any argument for SPECKS>TORTURE, they said it was "not the point of the post". I agree my argument is phrased loosely, and that it's reasonable to say that a speck isn't a form of torture. So replace "torture" with "pain or annoyance of some kind". It's not the case that people will prefer arbitrary non-torture pain (e.g. getting in a car crash every day for 50 years) to a small amount of torture (e.g. 10 seconds), so the argument still holds.

Louis_Brown (1 point, 2y): I don't think this is true. As an example, when I wake up in the morning I make the decision between granola and cereal for breakfast. Universe Destruction is undoubtedly high up on the severity scale (certainly higher than crunch satisfaction utility), so your argument suggests that I should spend time researching which choice is more likely to impact that. However, the difference in expected impact between these options is so resistant to detection that, despite the fact that I literally act on this choice every single day of my life, it would never be worth the time to research breakfast foods instead of other choices which have stronger (i.e. measurable by the human mind) impacts on Universe Destruction. This is not a bug, but an incredible feature of the non-Archimedean framework. It allows you to evaluate choices only on the highest severity level at which they actually occur, which is in fact how humans seem to make their decisions already, to some approximation.

As for the car example, your analysis seems sound (assuming there's no positive expected utility at or above the severity level of car crash injuries to counterbalance it, which is not necessarily the case--e.g. driving somewhere increases the chance that you meet more people and consequently find the love(s) of your life, which may well be worth a broken limb or two. Alternatively, if you are driving to a workshop on AI risk then you may believe yourself to be reducing the expected disutility from unaligned AI, which appears to be incomparable with a car crash). But, forgiving my digression and argument of the hypothetical: the claim that not driving is (often) preferable to driving feels much more reasonable to me than the claim that some number of dust specks is worse than torture. I'm not sure I understand this properly. To clarify, I don't believe that any
Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces

Once you introduce any meaningful uncertainty into a non-Archimedean utility framework, it collapses into an Archimedean one. This is because even a very small difference in the probabilities of some highly positive or negative outcome outweighs a certainty of a lesser outcome that is not Archimedean-comparable. And if the probabilities are exactly aligned, it is more worth your time to do more research so that they will be less aligned, than to act on the basis of a hierarchically less important outcome.

For example, if we cared infinitely more about not d... (read more)
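To illustrate the collapse argument, here is a toy sketch (my own construction, not from the original post) modeling non-Archimedean utilities as lexicographically ordered tuples, with the most severe level in the first coordinate:

```python
# Toy model: utilities are tuples compared lexicographically, with the most
# severe level (e.g. survival) first. Expected utility is taken
# coordinate-wise. Any probability difference at the top level then
# dominates every lower level, however large the lower-level stakes.

def lex_expected(lottery):
    """Coordinate-wise expected value of a lottery over tuple utilities.

    `lottery` is a list of (utility_tuple, probability) pairs.
    """
    levels = len(lottery[0][0])
    return tuple(sum(p * u[i] for u, p in lottery) for i in range(levels))

DEATH = (-1.0, 0.0)   # catastrophic top-level outcome
SNACK = (0.0, 1.0)    # small lower-level gain

# Option A: a certain small comfort gain.
option_a = lex_expected([(SNACK, 1.0)])
# Option B: an enormous comfort gain, plus a one-in-a-billion death risk.
option_b = lex_expected([((0.0, 1e6), 1.0 - 1e-9), (DEATH, 1e-9)])

# Python tuples compare lexicographically, matching the intended order:
assert option_a > option_b  # the 1e-9 top-level risk trumps the 1e6 gain
```

So under uncertainty the lower levels only ever matter when the top-level probabilities are exactly tied, which is the collapse described above.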

Slider (1 point, 2y): Well, small positive probabilities need not be finite if we have a non-Archimedean utility framework. An infinitesimal times an infinite number might yield a finite number that would be on equal footing with familiar expected values that would trade sensibly. And it might help that the infinitesimals might compare mostly against each other. You compare the danger of driving against the dangers of being in a kitchen. If you find that driving is twice as dangerous, it means you need to spend half as much time driving to accomplish something as you would doing it in a kitchen, rather than categorically always doing things in a kitchen. I guess the relevance of waste might be important. If you could choose 0 chance of death you would take that. But given that you are unable to choose that, you choose among the death optimums. Sometimes further research is not possible.
Andrew Jacob Sauer (1 point, 2y): Regarding your comments on SPECKS preferable to TORTURE, I think that misses the argument they made. The reason you have to prefer 10N at X to N at X' at some point is that a speck counts as a level of torture. That's exactly what OP was arguing against.
Explaining "The Crackpot Bet"

To repeat what was said in the CFAR mailing list here: This "bet" isn't really a bet, since there is no upside for the other party; they are worse off than when they started in every possible scenario.

glennonymous (0 points, 3y): That is what was said. I’m also pretty sure it’s wrong FWIW... but I can’t explain why without spoiling the joke. I know this will get me downvoted. Shrug.
What are your plans for the evening of the apocalypse?

I don't think that chapter is trying to be realistic (it paints a pretty optimistic picture),

Counterfactuals, thick and thin

Sure, in that case there is a 0% counterfactual chance of heads, your words aren't going to flip the coin.

Nisan (4 points, 3y): Ok. I think that's the way I should have written it, then.
Counterfactuals, thick and thin

The question "how would the coin have landed if I had guessed tails?" seems to me like a reasonably well-defined physical question about how accurately you can flip a coin without having the result be affected by random noise such as someone saying "heads" or "tails" (as well as quantum fluctuations). It's not clear to me what the answer to this question is, though I would guess that the coin's counterfactual probability of landing heads is somewhere strictly between 0% and 50%.

Nisan (3 points, 3y): Oh, interesting. Would your interpretation be different if the guess occurred well after the coinflip (but before we get to see the coinflip)?
shminux (3 points, 3y): I agree that it is a well-defined question, though not easily answered without knowing how guessing physically affects flipping the coin, reading the results (humans are notoriously prone to making mistakes like that) and so on. But I suspect that Nisan is asking something else, though I am not quite sure what. The post says: […] I am not sure how physical uncertainty is different from logical uncertainty; maybe there are some standard examples there that could help the uninitiated like myself.
The Feedback Problem
Reviewer is obliged to find all errors.

Not true. A reviewer's main job is to give a high-level assessment of the quality of a paper. If the assessment is negative then usually they do not look for all the specific errors in the paper. A detailed list of errors is more common when the reviewer recommends that the journal accept the paper (since then the author(s) can edit the paper and then publish it in the journal), but even then many reviewers do not do this (which is why it is common to find peer-reviewed papers with errors in them).

At least, this is the case in math.

avturchin (2 points, 3y): Yes, but even in the case of a negative review they often demonstrate the cause by pointing out several errors, or by listing some high-level reason why they are negative, and this can be used as some form of feedback.
Decisions are not about changing the world, they are about learning what world you live in

You don't harbor any hopes that after reading your post, someone will decide to cooperate in the twin PD on the basis of it? Or at least, if they were already going to, that they would conceptually connect their decision to cooperate with the things you say in the post?

Decisions are not about changing the world, they are about learning what world you live in

I am not sure how else to interpret the part of shminux's post quoted by dxu. How do you interpret it?

Decisions are not about changing the world, they are about learning what world you live in

My point was that intelligence corresponds to status in our world: calling the twins not smart means that you expect your readers to think less of them. If you don't expect that, then I don't understand why you wrote that remark.

I don't believe in libertarian free will either, but I don't see the point of interpreting words like "recommending" "deciding" or "acting" to refer to impossible behavior rather than using their ordinary meanings. However, maybe that's just a meaningless linguistic difference between us.

shminux (2 points, 3y): I can see why you would interpret it this way. That was not my intention. I don't respect Forrest Gumps any less than Einsteins.
Decisions are not about changing the world, they are about learning what world you live in

A mind-reader looks to see whether this is an agent's decision procedure, and then tortures them if it is. The point of unfair decision problems is that they are unfair.

Decisions are not about changing the world, they are about learning what world you live in

dxu did not claim that A could receive the money with 50% probability by choosing randomly. They claimed that a simple agent B that chose randomly would receive the money with 50% probability. The point is that Omega is only trying to predict A, not B, so it doesn't matter how well Omega can predict B's actions.

The point can be made even more clear by introducing an agent C that just does the opposite of whatever A would do. Then C gets the money 100% of the time (unless A gets tortured, in which case C also gets tortured).
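A small sketch of the point (my own toy formalization, with an arbitrary choice of "left"/"right" as the two actions): Omega predicts only A, and any agent is paid iff its action differs from that prediction.

```python
# Omega perfectly predicts agent A, and pays any agent whose action differs
# from that prediction of A. How well Omega could predict B or C is moot.
import random

random.seed(42)

def agent_a():
    return "left"  # A is deterministic, so a perfect predictor of A knows this

def agent_b():
    return random.choice(["left", "right"])  # B ignores A entirely

def agent_c():
    # C does the opposite of whatever A would do
    return "right" if agent_a() == "left" else "left"

prediction_of_a = agent_a()  # Omega's (perfect) prediction of A's action

def paid(choice):
    return choice != prediction_of_a

assert not paid(agent_a())  # A can never beat a perfect predictor of itself
assert paid(agent_c())      # C always differs from A, so C is always paid

# B wins about half the time, regardless of Omega's predictive power over B.
b_rate = sum(paid(agent_b()) for _ in range(10_000)) / 10_000
print(b_rate)
```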

Said Achmiz (2 points, 3y): This doesn’t make a whole lot of sense. Why, and on what basis, are agents B and C receiving any money? Are you suggesting some sort of scenario where Omega gives A money iff A does the opposite of what Omega predicted A would do, and then also gives any other agent (such as B or C) money iff said other agent does the opposite of what Omega predicted A would do?

This is a strange scenario (it seems to be very different from the sort of scenario one usually encounters in such problems), but sure, let’s consider it. My question is: how is it different from “Omega doesn’t give A any money, ever (due to a deep-seated personal dislike of A). Other agents may, or may not, get money, depending on various factors (the details of which are moot)”? This doesn’t seem to have much to do with decision theories.

Maybe shminux ought to rephrase his challenge. After all— … can be satisfied with “Omega punches A in the face, thus causing A to end up with lower utility than B, who remains un-punched”. What this tells us about decision theories, I can’t rightly see.
Decisions are not about changing the world, they are about learning what world you live in
I note here that simply enumerating possible worlds evades this problem as far as I can tell.

The analogous unfair decision problem would be "punish the agent if they simply enumerate possible worlds and then choose the action that maximizes their expected payout". Not calling something a decision theory doesn't mean it isn't one.

shminux (2 points, 3y): Please propose a mechanism by which you can make an agent who enumerates the worlds seen as possible by every agent, no matter what their decision theory is, end up in a world with lower utility than some other agent.
Decisions are not about changing the world, they are about learning what world you live in
Again, this is just a calculation of expected utilities, though an agent believing in metaphysical free will may take it as a recommendation to act a certain way.

Are you not recommending agents to act in a certain way? You are answering questions from EYNS of the form "Should X do Y?", and answers to such questions are generally taken to be recommendations for X to act in a certain way. You also say things like "The twins would probably be smart enough to cooperate, at least after reading this post" which sure sounds like a recommendation of cooperation (if they do not cooperate, you are lowering their status by calling them not smart)

shminux (2 points, 3y): I have mentioned in the title and in the first part that I do not subscribe to the idea of metaphysical free will. Sure, subjectively it feels like "recommending" or "deciding" or "acting," but there is no physical basis for treating it as actually picking one of the possible worlds. What feels like making a decision and seeing the consequences is nothing but discovering which possible world is actual, internally and externally. "Smart" is a statement about the actual world containing the twins, and if intelligence corresponds to status in that world, then making low-utility decisions would correspond to low status. In general, I reject the intentional stance in this model. Paradoxically, it results in better decision making for those who use it to make decisions.
Conceptual problems with utility functions

Games can have multiple Nash equilibria, but agents still need to do something. The way they are able to do something is that they care about something other than what is strictly written into their utility function so far. So the existence of a meta-level on top of any possible level is a solution to the problem of indeterminacy of what action to take.

(Sorry about my cryptic remark earlier, I was in an odd mood)
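The multiple-equilibria point can be made concrete with a 2x2 coordination game (payoff numbers are just illustrative): enumerating best responses finds two pure Nash equilibria, so the payoff structure alone does not determine what an agent should do.

```python
# A 2x2 coordination game: both (A, A) and (B, B) are Nash equilibria,
# so "play a Nash equilibrium" underdetermines the action.
payoffs = {  # (row action, col action) -> (row payoff, col payoff)
    ("A", "A"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),
}
actions = ("A", "B")

def is_nash(r, c):
    # Neither player can gain by unilaterally deviating.
    ru, cu = payoffs[(r, c)]
    row_best = all(payoffs[(r2, c)][0] <= ru for r2 in actions)
    col_best = all(payoffs[(r, c2)][1] <= cu for c2 in actions)
    return row_best and col_best

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # [('A', 'A'), ('B', 'B')]
```

Breaking the tie requires appealing to something outside the payoff matrix, which is the "meta-level" move described above.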

Physics has laws, the Universe might not

There I was using "to be" in the sense of equality, which is different from the sense of existence. So I don't think I was tabooing inconsistently.

shminux (2 points, 3y): Consciously tabooing a term like "exist" is what I have been doing as well. Makes a lot of things less confusing.
Computational efficiency reasons not to model VNM-rational preference relations with utility functions

Maybe there is no absolutely stable unit, but it seems that there are units that are more or less stable than others. I would expect a reference unit to be more stable than the unit "the difference in utility between two options in a choice that I just encountered".

Computational efficiency reasons not to model VNM-rational preference relations with utility functions

This seems like a strawman. There's a naive EU calculation that you can do just based on price, tastiness of the sandwich, etc. that gives you what you want. And this naive EU calculation can be understood as an approximation of a global EU calculation. Of course, we should always use computationally tractable approximations whenever we don't have enough computing power to compute an exact value. This doesn't seem to have anything to do with utility functions in particular.

Regarding the normalization of utility differences by picking two arbitrary... (read more)
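As a sketch of what "naive EU as an approximation" means here (all attributes and numbers invented for illustration): only the attributes on which the options differ need scoring, since the rest of the world contributes the same term to every option and cancels out of the comparison.

```python
# Naive local expected-utility comparison: score only the attributes that
# differ between options; everything else in the world adds the same
# constant to both sides of the comparison and cancels.

def local_eu(option):
    return option["tastiness"] - option["price"]

sandwich = {"tastiness": 7.0, "price": 5.0}
salad = {"tastiness": 5.0, "price": 4.0}

best = max((sandwich, salad), key=local_eu)
assert best is sandwich  # 7 - 5 = 2 beats 5 - 4 = 1
```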

AlexMennen (4 points, 3y): I disagree with this. The value of a QALY could depend on other features of the universe (such as your lifespan) in ways that are difficult to explicitly characterize, and thus are subject to revision upon further thought. That is, you might not be able to say exactly how valuable the difference between living 50 years and living 51 years is, denominated in units of the difference between living 1000 years and living 1001 years. Your estimate of this ratio might be subject to revision once you think about it for longer. So the value of a QALY isn't stable under re-evaluation, even when expressed in units of QALYs under different circumstances. In general, I'm skeptical that the concept of good reference points whose values are stable in the way you want is a coherent one.
Sleeping Beauty Resolved?

Right, so it seems like our disagreement is about whether it is relevant whether the value of a proposition is constant throughout the entire problem setup, or only throughout a single instance of someone reasoning about that setup.

Conceptual problems with utility functions, second attempt at explaining

I agree with the matching of the concepts, but I don't think it means that there is a clear difference between instrumental and terminal values.

Conceptual problems with utility functions, second attempt at explaining

Fair enough, maybe I don't have enough familiarity with non-MIRI frameworks to make an evaluation of that yet.

A Step-by-step Guide to Finding a (Good!) Therapist

Incidentally here is another rationalist guide on how to get therapy, which I have been told is good.

3squidious3yOh yeah, thanks for linking that! Looking over it now, I got some of my ideas from this post when I read it quite a few years ago, and forgot to link it in my main post.
Sleeping Beauty Resolved?

Hmm. I don't think I see the logical rudeness. I interpreted TAG's comment as "the problem with non-timeless propositions is that they don't evaluate to the same thing in all possible contexts" and I brought up Everett branches in response to that; I interpreted your comment as saying "actually the problem with non-timeless propositions is that they aren't necessarily constant over the course of a computation" and so I replied to that, not bringing up Everett branches because they aren't relevant to your comment. Anyway, I'm not sure exactly what kind of explanation you are looking for; it feels like I have explained my position already, but I realize there can be inferential distances.

1TAG3yIt's more "the problem with non-timeless propositions is that they don't evaluate to the same thing in all possible contexts AND a change of context can occur in the relevant situation". No one knows whether Everett branches exist, or what they are. If they are macroscopic things that remain constant over the course of the SB story, they are not a problem... but time still is, because it doesn't remain constant. If branching occurs on coin flips, or smaller scales, then they present the same problem as time indexicals.
Conceptual problems with utility functions

Fair enough. Though in this case the valuing fairness is a big enough change that it makes a difference to how the agents act, so it's not clear that it can be glossed over so easily.

Conceptual problems with utility functions

It is not the problem, but the solution.

1TAG3yThe solution to what?
Conceptual problems with utility functions

Sure, their ability to model each other matters. Their inability to model each other also matters, and this is where non-utility values come in.

Conceptual problems with utility functions

I don't understand what it means to say that an agent who "values fairness" does better than another agent. If two agents have different value systems, how can you say that one does better than another? Regarding EY and the Prisoner's Dilemma, I agree that EY is making that claim but I think he is also making the claim "and this is evidence that rational agents should use FDT".

1Hazard4yTo your first point: If two agents had identical utility functions, except for one or two small tweaks, it feels reasonable to ask "Which of these agents got more utility/actualized its values more?" This might be hard to actually formalize. I'm mostly running on the intuition that sometimes humans that are pretty similar might look at one another and say, "It seems like this other person is getting more of what they want than I am."
Conceptual problems with utility functions

1) The notion of a "perfectly selfish rational agent" presupposes the concept of a utility function. So does the idea that agent A's strategy must depend on agent B's, which must depend on agent A's. It doesn't need to depend; you can literally just do something, and that is what people do in real life. And it seems silly to call it "irrational" when the "rational" action is a computation that doesn't converge.

2) I think humanity as a whole can be thought of as a single agent. Sure maybe you can have a ... (read more)

Repeated (and improved) Sleeping Beauty problem

I'm confused: isn't the "objective probability" of heads 1/2 because that is the probability of heads in the definition of the setup? The halfer versus thirder debate is about subjective probability, not objective probability, as far as I can tell. I'm not sure why you are mentioning objective probability at all; it does not appear to be relevant. (Though it is also possible that I do not know what you mean by "objective probability".)
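The halfer/thirder split can be seen as two different frequencies in one and the same simulation (a rough sketch with illustrative names; awakenings are simply counted rather than experienced):

```python
import random

def simulate(trials: int = 100_000, seed: int = 0) -> tuple[float, float]:
    """Run the Sleeping Beauty setup many times: heads -> 1 awakening,
    tails -> 2 awakenings. Return (fraction of experiments that came up
    heads, fraction of awakenings that occur in a heads experiment)."""
    rng = random.Random(seed)
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        heads_experiments += heads
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_experiments / trials, heads_awakenings / total_awakenings

per_experiment, per_awakening = simulate()
print(round(per_experiment, 2))  # ~0.5: per-experiment frequency of heads
print(round(per_awakening, 2))   # ~0.33: per-awakening frequency thirders track
```

Both numbers are generated by the same coin with objective probability 1/2; the disagreement is only about which counting a subjective credence should track.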

The Dilemma of Worse Than Death Scenarios
As the observer would prefer to die in a worse than death scenario, one can assume that they would be willing to do anything to escape the scenario. Thus, it follows that we should do anything to prevent worse than death scenarios from occurring in the first place.

There seems to be a leap of logic here. One can strongly prefer an outcome without being "willing to do anything" to ensure it. Furthermore, just because someone in an extreme situation has an extreme reaction to it does not mean that we need to take that extreme reaction as our own -- ... (read more)

1[anonymous]4yWell, it does feel like you're betraying yourself if you ignore the experiences of your future self, unless you don't believe in continuity of consciousness at all. So if your future self would do anything to stop a situation, I think anything should be done to prevent it. I guess this post may have come off as selfish as it focuses only on saving yourself. However, I would argue that preventing unfriendly ASI is one of the most altruistic things you could do because ASI could create an astronomical number of sentient beings, as Bostrom wrote.
Repeated (and improved) Sleeping Beauty problem

This argument seems to depend on the fact that Sleeping Beauty is not actually copied, but just dissociated from her past self, so that from her perspective it seems like she is copied. If you deal with actual copies then it is not clear what the sensible protocol is: do they all pass around a single science journal to record their experiences, each keep their own journal, each keep their own and then recombine them somehow, or what? Though if this thought experiment gives you SIA intuitions on the Sleeping Beauty problem then maybe those intuitions will still carry over to other scenarios.

1TAG4yThis statement of the problem concedes that SB is calculating subjective probability. It should be obvious that subjective probabilities can diverge from each other and from objective probability -- that is what subjective means. It seems to me that the SB paradox is only a paradox if you try to do justice to objective and subjective probability in the same calculation.
Probability is fake, frequency is real

I don't know what you mean by "should be allowed to put whatever prior I want". I mean, I guess nobody will stop you. But if your beliefs are well approximated by a particular prior, then pretending that they are approximated by a different prior is going to cause a mismatch between your beliefs and your beliefs about your beliefs.

[Nitpick: The Kelly criterion assumes not only that you will be confronted with a large number of similar bets, but also that you have some base level of risk-aversion (concave utility function) that repeated bets ... (read more)
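For reference, the Kelly fraction maximizes expected log wealth, so the "base level of risk-aversion" it assumes is exactly a logarithmic (concave) utility function. A minimal sketch of the standard binary-bet formula (the example numbers are illustrative):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake on a bet won with probability p
    at net odds b (win b per unit staked), under log utility:
    f* = (p*b - (1 - p)) / b. Clamped at 0: don't bet without an edge."""
    return max(0.0, (p * b - (1 - p)) / b)

# Even-money bet (b = 1) that you win 60% of the time:
print(round(kelly_fraction(0.6, 1.0), 3))  # 0.2 -> stake 20% of bankroll
# No positive edge -> stake nothing:
print(kelly_fraction(0.5, 1.0))  # 0.0
```

A risk-neutral (linear-utility) bettor with the same edge would stake everything on every bet, which is the contrast the nitpick is drawing.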

3Linda Linsefors4yI agree that "want" is not the correct word exactly. What I mean by prior is an agent's actual a priori beliefs, so by definition there will be no mismatch there. I am not trying to say that you choose your prior exactly. What I am gesturing at is that no prior is wrong, as long as it does not assign zero probability to the true outcome. And I think that much of the confusion in anthropic situations comes from trying to solve an under-constrained system.
Stories of Summer Solstice

Regarding silence after the last pixel of sun, "no pre-planning" is not exactly right, there were some people passing around the message that that was what we were supposed to do. It was a little ad-hoc though.

7MalcolmOcean4yThere wasn't *pre*-planning but yeah, there was explicit (though emergent) coordination. I had loved the idea of stopping right as the sun vanished, from a practice drum-circle that Brent led in the Berkeley hills earlier in June. I didn't find any way to mention this prior to getting out to the clifftop, but then once we were there and there was kind of a small circle where most of the drummers were, I indicated to them to stop when the last bit of sun was gone. Cody was nearby without a drum and overheard me saying this, and asked "should I pass that along to the other drummers?" (because not everybody was right next to me, although all of the biggest drums were) and I said "yes!" and she did! And yeah, it was really magical, I think in part because we didn't *quite* have common knowledge that we were going to stop then—even I didn't know if everyone would get the message, or would follow it, etc.
6Raemon4yAh – I wasn't in on it, so it was still at least my subjective experience.
Wirehead your Chickens

I guess it just seems to me that it's meaningless to talk about what someone would prefer if they knew about/understood X, given that they are incapable of such knowledge/understanding. You can talk about what a human in similar circumstances would think, but projecting this onto the animal seems like anthropomorphizing to me.

You do have a good point that physiological damage should probably still be considered harmful to an animal even if it doesn't cause pain, since the pre-modified animal can understand the concept of such damage and would prefer to avoid it. However, this just means that giving the animal a painkiller doesn't solve the problem completely, not that it doesn't do something valuable.

Wirehead your Chickens

It is not clear that there is any such base state: what would it mean for an animal to "be aware of the possibility" that it could be made to have a smaller brain, have part of its brain removed, or modified so that it enjoys pain? Maybe you have more of a case with amputation and the desire to be eaten, since the animal can at least understand amputation and understand what it means to be eaten (though maybe not what it would mean to not be afraid of being eaten). But "The proposals above all fail this standard" seems to be overgeneralizing.

9Jiro4yThere are two related but separate ideas. One is that if you want to find out if someone is harmed by X, you need to consider whether they would prefer X in a base state, even if X affects their preferences. Another is that if you want to find out if someone is harmed by X, you need to consider what they would prefer if they knew about and understood X, even if they don't. Modifying an animal to have a smaller brain falls in the second category; pretty much any being who can understand the concept would consider it harmful to be modified to have a smaller brain, so it should also be considered harmful for beings who don't understand the concept. It may also fall in the first category if you try to argue "their reduced brain capacity will prevent them from knowing what they're missing by having reduced brain capacity". Modifying it so that it enjoys pain falls in the second category for the modification, and the first category for considering whether the pain is harmful.
UDT can learn anthropic probabilities

Yes (once you have uploaded your brain into a computer so that it can be copied). If lots of people do this, then in the end most agents will believe that SIA is true, but most descendants of most of the original agents will believe that SSA is true.
