All of Evan Clark's Comments + Replies

Example population ethics: ordered discounted utility

My internal visualization is that all the individuals in the world are disjoint line segments of a certain length which, laid end to end, correspond to the world-segment, and that when the weighting-fairy (or whatever) passes through, sets of segments which were all previously the same length ought to still be sets of segments of the same length.

Honestly, I do apologize for spending so much of your time running you around in verbal circles because something didn't correspond to my internal model. Thank you for trying to understand/help.

Example population ethics: ordered discounted utility

I have realized that I am coming off like I don't understand algebra, which is a result of my failure to communicate. As unlikely as I am making it sound, I understand what you are saying and already knew it.

What I mean is this:

Despite a = b, it could "look like" a < b or a > b if you had access not to the world itself but only to the (expanded) sum: say, if you could ask for the difference between the total sum and the sum ignoring a, but not for the actual value of a.

I can't think of a non-pathological case where this would actually... (read more)
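A minimal sketch of this "sum-only access" scenario, assuming (one reading of the post) that utilities are sorted ascending and the i-th gets weight γ^i; the γ value, the labels, and the query interface here are illustrative, not from the original:

```python
# Hypothetical illustration (not from the post): you can see each
# individual's *term* in the expanded sum, i.e. the total minus the
# sum with that term deleted, but not the raw utilities themselves.

GAMMA = 0.9  # illustrative discount; the post leaves γ abstract

utilities = {"a": 4.0, "b": 4.0, "c": 1.0}

# Fix one arbitrary tie-broken ascending ordering, as the expanded sum does.
ranked = sorted(utilities, key=lambda name: utilities[name])
term = {name: GAMMA ** i * utilities[name] for i, name in enumerate(ranked)}

print(term)  # ≈ {'c': 1.0, 'a': 3.6, 'b': 3.24}: from the terms alone
# it "looks like" a > b, even though their utilities are identical.
```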

Stuart_Armstrong · 3y · 2 points

Hum, not entirely sure what you're getting at... I'd say that u_a = u_b always "looks like u_a = u_b", in the sense that there is a continuity in the overall U(W); small changes to our knowledge of u_a and u_b make small changes to our estimate of U(W). I'm not really sure what stronger condition you could want; after all, when u_a = u_b, we can always write

* … + γ^n u_z + γ^(n+1) u_a + γ^(n+2) u_b + γ^(n+3) u_c + …

as:

* … + γ^n u_z + ((γ^(n+1) + γ^(n+2))/2)(u_a + u_b) + γ^(n+3) u_c + …

We could equivalently define U(W) that way, in fact (it generalises to larger sets of equal utilities). Would that formulation help?
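A quick numeric check of that identity; γ, n, and the utilities are arbitrary stand-ins:

```python
# When u_a == u_b, pooling the two weights evenly over the pair
# leaves the total unchanged, since w1*u + w2*u == ((w1 + w2)/2)*(u + u).

GAMMA, n = 0.9, 2
u_a = u_b = 4.0

original = GAMMA ** (n + 1) * u_a + GAMMA ** (n + 2) * u_b
averaged = (GAMMA ** (n + 1) + GAMMA ** (n + 2)) / 2 * (u_a + u_b)

assert abs(original - averaged) < 1e-12
```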
Example population ethics: ordered discounted utility

My mistake with respect to the sum being over all time; thank you for clarifying.

No. If a = b, then a + γb = b + γa. The ordering between identical utilities won't matter for the total sum, and the individual that is currently behind will be prioritised.

While the ordering between identical utilities does not affect the total sum, it does affect the individual valuation. a can be prioritized over b just by the ordering, even though they have identical utility. Unless I am missing something obvious.

Stuart_Armstrong · 3y · 2 points

Nope. Their ordering is only arbitrary as long as they have exactly the same utility. As soon as a policy would result in one of them having higher utility than the other, their ordering is no longer arbitrary. So if we ignore other people, u_a < u_b means the term in the sum is u_a + γ u_b. If u_a > u_b, it's u_b + γ u_a. If u_a = u_b, it can be either term (and they are equal). (I can explain in more detail if that's not enough?)
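The same case analysis as a small continuity check; γ and the utilities are placeholder values:

```python
# Ignoring everyone else, the pair's contribution is
# min(u_a, u_b) + GAMMA * max(u_a, u_b): the ordering decides which
# utility gets which weight, but the value varies continuously.

GAMMA = 0.9

def pair_term(u_a, u_b):
    return min(u_a, u_b) + GAMMA * max(u_a, u_b)

print(pair_term(3.9, 4.0))  # ≈ 7.5   (a behind: a gets the full weight)
print(pair_term(4.0, 4.0))  # ≈ 7.6   (equal: either ordering, same value)
print(pair_term(4.1, 4.0))  # ≈ 7.69  (a ahead: the weights swap smoothly)
```

Near the tie only the weight assignment changes hands, never the value of the term, which is the continuity being pointed to.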
Example population ethics: ordered discounted utility

It seems odd to me that it is so distribution-dependent. If there is a large number of people, with a large gap between the highest and the lowest utilities, then it's worth killing (potentially most) people just to move the high-utility individual down the preference ordering. One solution might be to fix the highest power of γ (for any population), and approach it across the summation in a way weighted by the flatness of the distribution.

Another issue is that two individuals with the same unweighted utility can become victims of the ordering, although that could be patched by grouping individuals by equal unweighted utility, and then summing over the weighted sums of the group utilities.
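A sketch of that grouping patch, assuming the base scheme weights ascending-sorted utilities by successive powers of γ; the function name, γ, and the example values are mine, not from the post:

```python
from itertools import groupby

GAMMA = 0.9

def grouped_ordered_discounted_total(utilities):
    """Group equal utilities, average the γ-weights within each group,
    and sum over groups, so no member of a tie is disadvantaged by an
    arbitrary ordering."""
    ranked = sorted(utilities)  # ascending: worst-off weighted most
    total, i = 0.0, 0
    for _, group in groupby(ranked):
        members = list(group)
        k = len(members)
        # Average the k consecutive weights across the k equal utilities.
        weight = sum(GAMMA ** (i + j) for j in range(k)) / k
        total += weight * sum(members)
        i += k
    return total

print(grouped_ordered_discounted_total([1.0, 4.0, 4.0, 7.0]))  # ≈ 12.943
```

By the equal-weight identity, this agrees with the plain ordered sum on the total; the difference is only in the per-individual attribution.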

Stuart_Armstrong · 3y · 4 points

EDIT: I realised I wasn't clear that the sum was over everyone that ever lived. I've clarified that in the post. Killing people with future lifetime non-negative utility won't help, as they will still be included in the sum.

No. If a = b, then a + γb = b + γa. The ordering between identical utilities won't matter for the total sum, and the individual that is currently behind will be prioritised.
A Taxonomy of Weirdness

(Also "uncompromising" could mean a few things and some of them are pretty bad. The good kind of "uncompromising" is something like believing what you believe, feeling what you feel, thinking what you think, and wanting what you want, and not letting someone else suppress that. The bad kind is trying to impose any of that on someone else / demand that someone else change to accommodate that.)
Relatedly, I'm also concerned that in this taxonomy it's very tempting for people to label themselves as Fried Eggs to justify their lac... (read more)
Macroscale Minds

As you can see, I similarly struggled to communicate my ideas, probably more than you did.

Two or maybe three years ago I suggested at a CFAR reunion that close-knit tribes / communities of humans, rather than individual humans, might be 1) alive / thinking in some important sense and 2) the natural unit of moral value
  1. I am not sure that small groups of humans are complicated enough in their interactions to form a collective mind capable of thought.
  2. It seems like tribe-centered moralities have a poor track record, but that obviously assumes a metric for evaluating moral success that you might dispute.

Macroscale Minds

Are you familiar with Searle’s “Chinese Room”[1] thought experiment?

Yes. As I believe the provided link makes clear, the China Brain is related to the Chinese Room both historically and, obviously, conceptually.

So, if we imagine every single person in America (including babies, etc.) being organized in such a way as to give rise to a mind-like structure (connectome), then it would seem that the resulting mind would be about as “smart” or “conscious” as a parakeet. Not very impressive!

On the contrary, this is incredibly impressive. Regardless, the point st... (read more)

The Jordan Peterson Mask

you can just have the System 1 experience and then do the System 2 processing afterwards (which could be seconds afterwards). It's really not that hard. I believe that most rationalists can handle it, and I certainly believe that I can handle it.

It is probably true that most rationalists could handle it. It is also probably true, however, that people who can't handle it could end up profoundly worse for the experience. I am not sure we should endorse potential epistemic hazards with so little certainty about both costs and benefits. I also gran... (read more)

Qiaochu_Yuan · 4y · 8 points

I'm not sure what "endorse" means here. My position is certainly not "everyone should definitely do [circling, meditation, etc.]"; mostly what I have been arguing for is "we should not punish people who try or say good things about [circling, meditation, etc.] for being epistemically reckless, or allege that they're evil and manipulative solely on that basis, because I think there are important potential benefits worth the potential risks for some people."

I still think you're over-updating on school. For example, why do graduate students have advisors? At least in fields like pure mathematics that don't involve lab work, it's plausibly because being a researcher in these fields requires important mental skills that can't just be learned through reading, but need to be absorbed through periodic contact with the advisor. Great advisors often have great students; clearly something important is being transmitted even if it's hard to write down what. My understanding of CFAR's position is also that whatever mental skills it tries to teach, those skills are much harder to teach via text or even video than via an in-person workshop, and that this is why we focus so heavily on workshops instead of methods of teaching that scale better.

I know, right? Also ironically, learning how to not be subject to my triggers (at least, not as much as I was before) is another skill I got from circling.
The Jordan Peterson Mask

(This is my second comment on this site, so it is probable that the formatting will come out gross. I am operating on the assumption that it works like Reddit, given that both use Markdown.)

  1. To be as succinct as possible, fair enough.
  2. I want to have this conversation too! I was trying to express what I believe to be the origins of people's frustrations with you, not to try to discourage you. Although I can understand how I failed to communicate that.
  3. I am going to wrap this up with the part of your reply that concerns experiential distance and respond to both. I... (read more)
I suspect that a lot of fear of epistemic contamination comes from the emphasis on personal experience. Personal (meatspace) experiences, especially in groups, can trigger floods of emotions and feelings of insights without those first being fed through rational processing.

I recognize the concern here, but you can just have the System 1 experience and then do the System 2 processing afterwards (which could be seconds afterwards). It's really not that hard. I believe that most rationalists can handle it, and I certainly believe that I can handle it. I... (read more)

The Jordan Peterson Mask

I think that perhaps what bothers a lot of rationalists about your (or Valentine's) assertions is down to three factors:

  1. You don't tend to make specific claims or predictions. I think you would come off better - certainly to me and I suspect to others - if you were to preregister hypotheses more, like you did in the above comment. I believe that you could and should be more specific, perhaps stating that over a six month period you expect to work n more hours without burning out or that a consensus of reports from outsiders about your mental well-... (read more)
You don't tend to make specific claims or predictions. I think you would come off better - certainly to me and I suspect to others - if you were to preregister hypotheses more, like you did in the above comment. I believe that you could and should be more specific, perhaps stating that over a six month period you expect to work n more hours without burning out or that a consensus of reports from outsiders about your mental well-being will show a marked positive change during a particular time period that the evaluators did not know was special.

I have... (read more)