My internal visualization is that all the individuals in the world are disjoint line segments of certain lengths which, laid end to end, correspond to the world-segment. When the weighting-fairy (or whatever) passes through, sets of segments that were all previously the same length ought to still be sets of segments of the same length.
Honestly, I do apologize for spending so much of your time running you around in verbal circles because something didn't correspond to my internal model. Thank you for trying to understand/help.
I have realized that I am coming off like I don't understand algebra, which is a result of my failure to communicate. As unlikely as I am making it sound, I understand what you are saying and already knew it.
What I mean is this:
Despite a = b, it could "look like" a < b or a > b if you didn't have access to the world but only to the (expanded) sum: for instance, if you could ask for the difference between the total sum and the sum ignoring a, but not for the actual value of a.
I can't think of a non-pathological case where this would actually... (read more)
My mistake with respect to the sum being over all time, thank you for clarifying.
No. If a = b, then a + γb = b + γa. The ordering between identical utilities won't matter for the total sum, and the individual that is currently behind will be prioritised.
While the ordering between identical utilities does not affect the total sum, it does affect the individual valuation. a can be prioritized over b just by the ordering, even though they have identical utility. Unless I am missing something obvious.
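A toy sketch of the point, assuming the scheme under discussion is a geometrically weighted sum over an ordering of individuals (the weights γ^0, γ^1, … and the tie-breaking by position are my assumptions about the setup, not anything established in the thread):

```python
# Toy model (my assumption): total value = sum of GAMMA**rank * utility,
# where individuals are placed in some ordering and ties are broken arbitrarily.
GAMMA = 0.9

def weighted_total(utils):
    # utils is already in the chosen ordering; the i-th entry gets weight GAMMA**i
    return sum(GAMMA**i * u for i, u in enumerate(utils))

a, b = 5.0, 5.0  # identical utilities

# Swapping two equal utilities leaves the total unchanged: a + γb == b + γa
assert weighted_total([a, b]) == weighted_total([b, a])

# ...but each individual's marginal contribution depends on their position,
# so the ordering alone prioritizes one of them over the other.
first_share = GAMMA**0 * a    # 5.0
second_share = GAMMA**1 * b   # 4.5
print(first_share, second_share)
```

So the total is symmetric in a and b, while the per-individual shares are not, which is all I mean by the ordering affecting the individual valuation.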
It seems odd to me that it is so distribution-dependent. If there is a large number of people, with a large gap between the highest and the lowest utility, then it's worth killing (potentially most) people just to move the high-utility individual down the preference ordering. One solution might be to fix the highest power of γ (for any population size), and approach it across the summation in a way weighted by the flatness of the distribution.
Another issue is that two individuals with the same unweighted utility can become victims of the ordering, although that could be patched by grouping individuals by equal unweighted utility, and then summing over the weighted sums of the group utilities.
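The grouping patch could look something like this sketch (again under my assumption of a geometric-weight scheme; collapsing ties into groups is the only change from the plain ordered sum):

```python
from itertools import groupby

GAMMA = 0.9

def grouped_weighted_total(utils):
    # Sort descending, collapse runs of equal utility into one group,
    # and give every member of the i-th group the same weight GAMMA**i,
    # so equal unweighted utilities can no longer be split by tie-breaking.
    ordered = sorted(utils, reverse=True)
    groups = [list(g) for _, g in groupby(ordered)]
    return sum(GAMMA**i * sum(g) for i, g in enumerate(groups))

# Two individuals with identical utility now occupy the same weight tier:
print(grouped_weighted_total([5.0, 5.0, 3.0]))  # 0.9**0*(5+5) + 0.9**1*3 = 12.7
```

The result is invariant under any permutation of the input, so the "victims of the ordering" problem disappears for exact ties, though nearly-equal utilities still land in adjacent tiers.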
(Also "uncompromising" could mean a few things and some of them are pretty bad. The good kind of "uncompromising" is something like believing what you believe, feeling what you feel, thinking what you think, and wanting what you want, and not letting someone else suppress that. The bad kind is trying to impose any of that on someone else / demand that someone else change to accommodate that.)
Relatedly, I'm also concerned that in this taxonomy it's very tempting for people to label themselves as Fried Eggs to justify their lac
As you can see, I similarly struggled to communicate my ideas. Probably more than you did, however.
Two or maybe three years ago I suggested at a CFAR reunion that close-knit tribes / communities of humans, rather than individual humans, might be (1) alive / thinking in some important sense and (2) the natural unit of moral value.
Are you familiar with Searle’s “Chinese Room” thought experiment?
Yes. As I believe the provided link makes clear, the China Brain is related both historically and obviously conceptually to the Chinese Room.
So, if we imagine every single person in America (including babies, etc.) being organized in such a way as to give rise to a mind-like structure (connectome), then it would seem that the resulting mind would be about as “smart” or “conscious” as a parakeet. Not very impressive!
On the contrary, this is incredibly impressive. Regardless, the point st... (read more)
you can just have the System 1 experience and then do the System 2 processing afterwards (which could be seconds afterwards). It's really not that hard. I believe that most rationalists can handle it, and I certainly believe that I can handle it.
It is probably true that most rationalists could handle it. It is also probably true, however, that people who can't handle it could end up profoundly worse for the experience. I am not sure we should endorse potential epistemic hazards with so little certainty about both costs and benefits. I also gran... (read more)
(This is my second comment on this site, so it is probable that the formatting will come out gross. I am operating on the assumption that it is similar to Reddit, given the Markdown.)
I suspect that a lot of fear of epistemic contamination comes from the emphasis on personal experience. Personal (meatspace) experiences, especially in groups, can trigger floods of emotions and feelings of insights without those first being fed through rational processing.
I recognize the concern here, but you can just have the System 1 experience and then do the System 2 processing afterwards (which could be seconds afterwards). It's really not that hard. I believe that most rationalists can handle it, and I certainly believe that I can handle it. I... (read more)
I think that perhaps what bothers a lot of rationalists about your (or Valentine's) assertions is down to three factors:
You don't tend to make specific claims or predictions. I think you would come off better, certainly to me and I suspect to others, if you were to preregister hypotheses more, like you did in the above comment. I believe that you could and should be more specific, perhaps stating that over a six-month period you expect to work n more hours without burning out, or that a consensus of reports from outsiders about your mental well-being will show a marked positive change during a particular time period that the evaluators did not know was special.
I have... (read more)