I mean, if it's about looking for post-hoc rationalizations, what's even the point of pretending there's a consistent ethical system?
Hmm, I would not describe it as rationalization in the motivated reasoning sense.
My model of this process is that most of my ethical intuitions are mostly a black box and often contradictory, but still in the end contain a lot more information about what I deem good than any of the explicit reasoning I am capable of. If, however, I find an explicit model which manages to explain my intuitions sufficiently well, I am willing to update or override my intuitions. I would in the end accept an argument that goes against some of my intuitions if it is strong enough. But I will also strive to find a theory which manages to combine all the intuitions into a functioning whole.
In this case, I have an intuition towards negative utilitarianism, which really dislikes utility monsters, but I also have noticed the tendency that I land closer to symmetric utilitarianism when I use explicit reasoning. Due to this, the likely options are that after further reflection I 1) accept symmetric utilitarianism and override the intuition, 2) return to negative utilitarianism, or 3) find a synthesis which respects both - and my intuition for negative utilitarianism would prefer cases 2 or 3.
So the above description was what was going on in my mind, and combined with the always-present possibility that I am bullshitting myself, led to the formulation I used :)
As I understand it, the main difference from her view is that decoherence is a relation between objects within the system, while measurement is related to the whole system's "collapse".
I think I would agree to "decoherence does not solve the measurement problem", as the measurement problem has different sub-problems. One corresponds to the measurement postulate, which different interpretations address differently and which Sabine Hossenfelder is mostly referring to in the video. But the other one is the question of why the typical measurement result looks like a classical world - and this is where decoherence is extremely powerful: it works so well that we do not have any measurements which manage to distinguish between the hypotheses of 'decoherence alone' and 'decoherence plus a physical collapse'.
With regards to her example of Schrödinger's cat, this means that the state $|\text{alive}\rangle + |\text{dead}\rangle$ will not actually occur. It will always be a state where the environment must be part of the equation, such that the state is more like $|\text{alive}\rangle|\text{env}_a\rangle + |\text{dead}\rangle|\text{env}_d\rangle$ after a nanosecond, and already includes any surrounding humans after a microsecond (light went 300 m in all directions by then). When human perception starts being relevant, the state is $|\text{alive}\rangle|\text{env}_a\rangle|\text{human sees alive}\rangle + |\text{dead}\rangle|\text{env}_d\rangle|\text{human sees dead}\rangle$.

With regards to the first part of the measurement problem, this is not yet a solution. As such, I would agree with Sabine Hossenfelder. But it does take away a lot of the weirdness, because there is no branch of the wave function that contains non-classical behaviour[1].
Wigner's friend.
You got me here. I did not follow the large debate around Wigner's friend as i) this is not a topic I should spend huge amounts of time on, and ii) my expectation was that these would "boil down to normality" once I manage to understand all of the details of what is being discussed anyway.
It can of course be that people will convince me otherwise, but before that happens I do not see how these types of situations could lead to strange behaviour that isn't already part of well-established examples such as Schrödinger's cat. Structurally, they only differ in that there are multiple subsequent 'measurements', and this can only create new problems if the formalism used for measurements is the source. I am confident that the many worlds and Bohmian interpretations do not lead to weirdness in measurements[2], so I am as yet unconvinced.
I think (give like 30 per cent probability) that the general nature of the UFO phenomenon is that it is anti-epistemic
Thanks for clarifying! (I take this to be mostly 'b) physical world' in that it isn't 'humans have bad epistemics') Given the argument of the OP, I would at least agree that the remaining probability mass for UFOs/weirdness as a physical thing is on the cases where the weird things do mess with our perception, sensors and/or epistemics.
The difficult thing about such hypotheses is that they can quickly evolve to being able to explain anything and becoming worthless as a world-model.
Could you clarify whether you attribute the similarity to a) how human minds work, or b) how the physical world works, or c) something I am not thinking of?
b would seem clearly mistaken to me:
In some sense it is similar to a large-scale Schrödinger's cat, which can be in the state of both alive and dead only when unobserved.
For this I would recommend using the decoherence conception of what measurements do (which is the natural choice in the Many Worlds Interpretation and still highly relevant if one assumes that a physical collapse occurs during measurement processes). From this perspective, what any measurement does is separate the wave function into a bunch of contributions, where each contains the measurement device showing result x and the measured system having the property x that is being measured[1]. Due to the high-dimensional space that the wave function evolves in, these parts will tend to never meet again, and this is what the classical limit means[2]. When people talk about 'observation' here, it is important to realize that an arbitrary physical interaction with the outside world is sufficient to count. This includes air molecules, thermal radiation, cosmic radiation, and very likely even gravity[3]. For objects large enough that we can see them, it takes extreme effort for them to remain 'unobserved' for any length of time[4].
For anything macroscopic, there is no reason to believe that "human observation" is remotely relevant for observing classical behaviour.
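The branching described above can be illustrated with a toy numerical sketch (my own illustration, not from the comment): a single "system" qubit starts in a superposition, and a single "environment" qubit plays the role of an air molecule or photon that records the system's state. Tracing out the environment then leaves a density matrix with no interference terms, which is the decoherence mechanism in miniature.

```python
# Toy sketch of decoherence: once one environment degree of freedom has
# "recorded" the system's state, the system's reduced density matrix loses
# its off-diagonal (interference) terms and looks like a classical mixture.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Isolated system: |+> = (|0> + |1>)/sqrt(2); coherences are present.
psi = (ket0 + ket1) / np.sqrt(2)
rho_isolated = np.outer(psi, psi.conj())

# System entangled with an environment qubit that copied its state:
# (|0>|e0> + |1>|e1>)/sqrt(2) with orthogonal environment states.
psi_ent = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_total = np.outer(psi_ent, psi_ent.conj())

# Reduced density matrix of the system: partial trace over the environment.
rho_sys = rho_total.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_isolated)  # off-diagonal 0.5 entries: interference possible
print(rho_sys)       # diagonal 0.5, 0.5: effectively a classical mixture
```

One orthogonally-recording degree of freedom already suffices here; for a macroscopic object, the environment records the state redundantly in astronomically many degrees of freedom, which is why undoing the process is hopeless in practice.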
This assumes that this is a useful measurement. More generally, any arbitrary interaction between two systems does the same thing except that there is no legible "result x" or "property x" which we could make use of. ↩︎
Of course, if there is a collapse which actually removes most of the parts, there is additional reason why they will not meet in the future. The measurements we have done so far do not show any indication of a collapse in the regimes we could access, which implies that this process of decoherence is sufficient as a description of everyday behaviour. The reason why we cannot access further regimes is that decoherence kicks in and makes the behaviour classical even without the need for a physical collapse. ↩︎
Though getting towards experiments which manage to remove the other decoherence sources enough that gravity's decoherence even could be observed is one of the large goals that researchers are striving for. ↩︎
E.g. Decoherence and the Quantum-to-Classical Transition by Maximilian Schlosshauer has a nice derivation and numbers for the 'not-being-observed' time scales: Table 3.2 gives the time scales resulting from different 'observers' for a dust grain of size 0.01 mm - about 1 s due to cosmic background radiation, and drastically shorter times from photons at room temperature and from collisions with air molecules. ↩︎
You are right. Somehow I had failed to attribute this to culture. There clearly are lots of systems with a zero-sum competitive mentality.
Compared to the US, the German school and social system seems significantly less competitive to me (from what I can tell, living only in the latter). There still is a lot of competition, but my impression is that there are more niches which provide people with slack.
I do tend to round things off to utilitarianism it seems.
Your point on the distinct categories between puppies and other animals is a good one. With the categorical distinction in place, our other actions aren't really utilitarian trade-offs any more. But there are animals like guinea pigs which are in multiple categories.
what do we do with the now created utility monster?
I have trouble seriously imagining a utility monster which actually is net-positive from a total-utility standpoint. In the hypothetical with the scientist, I would tend towards not letting the monster do harm, just to remove incentives for dangerous research. For the more general case, I would search for some excuse why I can be a good utilitarian while stopping the monster - and hope that I actually find a convincing argument. Maybe I think that most good in the world needs strong cooperation, which is undermined by the existence of utility monsters.
In a way, the stereotypical "Karen" is a utility monster
One complication here is that I would expect the stereotypical Karen to be mostly role-playing such that it would not actually be positive utility to follow her whims. But then, there could still be a stereotypical Caren who actually has very strong emotions/qualia/the-thing-that-matters-for-utility. I have no idea how this would play out or how people would even get convinced that she is Caren and not Karen.
Very fun read, thanks!
[...] we ought to indeed accept the numbers and with them the super-puppy and the pain to be found within its joyous and drooling jaws, which isn't the common sense ethical approach to the problem (namely, take away the mad scientist's grant funding and have them work on something useful, like a RCT on whether fresh mints cause cancer)
I am not sure about this description of common sense morality. People might agree about not creating such a super-puppy, but we do some horrible stuff to lab/farm animals in order to improve medical understanding/enjoy cheaper food. Of course, there isn't the aspect of "something directly values the pain of others", but we are willing to hurt puppies if this helps human interests.
Also, being against the existence of 'utility monsters which actually enjoy the harm to others' could also be argued for from a utilitarian perspective. We have little reason to believe that "harm to others" plays any significant/unavoidable role in feeling joy. Thus, anyone who creates entities with this property is probably actually optimizing for something else.
For normal utility monsters (entities which just have huge moral weight), my impression is that people mostly accept this in everyday examples. Except maybe for comparisons between humans, where we have large amounts of historical examples of people using these arguments to ruin the lives of others via flawed/motivated reasoning.
Just because there is somebody who is smarter than you, who works on some specific topic, doesn't mean that you shouldn't work on it. You should work on the thing where you can make the largest positive difference. [...]
I think you address an important point. Especially for people who are attracted to LessWrong, there tends to be a lot identification with one's cognitive abilities. Realizing that there are other people who just are significantly more capable can be emotionally difficult.
For me, one important realization was that my original emotions around this kind of assumed a competition where not-winning was actually negative. When I grokked that a huge fraction of these super capable people are actually trying to do good things, this helped me shift towards mostly being glad if I encounter such people.
Also, The Point of Trade and Being the (Pareto) Best in the World are good posts which emphasize that "contributing value" needs way fewer assumptions/abilities than one might think.
I do think that tuning cognitive strategies (and practice in general) is relevant to improving the algorithm.
Practically hard-coded vs. Literally hard-coded
My introspective impression is less that there are "hard-coded algorithms" in the sense of hardware vs. software, but that it is mostly practically impossible to create major changes for humans.
Our access to unconscious decision-making is limited and there is a huge amount of decisions which one would need to focus on. I think this is a large reason why the realistic options for people are mostly i) only ever scratching the surface for a large number of directions for cognitive improvement, or ii) focussing really strongly on a narrow topic and becoming impressive in that topic alone[1].
Then, our motivational system is not really optimizing for this process and might well push in different directions. Our motivational system is part of the algorithm itself, which means that there is a bootstrapping problem: people with unsuited motivations will never be motivated to change their way of thinking.
Why this matters
Probably we mostly agree on what this means for everyday decisions.
But with coming technology, some things might change.
Also, this topic is relevant to AI takeoff. We do perceive that there is this in-principle possibility for significant improvement in our cognition, but notice that in practice current humans are not capable of pulling it off. This lets us imagine that beings who are somewhat beyond our cognitive abilities might hit this threshold and then execute the full cycle of reflective self-improvement.
I think this is the pragmatic argument for thinking in separate magisteria ↩︎
I have a pet theory that some biases can be explained as a mix-up between probability and likelihood. (I don't know if this is a good explanation.)
At least, not clearly distinguishing probability and likelihood seems common. One point in favour is our notation of conditional probabilities (e.g. $P(A|B)$), where $|$ is a symbol with mirror-symmetry. As Eliezer writes in a lecture in Planecrash, this is a didactically bad idea and an asymmetric symbol would be a lot easier to understand: the difference between $P(A|B)$ and $P(B|A)$ is less optically obvious than it would be with an asymmetric symbol[1]
Of course, our written language has an intrinsic left-to-right directional asymmetry, so the symmetric $|$ isn't a huge amount of evidence[2].
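The gap between $P(A|B)$ and $P(B|A)$ that the notation hides can be made concrete with a standard base-rate example (my own illustration; the numbers are made up): a test with a high likelihood $P(\text{positive}|\text{disease})$ can still yield a tiny posterior $P(\text{disease}|\text{positive})$ when the condition is rare.

```python
# Illustration: P(A|B) and P(B|A) can differ wildly, which the
# mirror-symmetric "|" does nothing to emphasize. Numbers are invented.
p_disease = 0.001               # prior P(disease)
p_pos_given_disease = 0.99      # likelihood P(positive | disease)
p_pos_given_healthy = 0.05      # false-positive rate P(positive | healthy)

# Total probability of a positive result.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(p_pos_given_disease)            # 0.99
print(round(p_disease_given_pos, 3))  # 0.019
```

Mixing up the two directions of the conditional here is exactly the base-rate neglect pattern, which fits the pet theory that some biases are a probability/likelihood mix-up.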
One aspect which I disagree with is that collapse is the important thing to look at. Decoherence is sufficient to get classical behaviour on the branches of the wave function. There is no need to consider collapse if we care about 'weird' vs. classical behaviour. This is still the case even if the whole universe never collapses (as in the many worlds interpretation). The point of this is that true cat states (= superposed universe branches) do not look weird.
Superposition of universe - We can certainly regard the possibility that the macroscopic world is in a superposition as seen from our brain. This is what we should expect (absent collapse) just from the sizes of universe and brain: the brain has far fewer degrees of freedom than the rest of the universe, so the rest of the universe will generically be entangled with degrees of freedom that the brain has no access to.
Because of this, anyone following the many worlds interpretation should agree that from our perspective, the universe is always in a superposition - no unknown brain properties required. But due to decoherence (and assuming that branches will not meet), this makes no difference and we can replace the superposition with a probability distribution.
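The claim that the superposition can be replaced by a probability distribution can be checked in a toy model (my own sketch, not from the comment): once the branches carry orthogonal environment records, every observable that acts on the system alone gives identical statistics for the full superposition and for the corresponding classical mixture.

```python
# Sketch: after decoherence (orthogonal environment records), the full
# superposition and the classical mixture of branches are indistinguishable
# by any observable acting on the system alone.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Full superposed state of system+environment, branches tagged by
# orthogonal environment states.
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_full = np.outer(psi, psi.conj())

# Classical mixture: 50% branch "0", 50% branch "1".
b0 = np.kron(ket0, ket0)
b1 = np.kron(ket1, ket1)
rho_mix = 0.5 * np.outer(b0, b0) + 0.5 * np.outer(b1, b1)

rng = np.random.default_rng(0)
for _ in range(5):
    # Random Hermitian observable acting on the system only.
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    obs = np.kron((a + a.conj().T) / 2, np.eye(2))
    assert np.isclose(np.trace(rho_full @ obs).real,
                      np.trace(rho_mix @ obs).real)
print("identical statistics for all system-only observables")
```

Distinguishing the two would require an observable coupling system and environment coherently, which is precisely what the "branches will not meet again" assumption rules out in practice.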
Perhaps this is captured by your "why Everett called his theory relative interpretation of QM" - I did not read his original works.
The question now becomes the interference between whole universe branches: A deep assumption in quantum theory is locality which implies that two branches must be equal in all properties[1] in order to interfere[2]. Because of this, interference of branches can only look like "things evolving in a weird direction" (double slit experiment) and not like "we encounter a wholly different branch of reality" (fictional stories where people meet their alternate-reality versions).
Because of this, I do not see how quantum mechanics could create the weird effects that it is supposed to explain.
If we do assume that human minds have an extra ability to facilitate interaction between otherwise distant branches if they are in a superposition compared to us, this of course could create a lot of weirdness. But this seems like a huge claim to me that would depart massively from much of what current physics believes. Without a much more specific model, this feels closer to a non-explanation than to an explanation.
More strictly: they must have mutual support in phase-space. For non-physicists: a point in phase-space is how classical mechanics describes a world. ↩︎
This is not a necessary property of quantum theories, but it is one of the core assumptions used in e.g. the standard model. People who explore quantum gravity do consider theories which soften this assumption. ↩︎