Comments

After playing around for a few minutes, I like your app with >95% probability ;) Compare this bayescalc.io calculation

Unfortunately, I do not have useful links for this - my understanding comes from non-English podcasts by a nutritionist. Please do not rely on my memory, but maybe this can be helpful for locating good hypotheses.

According to how I remember it, one complication of veg*n diets and amino acids is that the question of which amino acids your body can produce itself and which are essential can effectively depend on your personal genes. In the podcast they mentioned that, especially among males, there is a fraction of the population who would absolutely need to supplement some "non-essential" amino acids if they want to stay healthy on a veg*n diet. As these nutrients are usually not considered worth separate attention (because most people really do not need to think about them and also do not restrict their diet to avoid animal sources), they are not included in the usual supplements and nutrition advice (I think the term is "meat-based bioactive compounds").

I think Elizabeth also emphasized this aspect in this post

log score of my pill predictions (-0.6)

If I did not make a mistake, this score could be achieved e.g. by giving ~55% probabilities and being correct every time, or by always giving 70% probabilities and being right ~69% of the time.
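For reference, a minimal sketch of the arithmetic (assuming the score is the average natural-log score of the stated probabilities; the exact break-even fractions depend on how the -0.6 is rounded):

```python
import math

def avg_log_score(p_assigned: float, frac_correct: float) -> float:
    """Average natural-log score when every prediction assigns probability
    p_assigned and a fraction frac_correct of the predictions come true."""
    return frac_correct * math.log(p_assigned) + (1 - frac_correct) * math.log(1 - p_assigned)

print(avg_log_score(0.55, 1.00))  # ≈ -0.60: always ~55%, correct every time
print(avg_log_score(0.70, 0.70))  # ≈ -0.61: always 70%, right ~70% of the time
```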

you'd expect the difference in placebo-caffeine scores to drop

I am not sure about this. I could also imagine that the difference remains similar, but instead the baseline for concentration etc. shifts downwards such that caffeine-days are only as good as the old baseline and placebo-days are worse than the old baseline.

Update: I found a proof of the "exponential number of near-orthogonal vectors" claim in these lecture notes (https://www.cs.princeton.edu/courses/archive/fall16/cos521/Lectures/lec9.pdf). From my understanding, the proof quantifies just how likely near-orthogonality becomes in high-dimensional spaces and derives a probability bound for pairwise near-orthogonality of many states.
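If I remember the structure of such arguments correctly, the key ingredient is a concentration bound of roughly this flavour (my paraphrase from memory, not necessarily the exact constants used in the notes):

$$
\Pr\big(|\langle u, v\rangle| \ge \varepsilon\big) \;\le\; 2\,e^{-d\varepsilon^2/2}
\qquad \text{for independent random unit vectors } u, v \in \mathbb{R}^d,
$$

so a union bound over the $\binom{N}{2}$ pairs of $N$ random vectors still goes through as long as $N$ is of order $e^{d\varepsilon^2/4}$, which gives the exponential count of near-orthogonal vectors.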

This does not quite help my intuitions, but I'll just assume that the question "is it possible to tile the surface efficiently with circles even if their size gets close to the 45° threshold" resolves to "yes, if the dimensionality is high enough".

One interesting aspect of these considerations should be that with growing dimensionality the definition of near-orthogonality can be made tighter without losing the exponential number of vectors. This should define a natural signal-to-noise ratio for information encoded in this fashion.

Weirdly, in spaces of high dimension, almost all vectors are almost at right angles.

This part, I can imagine. With a fixed reference vector written as $e_1 = (1, 0, \dots, 0)$, a second random vector $v$ has many dimensions $v_2, \dots, v_d$ that it can distribute its length along, while for the alignment to the reference (the scalar product) only the first entry $v_1$ contributes.
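A quick numerical illustration of this (my own sketch; it just samples random unit vectors and looks at their cosine with a fixed reference direction):

```python
import numpy as np

rng = np.random.default_rng(0)

for d in (3, 30, 300, 3000):
    # Sample 10,000 directions uniformly on the unit sphere in R^d.
    v = rng.standard_normal((10_000, d))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # The cosine with the fixed reference e_1 is just the first coordinate.
    cos = np.abs(v[:, 0])
    print(f"d = {d:4d}:  mean |cos| = {cos.mean():.3f}")
```

The mean |cosine| shrinks roughly like $1/\sqrt{d}$, i.e. a random pair of directions gets ever closer to a right angle as the dimension grows.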

It's perfectly feasible for this space to represent zillions of concepts almost at right angles to each other.

This part I struggle with. Is there an intuitive argument for why this is possible?

If I assume smaller angles below 60° or so, a non-rigorous argument could be:

  • each vector blocks a 30°-circle around it on the d-hypersphere[1] (if the circles of two vectors touch, their relative angle is 60°).
  • an estimate for the blocked area could be that this is mostly a 'flat' (d-1)-sphere of radius $\sin(30°) = \tfrac{1}{2}$ which has an area that scales with $\left(\tfrac{1}{2}\right)^{d-1}$
  • the full hypersphere has a surface area with a similar pre-factor but full radius $1$, i.e. scaling with $1^{d-1}$
  • thus we can expect to fit a number of vectors $N$ that scales roughly like $N \sim \left(\tfrac{1}{\sin 30°}\right)^{d-1} = 2^{d-1}$, which is an exponential growth in $d$.

For a proof, one would need to include whether it is possible to tile the surface efficiently with the 30° circles. This seems clearly true for tiny angles (we can stack spheres in approximately flat space just fine), but seems a lot less obvious for larger angles. For example, full orthogonality would mean 90° angles and my estimate would still give $N \sim \left(\tfrac{1}{\sin 45°}\right)^{d-1} = 2^{(d-1)/2}$, an exponential estimate for the number of strictly orthogonal states although these are definitely not exponentially many (at most $d$ of them exist).
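As a rough empirical check of the packing intuition, here is a toy experiment of my own (greedy random sampling only gives a lower bound, so this is an illustration rather than a proof):

```python
import numpy as np

def greedy_near_orthogonal(d: int, max_abs_cos: float = 0.5,
                           n_samples: int = 5_000, seed: int = 0) -> int:
    """Greedily keep random unit vectors whose pairwise |cosine| stays below max_abs_cos.

    |cos| <= 0.5 corresponds to pairwise angles between 60° and 120°, i.e. each kept
    vector 'blocks' a 30° circle around itself and around its antipode."""
    rng = np.random.default_rng(seed)
    kept = np.empty((0, d))
    for _ in range(n_samples):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        if kept.shape[0] == 0 or np.all(np.abs(kept @ v) <= max_abs_cos):
            kept = np.vstack([kept, v])
    return kept.shape[0]

for d in (5, 10, 20, 30):
    print(d, greedy_near_orthogonal(d))
```

The kept count grows quickly with the dimension, while for strict orthogonality ($|\cos| = 0$) the true maximum is exactly $d$; this is where the simple area estimate above overshoots.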


  1. and a copy of that circle on the opposite end of the sphere ↩︎

But the best outcomes seem to come out of homeopathy, which is as perfect of a placebo arm as one can get.

I did expect to be surprised by the post given the title, but I did not expect this surprise.

I have previously heard lots of advocates for evidence-based medicine claim that homoeopathy has very weak evidence for effects (mostly the amount that one would expect from noise and flawed studies, given the amount of effort being put into proving its efficacy). Do I understand correctly that this remains an acceptable interpretation, while at the same time the aggregate mortality of real-world patients (as opposed to RCT participants) clearly improves when they are treated homoeopathically compared to usual medicine?

More generally, if I assume that the shift "no free healthcare"->"free healthcare" does not improve outcomes, and that "healthcare"->"healthcare+homoeopathy" does improve outcomes, wouldn't that imply that "healthcare+homoeopathy" is preferable to "no free healthcare"?

  • of course, there are a lot of steps in this argument that can go wrong
  • but generally, I would expect that something like this reasoning should be right.
  • if I do assume that homoeopathy is practically a placebo, this can point us to at least some fraction of treatments which should be avoided: those given for conditions that homoeopathy claims to heal without the need for other treatments

That looks like an argument that an approach like the one in your "What I do" section can actually lead to strong benefits from the health system, and that strategies which are not excessively complicated are available.

One aspect which I disagree with is that collapse is the important thing to look at. Decoherence is sufficient to get classical behaviour on the branches of the wave function. There is no need to consider collapse if we care about 'weird' vs. classical behaviour. This is still the case even if the whole universe is collapse-resistant (as is the case in the many worlds interpretation). The point of this is that true cat states (= superposed universe branches) do not look weird.

The whole 'macroscopic quantum effects' are interferences between whole universes branches from the view of this small quantum object in they brain.

Superposition of universe - We can certainly consider the possibility that the macroscopic world is in a superposition as seen from our brain. This is what we should expect (absent collapse) just from the sizes of the universe and the brain:

  1. The size of our brain corresponds to a limited (though huge) number for the dimensionality of the space of all possible brain states (we can include all sub-atomic particles for this).
  2. If the number of branches of the universe is larger than the number of possible brain states, there is no possible wave function in which there aren't some contributions in which the universe is in a superposition with regards to the brain. Some brain states must be associated with multiple branches.
  3. The universe is a lot larger than the brain, and dimensionality scales exponentially with particle number (rough numbers in the sketch below).
  4. Further, it seems highly likely that many physical brain states correspond to identical mind states (some unnoticeable vibration propagating through my body does not seem to scramble my thinking very much).
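To put rough numbers on this counting argument (my own back-of-the-envelope figures, only meant to show the scale):

$$
\dim \mathcal{H}_{\text{brain}} \sim 2^{N_{\text{brain}}}, \qquad
\dim \mathcal{H}_{\text{universe}} \sim 2^{N_{\text{universe}}}, \qquad
N_{\text{brain}} \sim 10^{27}, \quad N_{\text{universe}} \sim 10^{80},
$$

so if the number of decohered branches gets anywhere near the dimensionality of the universe's state space, it exceeds the number of distinguishable brain states by a factor of roughly $2^{10^{80}}/2^{10^{27}}$, and by pigeonhole almost every brain state must be associated with an astronomical number of branches.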

Because of this, anyone following the many worlds interpretation should agree that from our perspective, the universe is always in a superposition - no unknown brain properties required. But due to decoherence (and assuming that branches will not meet), this makes no difference and we can replace the superposition with a probability distribution.

Perhaps this is captured by your "why Everett called his theory relative interpretation of QM" - I did not read his original works.

The question now becomes the interference between whole universe branches: A deep assumption in quantum theory is locality, which implies that two branches must be equal in all properties[1] in order to interfere[2]. Because of this, interference of branches can only look like "things evolving in a weird direction" (double-slit experiment) and not like "we encounter a wholly different branch of reality" (fictional stories where people meet their alternate-reality versions).

Because of this, I do not see how quantum mechanics could create the weird effects that it is supposed to explain.

If we do assume that human minds have an extra ability to facilitate interactions between otherwise distant branches when those branches are in a superposition relative to us, this of course could create a lot of weirdness. But this seems like a huge claim to me, one that would depart massively from much of what current physics believes. Without a much more specific model, this feels closer to a non-explanation than to an explanation.


  1. more strictly: must have mutual support in phase-space. For non-physicists: a point in phase-space is how classical mechanics describes a world. ↩︎

  2. This is not a necessary property of quantum theories, but it is one of the core assumptions used in e.g. the Standard Model. People who explore quantum gravity do consider theories which soften this assumption. ↩︎

I mean, if it's about looking for post-hoc rationalizations, what's even the point of pretending there's a consistent ethical system?

Hmm, I would not describe it as rationalization in the motivated reasoning sense.

My model of this process is that my ethical intuitions are mostly a black box and often contradictory, but still in the end contain a lot more information about what I deem good than any of the explicit reasoning I am capable of. If, however, I find an explicit model which manages to explain my intuitions sufficiently well, I am willing to update or override my intuitions. I would in the end accept an argument that goes against some of my intuitions if it is strong enough. But I will also strive to find a theory which manages to combine all the intuitions into a functioning whole.

In this case, I have an intuition towards negative utilitarianism, which really dislikes utility monsters, but I also have noticed the tendency that I land closer to symmetric utilitarianism when I use explicit reasoning. Due to this, the likely options are that after further reflection I

  • would be convinced that utility monsters are fine, actually.
  • would come to believe that there are strong utilitarian arguments to have a policy against utility monsters such that in practice they would almost always be bad
  • would shift in some other direction

and my intuition for negative utilitarianism would prefer cases 2 or 3.

So the above description was what was going on in my mind, and combined with the always-present possibility that I am bullshitting myself, led to the formulation I used :)

As I understand, the main difference from her view is that decoherence is the relation between objects in the system, but measurement is related to the whole system "collapse".

I think I would agree with "decoherence does not solve the measurement problem", as the measurement problem has different sub-problems. One corresponds to the measurement postulate, which different interpretations address differently and which Sabine Hossenfelder is mostly referring to in the video. But the other is the question of why the typical measurement result looks like a classical world - and this is where decoherence is extremely powerful: it works so well that we do not have any measurements which manage to distinguish between the hypotheses of

  • "only the expected decoherence, no collapse"
  • "the expected decoherence, but additional collapse"

With regards to her example of Schrödinger's cat, this means that the bare state $\tfrac{1}{\sqrt{2}}\left(|\text{cat alive}\rangle + |\text{cat dead}\rangle\right)$ will not actually occur. It will always be a state where the environment must be part of the equation, such that the state is more like $\tfrac{1}{\sqrt{2}}\left(|\text{cat alive}\rangle|\text{env}_{\text{alive}}\rangle + |\text{cat dead}\rangle|\text{env}_{\text{dead}}\rangle\right)$ after a nanosecond, and it already includes any surrounding humans after a microsecond (light travels 300 m in all directions in that time). When human perception starts being relevant, the state is $\tfrac{1}{\sqrt{2}}\left(|\text{cat alive}\rangle|\text{env}_{\text{alive}}\rangle|\text{I see an alive cat}\rangle + |\text{cat dead}\rangle|\text{env}_{\text{dead}}\rangle|\text{I see a dead cat}\rangle\right)$.

With regards to the first part of the measurement problem, this is not yet a solution. As such I would agree with Sabine Hossenfelder. But it does take away a lot of the weirdness because there is no branch of the wave function that contains non-classical behaviour[1].
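To make the "decoherence turns the superposition into an effective probability distribution" point concrete, here is a minimal toy sketch of my own (a two-level 'cat' entangled with a two-level 'environment'; the function name and the overlap parameter eps are made up for illustration and are not from the video or the comment above):

```python
import numpy as np

def reduced_cat_density(eps: float) -> np.ndarray:
    """Reduced density matrix of a two-level 'cat' entangled with a two-level
    'environment' whose pointer states have overlap <e_alive|e_dead> = eps."""
    e_alive = np.array([1.0, 0.0])
    e_dead = np.array([eps, np.sqrt(1 - eps**2)])
    alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # cat basis states
    psi = (np.kron(alive, e_alive) + np.kron(dead, e_dead)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)        # indices (cat, env, cat', env')
    return np.trace(rho, axis1=1, axis2=3)                     # partial trace over the environment

print(np.round(reduced_cat_density(1.0), 3))  # identical environments: off-diagonals 0.5, coherent superposition
print(np.round(reduced_cat_density(0.0), 3))  # orthogonal environments: off-diagonals 0, classical 50/50 mixture
```

Once the environment states attached to 'alive' and 'dead' are (effectively) orthogonal, the interference terms of the cat are gone and the branch weights behave exactly like classical probabilities.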

Wigner's friend.

You got me here. I did not follow the large debate around Wigner's friend, as i) this is not a topic I should spend huge amounts of time on, and ii) my expectation was that these debates would "boil down to normality" once I managed to understand all of the details of what is being discussed anyway.

It can of course be that people would convince me otherwise, but before that happens I do not see how these types of situations could lead to strange behaviour that isn't already part of the well-established examples such as Schrödinger's cat. Structurally, they only differ in that there are multiple subsequent 'measurements', and this can only create new problems if the formalism used for measurements is the source. I am confident that the many-worlds and Bohmian interpretations do not lead to weirdness in measurements[2], so as of yet I am not convinced.

I think (give like 30 per cent probability) that the general nature of the UFO phenomenon is that it is anti-epistemic

Thanks for clarifying! (I take this to be mostly 'b) physical world' in that it isn't 'humans have bad epistemics') Given the argument of the OP, I would at least agree that the remaining probability mass for UFOs/weirdness as a physical thing is on the cases where the weird things do mess with our perception, sensors and/or epistemics.

The difficult thing about such hypotheses is that they can quickly evolve to being able to explain anything and becoming worthless as a world-model.


  1. This will generally be the case for any practical purposes. Mathematically, there will be minute contributions away from classicality. ↩︎

  2. at least not to this type of weirdness ↩︎
