
Comments

Hmm... In my mind, the Pilot wave theory position does introduce a substrate dependence for the particle-position vs. wavefunction distinction, but need not distinguish any further than that. This still leaves simulation, AI consciousness and mind-uploads completely open. It seems to me that the Pilot wave vs. Many worlds question is independent of/orthogonal to these questions.

I fully agree that saying "only corpuscle folk is real" (nice term, by the way!) is a move that needs explaining. One advantage of Pilot wave theory is that one need not wonder where the Born probabilities come from - they are directly implied if one wishes to make predictions about the future. One not-so-satisfying property is that the particle positions are fully guided by the wavefunction without any influence going the other way. I do agree that this makes it a lot easier to regard the positions as a superfluous addition that Occam's razor should cut away.

For me, an important aspect of these discussions is that we know our understanding is incomplete for each of these perspectives. Gravity has not been satisfactorily incorporated into any of them. Further, the Church-Turing thesis is an open question.

I am not too familiar with how advocates of Pilot wave theory usually state this, but I want to disagree slightly. I fully agree with the description of what happens mathematically in Pilot wave theory, but I think that there is a way in which the worlds that one finds oneself outside of do not exist.

If we assume that it is in fact just the particle positions which are "reality", the only way in which the wave function (including all many-worlds contributions) affects "reality" is by influencing its future dynamics. Sure, this means that the many worlds do exist computationally even in pilot wave theory. But I find the idea that "the way that the world state evolves is influenced by huge amounts of world states that 'could have been'" meaningfully different from "there literally are other worlds that include versions of myself which are just as real as I am". The first is a lot closer to everyday intuitions.

Well, this works to the degree to which we can (arbitrarily?) decide that the particle positions define "reality" (the thing in the theory that we want to look at in order to locate ourselves in the theory) in a way that is separate from being computationally part of the model. One can easily have different opinions on how plausible this step is.

Finally, if we want to make the model capture certain non-Bayesian human behaviors while still keeping most of the picture, we can assume that instrumental values and/or epistemic updates are cached. This creates the possibility of cache inconsistency/incoherence.
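As a toy illustration of that last point (my own sketch, not anything from the OP; the option names and numbers are made up): an agent that caches instrumental values and never invalidates the cache after an epistemic update will happily keep acting on stale valuations.

```python
# Toy sketch of cache incoherence in a value-caching agent.
# Everything here (names, numbers) is illustrative, not taken from the OP.

class CachingAgent:
    def __init__(self):
        self.beliefs = {"rain_prob": 0.1}
        self._value_cache = {}

    def instrumental_value(self, option: str) -> float:
        # The cached value is reused even if beliefs have changed since it was computed.
        if option not in self._value_cache:
            if option == "take_umbrella":
                value = self.beliefs["rain_prob"] * 1.0 - 0.1  # expected benefit minus hassle
            else:
                value = 0.0
            self._value_cache[option] = value
        return self._value_cache[option]


agent = CachingAgent()
print(agent.instrumental_value("take_umbrella"))  # 0.0, cached while rain seemed unlikely
agent.beliefs["rain_prob"] = 0.9                  # epistemic update happens...
print(agent.instrumental_value("take_umbrella"))  # ...still 0.0: the stale cache wins
```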

In my mind, there is an amount of internal confusion which feels much stronger than what I would expect for an agent as described in the OP. Or is the idea possibly that everything in the architecture uses caching and instrumental values? From reading, I imagined a memory+cache structure rather than something closer to "cache all the way down".

Apart from this, I would bet that something interesting will happen for a somewhat human-comparable agent with regard to self-modelling and identity. Would anything similar to human identity emerge, or would this require additional structure? Some representation of the agent itself and its capabilities should be present, at least.

After playing around for a few minutes, I like your app with >95% probability ;) compare this bayescalc.io calculation

Mart_Korz · 10mo

Unfortunately, I do not have useful links for this - my understanding comes from non-English podcasts by a nutritionist. Please do not rely on my memory, but maybe this can be helpful for locating good hypotheses.

According to how I remember this, one complication of veg*n diets and amino acids is that the question of which amino acids your body can produce itself and which are effectively essential can depend on your personal genes. In the podcast they mentioned that, especially among males, there is a fraction of the population who would definitely need to supplement some "non-essential" amino acids if they want to stay healthy on a veg*n diet. As these nutrients are usually not considered worth attention (because most people really do not need to think about them separately and also do not restrict their diet to avoid animal sources), they are not included in the usual supplements and nutrition advice (I think the term is "meat-based bioactive compounds").

I think Elizabeth also emphasized this aspect in this post.

Mart_Korz · 10mo

log score of my pill predictions (-0.6)

If I did not make a mistake, this score could be achieved e.g. by giving ~55% probabilities and being correct every time, or by always giving 70% probabilities and being right ~71% of the time.
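A quick sketch of that arithmetic (assuming the reported score is the average natural-log probability assigned to the outcome that actually occurred, which is my reading rather than something stated in the post):

```python
# Back-of-the-envelope check of the two scenarios, assuming the log score is
# the average natural log of the probability assigned to the realized outcome.
import math

avg_score = -0.6

# Scenario 1: a constant probability p, correct every time => avg score = ln(p).
p = math.exp(avg_score)
print(f"constant probability, always right: p ≈ {p:.2f}")          # ≈ 0.55

# Scenario 2: always predicting 70%, right a fraction q of the time:
#   q*ln(0.7) + (1-q)*ln(0.3) = -0.6  =>  solve for q.
q = (avg_score - math.log(0.3)) / (math.log(0.7) - math.log(0.3))
print(f"70% predictions, required hit rate: q ≈ {q:.2f}")          # ≈ 0.71
```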

Mart_Korz · 10mo

you'd expect the difference in placebo-caffeine scores to drop

I am not sure about this. I could also imagine that the difference remains similar, but instead the baseline for concentration etc. shifts downwards such that caffeine-days are only as good as the old baseline and placebo-days are worse than the old baseline.

Mart_Korz · 10mo

Update: I found a proof of the "exponential number of near-orthogonal vectors" claim in these lecture notes: https://www.cs.princeton.edu/courses/archive/fall16/cos521/Lectures/lec9.pdf From my understanding, the proof quantifies just how likely near-orthogonality becomes in high-dimensional spaces and derives a probability for pairwise near-orthogonality of many states.

This does not quite help my intuitions, but I'll just assume that the question "is it possible to tile the surface efficiently with circles even if their size gets close to the 45° threshold?" resolves to "yes, if the dimensionality is high enough".

One interesting aspect of these considerations should be that with growing dimensionality the definition of near-orthogonality can be made tighter without losing the exponential number of vectors. This should define a natural signal-to-noise ratio for information encoded in this fashion.
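A quick numerical illustration of both points (my own sketch using random Gaussian vectors; the specific numbers are just for demonstration): pairwise cosine similarities of random unit vectors in d dimensions concentrate around 0 with a spread of roughly 1/sqrt(d), which is the signal-to-noise scale mentioned above.

```python
# Sample random unit vectors in a high-dimensional space and check that their
# pairwise cosine similarities cluster tightly around zero (near-orthogonality).
import numpy as np

rng = np.random.default_rng(0)
d, n = 10_000, 200                      # dimension, number of random vectors

vecs = rng.standard_normal((n, d))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

cos = vecs @ vecs.T                     # pairwise cosine similarities
off_diag = cos[~np.eye(n, dtype=bool)]  # drop the self-similarities (all 1.0)

print(f"largest |cos| between distinct vectors: {np.abs(off_diag).max():.3f}")
print(f"typical spread (std): {off_diag.std():.3f}, compare 1/sqrt(d) = {d ** -0.5:.3f}")
```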

Mart_Korz · 10mo

Weirdly, in spaces of high dimension, almost all vectors are almost at right angles.

This part, I can imagine. With a fixed reference vector written as $e_1 = (1, 0, \dots, 0)$, a second random vector has many dimensions along which it can distribute its length, while only its first entry $x_1$ contributes to the alignment with the reference (the scalar product).

It's perfectly feasible for this space to represent zillions of concepts almost at right angles to each other.

This part I struggle with. Is there an intuitive argument for why this is possible?

If I assume a minimum pairwise angle of 60° or so instead of full orthogonality, a non-rigorous argument could be:

  • each vector blocks a 30°-circle around it on the d-hypersphere[1] (if the circles of two vectors touch, their relative angle is 60°).
  • an estimate for the blocked area could be that it is mostly a 'flat' $(d-1)$-sphere of radius $\sin(30^\circ) = 1/2$, which has an area that scales with $(1/2)^{d-1}$
  • the full hypersphere has a surface area with a similar pre-factor but full radius $1$, i.e. scaling as $1^{d-1}$
  • thus we can expect to fit a number of vectors $N$ that scales roughly like $N \sim 2^{d-1}$, which is exponential growth in $d$.

For a proof, one would need to include whether it is possible to tile the surface efficiently with these circles. This seems clearly true for tiny angles (we can stack spheres in approximately flat space just fine), but seems a lot less obvious for larger angles. For example, full orthogonality would mean 90° angles and my estimate would still give $N \sim \sqrt{2}^{\,d-1}$, an exponential estimate for the number of strictly orthogonal states although these are definitely not exponentially many (at most $d$).
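A small numerical version of this counting heuristic (my own sketch, not a proof): treating each vector as blocking a cap of angular radius $\theta$ = (minimum pairwise angle)/2, with area roughly proportional to $\sin(\theta)^{d-1}$, gives $N \approx (1/\sin\theta)^{d-1}$.

```python
# Heuristic cap-counting estimate: N ~ (1 / sin(theta))^(d-1),
# where theta is half the required minimum pairwise angle.
import math

def estimated_count(min_angle_deg: float, d: int) -> float:
    theta = math.radians(min_angle_deg / 2)
    return (1.0 / math.sin(theta)) ** (d - 1)

for d in (10, 100, 1000):
    print(f"d={d}: "
          f"60° apart ~ {estimated_count(60, d):.2e} vectors, "
          f"90° apart ~ {estimated_count(90, d):.2e} vectors")
```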


[1] and a copy of that circle on the opposite end of the sphere
