After having read a few GPT-3-generated texts, their kind of pattern-matching babbling really reminds me of what is described here as the apologist. Maybe the apologist part of the mind just does not do sufficiently model-based thinking to catch mistakes that are obvious to an explicitly model-based way of thinking ("revolutionary")?
It seems very plausible to me that there are both high-level model-based and model-free parts in the human mind. This would also match the seemingly obvious mistakes in the apologist's reasoning and explain why it is effectively impossible to get someone's apologist to realise their mistakes by talking to them (I would assume that for healthy people, model-based thinking does inform/override model-free thinking to a degree).
I really liked this question and the breadth of interesting answers.
I want to add a mechanism which might contribute to a weakening of institutions, related to the 'stronger memes' described by ete (I have not thought this out properly, but I am quite confident that I am pointing at something real, even if I might well be mistaken in many of the details):
In myself, and I think this is quite common, while considering my life/career options, I noticed an internal drive (think of the elephant from The Elephant in the Brain) that made me focus on the highest-prestige group that seemed like a viable option. A natural choice is an institution at the highest available power level/size.
I think that modern communication technologies are strong enough to capture that drive by giving (felt) access to the most prestigious groups from around the globe.
As a consequence, I expect that the emotionally impactful access to global culture/'tribes' decreases the felt importance of, and thus the effort put into local institutions, culture and tribes. (Related topics that come to mind would be the loss of spoken languages or local newspapers)
I am not sure whether my take on this is correct, so I'd be thankful if someone corrects me if I am wrong:
I think that if the goal were only 'predicting' this bit-sequence after knowing the sequence itself, one could just state probability 1 for the known sequence.
In the OP, instead, we regard the bit-sequence as stemming from some sequence-generator of which only this part of the output is known. Here we only have limited data, so the cost of singling out a highly complex model from model-space has to be weighed against the model's fit to the bit-sequence.
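This trade-off can be sketched numerically. Below is a toy construction of my own (not from the OP): each candidate model gets a Solomonoff-style prior proportional to 2^(-description length), and the posterior weighs that prior against how well the model predicts the observed bits. The specific models and complexity values are illustrative guesses.

```python
import math

# Toy illustration: weigh a model's description length against its fit
# to the observed bit-sequence, with prior ~ 2^(-complexity in bits).
bits = "0101010101"
n = len(bits)

# (name, rough description length in bits, P(sequence | model))
models = [
    ("memorized constant", n, 1.0),        # hard-codes the 10 observed bits
    ("alternating 01",     2, 1.0),        # tiny program, predicts exactly
    ("fair coin",          1, 2.0 ** -n),  # tiny program, but poor fit
]

# unnormalized posterior score: prior * likelihood
scores = {name: 2.0 ** -k * lik for name, k, lik in models}
total = sum(scores.values())
posterior = {name: s / total for name, s in scores.items()}

best = max(posterior, key=posterior.get)
print(best)  # "alternating 01": simple *and* fits perfectly
```

The memorizing model fits perfectly too, but pays for its complexity; the fair coin is simple but fits badly. Only the model that is both simple and predictive ends up with almost all of the posterior mass.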
Thanks for sharing!
There seems to be a typo ('k4rss' compared to 'krss') in the link to your blog post introducing kindle4rss.
I'm glad if this was helpful.
I was also surprised to learn about this formalism at my university, as it wasn't mentioned in either the introductory or the advanced lecture on QM, but it turns out to be very helpful for understanding how/when classical mechanics can be a good approximation in a QM universe.
I would need to think about this more to be sure, but from my first read it seems as if your idea can be mapped to decoherence.
The maths you are using looks a bit different from what I am used to, but I am somewhat confident that your uncalibrated experiment is equivalent to a suitably defined decohering quantum channel. The amplitudes that you are calculating would be transition amplitudes from the prepared initial state to the measured final state (denoting the initial state as |i>, the final state as |f> and the time evolution operator as U, your amplitudes would be <f|U|i> in the notation of the linked Wikipedia article). The go-to method for describing statistical mixtures over quantum states or transition amplitudes is to switch from wave-functions and operators to density matrices and quantum channels (physics lectures on open quantum systems or quantum computing introduce these concepts). These should be equivalent to (more accurately: a super-set of) your averaging over s and t for the uncalibrated experiment: one can define a time evolution operator for fixed values of s and t and then obtain the corresponding channel by taking the probability-weighted integral (compare the operator-sum representation in the Wikipedia article).
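To make the averaging step concrete, here is a minimal numerical sketch of my own (not your exact setup): a single qubit, a unitary U(s) depending on one unknown phase parameter s, and the channel obtained by averaging U(s) rho U(s)^dagger over s. The choice of state and unitary is purely illustrative.

```python
import numpy as np

# Prepared initial state |i> = (|0> + |1>)/sqrt(2), as a density matrix.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def U(s):
    # Time evolution for a fixed value of the unknown phase parameter s.
    return np.diag([1.0, np.exp(1j * s)])

# Uncalibrated experiment: s uniform on [0, 2*pi), approximated by a grid.
# The probability-weighted average of U(s) rho U(s)^dagger is the channel's
# output (compare the operator-sum representation).
ss = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
rho_avg = sum(U(s) @ rho @ U(s).conj().T for s in ss) / len(ss)

print(np.round(rho_avg, 3))
# The off-diagonal terms average to ~0: the coherence is gone,
# leaving the classical mixture diag(1/2, 1/2).
```

The diagonal entries (the outcome statistics in the computational basis) survive the averaging, while the off-diagonal coherences are washed out, which is exactly the decohering behaviour described above.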
Regarding all the interesting aspects of the Born rule, I cannot contribute at the moment.
I’m just sick of struggling through life. The inefficiencies all around me are staggering and overwhelming.
Your mileage will vary, but a train of thought that helped me change my perspective on this (and I fully endorse this shift) was to realize that my emotions were ill-calibrated:
When I considered the state of the world, my emotional reaction was mostly negative, but when I tried to compare this reaction to a world in which earth is replaced by a lifeless rock, I realized that this would clearly not be an improvement. After contemplating this, I decided that my emotions were missing a huge chunk: the immense value of life on earth, which makes it reasonable to be pained by all the inadequacies in the first place. Since then, my emotional estimate of our world's value has climbed a lot, which makes seeing all the problems much more bearable. (This change in perspective was largely influenced by the Sequences and HPMOR, but I am not sure whether this train of thought was mentioned explicitly.)
Upvoted for thoroughly putting the idea into Less Wrong context - I enjoyed being reminded of all the related ideas.
A thought: I am a bit surprised that one can distil a single belief network explaining so much of the variance in beliefs across many people. This makes me take more seriously the idea that a large number of people regularly hold very similar beliefs (down to the argumentative structure). Remembering You Have About Five Words, this surprises me, as I would have expected less reliable transmission of beliefs. (It might well be that I am just misunderstanding something.)
Now reading the post for the second time, I again find it fascinating – and I think I can pinpoint my confusion more clearly now:
One aspect that sparks confusion when matched against my (mostly introspection- and lesswrong-reading-generated) model is the directedness of annealing: on the one hand, I do not see how the mechanism of free energy creates such a strong directedness as the OP describes with 'aesthetics'; on the other hand, if in my mind I replace the term "high-energy-state" with "currently-active-goal-function(s)", this becomes a shockingly strong model describing my introspective experiences (matching large parts of what I would usually think of roughly as 'System 1 thinking').

Also, the aspects of 'dissonance' and 'consonance' directly being unpleasant and pleasant feel more natural to me if I treat them as (possibly contradicting) goal functions that also synchronize the perception-, memorizing-, modelling- and execution-parts of the mind. A highly consonant goal function will allow for vibrant and detailed states of mind.

Is there some mechanism that would allow evolution to somewhat define the 'landscape' of harmonics? Is reframing the harmonics as goals compatible with the model? Something like this seems to be pointed at in the quote
Panksepp’s seven core drives (play, panic/grief, fear, rage, seeking, lust, care) might be a decent first-pass approximation for the attractors in this system.
---

Another aspect where my current model differs is that I do not identify consciousness (at least the part that creates the feeling of pleasure/suffering and the explicit feeling of 'self') as part of this goal-setting mechanism. In my model, the part of the mind that generates the feeling of pleasure or suffering is more of a local system (plus complications*) that takes the global state as model- and goal-input and tries to derive strategies from it. In my model, this part of the mind is what usually identifies as 'self', and it is this that is most relevant for depression or schizophrenia. But as what I describe as 'model- and goal-input' really defines the world and the goals that the 'self' sees and pursues at each moment (sudden changes can be very disconcerting experiences), the implications of annealing for health would stay similar.

---

After writing all of this, I can finally address the question of the parent comment:
Are your previous models single or multi-agent?
I very much like the multiagent-model sequence, although I am not sure how well my "Another aspect [...]" description matches: on the one hand, my model does have a privileged 'self'-system that is much less fragmented than the goal-function-landscape. On the other hand, the goal-function-landscape seems best described by "shards of desire" (a formulation used in the Sequences, if I remember correctly), and they can direct and override the self easily. This part fits well with the multiagent model.

---

*) A complication is that the 'self' can also endorse/reject goals and redirect 'active goal-energy' onto the goal-setting parts themselves in order to shape them (it feels like a kind of delegable voting power that the self, as strategy-expert, can use if it has gained the trust and thus the voting power of the goal-setting parts).
I am very much impressed by the exchange in the parent comments and cannot upvote sufficiently.
With regards to the 'mental motion':
In contrast, the model description you gave made it sound like craving was an active process that one could simply refrain from [...]
As I see it, the perspective of this (sometimes) being an active process makes sense from the global workspace theory perspective: there is a part of one's mind that actually decides whether or not to activate craving. It is possible (especially if trained through meditation) to connect this part to the global workspace and thus to consciousness, which allows noticing and influencing the decision. If this connection is strong enough and can be activated consciously, it can make sense to call this process a mental motion.