God in the Loop: How a Causal Loop Could Shape Existence
If you do travel back to the past though, you may find yourself travelling along a different timeline after that

No, that's not how it works. That's not how any of this works. If you are embedded in a CTC, there is no changing that. There is no escaping the Groundhog Day loop, or even realizing that you are stuck in one. You are not Bill Murray, you are an NPC.

And yes, our universe is definitely not a Gödel universe in any way. The Gödel universe is homogeneous, stationary and rotating, while ours is of the FRW-de Sitter type, as best we can tell.
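For reference, the standard form of the Gödel line element from the GR literature (here ω is the rotation rate; signs follow the mostly-plus signature convention):

```latex
ds^2 = \frac{1}{2\omega^2}\left[ -\left(dt + e^{x}\,dz\right)^2 + dx^2 + dy^2 + \tfrac{1}{2}\,e^{2x}\,dz^2 \right]
```

The cross term $e^x\,dt\,dz$ encodes the global rotation that makes the closed timelike curves possible; nothing like it appears in the FRW-de Sitter metrics that fit our observations.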

More generally, what matters is knowledge about the system (memory), together with the ability to act on it to rearrange information. In fact, an agent with perfect knowledge of a system can rearrange it in any way it desires.

Indeed, but it would not be an embedded agent; it would be something from outside the Universe, at which point you might as well say "God/Simulator/AGI did it" and give up.

if we assume our universe is a causal loop, but it is not a CTC

That is incompatible with classical GR, as best I can glean. The philosophy paper is behind a paywall (boo!), and it is by a philosopher, not a physicist, apparently, so it can be safely discounted (this attitude goes both ways, of course).

From that point on in your post, it looks like you are basically throwing **** against the wall and seeing what sticks, so I stopped trying to understand your logic.

Life doesn’t just veer off the rails into oblivion; it’s locked on a path, or lots of equivalent paths that are all destined to tell the same story — the same universal archetype. The loop cannot be broken, else it would have never existed. Life is bound to persist, bound to overcome, bound to exist again

To quote the classic movie, "Life, uh, finds a way". Which is a nice and warm sentiment, but nothing more.

But, if your goal is a search for God, then 10/10 for rationalization.

Egan's Theorem?
When physicists were figuring out quantum mechanics, one of the major constraints was that it had to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well - i.e. most of the macroscopic world.

Well, that's false. The details of the quantum-to-classical transition are very much an open problem. Something happens after the decoherence process removes the off-diagonal elements from the density matrix and before only a single eigenvalue remains: the mysterious projection postulate. We have no idea at what scales it becomes important, or in what way. The original goal was to explain new observations, certainly. But it was not "to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well".
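To make the gap concrete, here is a toy numerical sketch (my own illustration, not from the post) of pure dephasing on a single qubit: decoherence kills the off-diagonal elements of the density matrix, but it leaves a classical 50/50 mixture, not a single outcome — the step from mixture to one eigenvalue is exactly what the projection postulate papers over.

```python
import numpy as np

# Toy qubit in an equal superposition: rho = |+><+|
rho0 = np.array([[0.5, 0.5],
                 [0.5, 0.5]], dtype=complex)

def dephase(rho, gamma, t):
    """Pure dephasing in the computational basis:
    diagonal entries untouched, off-diagonals decay as exp(-gamma*t)."""
    decay = np.exp(-gamma * t)
    out = rho.copy()
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

rho_late = dephase(rho0, gamma=1.0, t=20.0)
# Off-diagonals are ~0: interference is gone, but the state is still a
# classical 50/50 mixture. Decoherence alone never selects one outcome;
# that further step (the projection postulate) remains unexplained.
```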

Your other examples are more in line with what was actually going on, such as

for special and general relativity - they had to reproduce Galilean relativity and Newtonian gravity, respectively, in the parameter ranges where those were known to work

That program worked out really well. But that is not a universal pattern by any means. Sometimes new models don't work in the old areas at all: free-will or consciousness models do not reproduce physics, or vice versa.

The way I understand the "it all adds up to normality" maxim (not a law or a theorem by any means) is that new models do not make your old models obsolete where the old models worked well, nothing more.

I have trouble understanding what you want from what you dubbed Egan's theorem. In one of the comment replies you suggested that the same set of observations could be modeled by two different models, and that there should be a morphism between the two, either directly or through a third model that is more "accurate" or "powerful" in some sense than the other two. If I knew enough category theory, I could probably express this as a commuting diagram, but alas. Then again, maybe I misunderstand your intent.
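Here is my reading of that suggestion as a toy sketch (all names and the Celsius/Fahrenheit example are mine, purely for illustration): two "models" with different internal states predict the same observations, and the morphism between their states makes the square commute.

```python
# Two models of the same observations, with different internal state spaces.
# Model A's state is a temperature in Celsius; Model B's is in Fahrenheit.

def observe_a(c):            # observation predicted by model A
    return "freezing" if c <= 0 else "not freezing"

def observe_b(f):            # observation predicted by model B
    return "freezing" if f <= 32 else "not freezing"

def morphism(c):             # candidate structure-preserving map A -> B
    return c * 9 / 5 + 32

# The diagram commutes: observing in A equals mapping to B, then observing.
for c in [-10, 0, 25]:
    assert observe_a(c) == observe_b(morphism(c))
```

The hoped-for "theorem" would then be some guarantee that such a morphism exists whenever the two models agree on all observations — which is the part I don't see how to state precisely.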

Gems from the Wiki: Acausal Trade

I was trying to understand the point of this, and it looks like it is summed up in

Which algorithm should an agent have to get the best expected value, summing across all possible environments weighted by their probability? The possible environments include those in which threats and promises have been made.

Isn't this just your basic max-EV, which is at the core of all decision theories and game theories? The "acausal" part is using the intentional stance to model the parts of the universe that are not directly observable, right?
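The quoted criterion is, as far as I can tell, just this (a minimal sketch; the environment names, probabilities, and payoffs are all made up for illustration):

```python
# "Which algorithm gets the best expected value, summing across all
# possible environments weighted by their probability?" -- i.e. max-EV.
environments = {               # environment -> probability
    "threat_made":  0.2,
    "promise_made": 0.3,
    "neutral":      0.5,
}

payoff = {                     # payoff[algorithm][environment]
    "always_defect":     {"threat_made": -5, "promise_made": 0, "neutral": 2},
    "honor_commitments": {"threat_made":  1, "promise_made": 3, "neutral": 1},
}

def expected_value(algo):
    return sum(p * payoff[algo][env] for env, p in environments.items())

best = max(payoff, key=expected_value)   # -> "honor_commitments" here
```

Nothing in the arithmetic is "acausal"; the novelty, such as it is, lives entirely in how the environment probabilities and payoffs are modeled.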

What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers
I think the most plausible explanation is that scientists don't read the papers they cite

Indeed. Reading the abstract and skimming the intro and discussion is as far as it goes in most cases. Sometimes the title alone is enough to trigger a citation. Often it's "re-citing": copying the references from someone else's paper on the topic. My guess is that maybe 5% of the references in a given paper have actually been read by the authors.

‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin)

Is there a more standard terminology in psychology for this phenomenon? "Ugh field" feels LW-cultish.

Leslie's Firing Squad Can't Save The Fine-Tuning Argument
So unless you are willing to commit that not only there is no reliable way to assign a prior, but also assigning a probability in this situation is invalid in itself

Indeed. If you have no way to assign a prior, probability is meaningless. And if you try, you end up with something as ridiculous as the Doomsday argument.

Leslie's Firing Squad Can't Save The Fine-Tuning Argument

Note that speaking of probabilities only makes sense if you start with a probability distribution over outcomes.

In the firing squad setup, the a priori probability distribution is something like 99% dead vs 1% alive without a collusion to miss, and probably the opposite with a collusion to miss. So the Bayesian update gives a high probability of collusion given survival. This matches the argument you presented here.
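Spelling the update out with the numbers above (the 10% collusion prior is my illustrative assumption; the conclusion holds for any non-tiny prior):

```python
# Likelihoods from the setup: 99% dead vs 1% alive without collusion,
# and roughly the opposite with collusion. The prior is assumed.
p_alive_given_no_collusion = 0.01
p_alive_given_collusion    = 0.99
p_collusion_prior          = 0.10   # illustrative assumption

# Bayes' rule: P(collusion | alive)
num = p_collusion_prior * p_alive_given_collusion
den = num + (1 - p_collusion_prior) * p_alive_given_no_collusion
p_collusion_given_alive = num / den   # ~0.92: survival strongly favors collusion
```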

In the fine-tuning argument we have no reliable way to construct an a priori probability distribution. We don't know enough physics to even guess reliably. Maybe it's a uniform distribution over some "fundamental" constants. Maybe it's normal or log-normal. Maybe it's not a distribution over the constants at all, but something completely different. Maybe it's Knightian. Maybe it's the intelligent designer/simulator. There is no hint from quantum mechanics, relativity, string theory, loop quantum gravity or any other source. There is only this one universe we observe, that's it. Thus we cannot use Bayesian updating to make any useful conclusions, whether about fine tuning or anything else. Whether this matches your argument, I am not clear.
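A quick toy calculation (all ranges and windows are made up; the point is the prior-dependence, not the physics): the "probability" that a constant lands in a narrow life-permitting window changes by a large factor depending on which equally defensible prior you pick.

```python
import math

# A dimensionless "constant" lives somewhere in [1e-3, 1e3];
# the life-permitting window is [1.0, 1.1]. Both are illustrative.
lo, hi = 1e-3, 1e3
win_lo, win_hi = 1.0, 1.1

# Uniform prior over [lo, hi]
p_uniform = (win_hi - win_lo) / (hi - lo)                      # ~1e-4

# Log-uniform prior over [lo, hi]
p_loguniform = (math.log(win_hi) - math.log(win_lo)) / \
               (math.log(hi) - math.log(lo))                   # ~7e-3

# Two "reasonable" priors disagree by roughly a factor of 70, so the claim
# "the constants are improbably fine-tuned" carries no prior-free meaning.
```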

The ethics of breeding to kill

If you talk to a real vegan, their ethical argument will likely be "do not create animals in order to kill and eat them later", period. Any discussion of the quality of life of the farm animal is rather secondary. This is your second argument, basically. The justification is not based on what the animals feel, or on their quality of life, but on what it means to be a moral human being, which is not a utilitarian approach at all. So, none of your utilitarian arguments are likely to have much effect on an ethical vegan. Note that rationalist utilitarian people here are not too far from that vegan, or at least that's my conclusion from the comments to my post Wirehead Your Chickens.

Why is Bayesianism important for rationality?


a kind of group mind that is created when people consciously come together for a common purpose

(Not speaking for Eliezer, obviously.) "Carefully adjusting one's model of the world based on new observations" seems like the core idea behind Bayesianism in all its incarnations, and I'm not sure there is much more to it than that. The stronger the evidence, the more significant the update, yada-yada. It seems important to rational thinking because we all tend to fall into the trap of either ignoring evidence we don't like or being overly gullible when something sounds impressive. Not that it helps a lot; way too many "rationalists" uncritically accept the local egregores and defend them like a religion. But allegiance to an ingroup is emotionally stronger than logic, so we sometimes confuse rationality with rationalization. Still, relative to many other ingroups this one is not bad, so maybe Bayesianism does its thing.
