
shminux's Comments

Book review: Rethinking Consciousness

Actually, superdeterminism models allow for both to be true; it is a different assumption that breaks.

"How quickly can you get this done?" (estimating workload)

The standard process is scope -> effort -> schedule. Estimate the scope of the feature or fix required (usually by defining requirements, writing test cases, listing impacted components, etc.), and correct for underestimating based on past experience. Next, evaluate the effort required, again based on similar past efforts by the same team/person. Then, and only then, can you figure out the duty cycle for this project and estimate accordingly. Then double it, because even the best people suck at estimating. Then give the range as your answer if someone presses you on it: "This will be between 2 and 4 weeks, given these assumptions. I will provide updated estimates 1 week into the project."
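To make the arithmetic concrete, here is a toy sketch; the correction factor and duty cycle below are illustrative placeholders, not recommendations, and should be calibrated from your own team's history.

```python
# Toy sketch of the scope -> effort -> schedule arithmetic.
# All numeric factors are illustrative; calibrate them from past projects.

def estimate_schedule(raw_effort_days: float,
                      underestimate_factor: float = 1.5,  # how optimistic past estimates were
                      duty_cycle: float = 0.5) -> tuple[float, float]:
    """Return a (low, high) range of calendar days for the project."""
    effort = raw_effort_days * underestimate_factor  # correct for known optimism
    calendar = effort / duty_cycle                   # convert effort into elapsed time
    return calendar, calendar * 2                    # "then double it" for the upper bound

low, high = estimate_schedule(raw_effort_days=5)
print(f"This will take between {low:.0f} and {high:.0f} days, given these assumptions.")
```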

Book review: Rethinking Consciousness

Not surprisingly, I have a few issues with your chain of reasoning.

1. I exist. (Cogito, ergo sum). I'm a thinking, conscious entity that experiences existence at this specific point in time in the multiverse.

Cogito is an observation. I am not arguing with that one. Ergo sum is an assumption, a model. The "multiverse" thing is a speculation.

Our understanding of physics is that there is no fundamental thing that we can reduce conscious experience down to. We're all just quarks and leptons interacting.

This is very much simplified. Sure, we can do reduction, but that doesn't mean we can do synthesis; there is no guarantee that synthesis is even possible. In fact, there are mathematical examples where synthesis might not be possible, simply because the relevant equations cannot be solved uniquely. I made a related point here. Here is an example: consciousness can potentially be reduced to atoms, but it may also be reduced to bits, a rather different substrate. Maybe other reductions are possible as well.
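For concreteness, a standard textbook example of non-uniqueness (my illustration, not from the linked comment): the initial value problem

$$\frac{dy}{dt} = y^{2/3}, \qquad y(0) = 0$$

is solved both by $y(t) \equiv 0$ and by $y(t) = (t/3)^3$, as well as by any hybrid that sits at zero until some time $t_0$ and then follows the cubic. Knowing the equation and the initial data does not pin down a unique history.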

And it is also possible that constructing consciousness out of quarks and leptons is impossible because of "hard emergence" of the sorites kind. There is no atom of water: a handful of H2O molecules cannot be described as a solid, liquid, or gas, and a snowflake requires trillions of trillions of H2O molecules together. There is no "snowflakiness" in a single molecule, just like there is no consciousness in an elementary particle. There is no evidence for panpsychism, and plenty against it.

Reality-Revealing and Reality-Masking Puzzles
“Getting out of bed in the morning” and “caring about one’s friends” turn out to be useful for more reasons than Jehovah—but their derivation in the mind of that person was entangled with Jehovah.

Cf: "Learning rationality" and "Hanging out with like-minded people" turn out to be useful for more reasons than AI risk -- but their derivation in the mind of CFAR staff is entangled with AI risk.

Predictors exist: CDT going bonkers... forever

That... doesn't seem like a self-consistent decision theory at all. I wonder if any CDT proponents agree with your characterization.

Is backwards causation necessarily absurd?
causation might be in the map rather than the territory

Of course it is. There is no atom of causation anywhere. It's a tool for embedded agents to construct useful models in an internally partially predictable universe.

"Backward causation" may or may not be a useful model at times, but it is certainly nothing but a model.

As a trained (though not practicing) physicist, I can see that you are making a large category error here. Relativity neither adds to nor subtracts from the causation models. In a deterministic Newtonian universe you can imagine backward causation as a useful tool. Sadly, its usefulness is rather limited. For example, the diffusion/heat equation is not well posed when run backwards: it blows up after a finite integration time. An intuitive way to see that is that you cannot reconstruct the shape of a glass of water from the puddle you see on the ground some time after it was spilled. But in cases where the relevant PDEs are well posed in both time directions, backward causality is equivalent to forward causality, if not computationally, then at least in principle.
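A minimal sketch of why the backward heat equation is ill posed, using the standard Fourier-mode argument: for

$$u_t = \kappa\, u_{xx}, \qquad u(x,t) = \sum_k a_k\, e^{-\kappa k^2 t} \sin(kx),$$

each mode decays like $e^{-\kappa k^2 t}$ going forward, so running the equation backward amplifies a mode of wavenumber $k$ by $e^{\kappa k^2 |t|}$. Arbitrarily small high-$k$ errors in the final state grow without bound, and for generic data the backward solution ceases to exist after a finite time.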

All that special relativity gives you is that the absolute temporal order of events is only defined when they are within a lightcone, not outside of it. General relativity gives you both less and more. On the one hand, the Hilbert action is formulated without referring to time evolution at all and poses no restriction on the type of matter sources, be they positive or negative density, subluminal or superluminal, finite or singular. On the other hand, to calculate most interesting things, one needs to solve the initial value problem, and that one poses various restrictions on what topologies and matter sources one can start with. On the third hand, there is a lot of freedom to define what constitutes "now", as many different spacetime foliations are on equal footing.
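For reference, the Hilbert action mentioned above, in its standard form (units with $c = 1$):

$$S = \frac{1}{16\pi G}\int R\, \sqrt{-g}\; d^4x \;+\; S_{\text{matter}},$$

with no preferred time coordinate anywhere in it; a split into space and time only appears once one sets up the initial value problem (e.g. in the ADM 3+1 formulation).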

If you add quantum mechanics to the mix, the Born rule, needed to calculate anything useful regardless of one's favorite interpretation, breaks linearity and unitarity at the moment of interaction (loosely speaking) and is not time-reversal invariant.
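For reference: measuring an observable with eigenstates $|a_i\rangle$ on a state $|\psi\rangle$ yields outcome $a_i$ with probability

$$P(a_i) = |\langle a_i | \psi \rangle|^2,$$

after which the state is updated to $|a_i\rangle$. The squared modulus is not linear in $|\psi\rangle$, and the update is a many-to-one projection rather than a unitary map, which is the sense in which linearity, unitarity, and time-reversal invariance break at the moment of interaction.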

The entropic argument is also without merit: there is no reason to believe that entropy would decrease in a "high-entropy world", whatever that might mean. We do not even know to what degree entropy is observer-independent (Jaynes argued that apparent entropy depends on the observer's knowledge of the world).
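For reference, the Gibbs/Shannon form Jaynes had in mind:

$$S = -k_B \sum_i p_i \ln p_i,$$

where the $p_i$ are the probabilities an observer assigns to microstates given what they know about the system; a better-informed observer assigns sharper $p_i$ and hence a lower entropy to the very same physical situation.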

Basically, you are confusing map and territory. If backward causality helps you make more accurate maps, go wild, just don't claim that you are doing anything other than constructing models.

Predictors exist: CDT going bonkers... forever
Omega will predict their action, and compare this to their actual action. If the two match...

For a perfect predictor the above simplifies to "lose 1 utility", of course. Are you saying that your interpretation of EDT would fight the hypothetical and refuse to admit that perfect predictors can be imagined?

Realism about rationality
It seems almost tautologically true that you can't accurately predict what an agent will do without actually running the agent. Because, any algorithm that accurately predicts an agent can itself be regarded as an instance of the same agent.

That seems manifestly false. You can figure out whether an algorithm halts or not without accidentally getting stuck in an infinite loop yourself. You can look at the recursive Fibonacci algorithm and figure out what it would do without ever running it. So there is a clear distinction between analyzing an algorithm and executing it. If anything, one would know more about the agent by using techniques from the analysis of algorithms than the agent would ever know about itself.
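A minimal illustration of the analysis-vs-execution distinction (my example, using the standard closed form for Fibonacci):

```python
import math

def fib_recursive(n: int) -> int:
    """The algorithm under analysis; never actually called below."""
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_analyzed(n: int) -> int:
    """Binet's closed form, derived by analyzing the recurrence, not by running it."""
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))

# Analysis predicts the algorithm's outputs without executing the recursion:
print([fib_analyzed(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

(The closed form is exact for small $n$; floating point eventually limits it, but the point stands: the prediction never runs the algorithm.)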

Moral uncertainty: What kind of 'should' is involved?

30 seconds of googling gave me this link, which might not be anything exceptional, but at least it offers a couple of relevant definitions:

what should I do, given that I don’t know what I should do?

and

what should I do when I don’t know what I should do?

and later a more focused question

what am I (or we) permitted to do, given that I (or we) don’t know what I (or we) are permitted to do

At least they define what they are working on...

Moral uncertainty: What kind of 'should' is involved?
What do we mean by “moral uncertainty”?

I was looking for a sentence like "We define moral uncertainty as ..." and nothing came up. Did I miss something?
