A philosophical kernel: biting analytic bullets

by jessicata
15th Aug 2025
Linkpost from unstableontology.com
16 min read

21 comments

TAG · 1mo

free will

A basic argument against free will: free will for an agent implies that the agent could have done something else

There is more than one definition of free will. According to compatibilism, the agent only needs to be generally capable, and uncoerced.

If an agent is identified with a particular physical configuration, then, given the same physics / inputs / stochastic bits (which can be modeled as non-temporal extra parameters, per previous discussion), there is only one possible action

If you think of stochastic bits as precalculated, but hidden until they are needed, that may be the case. Under the more conventional view, where indeterministic values occur as needed, "on the fly", rather than being retrieved, it's clearly the case that alternative possibilities can exist: the outcome of a quantum coin toss is unknowable, even to Laplace's demon.

we can move any stochasticity into independent random variables, and have everything be a deterministic consequence of those.

But that's not an accurate, lossless alternative model of indeterminism, because, according to it, "everything is deterministic"!

The precalculated "stochastic" variables thing, and the on-the-fly calls to the universe's rand() aren't the same thing, because they have different ontological implications.

Why think they are the same?

I would guess that the standard rationalist answer is "they are indistinguishable empirically". But rationalism lacks a proof that unempirical questions are unanswerable or meaningless (unlike logical positivism... but LP is explicitly rejected).

Note that compatibilism and naturalistic libertarian free will are both viable given our present state of knowledge... so there is no necessity to adopt anti-realism.

Note also that we still need a notion of moral responsibility for practical purposes.

causal counterfactuals

Causal counterfactuals have theoretical problems, such as implying violations of physical law, hence being un-determined by empirical science (as we can’t observe what happens when physical laws are violated).

So much for MWI, then... according to it, every world is counterfactual to every other.

How do you know counterfactuals require violations of physics itself? The possibility of something happening, that wasn't what happened, only requires (genuine) indeterminism, as above.

Basically, there are three levels:

*Counterfactuals are fully real but not "here". (MWI).

*Alternative outcomes were possible at the time, but didn't happen, and don't exist anywhere (single universe indeterminism, Copenhagen Interpretation).

*Only hypothetical counterfactuals. Single universe determinism (Superdeterministic QM, Newton).

You need to know how the universe works to settle these questions, you can't do it all with armchair reasoning.

And our ability to think about hypothetical and counterfactual situations doesn't require the violation of actual laws... we only need to use imaginary starting conditions as an input to our physical theories. And that's useful! You can hypothetically plan out a moon landing before you perform it for the first time. Sweeping rejection of counterfactuals is anti-science!

ETA

Laws of physics: universal satisfaction

I mostly find this unclear.

Do the laws have “additional reality” beyond universal satisfaction?

What is universal satisfaction? Do you mean the law was followed in the past, or that it will be in the future as well?

I can see how universal satisfaction in the second sense would imply the truth of a law, epistemically, but you said it implied the reality...

I don't see that anything about the ontological nature of a physical law has been asserted.

Decision theory: non-realism

We reject causal decision theory (CDT), because it relies on causal counterfactuals

In what sense of causal counterfactual? Real ones? Why would a DT require you to do anything more than consider hypotheses?

We reject any theory of “logical counterfactuals”, because the counterfactual must be illogical, contradicting modal logics such as S4

That seems to go back to your previous argument, that, if you recast indeterministic arguments in quasi-deterministic form, then everything is determined. According to the new model... but not the original.

It's typical for rationalists to argue one (hopefully) universal decision theory against another. Fifteen or twenty years on, the problem remains unresolved. There's a problem behind that problem: whether you can arrive at a one-size-fits-all DT without knowing what kind of universe you are in.

Time: eternalism

Eternalism says the future exists, as the past and present do. This is fairly natural from the DAG factorization notion of causality. As there are multiple topological sorts of a given DAG, and multiple DAGs consistent with the same joint distribution, there isn’t an obvious way to separate the present from the past and future;

There's an obvious empirical difference: the future, for you, is what you haven't seen yet.

There’s a possible complication, in that our DAG factorization can be stochastic

Block universes can embrace unpredictability very easily. Too easily, since it's difficult to predict the existence of any kind of lawfulness/predictability/compressibility from the block universe premise alone. The reason for that is that BUT (block universe theory) says that time works just like space. However, since there is no reason to expect a slice of an object along the X axis to predict a further slice, there is no reason to think that a slice along the T axis will be able to predict anything, on the premise that time is just like space.

Block universes are, of course, deterministic in a sense, since the future is "already there", waiting to happen. The thing is that they are also not causally deterministic, since everything is "already there" and therefore doesn't need to be made to happen.

And you can't infer BUT from predictability, because it doesn't imply it.

Moral Realism

Moral realism implies that moral facts exist, but where would they exist?

Realism and anti-realism are both ranges of claims, and instead of being starkly opposed, they are capable of meeting in the middle.

Moral realism is, minimally, the theory that moral propositions have truth values. It doesn't necessarily require the existence of a special domain of objects to serve as truth makers, since correspondence to a state of affairs isn't the only theory of truth.

The apparent requirement for supernatural entities is a common reason to reject MR, but many naturalist theories of realism are available: e.g. evolutionary ethics, contractarianism, Kantian ethics, and game-theoretic ethics. Maybe utilitarianism as well.

Where do truths about how to do things well -- build bridges, or play chess -- reside? They are not in some inaccessible realm. But they don't stand in one-to-one correspondence with basic physical facts either. They are derived from multitudes of physical facts, plus some abstract rules. They stand in a one-to-many relationship with basic physical facts.

What's that got to do with morality? For one thing, it shows that the is-ought divide is bridgeable. For another, it shows that there is a middle way between anti-realism and Platonism.

Anti-realism is not a theory, but a collection of theories. It includes error theory and non-cognitivism, both of which imply ethical questions don't have answers. Anti-realism also includes subjectivism and emotivism, which supply far too many answers, potentially one per person. We need morality to justify practices, objective actions that either happen or not, like sending people to jail, and starting wars -- and a Babel of conflicting opinion can't provide the justification. Nihilism and subjectivism are profoundly un-useful.

So, if Moral Realism does not exist, it would be necessary to invent it, to construct a system of ethics that is as close to realism as possible... and perhaps we have.

Naturalistic ethics can still be objected to on the grounds of the is-ought gap... but I will argue that there is no such thing. How a thing should be done is a matter of how to do it well. It is possible to gather bodies of theoretical and practical information on how to do something -- build a bridge, or play chess -- well. Such methodological knowledge is conditional: if you want to achieve X, you should do Y. So if we want to apply it to ethics, we need to figure out what ethics is for, what its purpose is.

This we can do. Ethics is social. If you are all alone on a desert island, there is nobody to steal from or kill. Ethics fulfils a role in society, and originated as a mutually beneficial way of regulating individual actions to minimise conflict, conserve resources, and solve coordination problems.

There are many possible minds (consider the space of AGI programs), and they could find different things compelling

Why does that matter? There might be minds that think 2+2=5, but they are wrong.

If you are trying to make a point about truth, you need to specify rational minds. (In fact everyone who makes this kind of point, except Yudkowsky, does).

If you are making a point about compulsion, the problem has been solved: those who are not internally compelled are externally compelled by threats and rewards.

In addition, the discussion of free will and decision theory shows that there are problems with formulating possibility and intentional action. If, as Kant says, “ought implies can”, then contrapositively “not can implies not ought”; if modal analysis shows that alternative actions for a given agent are not possible, then no alternative actions can be “ought”. (Alternatively, if modal possibility is unreal, then “ought implies can” is confused to begin with).

Compatibilism has its own, internally consistent, senses of "can" and "free". "Free" is "not under compulsion", and "can" is "generally possible for the type of which the subject is a token": pigeons can fly, penguins can't.

Theory of mind: epistemic reductive physicalism

Chalmers claims that mental properties are “further facts” on top of physical properties, based on the zombie argument: it is conceivable that a universe physically identical to ours could exist, but with no consciousness in it.

No, just phenomenal consciousness. It's possible to combine identity theory about Easy Problem consciousness, with eliminativism or dualism about Hard Problem consciousness (phenomenal consciousness, qualia).

Ontological minimality suggests not believing in these “further facts”,

Explanations need to be as simple as possible, but no simpler. As simple as possible means the minimum to explain the facts. We don't have an explanation of phenomenal consciousness in physical terms, so adding further facts to explain it is justifiable.

especially given how dubious theories of consciousness tend to be. This seems a lot like eliminativism.

If it's eliminativism, why call it reductionism? They are not the same.

People believe in phenomenal consciousness because there is evidence for it, and believe in "further facts" -- dualism -- because there is no reductive explanation of it. (The zombie argument isn't the only argument against physicalism.) So you are not in a position to believe in the reductive theory of phenomenal consciousness, here and now, because there isn't one. You could take the view that a reductive theory will be found one day through normal science... and that would be promissory materialism. Or you could selectively eliminate phenomenal consciousness... which leaves you needing to explain where the apparent evidence comes from, i.e. solve the meta-problem.

We don’t need to discard all mental concepts, though.

Reductionism doesn't require you to discard any.

Personal identity: empty individualism, similarity as successor

If a machine scans you and makes a nearly-exact physical copy elsewhere, is that copy also you? Paradoxes of personal identity abound. Whether that copy is “really you” seems like a non-question; if it had an answer, where would that answer be located?

In physics, metaphysics, social construction, etc. If physics can undermine identity claims, it can also support them.

What’s fairly simple and predictive to say above X=X is that a near-exact copy of you is similar to you

That just isn't the same question.

Suppose you found out you had an identical twin. You would not consider them to be you yourself. Maximal similarity is not numerical identity.

The major problem is that you and your duplicate exist simultaneously in different places, which goes against the intuition that you are a unique individual. Mere similarity explains how you could be the same individual as your non-identical past self, but if it undermines your uniqueness, it is not a net gain.

Basing identity on material continuity (possible given materialism/reductionism) avoids the problem of "my clone is me, I'm in two places at once", but makes it harder to understand how you could be the same individual as your non-identical past self. However, similarity and continuity are not entirely exclusive, so a hybrid answer is possible.

Anthropic probability: non-realism, graph structure as successor

In the Sleeping Beauty problem, is the correct probability ½ or ⅓?

Is there a single correct probability?

Formalism

Formalism consists of several claims: that mathematical objects have no real existence, that truth is proof, and that there is no single set of mathematical truths.

Formalism is suggestive of finitism and intuitionism

How?

None of the claims above imply that, and these are usually classed as different philosophies. Indeed, finitism is often motivated by the idea that maths is about the physical world.

(I am generally favourable to formalism, but I find some of the subsidiary claims confusing).

Your Conclusion

Treat models as mathematical tools for describing the world’s structure, not as windows onto modal or metaphysical realms.

You are not consistently avoiding metaphysical claims, because you are siding with determinism and against reductionism. (And against the most innocent forms of modality!)

My Conclusion

Whether it's a good thing or bad, we are not in a position to do ontological minimalism with any precision, because we don't know enough about the territory. Also, filling in knowledge gaps with intuitions and guesswork amounts to confirmation bias.

jessicata · 1mo

The precalculated "stochastic" variables thing, and the on-the-fly calls to the universe's rand() aren't the same thing, because they have different ontological implications.

Yeah they can be distinguished ontologically. Although there are going to be multiple Bayes nets expressing the same joint distribution. So it's not like there's going to be a canonical ordering.

I would guess that the standard rationalist answer is "they are indistinguishable empirically". But rationalism lacks a proof that unempirical questions are unanswerable or meaningless (unlike logical positivism... but LP is explicitly rejected).

I get that active dis-belief in further facts (such as counterfactuals) can be dogmatic. Rather, it's more of a case of, we can get an adequate empirical account without them, and adding them has problems (like causal counterfactuals implying violations of physical law).

Part of where I'm coming from with this is a Chalmers-like framework. Suppose there are 2 possible universes; they have the same joint distribution, but different causal ordering. Like maybe in one the stochasticity is on the fly, in the other it's pre-computed. They imply the same joint distribution and the same set of "straightforward" physical facts (particle trajectories and so on). Yet there is a distinction, a further fact.

In which case... The agents in these universes can't have epistemic access to these further facts, it's similar to with the zombie argument. A simple approach is "no further facts", although assuming this is literally the case might be dogmatic. It's more like, don't believe in further facts prior to a good/convincing account of them, where the ontological complexity is actually worth it.

Note that compatibilism and naturalistic libertarian free will are both viable given our present state of knowledge... so there is no necessity to adopt anti-realism.

Well it's more like, most specific theories of these have problems. Like, the counterfactuals being really weird, corresponding to bad decision theories, etc. And it seems simpler to say, the counterfactuals don't exist? Even if assigning high probability to it is dogmatic.

So much for MWI, then... according to it, every world is counterfactual to every other.

If instead of QM our best physics said something like "there are true random coin flips" then it would be a bit of a stretch to posit a MWI-like theory there, that there exist other universes where the coin flips go differently. The case for MWI is somewhat more complex, it has to do with the Copenhagen interpretation being a lot more complicated than "here, have some stochastic coin flips".

How do you know counterfactuals require violations of physics itself? The possibility of something happening that wasn't what happened, only requires (genuine) indeterminism, as above.

Well we can disjunct on high or low universal K complexity. Assuming low universal K complexity, counterfactuals really do have problems, there are a lot of implications. Assuming high universal K complexity, I guess they're more well defined. Though you can't counterfact on just anything, you have to counterfact on a valid quantum event. So like, how many counterfactuals there are depends on the density of relevant quantum events to, say, a computer.

I guess you could make the case from QM that the classical trajectory has high K complexity, therefore counterfactual alternatives to the classical trajectory don't require physical law violations.

If not for QM though, our knowledge would be compatible with determinism / low K complexity of the classical trajectory, and it seems like a philosophy should be able to deal with that case (even if it empirically seems not to be the case).

You can hypothetically plan out a moon landing before you perform it for the first time.

Right so, counterfactual reasoning is practically useful, this is more about skepticism of the implied metaphysics. There might be translations like, observing that a deterministic system can be factored (multiple ways) as interacting systems with inputs/outputs, each factoring implying additional facts about the deterministic system. Without having to say that any of these factorings is correct in the sense of correctness about further facts.

TAG · 1mo

I get that active dis-belief in further facts (such as counterfactuals) can be dogmatic. Rather, it’s more of a case of, we can get an adequate empirical account without them, and adding them has problems (like causal counterfactuals implying violations of physical law).

As I have explained, that depends on how you conceive of both counterfactuals and physical laws. Physical laws can be deterministic or indeterministic.

Counterfactuals come in three strengths:

i) Counterfactuals are fully real but not "here".

ii) Alternative outcomes were possible at the time, but didn't happen, and don't exist anywhere.

iii) Only hypothetical counterfactuals are possible.

There are certainly some impossible combinations, such as deterministic laws and type ii counterfactuals... but there are plenty of allowed combinations as well. Notably, type iii counterfactuals have no metaphysical implications. So there is no sweeping argument against counterfactuals.

Part of where I’m coming with this is a Chalmers like framework. Suppose there are 2 possible universes, they have the same joint distribution, but different causal ordering. Like maybe in one the stochasticity is on the fly, in the other it’s pre-computed. They imply the same joint distribution and the same set of “straightforward” physical facts (particle trajectories and so on). Yet there is a distinction, a further fact.

In which case… The agents in these universes can’t have epistemic access to these further facts,

Perhaps not via direct empiricism, but it's possible to argue for one ontology over another on grounds of, e.g., simplicity, as you actually do.

it’s similar to with the zombie argument.

Additional facts don't have to be causally idle. For instance, physical law is an additional fact over observed events. So maybe not entirely like zombies.

A simple approach is "no further facts", although assuming this is literally the case might be dogmatic.

It is also difficult to see how it would apply to in/determinism. Indeterminism means there are additional possibilities. Determinism means there is more lawfulness.

It’s more like, don’t believe in further facts prior to a good/convincing account of them, where the ontological complexity is actually worth it.

That creates a presupposition against MWI.

Note that compatibilism and naturalistic libertarian free will are both viable given our present state of knowledge... so there is no necessity to adopt anti-realism.

Well it’s more like, most specific theories of these have problems.

All theories of FW have problems, including hard determinism.

Like, the counterfactuals being really weird, corresponding to bad decision theories, etc

Huh? Libertarian free will requires indeterminism, and therefore type ii) counterfactuals, but that isn't "weird", it just isn't determinism.

And of course, compatibilism doesn't require any kind of real counterfactuals, so why tar it with the same brush?

And it seems simpler to say, the counterfactuals don't exist? Even if assigning high probability to it is dogmatic.

Saying all types of counterfactual are non-existent means ditching the useful, and ontologically innocuous, type iii's.

So much for MWI then ..according to it, every world is counterfactual to every other.

If instead of QM our best physics said something like “there are true random coin flips” then it would be a bit of a stretch to posit a MWI-like theory there, that there exist other universes where the coin flips go differently. The case for MWI is somewhat more complex, it has to do with the Copenhagen interpretation being a lot more complicated than “here, have some stochastic coin flips”.

I know what Yudkowsky's case for MWI is, and why it is wrong.

How do you know counterfactuals require violations of physics itself? The possibility of something happening that wasn’t what happened, only requires (genuine) indeterminism, as above.

Well we can disjunct on high or low universal K complexity. Assuming low universal K complexity, counterfactuals really do have problems,

Why? You seem to have missed several steps. Low and high complexity might imply something about (in)determinism, and indeterminism does imply type ii counterfactuals... but merely thinking in terms of counterfactuals doesn't have to be realist at all; it can be performed in terms of type iii counterfactuals. (But increasing versus flat complexity would be more relevant to (in)determinism. On-the-fly indeterminism means information is constantly being added.)

there are a lot of implications. Assuming high universal K complexity, I guess they’re more well defined.

Why?

Though you can’t counterfact on just anything, you have to counterfact on a valid quantum event.

You can consider hypothetical counterfactuals about anything.

So like, how many counterfactuals there are depends on the density of relevant quantum events to, say, a computer.

Only type ii.

I guess you could make the case from QM that the classical trajectory has high K complexity, therefore counterfactual alternatives to the classical trajectory don’t require physical law violations.

Huh?

If not for QM though, our knowledge would be compatible with determinism / low K complexity of the classical trajectory, and it seems like a philosophy should be able to deal with that case (even if it empirically seems not to be the case)

You can hypothetically plan out a moon landing before you perform it for the first time.

Right so, counterfactual reasoning is practically useful, this is more about skepticism of the implied metaphysics. There might be translations like, observing that a deterministic system can be factored (multiple ways) as interacting systems with inputs/outputs, each factoring implying additional facts about the deterministic system.

You don't need that. You only need to apply an imaginary starting condition to your deterministic laws. That's how you hypothetically plan a moon landing...the starting conditions represent the launch of a rocket design that hasn't been built yet. And that doesn't require breaking laws even in imagination.

PS I added a long section to my previous response.

Wei Dai · 1mo

I'm pretty sympathetic to this line of thought but haven't made big updates based on these arguments (aside from preferring EDT/conditionals over CDT/counterfactuals for reasons similar to the OP). Some of my reasons:

On the other hand, the idea that the mathematical facts live even partially outside the universe is ontologically and epistemically questionable. How would we access these mathematical facts, if our behaviors are determined by physics? Why even assume they exist, when all we see is in the universe, not anything outside of it?

This argument (and the analogous one for moral non-realism) isn't very convincing to me, because it doesn't seem highly problematic that we can access mathematical facts that "live partially outside the universe" via "reasoning" or "logical correlation", where the computations in our minds are entangled in some way with computations or math that we're not physically connected to. Maybe the easiest way to see this is with the example of using one algorithm running on one computer to predict the output of a different algorithm running on a physically disconnected computer, or even predict a computation that exists in a different (real or hypothetical) universe.

One could still argue for non-realism/formalism by appealing to ontological minimality, i.e., let's not assume the existence of mathematical structures or facts unless there are good reasons to, but I feel like the arguments in favor of some types of mathematical realism/platonism (e.g., universe and multiverse views of set theory) are actually fairly strong (and most working mathematicians and philosophers of math are realists probably for good reasons). For example, one line of argument is that when mathematicians reason about math outside of a formal system, e.g. large cardinals, their reasoning still seems to be coherent and about something real.

Another reason I'm not ready to be super-convinced in this direction is I think philosophy is often very hard and slow, therefore as you say "It is somewhat questionable to infer from lack of success to define, say, optimal decision theories, that no such decision theory exists."

Another more pragmatic reason is related to Ontological Crisis in Humans, namely if we currently have some entities in our ontologies that our values are expressed in terms of, and we're not sure whether we'll eventually keep them when we're philosophically mature, and we don't know how to translate these values to a new ontology that lacks these entities, it seems better to keep them for now rather than to remove them (and only add them back later when we find good reasons to), because removing them and their associated values might constitute a form of value drift that we should want to prevent. See also Beware Selective Nihilism where I warned about something similar.

TAG · 1mo

One could still argue for non-realism/formalism by appealing to ontological minimality, i.e., let’s not assume the existence of mathematical structures or facts unless there are good reasons to, but I feel like the arguments in favor of some types of mathematical realism/platonism (e.g., universe and multiverse views of set theory) are actually fairly strong

What are they, then? (I mean, I am familiar with the standard ones, and don't find them convincing).

(and most working mathematicians and philosophers of math are realists probably for good reasons). For example, one line of argument is that when mathematicians reason about math outside of a formal system, e.g. large cardinals, their reasoning still seems to be coherent and about something real.

What does "seems" mean here? Literally their subjective feeling about what they are doing?

And what does coherence have to do with reality? Surely, you can have coherent fictions.

jessicata · 1mo

it doesn't seem highly problematic that we can access mathematical facts that "live partially outside the universe" via "reasoning" or "logical correlation", where the computations in our minds are entangled in some way with computations or math that we're not physically connected to.

While this is one way to think about it, it seems first of all that it is limited to "small" mathematical facts that are computable in physics (not stuff like the continuum hypothesis). With respect to the entanglement, while it's possible to have a Bayes net where the mathematical fact "causes" both computers to output the answers, there's an alternative approach where the computers are two material devices that output the same answer because of physical symmetry. Two processes having symmetrical outputs doesn't in general indicate they're "caused by the same thing".

arguments in favor of some types of mathematical realism/platonism (e.g., universe and multiverse views of set theory)

Not familiar with these arguments. I think a formalist approach would be, the consistency of ZFC already implies a bunch of "small" mathematical facts (e.g. ZFC can't prove any false Π1 arithmetic statements). I think it's pretty hard to find a useful formal system that is strictly finitist; however, my intuition is that set theory goes too far. (This is part of why I have been recently thinking about "reverse mathematics", relatively weak second-order arithmetic theories like WKL0.)

Another reason I'm not ready to be super-convinced in this direction is I think philosophy is often very hard and slow, therefore as you say "It is somewhat questionable to infer from lack of success to define, say, optimal decision theories, that no such decision theory exists."

Yeah that makes sense. I think maybe what I've become more reluctant to endorse over time, is a jump from "an intuition that something here works, plus alternative solutions failing" to "here, this thing I came up with or something a lot like it is going to work". Like going from failure of CDT to success of EDT, or failure of CDT+EDT to TDT. There is not really any assurance that the new thing will work either.

we're not sure whether we'll eventually keep them when we're philosophically mature, and we don't know how to translate these values to a new ontology that lack these entities

I see this as a practical consideration in many value systems, although perhaps either (a) the pragmatic considerations go differently for different people, or (b) different systems could be used for different pragmatic purposes. It at least presents a case for explaining the psychological phenomena of different ontologies/values, even ones that might fail in physicalism.

MalcolmMcLeod · 1mo

Consider an analogy: a Christian fundamentalist considers whether Christ's resurrection didn't really happen. He reasons: "But if the resurrection didn't happen, then Christ is not God. And if Christ is not God, then humanity is not redeemed. Oh no!"

There's clearly a mistake here, in that a revision of a single belief can lead to problems that are avoided by revising multiple beliefs at once. In the Christian fundamentalist case, atheists and non-fundamentalists already exist, so it's pretty easy not to make this mistake.

"Christ was resurrected" isn't a fundamentalist thing. It's the Main Thing about Christianity. If you don't believe it, you are a "cultural Christian" at most, which essentially all churches and communities say Does Not Count.

jessicata · 1mo

Good point; I was mentioning a fundamentalist mainly to ensure that they unironically have standard beliefs like the resurrection, but it applies to a lot more Christians than fundamentalists. (I think Unitarians don't generally believe in the resurrection as a literal physical event?)

MalcolmMcLeod · 1mo

Yeah, contemporary Unitarian Universalists don't believe in much in particular. Mostly they're "people who would be atheist Reconstructionist Jews (if they were ethnically Jewish), casual western Buddhists (if they were Californian), or "spiritual" (if they were middle-American young white women), but they are New Englanders descended from people named things like Hortense Rather." It's said that the only time you'll hear "Jesus" in a Unitarian church is when the janitor stubs his toe. Most Christians consider them "historically and aesthetically connected to Christianity, but not actually Christian." In the olden days they were more obviously "heterodox Christians"---like LDS, 7DA, or JW today, they would certainly consider themselves Christians holding the most truly Christian beliefs, though others considered them weirdos. I'm not sure how the transition occurred, but my impression is that the Universalism part of UU made it a uniquely easy religion to keep affirming as the early-20th-century weird-Christian milieu of New England rapidly turned into early-21st-century standard elite atheism.

S. Alex Bradt · 2mo

such as the Continuum Hypothesis, which is conjectured to be independent of ZFC.

It's in fact known to be independent of ZFC. Sources: Devlin, The Joy of Sets; Folland, Real Analysis; Wikipedia.

jessicata · 2mo

Ah, good point. Edited.

James Camacho · 1mo

And to the extent they only partially do, we have no reason to expect that a simple stochastic model of the remainder would be worse than any other model

I think, empirically, there is a good reason to suspect stochastic models have a lower K-complexity. Normalizing flows (or diffusion models) have three sources of information:

  1. the training code (model architecture + optimizer + hyperparameters),
  2. the initial latent variable drawn from a normal distribution,
  3. the trajectory in a stochastic diff. eq.

And the thing is, they work so much better than non-stochastic models like GANs or 'partially-stochastic' models like VAEs (your definition of 'partially-stochastic'). Now, I get that GANs have a different learning dynamic and VAEs have the wrong optimization target, but it seems that relegating some of the bits to trajectory-choosing makes better models and lower K-complexity.

jessicata · 1mo

If the universe has high K complexity then any theoretically best model has to be either stochastic or "inherently complex" (which is worse than stochastic).

That might or might not be the case. From current models in practice having to be stochastic to make good predictions, it doesn't follow that the theoretically best models must be. But it could be the case.

I'm not sure why 'partially-stochastic' would ever fail, due to the coding theorem. That is, there is an alternative way of modeling a model that makes stochastic decisions along the way, where all stochastic decisions are made initially and instead of making a new stochastic decision, you read from these initial bits.

James Camacho · 1mo

Partially-stochastic has a longer running time, because you have to predict which trajectories work ahead of time. Imagine having to guess the model weights, and then only use the training data to see if your guess checks out. Instead of wasting time finding a better guess, VAEs just say, "all guesses [for the 'partially-stochastic' bits] should be equally valid." We know that isn't true, so there's going to be performance issues.

jessicata · 1mo

I'm not sure why you're thinking about guessing model weights here. The thing I'm thinking with stochastic models is the forward pass bit, Monte Carlo sampling. I'm not sure why pre-computed randomness would be a problem for that portion.

As a weird example: Say there's a memoized random function mapping strings to uniform random bits. This can't really be pre-computed, because it's very big. But it can be lazily evaluated, as if pre-computed. Now the stochastic model can query the memoized random function with a unique specification of the situation it's querying. This should be equivalent to flipping coins mid-run.
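A minimal Python sketch of that idea, assuming nothing beyond the description above (the class name is made up, and a hash is standing in for a genuinely random memoized table):

```python
import hashlib

class MemoizedRandomBits:
    """Lazily evaluated "random" function from strings to bits.

    Conceptually the whole (huge) table of bits exists up front; in practice
    an entry is computed only when first queried, and repeated queries return
    the same bit, as if it had been pre-computed.
    """
    def __init__(self, seed: str):
        self.seed = seed
        self.cache = {}  # memoization table

    def bit(self, situation: str) -> int:
        if situation not in self.cache:
            digest = hashlib.sha256((self.seed + situation).encode()).digest()
            self.cache[situation] = digest[0] & 1
        return self.cache[situation]

# A stochastic model queries the table with a unique description of the
# situation, instead of calling a fresh rand() mid-run.
rng = MemoizedRandomBits(seed="toy-universe")
first_run = [rng.bit(f"particle-7/step-{t}") for t in range(5)]
second_run = [rng.bit(f"particle-7/step-{t}") for t in range(5)]
assert first_run == second_run  # same bits on re-query, as if pre-computed
```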

Alternatively, if the Monte Carlo process is sequential, then it can just "read the next bit", that's computationally simpler.

Maybe it's not an issue for forward sampling but it is for backprop? Not sure what you mean.

James Camacho · 1mo

I'm not really sure what you mean either. Here's a simplified toy that I think captures what you're saying:

  1. A turtle starts at the origin.
  2. We flip a series of coins---on heads we move +1 in the nth dimension, on tails -1 in the nth dimension.
  3. After N coin flips, we'll be somewhere in N-d space. It obviously can be described with N bits.
  4. Why are we flipping coins, instead of storing that N-bit string and then reading them off one at a time? Why do we need the information in real time?

Well, suppose you only care about that particular N-bit string. Maybe it's the code to human DNA. How are you supposed to write down the string before humans exist? You would have to do a very expensive simulation.

If you're training a neural network on offline data, sure you can seed a pseudo-random number generator and "write the randomness" down early. Training robots in simulation translates pretty well to the real world, so you don't lose much. Now that I think about it, you might be able to claim the same with VAEs. My issue with VAEs is they add the wrong noise, but that's probably due to humans not finding the right algorithm rather than the specific distribution being expensive to find.

jessicata · 1mo

This seems like a case of Bayesian inference. Like, we start from the observation that humans exist, having the properties they do, and then find the set of strings consistent with that. Like, start from a uniform measure on the strings and then condition on "the string produces humans".

Which is computationally intractable of course. The usual Bayesian inference issues. Though Bayesian inference would be hard if stochasticity was generated on the fly rather than being initial, too.

Oskar Mathiasen · 1mo

I think it would be valuable to also state some things you include in this minimal position. Eg is it antirealist about composite objects? Does it accept further facts about the outside world over and above the facts about sensations? Does it accept any skeptical hypothesis?
 

jessicata · 1mo
  • Composite objects: Statements about composite objects have implications for microstates. The idea would be that there is no content to statements about composite objects, beyond the implications for microstates.
  • Outside world: Broadly scientific realist so yes.
  • Skeptical hypotheses: Some of the sections include "non-realism", not sure if that counts

But also... Did you read the post? I thought I was clear about including a lot of things in this minimal position?

Oskar Mathiasen · 1mo

To me many of the arguments in this article have analogous arguments against some of the above positions. And I wondered whether you:
1: disagree that the arguments are analogous
2: think there are positive arguments for these positions that overcome the analogous arguments, where there isn't an analogous positive argument
3: also reject the above positions

Here is an example of the kind of analogy I am thinking of; this is similar to the second paragraph under Causality.
> This raises the issue that there are multiple theories with different unobservable structures expressing the same observations. For ontological minimality, we could say these are all valid theories (so there is no "further fact" of what is the real unobserved structure, in cases of persistent empirical ambiguity), though of course some have analytically nicer mathematical properties (simplicity) than others.

Regarding option 2, where there is some further argument, such as being indispensable to our best scientific theories: it seems plausible that mathematics is indispensable, which could be an example of an analogous positive argument.

jessicata · 1mo

Ah. I think first of all, it is possible to do ontology in a materialist-directed or idealist-directed way, and the original post is materialist-directed.

I get that the joint distribution over physical facts determines a joint distribution over observations, and we couldn't observe further facts about the joint distribution beyond those implied by the distribution over observations.

I do feel there are a few differences though. Like, in the process of "predicting as if physics" we would be expanding a huge hidden variable theory, yet declaring the elements of the theory unreal. Also there would be issues like, how large is the mental unit doing the analysis? Is it a single person over time or multiple people, and over how much time? What theory of personal identity? What is the boundary between something observed or not observed? (With physicalism, although having some boundary between observed / not observed is epistemically relevant, it doesn't have to be exactly defined since it's not ontological; the ontology is something like an algebraic closure that is big enough to contain the state distinctions that are observed.)

I think maybe someone could try to make an idealist/solipsist minimal philosophy work but it's not what I've done and it doesn't seem easy to include this without running into problems like epistemic stability assumptions.


Sometimes, a philosophy debate has two basic positions, call them A and B. A matches a lot of people's intuitions, but is hard to make realistic. B is initially unintuitive (sometimes radically so), perhaps feeling "empty", but has a basic realism to it. There might be third positions that claim something like, "A and B are both kind of right".

Here I would say B is the more bullet-biting position. Free will vs. determinism is a classic example: hard determinism is biting the bullet. One interesting thing is that free will believers (including compatibilists) will invent a variety of different theories to explain or justify free will; no one theory seems clearly best. Meanwhile, hard determinism has stayed pretty much the same since ancient Greek fatalism.

While there are some indications that the bullet-biting position is usually more correct, I don't mean to make an overly strong statement here. Sure, position A (or a compatibility between A and B) could really be correct, though the right formalization hasn't been found. But I am interested in what views result from biting bullets at every stage, nonetheless.

Why consider biting multiple bullets in sequence? Consider an analogy: a Christian fundamentalist considers whether Christ's resurrection didn't really happen. He reasons: "But if the resurrection didn't happen, then Christ is not God. And if Christ is not God, then humanity is not redeemed. Oh no!"

There's clearly a mistake here, in that a revision of a single belief can lead to problems that are avoided by revising multiple beliefs at once. In the Christian fundamentalist case, atheists and non-fundamentalists already exist, so it's pretty easy not to make this mistake. On the other hand, many of the (explicit or implicit) intuitions in the philosophical water supply may be hard to think outside of; there may not be easily identifiable "atheists" with respect to many of these intuitions simultaneously.

Some general heuristics. Prefer ontological minimality: do not explode types of entities beyond necessity. Empirical plausibility: generally agree with well-established science and avoid bold empirical claims; at most, cast doubt on common scientific background assumptions (see: Kant decoupling subjective time from clock time). Un-creativity: avoid proposing speculative, experimental frameworks for decision theory and so on (they usually don't work out).

What's the point of all this? Maybe the resulting view is more likely true than other views. Even if it isn't true, it might be a minimal "kernel" view that supports adding more elements later, without conflicting with legacy frameworks. It might be more productive to argue against a simple, focused, canonical view than a popular "view" which is really a disjunctive collection of many different views; bullet-biting increases simplicity, hence perhaps being more productive to argue against.

Causality: directed acyclic graph multi-factorization

Empirically, we don't see evidence of time travel. Events seem to proceed from past to future, with future events being at least somewhat predictable from past events. This can be seen in probabilistic graphical models. Bayesian networks have a directed acyclic graph factorization (which can be topologically sorted, perhaps in multiple ways), while factor graphs in general don't. (For example, it is possible to express the conditional distribution of a Bayesian network on some variable having some value, in a factor graph; the factor graph now expresses something like "teleology", events tending to happen more when they are compatible with some future possibility.)

This raises the issue that there are multiple Bayesian networks with different graphs expressing the same joint distribution. For ontological minimality, we could say these are all valid factorizations (so there is no "further fact" of what is the real factorization, in cases of persistent empirical ambiguity), though of course some have analytically nicer mathematical properties (locality, efficient computability) than others. Each non-trivial DAG factorization has mathematical implications about the distribution; we need not forget these implications even though there are multiple DAG factorizations.
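A minimal sketch of this point, assuming a small two-variable joint distribution with arbitrary numbers: both an X → Y and a Y → X factorization reproduce the same joint exactly.

```python
from itertools import product

# An arbitrary joint distribution over two binary variables X and Y.
joint = {(0, 0): 0.32, (0, 1): 0.08, (1, 0): 0.18, (1, 1): 0.42}

# Factorization 1: DAG X -> Y, i.e. P(x, y) = P(x) * P(y | x).
p_x = {x: joint[(x, 0)] + joint[(x, 1)] for x in (0, 1)}
p_y_given_x = {(y, x): joint[(x, y)] / p_x[x] for x, y in product((0, 1), repeat=2)}

# Factorization 2: DAG Y -> X, i.e. P(x, y) = P(y) * P(x | y).
p_y = {y: joint[(0, y)] + joint[(1, y)] for y in (0, 1)}
p_x_given_y = {(x, y): joint[(x, y)] / p_y[y] for x, y in product((0, 1), repeat=2)}

for (x, y), p in joint.items():
    assert abs(p_x[x] * p_y_given_x[(y, x)] - p) < 1e-12   # X -> Y reproduces the joint
    assert abs(p_y[y] * p_x_given_y[(x, y)] - p) < 1e-12   # Y -> X reproduces it too
# The joint alone does not single out one of these DAGs as "the real" structure.
```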

Bayesian networks can be generalized to probabilistic programming, e.g. some variables may only exist dependent on specific values for previous variables. This doesn't change the overall setup much; the basic ideas are already present in Bayesian networks.

We now have a specific disagreement with Judea Pearl: he operationalizes causality in terms of consequences of counterfactual intervention. This is sensitive to the graph order of the directed acyclic graph; hence, causal graphs express more information than the joint distribution. For ontological minimality, we'll avoid reifying causal counterfactuals and hence causal graphs. Causal counterfactuals have theoretical problems, such as implying violations of physical law, hence being un-determined by empirical science (as we can't observe what happens when physical laws are violated). We avoid these, by not believing in causal counterfactuals.
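Continuing the same toy joint (the numbers are the same arbitrary ones as above), a sketch of why interventions are sensitive to the chosen DAG even when the joint distribution is fixed:

```python
# Same toy joint as above; now ask what Pearl's do-operator says. The answer
# depends on which DAG we picked, even though both fit the joint exactly.
joint = {(0, 0): 0.32, (0, 1): 0.08, (1, 0): 0.18, (1, 1): 0.42}
p_x1 = joint[(1, 0)] + joint[(1, 1)]
p_y = {y: joint[(0, y)] + joint[(1, y)] for y in (0, 1)}

# DAG X -> Y: intervening do(X=1) keeps the mechanism P(y | x), so
# P(y | do(X=1)) = P(y | X=1).
p_y_do_x1_under_xy = {y: joint[(1, y)] / p_x1 for y in (0, 1)}

# DAG Y -> X: Y is upstream of X, so setting X by fiat leaves Y untouched:
# P(y | do(X=1)) = P(y).
p_y_do_x1_under_yx = dict(p_y)

print(p_y_do_x1_under_xy)  # approx {0: 0.3, 1: 0.7}
print(p_y_do_x1_under_yx)  # {0: 0.5, 1: 0.5}
```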

Since causal counterfactuals are about non-actual universes, we don't really need them to make the empirical predictions of causal models, such as no time travel. DAG factorization seems to do the job.

Laws of physics: universal satisfaction

Given a DAG model, some physical invariants may hold, e.g. conservation of energy. And if we transform the DAG model to one expressing the same joint distribution, the physical invariants translate. They always hold for any configuration in the DAG's support.

Do the laws have "additional reality" beyond universal satisfaction? It doesn't seem we need to assume they do. We predict as if the laws always hold, but that reduces to a statement about the joint configuration; no extra predictive power results from assuming the laws have any additional existence.

So for ontological minimality, the reality of a law can be identified with its universal satisfaction by the universe's trajectory. (This is weaker than notions of "counterfactual universal satisfaction across all possible universes".)

This enables us to ask questions similar to counterfactuals: what would follow (logically, or with high probability according to the DAG) in a model in which these universal invariants hold, and the initial state is X (which need not match the actual universe's initial state)? This is a mathematical question, rather than a modal one; see discussion of mathematics later.
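A toy sketch of such a question, assuming simple constant-gravity Newtonian dynamics and a made-up initial state: the universally satisfied invariant (energy conservation) plus an imaginary starting condition determine the rest of the trajectory, with no appeal to modal facts.

```python
G = 9.81      # m/s^2, assumed constant gravity
DT = 0.001    # time step in seconds

def step(state, dt=DT):
    """One step of the deterministic law: exact update for constant gravity."""
    x, y, vx, vy = state
    return (x + vx * dt, y + vy * dt - 0.5 * G * dt * dt, vx, vy - G * dt)

def energy(state, mass=1.0):
    """The universally satisfied invariant: kinetic plus potential energy."""
    x, y, vx, vy = state
    return 0.5 * mass * (vx ** 2 + vy ** 2) + mass * G * y

state = (0.0, 0.0, 40.0, 40.0)   # an imaginary launch that never actually happened
e0 = energy(state)
while state[1] >= 0.0:           # run the law forward until "touchdown"
    state = step(state)

print(f"hypothetical range: {state[0]:.1f} m")
print(f"invariant drift:    {abs(energy(state) - e0):.2e} J")  # float rounding only
```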

Time: eternalism

Eternalism says the future exists, as the past and present do. This is fairly natural from the DAG factorization notion of causality. As there are multiple topological sorts of a given DAG, and multiple DAGs consistent with the same joint distribution, there isn't an obvious way to separate the present from the past and future; and even if there were, there wouldn't be an obvious point in declaring some nodes real and others un-real based on their topological ordering. Accordingly, for ontological minimality, they have the same degree of existence.
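A small sketch of the multiplicity of topological sorts; the three-node DAG here is made up purely for illustration.

```python
from itertools import permutations

# A made-up three-node DAG with edges A -> C and B -> C.
edges = {("A", "C"), ("B", "C")}
nodes = ("A", "B", "C")

def respects_edges(order):
    position = {node: i for i, node in enumerate(order)}
    return all(position[u] < position[v] for u, v in edges)

print([order for order in permutations(nodes) if respects_edges(order)])
# [('A', 'B', 'C'), ('B', 'A', 'C')] -- more than one ordering fits the arrows,
# so the DAG alone doesn't fix a unique way of slicing "past" from "future".
```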

Eternalism is also known as "block universe theory". There's a possible complication, in that our DAG factorization can be stochastic. But the stochasticity need not be "located in time". In particular, we can move any stochasticity into independent random variables, and have everything be a deterministic consequence of those. This is like pre-computing random numbers for a Monte Carlo sampling algorithm.
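A sketch of that equivalence, using a simple random walk as a stand-in process (all names and numbers here are illustrative): drawing bits "in time" and pre-computing them as independent inputs yield the same histories.

```python
import random

# Two ways to run the same stochastic process: draw bits "on the fly",
# or pre-compute all the independent random inputs and then evolve
# deterministically from them. Both induce the same distribution over histories.

def walk_on_the_fly(n, rng):
    history, position = [], 0
    for _ in range(n):
        position += 1 if rng.random() < 0.5 else -1   # stochasticity "in time"
        history.append(position)
    return history

def walk_precomputed(noise):
    history, position = [], 0
    for u in noise:                                   # noise drawn up front
        position += 1 if u < 0.5 else -1              # deterministic given the noise
        history.append(position)
    return history

rng1, rng2 = random.Random(0), random.Random(0)
noise = [rng2.random() for _ in range(20)]            # pre-computed "non-temporal" randomness
assert walk_on_the_fly(20, rng1) == walk_precomputed(noise)
```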

The main empirical ambiguity here is whether the universe's history has a high Kolmogorov complexity, increasing approximately linearly with time. If it does, then something like a stochastic model is predictively appropriate, although the stochasticity need not be "in time". If not, then it's more like classical determinism. It's an open empirical question, so let's not be dogmatic.

We can go further. Do we even need to attribute "true stochasticity" to a universe with high Kolmogorov complexity? Instead, we can say that simple universally satisfied laws constrain the trajectory, either partially or totally (only partially in the high K-complexity case). And to the extent they only partially do, we have no reason to expect that a simple stochastic model of the remainder would be worse than any other model (except high K-complexity ones that "bake in" information about the remainder, a bit of a cheat). (See "The Coding Theorem — A Link between Complexity and Probability" for technical details.)
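A much-simplified numerical illustration of the underlying point (restricted to fixed-bias models of an unbiased residual, not the coding theorem itself): no model of a patternless remainder beats the simplest stochastic one on average.

```python
import math

# Per residual bit, a model that assigns probability q to "1" pays the
# cross-entropy H(p, q) in expected code length. If the remainder really is
# patternless (p = 0.5), the minimum is at q = 0.5, i.e. the simplest
# stochastic model; any fixed "cleverer" bias does strictly worse on average.
def expected_bits_per_symbol(q, p=0.5):
    return -(p * math.log2(q) + (1 - p) * math.log2(1 - q))

for q in (0.5, 0.6, 0.8, 0.99):
    print(f"q = {q:.2f}: {expected_bits_per_symbol(q):.4f} expected bits per residual bit")
# q = 0.50 gives exactly 1.0000; every other q gives more.
```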

Either way, we have "quasi-determinism"; everything is deterministic, except perhaps factored-out residuals that a simple stochastic model suffices for.

Free will: non-realism

A basic argument against free will: free will for an agent implies that the agent could have done something else. This already implies a "possibility"-like modality; if such a modality is not real, free will fails. If on the other hand, possibility is real, then, according to standard modal logics such as S4, any logical tautology must be necessary. If an agent is identified with a particular physical configuration, then, given the same physics / inputs / stochastic bits (which can be modeled as non-temporal extra parameters, per previous discussion), there is only one possible action, and it is necessary, as it is logically tautological. Hence, a claim of "could" about any other action fails.
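Spelled out, the modal step being leaned on here is the necessitation rule of normal modal logics such as S4 (a schematic sketch, where φ abbreviates "this agent, with this configuration, these inputs, and these stochastic bits, takes this one action"):

```latex
\text{Necessitation (any normal modal logic, e.g.\ S4):}\quad
\frac{\vdash \varphi}{\vdash \Box \varphi}
\qquad
\Box \varphi \;\equiv\; \lnot \Diamond \lnot \varphi
```

So if φ is a logical consequence of those identifications, it is necessary, and no alternative action is "possible", which is exactly the step the argument needs.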

Possible ways out: consider giving the agent different inputs, or different stochastic bits, or different physics, or don't identify the agent with its configuration (have "could" change the agent's physical configuration). These are all somewhat dubious. For one, it is dogmatic to assume that the universe has high Kolmogorov complexity; if it doesn't, then modeling decisions as having corresponding "stochastic bits" can't in general be valid. Free will believers don't tend to agree on how to operationalize "could", their specific formalizations tend to be dubious in various ways, and the formalizations do not agree much with normal free will intuitions. The obvious bullet to bite here is, there either is no modal "could", or if there is, there is none that corresponds to "free will", as the notion of "free will" bakes in confusions.

Decision theory: non-realism

We reject causal decision theory (CDT), because it relies on causal counterfactuals. We reject any theory of "logical counterfactuals", because the counterfactual must be illogical, contradicting modal logics such as S4. Without applying too much creativity, what remain are evidential decision theory (EDT) and non-realism, i.e. the claim that there is not in general a fact of the matter about what action by some fixed agent best accomplishes some goal.
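For concreteness, here is a sketch of the kind of conditional-expected-value calculation EDT licenses, on Newcomb's problem; the payoffs and predictor accuracy are the usual stylized numbers, not anything argued for in this post.

```python
ACCURACY = 0.99            # assumed P(predictor matched the agent's actual choice)
BIG, SMALL = 1_000_000, 1_000

def edt_value(action):
    """Conditional expected payoff given the news 'I took this action'."""
    if action == "one-box":
        # Box B contains $1M exactly when one-boxing was predicted.
        return ACCURACY * BIG
    # Two-boxing: usually predicted (empty box B), rarely not (full box B).
    return ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print({a: round(edt_value(a)) for a in ("one-box", "two-box")})
# {'one-box': 990000, 'two-box': 11000} -> conditioning favors one-boxing.
# A CDT calculation, holding the already-fixed box contents causally
# constant, would favor two-boxing at any accuracy level.
```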

To be fair to EDT, the smoking lesion problem is highly questionable in that it assumes decisions could be caused by genes (without those genes changing the decision theory, value function, and so on), contradicting implementation of EDT. Moreover, there are logical formulations of EDT, which ask whether it would be good news to learn that one's algorithm outputs a given action given a certain input (the one you're seeing), where "good news" is taken across a class of possible universes, not just the one you have evidence of; these may better handle "XOR blackmail" like problems.

Nevertheless, I won't dogmatically assume based on failure of CDT and logical counterfactual theories that EDT works; EDT theorists have to do a lot to make EDT seem to work in strange decision-theoretic thought experiments. This work can introduce ontological extras such as infinitesimal probabilities, or similarly, pseudo-Bayesian conditionals on probability 0 events. From a bullet-biting perspective, this is all highly dubious, and not really necessary.

We can recover various "practical reason" concepts as statistical predictions about whether an agent will succeed at some goal, given evidence about the agent, including that agent's actions. For example, as a matter of statistical regularity, some people succeed in business more than others, and there is empirical correlation with their decision heuristics. The difference is that this is a third-personal evaluation, rather than a first-personal recommendation: we make no assumption that third-person predictive concepts relating to practical reason translate to a workable first-personal decision theory. (See also "Decisions are not about changing the world, they are about learning what world you live in", for related analysis.)

Morality: non-realism

This shouldn't be surprising. Moral realism implies that moral facts exist, but where would they exist? No proposal of a definition in terms of physics, math, and so on has been generally convincing, and they vary quite a lot. G.E. Moore observes that any precise definition of morality (in terms of physics and so on) seems to leave an "open question" of whether that is really good, and compelling to the listener.

There are many possible minds (consider the space of AGI programs), and they could find different things compelling. There are statistical commonalities (e.g. minds will tend to make decisions compatible with maintaining an epistemology and so on), but even commonalities have exceptions. (See "No Universally Compelling Arguments".)

Suppose you really like the categorical imperative and think rational minds have a general tendency to follow it. If so, wouldn't it be more precise to say "X agent follows the categorical imperative" than "X agent acts morally"? This bakes in fewer intuitive confusions.

As an analogy, suppose some people refer to members of certain local bird species as a "forest spirit", due to a local superstition. You could call such a bird a "forest spirit" by which you mean a physical entity of that bird species, but this risks baking in a superstitious confusion.

In addition, the discussion of free will and decision theory shows that there are problems with formulating possibility and intentional action. If, as Kant says, "ought implies can", then contrapositively "not can implies not ought"; if modal analysis shows that alternative actions for a given agent are not possible, then no alternative actions can be "ought". (Alternatively, if modal possibility is unreal, then "ought implies can" is confused to begin with). This is really not the interpretation of "ought" intended by moral realists; it's redundant with the actual action.

Theory of mind: epistemic reductive physicalism

Chalmers claims that mental properties are "further facts" on top of physical properties, based on the zombie argument: it is conceivable that a universe physically identical to ours could exist, but with no consciousness in it. Ontological minimality suggests not believing in these "further facts", especially given how dubious theories of consciousness tend to be. This seems a lot like eliminativism.

We don't need to discard all mental concepts, though. Some mental properties, such as logical inference and memory, have computational interpretations. If I say my computer "remembers" something, I specify a certain set of physical configurations that way: the ones corresponding to computers with that something stored in memory (e.g. in RAM). I could perhaps be more precise than "remembers" by saying something like "functionally remembers".

A possible problem with eliminativism is that it might undermine the idea that we know things, including any evidence for eliminativism. It is epistemically judicious to have some ontological status for "we have evidence of this physical theory" and so on. The idea with reductive physicalism is to put such statements in correspondence with physical ones, such as: "in the universe, most agents who use this or that epistemic rule are right about this or that". (It would be a mistake to assume, given a satisficing epistemology evaluation over existent agents, that we "could" maximize epistemology with a certain epistemic rule; that would open up the usual decision-theoretic complications. Evaluating the reliability of our epistemologies is more like evaluating third-personal practical reason than making first-personal recommendations.)

That might be enough. If it's not enough then ontological minimality suggests adding as little as possible to physicalism to express epistemic facts. We don't need a full-blown theory of consciousness to express meaningful epistemic statements.
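As a toy rendering of the "most agents who use this or that epistemic rule are right" reading (the rules, agents, and success criterion below are invented for illustration), evaluating an epistemic rule here just means counting how many of its users, as physical systems in a simulated world, end up right about some matter of fact.

```python
# Toy sketch: a third-personal, physical reading of "most agents who use this
# epistemic rule are right about this". All rules and parameters are invented.
import random

random.seed(0)
TRUE_BIAS = 0.7  # a stipulated fact about this toy "universe"

def laplace_rule(flips):
    """Estimate the coin's bias with add-one smoothing."""
    return (sum(flips) + 1) / (len(flips) + 2)

def stubborn_rule(flips):
    """Ignore the evidence; always declare the coin fair."""
    return 0.5

rules = {"laplace": laplace_rule, "stubborn": stubborn_rule}
for name, rule in rules.items():
    right = 0
    for _ in range(1000):  # 1000 agents using this rule
        flips = [random.random() < TRUE_BIAS for _ in range(50)]
        right += abs(rule(flips) - TRUE_BIAS) < 0.1  # counts as "being right"
    print(f"fraction of {name}-rule users who are right: {right / 1000:.2f}")
```

The evaluation is again third-personal: it reports a reliability statistic about agents in the world, rather than recommending that anyone "could" or "should" switch rules.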

Personal identity: empty individualism, similarity as successor

If a machine scans you and makes a nearly-exact physical copy elsewhere, is that copy also you? Paradoxes of personal identity abound. Whether that copy is "really you" seems like a non-question; if it had an answer, where would that answer be located?

Logically, we have a minimal notion of personal identity from mathematical identity (X=X). So, if X denotes (some mathematical object corresponding to) you at some time, then X=X. This is an empty notion of individualism, as it fails to hold that you are the same as recent past or future versions of yourself.

What's fairly simple and predictive to say beyond X=X is that a near-exact copy of you is similar to you, as you are similar to near past and future versions of yourself, as two prints of a book are similar, and as two world maps are similar. There are also directed properties (rather than symmetric similarity), such as you remembering the experiences of past versions of yourself but not vice versa; these reduce to physical properties, not further properties, as in the theory of mind section.

It's easy to get confused about which entities are "really the same person". Ontological minimality suggests there isn't a general answer, beyond trivial reflexive identities (X=X). The successor concept is, then, something like similarity. (And getting too obsessed with "how exactly to define similarity?" misses the point; the use of similarity is mainly predictive/evidential, not metaphysical.)

Anthropic probability: non-realism, graph structure as successor

In the Sleeping Beauty problem, is the correct probability ½ or ⅓? It seems the argument is over nothing real. Halfers and thirders agree on a sort of graph structure of memory: the initial Sleeping Beauty "leads to" one or two future states, depending on the coin flip, in terms of functional memory relations. The problem has to do with translating the graph structure to a probability distribution over future observations and situations (from the perspective of the original Sleeping Beauty).

From physics and identification of basic mental functions, we get a graph-like structure; why add more ontology? Enough thought experiments of memory wipes, upload copying, and so on, suggest that the linear structure of memory and observation is not always valid.

This slightly complicates the idea of physical theories being predictive, but it seems possible to operationalize prediction without a full notion of subjective probability. We can ask questions like, "do most entities in the universe who use this or that predictive model make good predictions about their future observations?". The point here isn't to get a universal notion of good predictions, but rather one that is good enough to get basic inferences, like learning about universal physical laws.
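As a toy rendering (with an invented encoding; not the post's formalism) of where the agreement ends and the dispute begins: the sketch below writes down a memory graph both camps can accept, then produces ½ or ⅓ purely by choosing what to count.

```python
# Toy sketch (invented encoding): halfers and thirders can agree on the graph
# of functional memory relations; they differ in how to turn that graph into
# a probability for the awakened Sleeping Beauty.
import random

random.seed(0)

# Shared structure: each awakening functionally remembers the pre-sleep state.
memory_graph = {
    "Heads": {"before_sleep": ["Monday_awakening"]},
    "Tails": {"before_sleep": ["Monday_awakening", "Tuesday_awakening"]},
}

N = 100_000
runs = []
for _ in range(N):
    coin = random.choice(["Heads", "Tails"])
    runs.append((coin, len(memory_graph[coin]["before_sleep"])))

# Counting convention 1: per experiment ("what fraction of runs are Heads?").
per_run = sum(coin == "Heads" for coin, _ in runs) / N

# Counting convention 2: per awakening ("what fraction of awakenings occur in
# a Heads run?"), weighting each run by its number of awakenings.
heads_awakenings = sum(k for coin, k in runs if coin == "Heads")
total_awakenings = sum(k for _, k in runs)
per_awakening = heads_awakenings / total_awakenings

print(f"per-run frequency of Heads:       {per_run:.3f}")        # ~0.5
print(f"per-awakening frequency of Heads: {per_awakening:.3f}")  # ~0.333
```

The graph, plus the physics of the setup, is shared; only the counting convention, per run versus per awakening, differs, which is the sense in which the ½-versus-⅓ dispute adds nothing to the shared structure.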

Mathematics: formalism

Are mathematical facts, such as "Fermat's Last Theorem is true", real? If so, where are they? Are they in the physical universe, or at least partially in a different realm?

Both of these are questionable. If we try to identify "for all n,m: n + S(m) = S(n + m)" with "in the universe, it is always the case that adding n objects to S(m) objects yields S(n + m) objects", we run into a few problems. First, it requires identifying objects in physics. Second, given a particular definition of object, physics might not be such that this rule always holds: maybe adding a pile of sand to another pile of sand reduces the number of objects (as it combines two piles into one), or perhaps some objects explode when moved around; meanwhile, mathematical intuition is that these laws are necessary. Third, the size of the physical universe limits how many test cases there can be; hence, we might un-intuitively conclude something like "for all n,m both greater than Graham's number, n=m", as the physical universe has no counter-examples. Fourth, the size of the universe limits the possible information content of any entity in it, forcing something like ultrafinitism.

On the other hand, the idea that the mathematical facts live even partially outside the universe is ontologically and epistemically questionable. How would we access these mathematical facts, if our behaviors are determined by physics? Why even assume they exist, when all we see is in the universe, not anything outside of it?

Philosophical formalism does not explain "for all n,m: n + S(m) = S(n + m)" by appealing to a universal truth, but by noting that our formal system (in this case, Peano arithmetic) derives it. A quasi-invariant holds: mathematicians tend, in practice, to follow the rules of the formal system. And mathematicians use one formal system rather than another for physical, historical reasons. Peano arithmetic, for example, is useful: it models numbers in physics theories and in computer science, yielding predictions because the structure of its inferences corresponds, in part, to the structure of physics. Utility, though, is a contingent fact about our universe; what problems are considered useful to solve varies with historical circumstances. Formal systems are also adopted for reasons other than utility, such as the momentum of past practice or the prestige of earlier work.
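For a minimal illustration (Lean 4 standing in here for a formal system such as Peano arithmetic; nothing about this example is specific to the post), the successor equation is something the system derives from its recursive definition of addition, rather than something read off from a separate mathematical realm:

```lean
-- Toy sketch: the equation "n + S(m) = S(n + m)" as a derivation inside a
-- formal system (Lean 4), from the recursive definition of addition.
theorem add_succ_example (n m : Nat) : n + Nat.succ m = Nat.succ (n + m) :=
  rfl  -- holds by unfolding the definition of addition on its second argument
```

On the formalist reading, the "truth" of the statement is exhausted by the fact that the system's rules license this derivation.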

The thing we avoid with philosophical formalism is confusion over "further facts", such as the Continuum Hypothesis, which has been shown to be independent of ZFC. We don't need to think there is a real fact of the matter about whether the Continuum Hypothesis is true.

Formalism is suggestive of finitism and intuitionism, although these are additional principles of formal systems; we don't need to conclude something like "finitism is true" per se. The advantage of such formal systems is that they may be a bit more "self-aware" as being formal systems; for example, intuitionism is less suggestive that there is always a fact of the matter regarding undecidable statements (like a Gödelian sentence), as it does not accept the law of the excluded middle. But, again, these are particular formal systems, which have advantages and disadvantages relative to other formal systems; we don't need to conclude that any of these are "the correct formal system".

Conclusion

The positions sketched here are not meant to be a complete theory of everything. They are a deliberately stripped-down "kernel" view, obtained by repeatedly biting bullets rather than preserving intuitions that demand extra ontology. Across causality, laws of physics, time, free will, decision theory, morality, mind, personal identity, anthropic probability, and mathematics, the same method has been applied:

  • Strip away purported "further facts" not needed for empirical adequacy.
  • Treat models as mathematical tools for describing the world's structure, not as windows onto modal or metaphysical realms.
  • Accept that some familiar categories like "could," "ought," "the same person," or "true randomness" may collapse into redundancy or dissolve into lighter successors such as statistical regularity or similarity relations.

This approach sacrifices intuitive richness for structural economy. But the payoff is clarity: fewer moving parts, fewer hidden assumptions, and fewer places for inconsistent intuitions to be smuggled in. Even if the kernel view is incomplete or false in detail, it serves as a clean baseline — one that can be built upon, by adding commitments with eyes open to their costs.

The process is iterative. For example, I stripped away a causal counterfactual ontology to get a DAG structure; then stripped away the timing of stochasticity into a-temporal uniform bits; then suggested that residuals not determined by simple physical laws (in a high Kolmogorov complexity universe) need not be "truly stochastic", just well-predicted by a simple stochastic model. Each round makes the ontology lighter while preserving empirical usefulness.
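To make the middle step concrete (a toy sketch with invented function names; not the post's formalism): a process that appears to draw its randomness "as it goes" can be rewritten as a deterministic function of independent uniform bits fixed in advance, with identical statistics.

```python
# Toy sketch: relocating stochasticity from the dynamics into pre-drawn,
# independent bits. Both versions yield the same distribution over paths.
import random

def noisy_walk_online(steps, seed):
    """Randomness drawn 'on the fly' at each time step."""
    rng = random.Random(seed)
    x, path = 0, []
    for _ in range(steps):
        x += 1 if rng.random() < 0.5 else -1
        path.append(x)
    return path

def noisy_walk_precomputed(steps, seed):
    """The same process as a deterministic function of pre-drawn bits."""
    rng = random.Random(seed)
    bits = [rng.random() < 0.5 for _ in range(steps)]  # fixed "a-temporally"
    x, path = 0, []
    for b in bits:  # the dynamics themselves are now fully deterministic
        x += 1 if b else -1
        path.append(x)
    return path

assert noisy_walk_online(20, seed=42) == noisy_walk_precomputed(20, seed=42)
```

The further step in the post, treating the bits as merely well-modeled by a simple stochastic distribution rather than as "truly" random, changes the interpretation of the bits, not the structure of the computation.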

It is somewhat questionable to infer, from the lack of success in defining, say, an optimal decision theory, that no such decision theory exists. This provides an opportunity for falsification: solve the problem really well. A sufficiently reductionist solution may be compatible with the philosophical kernel; otherwise, an extension might be warranted.

I wouldn't say I outright agree with everything here, but the exercise has shifted my credences toward these beliefs. As with the Christian fundamentalist analogy, resistance to biting particular bullets may come from revising too few beliefs at once.

A practical upshot is that a minimal philosophical kernel can be extended more easily without internal conflict, whereas a more complex system is harder to adapt. If someone thinks this kernel is too minimal, the challenge is clear: propose a compatible extension, and show why it earns its ontological keep.