One occasionally sees such remarks as, "What good does it do to go around being angry about the nonexistence of God?" (on the one hand) or "Babies are natural atheists" (on the other).  It seems to me that such remarks, and the rather silly discussions that get started around them, show that the concept "Atheism" is really made up of two distinct components, which one might call "untheism" and "antitheism".

A pure "untheist" would be someone who grew up in a society where the concept of God had simply never been invented - where writing was invented before agriculture, say, and the first plants and animals were domesticated by early scientists.  In this world, superstition never got past the hunter-gatherer stage - a world seemingly haunted by mostly amoral spirits - before coming into conflict with Science and getting slapped down.

Hunter-gatherer superstition isn't much like what we think of as "religion".  Early Westerners often derided it as not really being religion at all, and they were right, in my opinion.  In the hunter-gatherer stage the supernatural agents aren't particularly moral, or charged with enforcing any rules; they may be placated with ceremonies, but not worshipped.  But above all - they haven't yet split their epistemology.  Hunter-gatherer cultures don't have special rules for reasoning about "supernatural" entities, or indeed an explicit distinction between supernatural entities and natural ones; the thunder spirits are just out there in the world, as evidenced by lightning, and the rain dance is supposed to manipulate them - it may not be perfect but it's the best rain dance developed so far, there was that famous time when it worked...

If you could show hunter-gatherers a rain dance that called on a different spirit and worked with perfect reliability, or, equivalently, a desalination plant, they'd probably chuck the old spirit right out the window.  Because there are no special rules for reasoning about it - nothing that denies the validity of the Elijah Test that the previous rain dance just failed.  Faith is a post-agricultural concept.  Before you have chiefdoms where the priests are a branch of government, the gods aren't good, they don't enforce the chiefdom's rules, and there's no penalty for questioning them.

And so the Untheist culture, when it invents science, simply concludes in a very ordinary way that rain turns out to be caused by condensation in clouds rather than rain spirits; and at once they say "Oops" and chuck the old superstitions out the window; because they only got as far as superstitions, and not as far as anti-epistemology.

The Untheists don't know they're "atheists" because no one has ever told them what they're supposed to not believe in - nobody has invented a "high god" to be chief of the pantheon, let alone monolatry or monotheism.

However, the Untheists do know that they don't believe in tree spirits.  And we shall even suppose that the Untheists don't believe in tree spirits, because they have a sophisticated and good epistemology - they understand why it is in general a bad idea to postulate ontologically basic mental entities.

So if you come up to the Untheists and say:

"The universe was created by God -"

"By what?"

"By a, ah, um, God is the Creator - the Mind that chose to make the universe -"

"So the universe was created by an intelligent agent.  Well, that's the standard Simulation Hypothesis, but do you have actual evidence confirming this?  You sounded very certain -"

"No, not like the Matrix!  God isn't in another universe simulating this one, God just... is.  He's indescribable.  He's the First Cause, the Creator of everything -"

"Okay, that sounds like you just postulated an ontologically basic mental entity.  And you offered a mysterious answer to a mysterious question.  Besides, where are you getting all this stuff?  Could you maybe start by telling us about your evidence - the new observation you're trying to interpret?"

"I don't need any evidence!  I have faith!"

"You have what?"

And at this very moment the Untheists have become, for the first time, Atheists.  And what they just acquired, between the two points, was Antitheism - explicit arguments against explicit theism.  You can be an Untheist without ever having heard of God, but you can't be an Antitheist.

Of course the Untheists are not inventing new rules to refute God, just applying their standard epistemological guidelines that their civilization developed in the course of rejecting, say, vitalism.  But then that's just what we rationalist folk claim antitheism is supposed to be, in our own world: a strictly standard analysis of religion which turns out to issue a strong rejection - both epistemically and morally, and not after too much time.  Every antitheist argument is supposed to be a special case of general rules of epistemology and morality which ought to have applications beyond religion - visible in the encounters of science with vitalism, say.

With this distinction in hand, you can make a bit more sense of some modern debates - for example, "Why care so much about God not existing?" could become "What is the public benefit from publicizing antitheism?"  Or "What good is it to just be against something?  Where is the positive agenda?" becomes "Less antitheism and more untheism in our atheism, please!"  And "Babies are born atheists", which sounds a bit odd, is now understood to sound odd because babies have no grasp of antitheism.

And as for the claim that religion is compatible with Reason - well, is there a single religious claim that a well-developed, sophisticated Untheist culture would not reject?  When they have no reason to suspend judgment, and no anti-epistemology of separate magisteria, and no established religions in their society to avoid upsetting?

There's nothing inherently fulfilling about arguing against Goddism - in a society of Untheists, no one would ever give the issue a second thought.  But in this world, at least, insanity is not a good thing, and sanity is worth defending, and explicit antitheism by the likes of Richard Dawkins would surely be a public service, conditional on it actually working.  (Which it may in fact be doing; the next generation is growing up increasingly atheist.)  Yet in the long run, the goal is an Untheistic society, not an Atheistic one - one in which the question "What's left, when God is gone?" is greeted by a puzzled look and "What exactly is missing?"


One of my favourite posts here in a while. When talking with theists I find it helpful to clarify that I'm not so much against their God, rather my core problem is that I have different epistemological standards to them. Not only does this take some of the emotive heat out of the conversation, but I also think it's the point where science/rationalism/atheism etc. is at its strongest and their system is very weak.

With respect to untheistic society, I remember when a guy I knew shifted to New Zealand from the US and was disappointed to find that relatively few people were interested in talking to him about atheism. The reason, I explained, is that most people simply aren't sufficiently interested in religion to be bothered with atheism. This is a society where the leaders of both major parties in the last election publicly stated that they were not believers and... almost nobody cared.

I enjoyed this post very much, as I am interested in this topic.  I am not a mathematician and have only had entry-level college philosophy, so 80% of the discussion is over my head.  I wanted to say that your comment that "most people aren't sufficiently interested in religion to be bothered with atheism" in New Zealand was very helpful.

This may make no logical sense, but the meaning I took away is that if an individual is not sufficiently interested in religion (or feels he has sufficient reason to disbelieve Christianity, in my case), then that individual should not be bothered that he is an atheist.  I know the point of these discussions is to discuss and not compliment, but I wanted to say that your comment helped me tremendously.

So the universe was created by an intelligent agent. Well, that's the standard Simulation Hypothesis [...]

I've been thinking about a slightly different question: is base-level reality physics-like, or optimization-like, and if it's optimization-like, did it start out that way?

Here's an example that illustrates what my terms mean. Suppose we are living in base-level reality which started with the Big Bang and evolution, and we eventually develop an AI that takes over the entire universe. Then I would say that base-level reality started off physics-like, then becomes optimization-like.

But it's surely conceivable that a universe could start off being optimization-like, and this hypothesis doesn't seem to violate Occam's Razor in any obvious way. Consider this related question: what is the shortest program that outputs a human mind? Is it an optimization program, or a physics simulation?

An optimization procedure can be very simple, if computing time isn't an issue, but we don't know whether there is a concisely describable objective function that we are the optimum of. On the other hand, the mathematical laws of physics are also simple, but we don't know how rare intelligent life is, so we don't know how many bits of coordinates are needed to locate a human brain in the universe.

Does anyone have an argument that settles these questions, in either direction?

Suppose we are living in base-level reality which started with the Big Bang and evolution, and we eventually develop an AI that takes over the entire universe. Then I would say that base-level reality started off physics-like, then becomes optimization-like.

Hmm? The base-level that the AI is running on is still physics, right?

[ETA the word "on", which I missed out]

The universe presumably isn't optimised for intelligence, since most organisms are bacteria, etc., and isn't optimised for life, since most of it is barren.  See Penrose's argument against the Anthropic Principle in The Road to Reality.

we don't know whether there is a concisely describable objective function that we are the optimum of

I think Wei_Dai was trying to suggest an objective function beyond our ken.

I'm confused by your comment, but I'll try to answer anyway.

As an agent in an environment, you can consider the environment in behavioral semantics: the environment is an equivalence class of all the things that behave the same as what you see.  Instead of a minimal model, this gives a maximal model.  Everything about the territory remains a black box, except the structure imposed by the way you see the territory - by the way you observe things, perform actions, and value strategies.  This dissolves the question of what the territory "really is".

Your answer strikes me as unsatisfactory: if we apply it to humans, we lose interest in electricity, atoms, quarks etc. An agent can opt to dig deeper into reality to find the base-level stuff, or it can "dissolve the question" and walk away satisfied. Why would you want to do the latter?

The agent has preferences over these black boxes (or the strategies that instantiate them), and digging deeper may be a good idea.  To get rid of the (unobservable) structure in the environment, preferences over the elements of the environment have to be translated into preferences over whole situations.  The structure of the environment becomes the structure of the preferences over the black boxes.

Two models can behave the same as what you've seen so far, but diverge in future predictions. Which model should you give greater weight to? That's the question I'm asking.

The current best answer we know of seems to be to write each consistent hypothesis in a formal language, weight hypotheses inverse-exponentially in their length, and renormalize so that your total probability sums to 1.  Look up AIXI and the universal prior.
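That weighting scheme can be sketched in a few lines.  A minimal illustration, assuming hypotheses are already expressed as strings in some formal language (the example hypothesis strings are placeholders of mine, not real programs for a universal machine):

```python
# Minimal sketch of a length-weighted (Solomonoff-style) prior.
# A real universal prior would enumerate programs for a fixed universal
# Turing machine; here the "hypotheses" are just placeholder strings.

def length_prior(hypotheses):
    """Weight each hypothesis by 2^-length, then renormalize to sum to 1."""
    raw = {h: 2.0 ** -len(h) for h in hypotheses}
    total = sum(raw.values())
    return {h: w / total for h, w in raw.items()}

# Two hypotheses consistent with the same observations so far:
# the shorter one receives exponentially more prior weight.
prior = length_prior(["physics", "physics-plus-rain-spirits"])
```

The point of the renormalization step is that only relative lengths matter: adding an epiphenomenal rain spirit to the shortest explanation costs extra bits, so the augmented hypothesis is penalized by a factor exponential in those extra bits.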

In the behavioral interpretation, you weight observations, or the effects of possible strategies (on observations/actions), not the way the territory is.  The base level is the agent and the rules of its game with the environment.  Everything else describes the form of this interaction, and answers questions not about the underlying reality, but about how the agent sees it.  If the distinction you are making doesn't reach the level of influencing what the agent experiences, it's absent from this semantics: no weighting, no moving parts, no distinction at all.

For a salient example: if an agent in the same fixed internal state is instantiated multiple times - in the same environment at the same time, at different times, or even in different environments, with different probabilities under some notion of that - then all of these instances and possibilities together go under one atomic black-box symbol for the territory corresponding to that state of the agent, with no internal structure.  The structure can, however, be represented in preferences over strategies or sets of strategies for the agent.

Vladimir, are you proposing this "behavioral interpretation" for an AI design, or for us too? Is this an original idea of yours? Can you provide a link to a paper describing it in more detail?

I'm generalizing/analogizing from the stuff I read on coalgebras, and in this case I'm pretty sure the idea makes sense; it's probably been explored elsewhere.  You can start here, or directly from Introduction to Coalgebra: Towards Mathematics of States and Observations (PDF).

There are many similarities (or dualities) between algebras and coalgebras which are often useful as guiding principles. But one should keep in mind that there are also significant differences between algebra and coalgebra. For example, in a computer science setting, algebra is mainly of interest for dealing with finite data elements – such as finite lists or trees – using induction as main definition and proof principle. A key feature of coalgebra is that it deals with potentially infinite data elements, and with appropriate state-based notions and techniques for handling these objects. Thus, algebra is about construction, whereas coalgebra is about deconstruction – understood as observation and modification.

A rule of thumb is: data types are algebras, and state-based systems are coalgebras. But this does not always give a clear-cut distinction. For instance, is a stack a data type or does it have a state? In many cases however, this rule of thumb works: natural numbers are algebras (as we are about to see), and machines are coalgebras. Indeed, the latter have a state that can be observed and modified.


Initial algebras (in Sets) can be built as so-called term models: they contain everything that can be built from the operations themselves, and nothing more. Similarly, we saw that final coalgebras consist of observations only.
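The construction/observation contrast in that rule of thumb can be sketched loosely in code.  This is an informal illustration, not the categorical definitions; the function and class names are mine:

```python
# Loose illustration of the algebra/coalgebra rule of thumb:
# algebra = construction from operations, coalgebra = observation of state.

# Algebraic view: natural numbers built up from the constructors
# zero and succ - everything that can be built, and nothing more.
def zero():
    return 0

def succ(n):
    return n + 1

three = succ(succ(succ(zero())))  # construct data from the operations

# Coalgebraic view: a state-based stream, potentially infinite.
# We never finish "constructing" it; we can only observe the current
# state and step to the next one.
class CounterStream:
    def __init__(self, state):
        self._state = state

    def observe(self):  # like "head": observe the current state
        return self._state

    def step(self):     # like "tail": modify, yielding the next state
        return CounterStream(self._state + 1)

s = CounterStream(0).step().step()  # two observations/steps into the stream
```

Note the asymmetry: `three` is a finished piece of data, while a `CounterStream` is only ever known through the observations made of it so far, which matches the "final coalgebras consist of observations only" remark above.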

Can something be optimization-like without being ontologically mental?  In other words, if a higher level is a universal Turing machine that devotes more computing resources to other Turing machines depending on how many 1s they've written so far, as opposed to 0s, is that the sort of optimization-like thing we're talking about?  I'm assuming you don't mean anything intrinsically teleological.

Yeah, I think if base-level reality started out optimization-like, it's not mind-like, or at least not any kind of mind that we'd be familiar with. It might be something like Schmidhuber's Goedel Machine with a relatively simple objective function.

The physical universe seems to optimize for low-energy / high-entropy states, via some kind of local decision process.

So I think your two options actually coincide.