All of TCB's Comments + Replies

Two sheep plus three sheep equals five sheep. Two apples plus three apples equals five apples. Two discrete objects plus three discrete objects equals five discrete objects.

Arithmetic is a formal system, consisting of a syntax and semantics. The formal syntax specifies which statements are grammatical: "2 + 3 = 5" is fine, while "2 3 5 + =" is meaningless. The formal semantics provides a mapping from grammatical statements to truth values: "2 + 3 = 5" is true, while "2 + 3 = 6" is false. This mapping relies on axi... (read more)
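That split between syntax and semantics can be made concrete with a short sketch (illustrative Python; the toy grammar covers only statements of the exact form shown above):

```python
import re

def is_grammatical(s):
    """Syntax: accept only strings of the form '<int> + <int> = <int>'."""
    return re.fullmatch(r"\d+ \+ \d+ = \d+", s) is not None

def truth_value(s):
    """Semantics: map each grammatical statement to True or False."""
    if not is_grammatical(s):
        raise ValueError("not a well-formed statement")
    left, right = s.split(" = ")
    a, b = left.split(" + ")
    return int(a) + int(b) == int(right)

print(is_grammatical("2 + 3 = 5"))  # True  (well-formed)
print(is_grammatical("2 3 5 + ="))  # False (meaningless string)
print(truth_value("2 + 3 = 5"))     # True
print(truth_value("2 + 3 = 6"))     # False
```

The toy model preserves the point: grammaticality is a property of strings, while truth is assigned to grammatical strings by a separate mapping.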

By this definition, both the continuum hypothesis and its negation come out "true" in ZFC: the continuum hypothesis is independent of the axioms, so each of the two holds in some model of ZFC.
This exposition would be much clearer if you reduced / expanded the concepts of "create correspondences between formal and real objects" and "ground a formal system in the territory". Those look like they're hiding important mental algorithms which the original post was trying to get at (not the dot-combining one; maybe the one which attributes a common cause, a latent mathematical-truth variable, to explain the similar results of rocks and sheep gathering?). Do those phrases, "make correspondences" and "ground a system", mean that we can stop talking about formal objects and instead talk about the behavior of physical circuits which compute all those formal things, like which strings are well formed, what the result of a grammatical transformation will be, and which truth values get mapped to formulas?

As it stands, I don't see your point. You talk about a model which is true but doesn't "say something" about reality. You don't address whether things in reality "say something" about each other prior to humans showing up with their beliefs that reflect reality, i.e. whether there are things in the world that look like computations, things which have mutually informative behavior that isn't a result of intermediary causal chains of physics-stuff jiggling each other. Or maybe you did a little bit when you called sheep a map-level distinction?

Physics clearly doesn't act directly on sheep, but that doesn't mean sheep can't be a substrate for computing. Sheep are still there. It is a fact of reality that some fields contain hooved clumps of meat, even if we have to phrase that fact in terms of the response of visual-field-segmenting and object-permanence-establishing neurons in the brain of a person looking out upon the field. I just wish I knew what you were getting at.

In this article, Eliezer implies that it's the lack of objective morality which makes life seem meaningless under a materialist reductionist model of the universe. Is this the usual source of existential angst? For me, existential angst always came from "life not having a purpose"; I was always bothered by the thought that no higher power was guiding our lives. I ended up solving this problem by realizing that emergent structures such as society can be understood as a "higher power guiding our lives"; while it's not as agenty as God,... (read more)

If I understand Eliezer's conception of morality correctly, he doesn't distinguish between these two things.
I share your sense that existential angst is roughly equivalent to a sense of purposelessness. That said, a sense of purpose can come from a lot of places, not all of them philosophical. I know plenty of people who find a fulfilling sense of purpose in caring for their families, in performing their jobs, or similar things, without reference to more philosophical guiding principles be they theological or not. The happiest period of my life, for example, was between six months and a year after my stroke, when I'd recovered enough to not be profoundly depressed all the time but recovery was still my driving, fundamental, very concrete purpose.

I agree with what seems to be the standard viewpoint here: the laws of morality are not written on the fabric of the universe, but human behavior does follow certain trends, and by analyzing these trends we can extract some descriptive rules that could be called morals.

I would find such an analysis interesting, because it'd provide insight into how people work. Personally, though, I'm only interested in what is, and I don't care at all about what "ought to be". In that sense, I suppose I'm a moral nihilist. The LessWrong obsession with develop... (read more)

Aha! I think I was misreading your post, then; I assumed you were presenting truth-seeking as a reason why you wanted your friends to be atheists, as well as a reason why converting them would be moral. Sorry for assuming you didn't know your own motivations!

Heavens, no. I want my friends to be atheists for purely selfish reasons. It so happens that some of those selfish reasons involve things like "I want my friends to know what's true", but most of them are reasons like "I want this awkward piece of the relationship gone" and "It's a shame none of you believe in casual premarital sex, because I could really go for an orgy right now" and "If I have to hear you talk about how wrong gay marriage is ONE MORE TIME I do declare I shall explode."

In other words, I really do not trust my personal desires as an ethical system, because in a vacuum I'm a pretty unmitigated asshole.

It really depends on your own personal moral system (assuming ethical relativism). In order to answer your question, I would need to know what you consider moral. I'll attempt to infer your morals from your post, and then I'll try to answer your question accordingly.

It sounds from your post like you're torn between two alternatives, both of which you consider moral, but which are mutually exclusive. On one hand, it seems that you're morally devoted to the causes of atheism and truth-seeking; thus, you desire to convert others to this cause. But on the ... (read more)

If you deconvert your friends using Dark Arts-ish methods, but you don't teach them the virtues of truth-seeking, then atheism will become just another religion to them, handed down by new authority figures.

Exactly this. Let's do something better than just authority figures walking around, each one trying to convert people by Dark Arts. Try to find something that is above "my faith vs. your faith".

What I usually do is express that although I consider all religions elaborate fairy tales, in my opinion there is no big harm in believing anything,... (read more)

You're absolutely right that my primary motivation is simply that I WANT to do it. But ethical reasoning is about what is right in spite of my preferences, is it not? So the question of truth-versus-negative-consequences remains an important one. Your point about truth-seeking versus atheism as a religion is a very good one. I do generally think that converting atheists to truth-seekers is easier than converting Catholics to truth-seekers, but I had not considered the possibility that I might, rather than failing entirely (which is not unlikely), fail at the halfway point and end up with atheist zealots for friends, which would DEFINITELY create more problems than it would solve. That was a very thoughtful piece of advice. Thank you.

That sounds awesome and not manipulative at all. =)

I'm all for community-building activities, and I'd love to learn to dance, so I think this is an awesome idea. That said, something about the way this post and its comments are worded rubs me the wrong way entirely, and makes me want to avoid rationalist dance meetups and the LessWrong community in general. Since it seems that your goal is to recruit more rationalists, and I've been a long-time lurker on the outskirts of the rationalist community, I figured that it might be helpful if I explained my negative reaction. I've had similar negative reactions... (read more)

I wrote the following for the meetup booklet - would you say it avoids giving a manipulative impression?
Thanks - this is good feedback, and now that you've put it that way, I actually agree with your criticisms.

I find this post incredibly inspiring, but I feel like it does not directly address one of the main reasons that people do not find scientific explanations emotionally satisfying. When we personify the cosmos, then the universe seems a lot less hostile, and we feel much more connected to it. A primitive man looks up at the sun and thinks, "There is the sun-god, the sky-father, watching over my people." And he is happy because the universe, while capricious, is not apathetic to his life. But a modern man looks up at the sun and thinks "Th... (read more)

Option 1 []. Option 2 [].

Allow me to provide some insight, as an erstwhile "anti-reductionist" in the sense that Eliezer uses it here. (In many senses I am still an anti-reductionist.) I think that what is at work here is the conflict between intuition and analysis. However, before I remark on the relevance of these concepts to the experience of a rainbow, I would like to clarify what I mean by the terms "intuition" and "analysis".

The way I understand the mind, at the very deepest level of our consciousnesses we have our core processes; these are t... (read more)

This post reminds me of evidential markers in linguistics. Evidential markers are morphemes (e.g. prefixes or suffixes) which, when attached to the verb, describe how the speaker came to believe the fact that he is asserting. These can include things like direct knowledge ("I saw it with my own eyes"), hearsay ("Somebody told me so but I didn't see for myself"), and inference ("I concluded it from my other beliefs"). While evidential markers are less specific than what's described... (read more)

I'm not actually convinced that negative examples are really necessary for learning empirical clusters in thingspace, especially if you're just trying to teach someone a subcategory of a big class they're already familiar with. If someone is already familiar with the concept of "bird" and you want to inform them that there is such a thing as blue jays, it may suffice to show them just a few instances of a blue jay (assuming you don't care whether they learn the terminology). Source: this super cool paper about one-shot learning using hierarchical Bayesian models:

In fact you are correct: negative examples, as in examples outside the higher-order class, are not used in the teaching of sub-classes of a "higher-order noun". However, in discriminating between sub-classes, examples of other sub-classes serve as negatives for the sub-class you are currently teaching. Please see chapter 11 in TOI, "Hierarchical Class Programs", p. 123.

We do care that they learn the terminology. When I said they are not accessible through 'simple' verbal rules like "Listen: a bird is a small feathered flapping winged thing," I meant not that they are deaf or completely without language that you can expand, but that they are very young children (or older children from disadvantaged backgrounds, and I can give you a real horror story from my school demonstrating how little some of these families interact with their children; see end of comment) who would mostly not process and retain even such 'simple'-seeming rules. These are learners who are not yet familiar with the generalized concept of 'higher-order nouns', and who must be shown that although the verbal structure is the same for the statements "this truck is red" and "this truck is a vehicle" and the false statement "this truck is a dog", that does not mean you could find a truck that was not a vehicle in the same way that you could find a truck that's not red, or respond to the statement "this truck is a vehicle" with "no it's not! It's a truck."

A sequence for teaching the higher-order classes would start with examples of vehicles (sub-classes you are later going to teach) and differences that are as minimal as possible (avoiding boundaries that are unclear even in the language of knowledgeable adults). The wording of the first, modeled examples in the sequence could be like, "This is a vehicle / this is not a vehicle." The test example wording could be, "Tell me, vehicle or not-vehicle?" Once firm, you move on to the sequence for teaching the first sub-class. Model perhaps

Interesting point. I certainly agree that concepts/words are not actually atomic, or Platonic ideals, or anything like that. Concrete concepts, in particular, seem to correspond to "empirical clusters in thing-space", or probability distributions over classes of objects in the real world (though of course, even objects in the real world aren't really atomic).

Despite this, most people still view themselves as thinking symbolically, and many people believe themselves to be logical reasoning agents. After reading the first couple chapters of Jayn... (read more)

This article presents evidence that symbols exist in our minds independent of words.

Actually, it seems extremely unlikely that words would be required for symbolic thinking, considering that any animal advanced enough to base its actions on thought rather than pure reflex would need to have some kind of symbolic representation of the world.

Concepts exist without words, since words are just one part of a concept, and people with left temporal brain damage can lose access to a word without losing access to the concept. A "symbol" sometimes means something atomic, which concepts are not. We probably have no symbols in our brains, in this strict sense.

I did this a few years ago, but I'm not sure exactly how. I wanted to think less verbally because I worried that my thoughts were too constrained by words, which kept me at the very surface level of my consciousness and perhaps inhibited my access to deeper parts of my mind. I think that part of the transformation came about simply because I wanted it to (power of suggestion). It probably also helped that I started watching a lot more films and doing more math. I don't remember the exact process by which I transformed my thought-structure.

Something tha... (read more)

Those are valid concerns. Regarding the first, that's why I emphasized the ritual component of sex in a repressed society. I suspect that such a society would have very strict rituals for sex: it must occur only at specific times in specific locations, and in the presence of certain stimuli. Some examples of stimuli are candles or lacy lingerie or dim lighting. An example of a time is night. I've heard lots of comments to the effect that having sex in the middle of the day would be strange, and that sex is strictly a nighttime activity. This could be... (read more)

The nighttime/daytime issue seems to be more an issue of sex being taboo than of it being a fetish. And one thing that seems clear is that a lot of fetishes specifically revolve around breaking taboos.

Regarding the second issue, keep in mind that the degree of imprinting that occurs when someone is actively having sex is likely to be higher than the level of imprinting one would get from simply associating the fetish with sexually attractive images. It might not take more than a few times having sex with a specific fetish for it to imprint. This is further... (read more)

Slightly off-topic thought regarding penny jars and fetish formation:

I've heard that fetishes are more prevalent in cultures where sex is repressed. I always wondered why this would be the case (assuming that it is in fact true). One explanation is associations: if people are raised to think sex is dirty, or that sex is a necessary but base bodily function akin to using the bathroom, then they might fetishize urine or excrement. And if people are raised to think that sex is beastly and animalistic, they might fetishize things that are related to animals... (read more)

I'm concerned that this sort of explanation could just as easily explain the exact opposite and so doesn't really give us much information. For example, one could imagine that in societies with less sexual repression, people are more likely to have sex or engage in sexual activity in a variety of circumstances and so have more opportunities to imprint on non-standard objects or situations. Moreover, the more open a society is about sex the more likely people are to hear about some fetish and decide to try it out just to see what it is like, and then get imprinted to it.

Oops, you are right; I meant to type pasteurization! I also think that homogenizing milk is bad, but I believe that with lower probability. I'll edit my post, and thanks for the correction. =)

I would love to see an LW sequence on machine learning! I imagine that LW would have a lot of interesting things to say about the philosophical aspects of ML in addition to the practical aspects.

I'm not sure I'd be qualified to contribute much to such a sequence, since I am just an undergrad, but I did have an outline in mind for an intuitive introduction to MLE and EM. If people would find that interesting, I could certainly post it on LW once it was written up!

I'm fairly inexperienced in ML, so all the models I've worked with are simple enough that the... (read more)

Cyan's observation about mixtures of conjugate priors being conjugate kills the example I had in mind. I'll think for a bit and let you know if I think of any examples. If I haven't replied in a couple of weeks, remind me and I'll make sure to reply. Dirichlet processes aren't inherently hierarchical; they are just self-conjugate, so you can make the output of one the input to another. If you connect them up in a tree structure, you get a hierarchical Dirichlet process.
Andrew Gelman wrote a comment [] on someone else's paper that might prove to be a useful introduction to hierarchical models.
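Cyan's observation is also easy to check numerically: a mixture-of-Betas prior updates to another mixture of Betas, with the component weights rescaled by each component's marginal likelihood. A minimal sketch (the particular priors and data are made up for illustration):

```python
from math import lgamma, exp

def log_beta(a, b):
    """Log of the Beta function, via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def update_mixture(components, k, n):
    """Update a mixture-of-Beta prior on a Bernoulli rate after k successes in n trials.

    components is a list of (weight, a, b) triples; returns the posterior mixture.
    """
    updated = []
    for w, a, b in components:
        # Marginal likelihood of the data under this component (the binomial
        # coefficient is shared by all components, so it cancels on normalization).
        log_ml = log_beta(a + k, b + n - k) - log_beta(a, b)
        updated.append((w * exp(log_ml), a + k, b + n - k))
    z = sum(w for w, _, _ in updated)
    return [(w / z, a, b) for w, a, b in updated]

# Bimodal prior: the coin is biased toward tails OR toward heads.
prior = [(0.5, 2.0, 8.0), (0.5, 8.0, 2.0)]
posterior = update_mixture(prior, k=9, n=10)
# After 9 heads in 10 flips, almost all weight shifts to the heads-biased component.
```

The posterior is again a weighted list of Beta components, which is exactly what conjugacy means here.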

After rereading this, I agree with you that I emphasized the beta distribution too heavily. This wasn't my intention; I just picked it because it was the simplest conjugate prior I could find. In the next draft of this document, I'll make sure to stress that the beta distribution is just one of many great conjugate priors!

I am a bit confused about what the second point means. Do you mean that conjugate priors are insufficient for capturing the actual prior knowledge possessed?

I did not know that it was controversial to claim that alpha = beta = 1 expres... (read more)

The improper alpha = beta = 0 prior, sometimes known as Haldane's prior, is derived using an invariance argument in Jaynes's 1968 paper Prior Probabilities []. I actually don't trust that argument -- I find the critiques of it here [] compelling. Jeffreys priors are derived from a different invariance argument; Wikipedia has a pretty good article [] on the subject. I have mostly used the uniform prior myself in the past, although I think in the future I'll be using the Jeffreys prior as a default for the binomial likelihood. But the maximum entropy argument for the uniform prior is flawed: differential entropy [] is not an extension [] of discrete Shannon entropy to continuous distributions. The correct generalization [] is to relative entropy []. Since the measure is arbitrary, the maximum entropy argument is missing an essential component.
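All three priors mentioned above are Beta(a, b) densities for different choices of a and b, so their effect on a binomial estimate is easy to compare. A quick sketch (numbers are illustrative; the Haldane prior is improper, but its posterior is proper once at least one success and one failure have been observed):

```python
def posterior_mean(a, b, k, n):
    """Posterior mean of a Bernoulli rate under a Beta(a, b) prior,
    after observing k successes in n trials."""
    return (a + k) / (a + b + n)

k, n = 7, 10
for name, (a, b) in [("Haldane  Beta(0, 0)", (0.0, 0.0)),
                     ("Jeffreys Beta(1/2, 1/2)", (0.5, 0.5)),
                     ("uniform  Beta(1, 1)", (1.0, 1.0))]:
    print(f"{name}: {posterior_mean(a, b, k, n):.3f}")
# Haldane reproduces the MLE (0.700); the uniform prior gives Laplace's
# rule of succession (8/12 = 0.667); Jeffreys sits between them (0.682).
```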

Thank you very much for the compliments, and for the honest criticism!

I am still thinking about your comment, and I intend to write a detailed response to it after I have thought about your criticisms more completely. In the meantime, though, I wanted to say that the feedback is very much appreciated!

Perhaps I have a different system of morality than other people who have commented on this topic, but I personally judge actions as "moral" or "immoral" based on the intentions of the do-er rather than the consequences. (Assuming morality is relative and not an absolute component of the universe, this seems like a valid moral system.)

If the atheists who run this website are doing so to make money by exploiting the perceived stupidity of their customers, this seems immoral to me. On the other hand, if they are running the service becau... (read more)

In general, I do not think that when several different motivations impart desires to do several different things, and it happens to be physically impossible to fulfill all those desires because it is impossible to do all those things, it makes sense to talk about conflicting emotions. There is no conflict, as there would be if I were a perpetual motion machine or something violating the laws of physics. Each action I take is done under the influence of all of my emotions and motivations. This is even more true when different impulses give desires that are fulfilled by the exact same action. It does not fit with my view of human nature to say that a human, who has both the altruistic and petty motives available mentally and in close emotional proximity, does something only because of one desire and not the other.

I suppose I am assuming that the universe operates under some set of formal rules (though they might not be deterministic) independently of our ability to describe the universe using formal rules. I would also say that our inability to comprehend a given contradiction is related to the fact that we are inside the system. If God were outside the system he would not necessarily have this problem.

I disagree with your second point, though. Sure, 1 and 2 are labels for concepts that exist within a formal system we've developed, and sure, we can create an iso... (read more)

The universe with the 10-foot torus topology would certainly be a different universe governed by different laws. Still, one could conceive of a formal system of addition which would be exactly the same as our present one, only it would not apply to distances (in a straightforward way), the same way as we can conceive of mod 2 arithmetic.

As for the seeming contradiction: if you define "p being x feet away from q" as "there is a geodesic of length x connecting p and q", then obviously "I am ~40,000 km from Istanbul while I am in Istanbul" isn't a contradiction, although it may look like one at first sight. If you define distance as the length of the shortest geodesic, then it is a contradiction. Once again, this is a feature of language, not of the world.

I have no problem with the idea that God could switch to a different formal system governing the world, perhaps even one we cannot now describe formally and consider impossible, but that would only mean that certain formal systems, such as standard arithmetic, would have fewer practical applications, while others, maybe mod 2 arithmetic, or something entirely exotic, would have more. It wouldn't make "1+1=0" a theorem of standard arithmetic.

In the same way, we have rules which attach the adjectives "round" and "square" to objects, and these rules (implicitly) specify that these categories are exclusive. Perhaps, in the new world, there would be objects which might lead us to generalise the notions of "square" and "round" to have some overlap; but then we would not be speaking about "square" and "round" as we understand the terms today.
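The two definitions of distance in this comment come apart cleanly on a one-dimensional torus (a circle). A sketch, using an illustrative circumference of 40,000 km:

```python
C = 40_000  # circumference in km (illustrative)

def shortest_distance(x, y):
    """Length of the shortest geodesic between positions x and y on the circle."""
    d = abs(x - y) % C
    return min(d, C - d)

# Under "there exists a geodesic of length x", a point is 40,000 km from itself,
# since the closed loop around the circle is a geodesic of length C.
# Under "length of the shortest geodesic", the same point is 0 km from itself:
print(shortest_distance(0, 0))       # 0
print(shortest_distance(0, 39_000))  # 1000: the short way around, not 39,000
```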

I find this post very interesting, but I disagree with your examples about God. This comment is rather lengthy, and rather off-topic, so I apologize, but I wanted to bring it up because your post features these questions so prominently as examples.

Specifically, I don't think that the answer to the questions about God can be written off so easily as "no". It seems to me that the questions "Could God draw a square circle?" "Could God create a stone so large that even He could not lift it?" are asking about the bounds on omnipo... (read more)

I think this post is right on. I think we are IN this universe with a brain to match it, with 3-d, separation of time and space appropriate to non-relativistic speeds, and so on strongly coded in.

In terms of any powerful god, she either "lives" in a much larger universe than ours, which kicks the can down the road (is there an omnipotent god who created that universe?), or she essentially comprises the entire universe. What other way is there to have an entity which "knows" how every particle moves at every instant, other than having that entity be the universe? The most powerful, most accurate "simulation" of any system is the system itself. Obviously, the system itself can't get a wrong answer from a bad approximation somewhere; every other simulation can. I'm talking of simulations because omniscience presumably means the model god carries in her mind is complete and completely accurate. The model in god's mind is as big and complex and at least as fast as the system itself.

But be that as it may, when you play the linguistic trick of saying square circles are not real so god can still be omnipotent without being able to make them, you have, it seems to me, created a higher-level physics which constrains all the universes that might be created. But where can the higher-level physics come from? Is it just there? In that case our god is not the creator of the UNIVERSE universe, just of a very constrained universe that follows a bunch of atheistically determined rules. The can is kicked down the road.

So if you are interested in a god-the-creator which has had no cans kicked down the road, I don't see how you can rule out ANYTHING. Things we can't conceive of are not ruled out; certainly things we can sorta conceive of, like square circles and married bachelors, can't be ruled out. How could this god create a square circle? Of course, I don't know. But I'd imagine that when you saw the square circle you would know it, even if you couldn't reconcile it with ever
Perhaps I shall spell this out better, but the impossibility is linguistic. A cleaner example I mention is: Where "bachelor" means "man who is not married," God could not create a married bachelor. A married bachelor is not a thing. If you break down the definitions of circle and square, you'll see that a "square circle" is not a thing. A heavy stone that has no mass (or a heavy stone that is not heavy), or a circle that is not circular, or any other number of direct contradictions seem impossible, not as limits on power, but mostly as limits on language. That's the point I'm getting at.
Contradictions are a feature of a language (or of some more formal system used to describe the universe), not of the universe. What we call physical laws are regularities which allow us to compress the observed data a bit: e.g. instead of keeping a list of planet positions at each moment, it is enough to have the initial positions, the velocities, and a few equations of motion. Absence of contradictions is not such a law. (It is easy to imagine what violation of a particular physical law would look like, but try to imagine what a contradiction would look like. What would you observe if there were a lizard on your table and simultaneously no lizard on your table?)

This is exactly changing the language, and very uninteresting to theologians when, as you correctly note, mere mathematicians can do it. "1+1=2" is a string in some formal system which acquires its meaning by isomorphism with real-world situations. You can redefine your alphabet to exchange the symbols "2" and "4", which would make "1+1=4" true, but its meaning would be absolutely the same as the meaning of "1+1=2" before the redefinition. It has nothing to do with the fundamental laws of the universe, whatever they are.
I think for this to be meaningful I'd need to know what your working definition of "omnipotent" was.

I am aware that my definition of Occam's razor is not the "official" definition. However, it is the definition which I see used most often in discussions and arguments, which is why I chose it. The fact that this definition of Occam's razor is common supports my claim that humans consider it a good heuristic.

Forgive me for my ignorance, as I have not studied Kolmogorov complexity in detail. As you suggest, it seems that human understanding of a "simple description" is not in line with Kolmogorov complexity.

I think the intention of my... (read more)

Rather than this, I'm suggesting that natural language is not in line with complexity of the "minimum description length []" sort. Human understanding in general is pretty good at it, actually - it's good enough to intuit, with a little work, that gravity really is a simpler explanation than "intelligent falling," and that the world is simpler than a solipsism that just happens to replicate the world. Although humans may consider verbal complexity "a good heuristic," humans can still reason well about complexity even when the heuristic doesn't apply.
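One way to see the gap between verbal complexity and description-length complexity: compressed size is a crude, computable stand-in for minimum description length (Kolmogorov complexity itself is uncomputable). A sketch, with made-up strings:

```python
import random
import zlib

def description_length(s):
    """Crude proxy for description length: bytes in the zlib-compressed string."""
    return len(zlib.compress(s.encode()))

random.seed(0)  # deterministic "noise" for reproducibility
regular = "ab" * 500                                       # simple rule, long string
noisy = "".join(random.choice("ab") for _ in range(1000))  # same length, no short rule

# The two strings are equally long, but only one has a short description
# ("repeat 'ab' 500 times"); the noisy one resists compression.
print(description_length(regular) < description_length(noisy))  # True
```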

Perhaps I am missing something here, but I don't see why utilitarianism is necessarily superior to rule-based ethics. An obvious advantage of a rule-based moral system is the speed of computation. Situations like the trolley problem require extremely fast decision-making. Considering how many problems local optima cause in machine learning and optimization, I imagine that it would be difficult for an AI to assess every possible alternative and pick the one which maximized overall utility in time to make such a decision. Certainly, we as humans frequent... (read more)

Yes, rule-based systems might respond faster, and that is sometimes preferable. Let me back up. I categorize ethical systems into different levels of meta.

"Personal ethics" are the ethical system an individual agent follows. Efficiency, and the agent's limited knowledge, intelligence, and perspective, are big factors.

"Social ethics" are the ethics a society agrees on. AFAIK, all existing ethical theorizing supposes that these are the same, and that an agent's ethics and its society's ethics must be the same thing. This makes no sense; casual observation shows this is not the case. People have ethical codes, and they are seldom the same as the ethical codes that society tells them they should have. There are obvious evolutionary reasons for this. Social ethics and personal ethics are often at cross-purposes. Social ethics are inherently dishonest, because the most effective way of maximizing social utility is often to deceive people. We expect, for instance, that telling people there is a distinction between personal ethics and social ethics should be against every social ethics in existence. (I don't mean that social ethics are necessarily exploiting people. Even if you sincerely want the best outcome for people, and they have personal ethics such that you don't need to deceive them into cooperating, many will be too stupid or in too much of a hurry to get good results if given full knowledge of the values that the designers of the social ethics were trying to optimize. Evolution may be the designer.)

"Meta-ethics" is honest social ethics: trying to figure out what we should maximize, in a way that is not meant for public consumption; you're not going to write your conclusions on stone tablets and give them to the masses, who wouldn't understand them anyway. When Eliezer talks about designing Friendly AI, that's meta-ethics (I hope). And that's what I'm referring to here when I talk about encoding human values into an AI. Roughly, meta-ethics is "correct