All of Gray_Area's Comments + Replies

For what it's worth, I find plenty to disagree with Eliezer about, on points of both style and substance, but on death I think he has it exactly right. Death is a really bad thing, and while humans have diverse psychological adaptations for dealing with death, the burden of proof seems to rest on people who do NOT want to make the really bad thing go away in the most expedient way possible.

This is an amusing empirical test for zombiehood -- do you agree with Daniel Dennett?

Only zombies will agree they don't have qualia? Not if their programming catches on and disguises itself better. Better for a zombie to hide as someone like Searle, who's constantly insisting that he has qualia (which is true) and that there is no possible scientific explanation for this ever (which is false).

"The idea that Bayesian decision theory being descriptive of the scientific process is very beautifully detailed in classics like Pearl's book, Causality, in a way that a blog or magazine article cannot so easily convey."

I wish people would stop bringing up this book to support arbitrary points, the way people used to bring up the Bible. There's barely any mention of decision theory in Causality, let alone an argument for Bayesian decision theory being descriptive of the entire scientific process (although Pearl clearly does talk about decisions being modeled as interventions).

"Would you care to try to apply that theory to Einstein's invention of General Relativity? PAC-learning theorems only work relative to a fixed model class about which we have no other information."

PAC-learning stuff is, if anything, far easier than general scientific induction. So should the latter require more samples, or fewer?

"Eliezer is almost certainly wrong about what a hyper-rational AI could determine from a limited set of observations."

Eliezer is being silly. People invented computational learning theory, which, among other things, gives the minimum number of samples needed to guarantee a given error rate.
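For the textbook setting the comment alludes to (a finite hypothesis class, realizable case), the sample-complexity bound can be computed directly. A minimal sketch, using the standard bound m ≥ (1/ε)(ln|H| + ln(1/δ)); the function name and the numbers in the example are mine:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Number of samples sufficient so that, with probability at least
    1 - delta, empirical risk minimization over a finite hypothesis class
    returns a hypothesis with true error at most epsilon (realizable case)."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# e.g. a million hypotheses, 1% error, 99% confidence:
print(pac_sample_bound(10**6, 0.01, 0.01))  # -> 1843
```

Note how weakly the bound depends on the size of the hypothesis class (logarithmically), which is part of why PAC-learning is "easier" than unrestricted induction: the model class is fixed in advance.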

Eliezer, why are you concerned with untestable questions?

Richard: Cox's theorem is an example of a particular kind of result in math, where you have some particular object in mind to represent something, you come up with very plausible, very general axioms that you want this representation to satisfy, and then you prove this object is unique in satisfying these. There are equivalent results for entropy in information theory. The problem with these results is that they are almost always based on hindsight, so a lot of the time you sneak in an axiom that only SEEMS plausible in hindsight. For instance, Cox's theorem assumes that plausibility is a real number. Why should it be a real number?

"The probability of two events equals the probability of the first event plus the probability of the second event."

Mutually exclusive events.

It is interesting that you insist that beliefs ought to be represented by classical probability. Given that we can construct multiple kinds of probability theory, on what grounds should we prefer one over the other to represent what 'belief' ought to be?

"the real reason for the paradox is that it is completely impossible to pick a random integer from all integers using a uniform distribution: if you pick a random integer, on average lower integers must have a greater probability of being picked"

Isn't there a simple algorithm which samples uniformly from a list without knowing its length? Keywords: 'reservoir sampling.'
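A minimal sketch of the algorithm the keywords point at (size-one reservoir): each element of a stream of unknown but finite length ends up selected with equal probability, in one pass. Note this doesn't actually contradict the quoted claim, which concerns the infinite set of all integers; reservoir sampling requires the stream to terminate.

```python
import random

def reservoir_sample(stream):
    """Return one element chosen uniformly at random from an iterable
    of unknown (but finite) length, in a single pass."""
    chosen = None
    for i, item in enumerate(stream, start=1):
        # Replace the current choice with probability 1/i; by induction,
        # every element seen so far remains chosen with probability 1/i.
        if random.randrange(i) == 0:
            chosen = item
    return chosen
```

The induction step is the whole trick: when item i arrives, it is kept with probability 1/i, and each earlier survivor keeps its slot with probability (i-1)/i · 1/(i-1) = 1/i.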

People don't maximize expectations. Expectation-maximizing organisms -- if they ever existed -- died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection.
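A toy illustration of the robustness point (my example, not the commenter's): a single outlier observation swings an expectation arbitrarily far, while a robust statistic like the median barely moves.

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

payoffs = [1.0, 1.1, 0.9, 1.0, 1.2]
corrupted = payoffs + [1000.0]   # one outlier observation

# The expectation swings wildly; the median barely moves.
print(mean(payoffs), mean(corrupted))      # ~1.04 vs ~167.5
print(median(payoffs), median(corrupted))  # 1.0 vs 1.05
```

An agent whose behavior is keyed to the mean payoff changes course drastically after the outlier; one keyed to the median does not.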

If people's behavior doesn't agree with the axiom system, the fault may not be with them, perhaps they know something the mathematician doesn't.

Finally, the 'money pump' argument... (read more)

I was really confused about what point EY made that went over my head, but I think I get it now. It totally changes the game to play it an infinite number of times rather than one go to win or lose. I made my choices based on one game, not a hybrid of the two played multiple times. If I play once, choosing 1a is just taking money that's already mine. If I play infinitely many times, 1b earns money faster because failures can be evened out.

Paul Gowder said:

"We can go even stronger than mathematical truths. How about the following statement?

~(P &~P)

I think it's safe to say that if anything is true, that statement (the flipping law of non-contradiction) is true."

Amusingly, this is one of the more controversial tautologies to bring up. This is because constructivist mathematicians reject this statement.

No, they reject P V ~P. They do not reject ~(P&~P). Only paraconsistent logicians do that. And paraconsistent logicians are silly.

"Sometimes I can feel the world trying to strip me of my sense of humor."

If you are trying to be funny, the customer is always right, I am afraid. The post wasn't productive, in my opinion, and I have no emotional stake in Christianity at all (not born, not raised, not currently).

Eliezer, where do your strong claims about the causal structure of scientific discourse come from?

"As long as you're wishing, wouldn't you rather have a genie whose prior probabilities correspond to reality as accurately as possible?"

Such a genie might already exist.

Every computer programmer, indeed anybody who uses computers extensively, has been surprised by computers. Despite being deterministic, a personal computer taken as a whole (hardware, operating system, software running on top of the operating system, network protocols creating the internet, etc.) is too large for a single mind to understand. We have partial theories of how computers work, but of course partial theories sometimes fail, and this produces surprise.

This is not a new development. I have only a partial theory of how my car works, but in th... (read more)

Materials science can estimate whether a given material will shatter under given conditions. Just because you do not know specific things about it doesn't make it a black box. Of course, that doesn't make the problems with complex systems disappear; it just exposes our ignorance. Which is not a new point here.

"It seems contradictory to previous experience that humans should develop a technology with "black box" functionality, i.e. whose effects could not be foreseen and accurately controlled by the end-user."

Eric, have you ever been a computer programmer? That technology becomes more and more like a black box is not only in line with previous experience, but I dare say is a trend as technological complexity increases.

On further reflection, the wish as expressed by Nick Tarleton above sounds dangerous, because all human morality may either be inconsistent in some sense, or 'naive' (failing to account for important aspects of reality we aren't aware of yet). Human morality changes as our technology and understanding change, sometimes significantly. There is no reason to believe this trend will stop. I am afraid (genuine fear, not a figure of speech) that the quest to properly formalize and generalize human morality for use by a 'friendly AI' is akin to properly formalizing and generalizing Ptolemaic astronomy.

Sounds like we need to formalize human morality first, otherwise you aren't guaranteed consistency. Of course formalizing human morality seems like a hopeless project. Maybe we can ask an AI for help!

Formalising human morality is easy!

1. Determine a formalised morality system close enough to the currently observed human morality system that humans will be able to learn and accept it.
2. Eliminate all human culture (easier than eliminating only parts of it).
3. Raise humans with this morality system (which, by the way, includes systems for reducing value drift, so the process doesn't have to be repeated too often).
4. When value drift occurs, goto step 2.

Well, shooting randomly is perhaps a bad idea, but I think the best we can do is shoot systematically, which is hardly better (it takes exponentially many bullets). So you either have to be lucky, or hope the target isn't very far, so you don't need a wide cone to take pot shots at, or hope P=NP.

quadratically many, actually. EDIT: well, in the case of actual shooting at least.

billswift said: "Prove it."

I am just saying that 'being unpredictable' isn't the same as free will, which I think is pretty intuitive (most complex systems are unpredictable, but presumably very few people will grant them all free will). As for the relationship between randomness and free will, that's clearly a large discussion with a large literature, but again it's not clear what the relationship is, and there is room for a lot of strange explanations. For example, some panpsychists might argue that 'free will' is the primitive notion, and randomness is just an effect, not the other way around.

Tom McGabe: "Evolution sure as heck never designed people to make condoms and birth control pills, so why can't a computer do things we never designed it to do?"

That's merely unpredictability/non-determinism, which is not necessarily the same as free will.

Stefan Pernar said: "I argue that morality can be universally defined."

As Eliezer points out, evolution is blind, and so 'fitness' can have as a side-effect what we would intuitively consider unimaginable moral horrors (much worse than parasitic wasps and cats playing with their food). I think if you want to define 'the Good' in the way you do, you need to either explain how such horrors are to be avoided, or educate the common intuition.

Stephen: the altruist can ask the Genie the same thing as the selfish person. In some sense, though, I think these sorts of wishes are 'cheating,' because you are shifting the computational/formalization burden from the wisher to the wishee. (Sorry for the thread derail.)

"My definition of an intelligent person is slowly becoming 'someone who agrees with Eliezer', so that's all right."

That's not in the spirit of this blog. Status is the enemy; only facts are important.

Scott said: "25MB is enough for pretty much anything!"

Have people tried to measure the complexity of the 'interpreter' for the 25MB of 'tape' of DNA? Replication machinery is pretty complicated, possibly much more so than any genome.

Eliezer, are you familiar with Russell and Wefald's book "Do the Right Thing"?

It's fairly old (1991), but it's a good example of how people in AI view limited rationality.

This reminds me of teaching. I think good teachers understand short inferential distances at least intuitively if not explicitly. The 'shortness' of inference is why good teaching must be interactive.

I think Vygotsky's expression "zone of proximal development" means "one inferential step away", so in theory professional teachers should understand this. I prefer to imagine knowledge like a "tech tree" in a computer game. When teaching one student, it is possible to detect their knowledge base and use their preferred vocabulary. I remember explaining some programming topics to a manager: source code is like a job specification; functions are employees; data are processed materials; exceptions are emergency plans. The problem is that when teaching a whole class, everyone's knowledge base is very different. In theory it shouldn't be so, because they all supposedly learned the same things in recent years, but in reality there are huge differences -- so the teacher basically has to choose a subset of the class as the target audience. Writing a textbook is even more difficult, since there is no interaction at all.

Pascal's wager type arguments fail due to their symmetry (which is preserved in finite cases).

What circles do you run in, Eliezer? I meet a fair number of people who work in AI (you can say I "work in AI" myself), and so far I can't think of a single person who was sure of a way to build general intelligence. Is this attitude you observe a common one among people who aren't actually doing AI research, but who think about AI?

Apparently what works fairly well in Go is to evaluate positions by 'randomly' running lots of games to completion (in other words, you evaluate a position as 'good' if you win lots of random games started from that position). Random sampling of the future can work in some domains. I wonder if this method is applicable to answering specific questions about the future (though naturally I don't think science fiction novels are a good sampling method).

We'd have to be able to randomly run reality to completion several times.
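The playout idea from the Go comment can be sketched on a toy game (the "21" subtraction game, my example, not actual Go): estimate a position's value as the fraction of uniformly random playouts won by the player to move.

```python
import random

def random_playout(pile, to_move):
    """Play the subtraction game (remove 1-3 items; whoever takes the last
    item wins) with uniformly random moves; return the winner (0 or 1)."""
    player = to_move
    while pile > 0:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return player
        player = 1 - player
    return 1 - player  # pile was already empty: the previous player had won

def monte_carlo_value(pile, to_move, playouts=5000):
    """Estimated probability that `to_move` wins from this position
    when both sides play randomly."""
    wins = sum(random_playout(pile, to_move) == to_move for _ in range(playouts))
    return wins / playouts
```

From pile = 4 the estimate comes out near 1/3, correctly flagging it as a poor position (under perfect play, multiples of 4 are guaranteed losses for the player to move), even though no game-specific knowledge went into the evaluator.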

Watching myself trying to write (or speak), I am coming to realize what a horrendous hack the language processes of the brain are. It is sobering to contemplate what sorts of noise and bias this introduces to our attempts to think and communicate.

I think a discussion of what people mean exactly when they invoke Occam's Razor would be great, though it's probably a large enough topic to deserve its own thread.

The notion of hypothesis parsimony is, I think, a very subtle one. For example, Nick Tarleton above claimed that 'causal closure' is 'the most parsimonious hypothesis.' At some other point, Eliezer claimed the many-worlds interpretation of quantum mechanics is the most parsimonious. This isn't obvious! How is parsimony measured? Would some version of Chalmers' dualism really be less parsimonious? How will we agree on a procedure to compare 'hypothesis size'? How much should we value 'God' vs 'the anthropic landscape' favored at Stanford?

"I'm aware that physical outputs are totally determined by physical inputs."

Even this is far from a settled matter, since I think this implies both determinism and causal closure.

I don't really understand what Eliezer is arguing against. Clearly he understands the value of mathematics, and clearly he understands the difference between induction and deduction. He seems to be arguing that deduction is a kind of induction, but that doesn't make much sense to me.

Nick: you can construct a model where there is a notion of 'natural number' and a notion of 'plus' except this plus happens to act 'oddly' when applied to 2 and 2. I don't think this model would be particularly interesting, but it could be made.

"Causality" by Judea Pearl is an excellent formal treatment of the subject central to empirical science.

Perhaps 'a priori' and 'a posteriori' are too loaded with historic context. Eliezer seems to associate a priori with dualism, an association which I don't think is necessary. The important distinction is the process by which you arrive at claims. Scientists use two such processes: induction and deduction.

Deduction is reasoning from premises using 'agreed upon' rules of inference such as modus ponens. We call (conditional) claims which are arrived at via deduction 'a priori.'

Induction is updating beliefs from evidence using rules of probability (Bayes th... (read more)

"It appears to me that "a priori" is a semantic stopsign; its only visible meaning is "Don't ask!""

No, a priori reasoning is what mathematicians do for a living. Despite operating entirely by means of semantic stopsigns, mathematics seems nevertheless to enjoy rude health.

Eliezer: I am using the standard definition of 'a priori' due to Kant. Given your responses, I conclude that either you don't believe a priori claims exist (in other words you don't believe deduction is a valid form of reasoning), or you mean by arithmetic statements "2+2=4" something other than what most mathematicians mean by them.

Eliezer: "Gray Area, if number theory isn't in the physical universe, how does my physical brain become entangled with it?"

I am not making claims about other universes. In particular I am not asserting platonic idealism is true. All I am saying is "2+2=4" is an a priori claim and you don't use rules for incorporating evidence for such claims, as you seemed to imply in your original post.

A priori reasoning does take place inside the brain, and neuroscientists do use a posteriori reasoning to associate physical events in the brain with a priori reasoning. Despite this, a priori claims exist and have their own rules for establishing truth.

Eliezer: When you are experimenting with apples and earplugs you are indeed doing empirical science, but the claim you are trying to verify isn't "2+2=4" but "counting of physical things corresponds to counting with natural numbers." The latter is, indeed, an empirical statement. The former is a statement about number theory, the truth of which is verified with respect to some model (per Tarski's definition).

The core issue is whether statements in number theory, and more generally mathematical statements, are independent of physical reality or entailed by our physical laws. (This question isn't as obvious as it might seem; I remember reading a paper claiming to construct a consistent set of physical laws in which 2 + 2 has no definite answer.) At any rate, if the former is true, 2+2=4 is outside the province of empirical science, and applying empirical reasoning to evaluate its 'truth' is wrong.

I don't think this is at all the core issue.

Eliezer's original post stated that beliefs need to come from mind-reality entangling processes.

If math is a part of "reality", then Eliezer's point stands and empirical reasoning makes perfect sense.

If math is not a part of "reality", then we would expect it to influence nothing at all, including our beliefs. Or even suppose that knowledge came from somewhere and could influence belief but still did not otherwise correlate with reality: Then it would be irrelevant. This, of course, is not the... (read more)

There are some points of view that do sometimes require mathematical statements to depend on reality (e.g., constructivism, the actual-versus-potential-infinity debate, etc.). Sometimes it is intuitive to require mathematics to behave this way: 'natural' numbers are called that for a reason, and they had better behave like the apples or I'm postulating a change in nomenclature. P.S. It seems to me the OP's wording wasn't precise enough. I can very well imagine a situation in which some basic addition would yield non-obvious results (like addition in modulo-N number space).
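A concrete instance of the modular-arithmetic point: in the integers mod 3, 'plus' is a perfectly well-defined operation, yet 2 + 2 = 1.

```python
def add_mod(a, b, n):
    """Addition in the integers modulo n."""
    return (a + b) % n

print(add_mod(2, 2, 3))  # 1: in Z/3Z, 2 + 2 = 1
print(add_mod(2, 2, 5))  # 4: with a large enough modulus the familiar answer returns
```

Nothing here is empirically strange; the surprise only appears if you assumed this 'plus' had to track the counting of apples.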

Why not just say e is evidence for X if P(X) is not equal to P(X|e)?

Incidentally, I don't really see the difference between probabilistic dependence (as above) and entanglement. Entanglement is dependence in the quantum setting.

That definition does not always coincide with what is described in the article; something can be evidence even if P(X|e) = P(X).

Imagine that two cards from a shuffled deck are placed face-down on a table, one on the left and one on the right. Omega has promised to put a monument on the moon iff they are the same color. Omega looks at the left card, and then the right, and then disappears in a puff of smoke. What he does when he's out of sight is entangled with the identity of the card on the right. Change the card to one of a different color and, all else being equal, Omega's action changes.

But if you flip over the card on the right and see that it's red, that doesn't change the degree to which you expect to see the monument when you look through your telescope:

P(monument | right card is red) = P(monument) = 25/51

It does change your conditional beliefs, though, such as what the world would be like if the left card turned out to also be red:

P(monument | left is red & right is red) > P(monument | left is red)
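The 25/51 figures can be checked by brute-force enumeration over ordered pairs of distinct cards (a sketch; only color matters, so the deck is modeled as 26 distinguishable red cards and 26 black ones):

```python
from fractions import Fraction
from itertools import permutations

# 26 red cards and 26 black cards, distinguishable within each color.
deck = [("red", i) for i in range(26)] + [("black", i) for i in range(26)]
pairs = list(permutations(deck, 2))  # (left, right) ordered pairs

def prob(event, given=lambda p: True):
    """Exact conditional probability over the uniform pair distribution."""
    space = [p for p in pairs if given(p)]
    return Fraction(sum(1 for p in space if event(p)), len(space))

monument = lambda p: p[0][0] == p[1][0]   # cards share a color
right_red = lambda p: p[1][0] == "red"
left_red = lambda p: p[0][0] == "red"

print(prob(monument))                   # 25/51
print(prob(monument, given=right_red))  # 25/51 -- unchanged, as claimed
print(prob(monument, given=lambda p: left_red(p) and right_red(p)))  # 1
```

Conditioning on the right card alone leaves the monument probability untouched, while conditioning on both cards changes it, exactly the asymmetry the comment describes.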
Ronny Fernandez replied:
"This should not be confused with the technical sense of "entanglement" used in physics -- here I'm just talking about "entanglement" in the sense of two things that end up in correlated states because of the links of cause and effect between them."

That's literally in the third paragraph.

I think you mean: if P(x) < P(x|e), then e is evidence for x. That is a good definition of evidence, but it doesn't function on the same level as Yudkowsky's above. Yudkowsky is explaining not just what function evidence has in truth-finding; he is also explaining how evidence is built into a physical system, e.g., a camera, a human, or another entanglement device. The Bayesian definition of evidence you gave tells us what evidence is, but it doesn't tell us how evidence works, which Yudkowsky's does.
Quantum wave amplitudes behave in some ways like probabilities and in other ways unlike probabilities. Because of this, some concepts have analogues, while others don't. But no concepts are exactly equivalent. For example, evidence isn't integrally linked to complex numbers, while entanglement is.
Trivially, because P(X|e) could be less than P(X).

Eliezer said: "These are blog posts, I've got to write them quickly to pump out one a day."

I am curious what motivated this goal.

In computer science there is a saying: 'You don't understand something until you can program it.' This may be because programming is not forgiving of the kind of errors Eliezer is talking about. Interestingly, programmers often use the term 'magic' (or 'automagically') in precisely the same way Eliezer and his colleague did.

Some other vague concepts people disagree on: 'cause,' 'intelligence,' 'mental state,' and so on.

I am a little suspicious of projects to 'exorcise' vague concepts from scientific discourse. I think scientists are engaged in a healthy enough enterprise that eventually they will be able to sort out the uselessly vague concepts from the 'vague because they haven't been adequately understood and defined yet'.

I'll try a silly info-theoretic description of emergence:

Let K(.) be Kolmogorov complexity. Assume you have a system M consisting of and fully determined by n small identical parts C. Then M is 'emergent' if M can be well approximated by an object M' such that K(M') << n*K(C).

The particulars of the definition aren't even important. What's important is this is (or can be) a mathematical, rather than a scientific definition, something like the definition of derivative. Mathematical concepts seem more about description, representation, and modeling ... (read more)
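Kolmogorov complexity isn't computable, but an off-the-shelf compressor gives a crude upper-bound proxy, which is enough to illustrate the flavor of the definition above: a whole built from n identical parts can have a description far shorter than n times the description of one part. A sketch (zlib standing in for K, arbitrary byte strings standing in for C and M):

```python
import zlib

part = b"ATCGGCTA" * 4    # one 'component' C, 32 bytes
system = part * 1000      # M: n = 1000 identical parts

k_part = len(zlib.compress(part))      # proxy for K(C)
k_system = len(zlib.compress(system))  # proxy for K(M')

# The compressed whole is vastly smaller than n times the compressed part,
# i.e. K(M') << n * K(C) in the comment's notation.
print(k_part, k_system, 1000 * k_part)
```

This only demonstrates the easy direction (redundant wholes compress well); whether any such definition captures the intuitive notion of emergence is, as the comment says, a separate question.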

Robin Hanson said: "Actually, Pearl's algorithm only works for a tree of cause/effects. For non-trees it is provably hard, and it remains an open question how best to update. I actually need a good non-tree method without predictable errors for combinatorial market scoring rules."

To be even more precise, Pearl's belief propagation algorithm works on so-called 'polytrees': directed acyclic graphs without undirected cycles (i.e., cycles which show up if you drop directionality). The state of the art for exact inference in bay... (read more)

Eliezer said: "I encounter people who are quite willing to entertain the notion of dumber-than-human Artificial Intelligence, or even mildly smarter-than-human Artificial Intelligence. Introduce the notion of strongly superhuman Artificial Intelligence, and they'll suddenly decide it's "pseudoscience"."

It may be that the notion of strongly superhuman AI runs into people's preconceptions they aren't willing to give up (possibly of religious origins). But I wonder if the 'Singularians' aren't suffering from a bias of their own. Our cu... (read more)