All of Nubulous's Comments + Replies

Slight variant: Humour is a form of teaching, in which interesting errors are pointed out. It doesn't need to involve an outsider, and there's no particular class of error, other than that the participants should find the error important.
If the guy sitting behind you starts moaning and grunting, then if it's a mistake (e.g. he's watching porn on his screen and has forgotten he's not alone) it's funny; whereas if it's not a mistake, and there's something genuinely wrong with him, it isn't.
Humour as teaching may explain why a joke isn't funny twice - you can only learn a thing once. Evolutionarily, it may have started as some kind of warning that a person was making a dangerous mistake, and then became generalised.

Why would anyone choose the map rather than the territory as their foundation?

I couldn't agree more, which is why I was attempting to discourage people from doing so.

Why engage in science if you are not willing to accept the inferences that it makes about reality? Am I not going to believe in atoms because it doesn't match what I see with my eyes?

But the justification for any physical theory is precisely that it predicts what you see with your own eyes. Indeed, that's what a physical theory is - a means of predicting what you will experience. Atom...

Evidence implies observation. Observation implies conscious experience. So your evidence for a world independent of conscious experience turns out to be ... conscious experience. I expect you can see why that isn't going to work.

No, I can't. Conscious experience is our evidence for the existence of the real world. The hypothesis that the real world exists seems heavily favoured by Occam's razor. If there were no world out there, life would probably be a lot more like dreaming.
In what sense does this "not work"? All of modern technology was designed and constructed under the paradigm that there is a world independent of conscious experience - the competing framework has produced bupkis.

The only proposed explanation of consciousness I've seen on Less Wrong is "maybe if we arrange stuff in the right way, consciousness will happen". Even if true, it's not enough of an explanation to enable argument about it.


Dennett presents a resolutely functionalist description of experience, then tells us that nothing resembling qualia can be found within it, to the great surprise of no-one at all.

think that qualia are real things

To believe that the phenomenal world, the world you actually live in, is a fiction, while an invented "physical" world...

“To believe that the phenomenal world, the world you actually live in, is a fiction, while an invented "physical" world, for which no evidence exists, is the real world, is not merely wrong, it's an irrationality which makes a complete mockery of the goals of this website.”

This seems to be the root of the problem. How do you start to argue with this statement? Why would anyone choose the map rather than the territory as their foundation? Why engage in science if you are not willing to accept the inferences that it makes about reality? Am I not going to believe in atoms because it doesn't match what I see with my eyes? If there is no evidence of the physical world, then why don't you walk through walls? Do you have any explanations of illusions? Talk about making a mockery of rationality!

If we want to be rational then let's start with: consciousness is real and important but not yet explained by science; however, we assume (at least for now) that the explanation is possible in materialistic terms. We can make this assumption because science has been making steady progress in understanding brain function (starting a decade or so ago), and when science makes steady progress it usually ends up with an explanation in materialistic terms.
There is evidence that the "real" world exists, for most reasonable uses of the term "evidence".

When alleged rationalists experience an "irk", because someone has reminded them that their theories describe a world utterly unlike the one that actually exists, we call this "cognitive dissonance". When they vote it down we call it "denial".

Eliezer Yudkowsky (14y):
Downvoting a post on consciousness doesn't mean you think consciousness has been successfully explained. It means that you're not interested in seeing more posts about consciousness.
Perhaps it would be instructive to think for a moment about why these people, who probably experience the world just as well as you do, have come to accept proposed explanations of consciousness. It would also be nice if you'd engage these proposed explanations instead of saying that anyone who disagrees is in denial. Dennett clearly thinks a lot about why other people think that qualia are real things that must be explained. He also makes a point of engaging these intuitions and showing that they often fall apart under scrutiny, rather than assuming that they all must somehow be correct.

Since we can presumably generate the appropriate signals in the optic nerve from scratch if we choose, light and its wavelength have nothing whatsoever to do with color.

Downvoted for strange non sequitur. We could theoretically pipe in the appropriate electrical impulses to the part of your brain responsible for auditory processing, but that doesn't mean hearing has "nothing whatsoever" to do with sound.

This site is full of people interested in implementing intelligence (and even themselves) on a new substrate... but they're not going to be interested in the relationship between physics and thought?

Articles should be legible to the audience. You can't just throw in a position written in terms that require special knowledge not possessed by the readers. It may be interesting, but then the goal should be exposition, showing importance and encouraging study.

It's great when thought is considered mechanistically, in terms of physics. It's also instructive to build ontology around knowability. There is a path across levels of abstraction between physics and intuition, and arguably a shorter path between intuition and logic. But mixing precision of physics...

We're all interested in the 'relationship' between thought and reality, but I think it's unlikely that thought exists at the simple, fundamental level of reality that is studied by physicists.

Indeed. (I thought it would be a bit of a spoiler to be more specific)

I found this interesting pdf of a discussion involving Jaynes (and Dennett), and it makes clear what he believed: that the change was mostly cultural, and that uncontacted tribes might be bicameral, but there were none left. (I'm not sure this is true - anyone reading this have an anthropologist handy?)
Also contains a very odd fact (?) about children.

EDIT: Oops, didn't notice it was on Jaynes' own website. So presumably quite a lot more stuff there.

Are you talking about the bit about imaginary friends on page 5?
Just a nitpick: Jaynes is dead; the site you refer to is that of the Julian Jaynes Society, which promotes his ideas about bicameralism.

How does Jaynes explain the lack of this kind of thinking among peoples who have culture and genes unchanged in the last 3000 years ?

To quote the pdf you dug up:
Jaynes postulates, in passing, a genetic basis for bicamerality (which I assume you mean by "this kind of thinking"), for instance p.311: "...there was probably a strong genetic basis for this type of remaining bicamerality. It is, I think, the same genetic basis that remains with us as part of the etiology of schizophrenia". Does that help? To defend or critique Jaynes properly (well, any better) I'd have to reread him. I picked up his book out of curiosity a few years ago when I was going through Dennett's consciousness books; he cites Jaynes approvingly a few times. However, Dennett does not directly cite his bicameral theory, just his contention that language (more specifically "a capacity for self-exhortation") played a key role in the development of minds capable of formulating plans. The other hook was Stephenson's "Snow Crash", which features Jaynes' theory prominently.

It wasn't intended to be a refutation. The technical claims of the papers may be correct, they just aren't, as the linked article claims, about consciousness.

How does the above constitute refuting, as opposed to ignoring, the content behind the link?

If you want to integrate the phenomenal into your ontology, is there any reason you've stopped short of phenomenalism ?

EDIT: Not sarcasm - quite serious.

(Phenomenalism defined.) Phenomenalism (whether solipsistic or multi-person) doesn't explain where phenomena come from or why they have their specific forms. If you can form a causal model of how the thing which experiences appearances is induced to have those experiences, you may as well do so. From an ontological perspective, you could say it's phenomenalism which stops short of providing an explanation.

I came up with the following while pondering the various probability puzzles of recent weeks, and I found it clarified some of my confusion about the issues, so I thought I'd post it here to see if anyone else liked it:

Consider an experiment in which we toss a coin to choose whether a person is placed into a one-room hotel, or duplicated and placed into a two-room hotel. For each resulting instance of the person, we repeat the procedure. And so forth, repeatedly. The graph of this would be a tree in which the persons were edges and the hotels nodes. Ea...

Because if you agree that the correct way to measure the probability is as the occurrence ratio along the path, the degree of splitting is only significant to the extent that it affects the occurrence ratio, which in this case it doesn't. The coin toss chooses equiprobably which hotel comes next, then it's on to the next coin toss to equiprobably choose which hotel comes next, and so forth. So each path has on average equal numbers of each hotel, going forwards.

But you're not a hotel, you're an observer. Why does the number of hotels matter but not the number of observers? If the tire fire is replaced with an empty hotel, you still can't end up in it. It seems like your function for ending up in a future, based on the number of observers in that future, goes as follows: If there's zero, the prior likelihood gets multiplied by zero. If there's one, the prior likelihood gets multiplied by one. If there's more than one, the prior likelihood still only gets multiplied by one. This function seems more complicated than just multiplying the prior probability by the number of observers, which is what I do. My reasoning is, even on a going forward basis, if there's a line connecting me to a world with one future self, and no line connecting me to a world without a future self, there must be 14 lines connecting me to a future with 14 future selves. Is there some reason to prefer your going-forward interpretation over mine, despite the fact that mine is simpler and agrees with the going-backwards perspective?
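The two counting rules at issue here can be written out explicitly. A minimal sketch, using the tire-fire example (the world names and observer counts are my own labels for illustration):

```python
# Three equiprobable worlds Omega might build: a million-room hotel with
# a million observers, a one-room hotel with one observer, and a pile of
# flaming tires with none.
worlds = {"million-room": 1_000_000, "one-room": 1, "tire fire": 0}
prior = {w: 1 / 3 for w in worlds}

def normalise(weights):
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

# Rule 1 (the parent comment's function): multiply the prior by 0 when a
# world has no observers, and by 1 otherwise, however many there are.
rule1 = normalise({w: prior[w] * (1 if n > 0 else 0) for w, n in worlds.items()})

# Rule 2 (this comment's proposal): weight the prior by the observer count.
rule2 = normalise({w: prior[w] * n for w, n in worlds.items()})

print(rule1)  # million-room: 0.5,       one-room: 0.5,    tire fire: 0.0
print(rule2)  # million-room: ~0.999999, one-room: ~1e-06, tire fire: 0.0
```

Both rules zero out the observerless world; they differ only on whether a million observers count for more than one, which is exactly the point in dispute.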

When we speak of a subjective probability in a person-multiplying experiment such as this, we (or at least, I) mean "The outcome ratio experienced by a person who was randomly chosen from the resulting population of the experiment, then was used as the seed for an identical experiment, then was randomly chosen from the resulting population, then was used as the seed.... and so forth, ad infinitum".

I'm not confident that we can speak of having probabilities in problems which can't in theory be cast in this form.

In other words, the probabilit...

But again I must ask, on the going-forward basis, why is the number of people in each world irrelevant? I grant you that the WORLD splits into even thirds, but the people in it don't, they split 1000000 / 1 / 0. Where are you getting 1 / 1 / 0?

I don't know what I meant either. I remember it making perfect sense at the time, but that was after 35 hours without sleep, so.....

The answer to the second part is no, I would expect a 50:50 chance in that case.
In case you were thinking of this as a counterexample, I also expect a 50:50 chance in all the cases there from B onwards. The claim that the probabilities are unchanged by the coin toss is wrong, since the coin toss changes the number of participants, and we already accepted that the number of participants was a factor in the probability when we assigned the 99% probability in the first place.

So, if Omega picks a number from 1 to 3, and depending on the result makes:

A. a hotel with a million rooms
B. a hotel with one room
C. a pile of flaming tires

you'd say that a person has a 50% chance of finding themselves in situation A or B, but a 0% chance of being in C? Why does the number of people only matter when the number of people is zero? Doesn't that strike you as suspicious?

You're reading a little more into what I said than was actually there. I was just remarking on the change of dependence between the parts of the problem, without having thought through what the consequences would be.

Now that I have thought it through, I agree with the presumptuous philosopher in this case. However I don't agree with him about the size of the universe. The difference being that in the hotel case we want a subjective probability, whereas in the universe case we want an objective one. Subjectively, there's a very high probability of findin...

I don't understand what you mean by subjective and objective probabilities. Would you still agree with the philosopher in my problem if Omega flipped a coin (or looked at binary digit 5000 of pi) and then built the small hotel OR the big hotel?

The most obvious difference is that the original problem involved the smaller or the larger set of people whereas this one uses the smaller and the larger.

Ah, so the difference isn't that I used hotels instead of universes, it's that I used hotels instead of POSSIBLE hotels. In other words, your likelihood of being in a hotel depends on the number of "you"s in the hotel, but your likelihood of being in a possible hotel does not, is that what you're saying? Unless the number of "you"s is zero. Then it clearly does depend on the number. Isn't this just packing and unpacking?

I fail to see why that is the general case.

If you have two people to start with, and one when you've finished, without any further stipulation about which people they are, then you have lost a person somewhere. To come to a different conclusion would require an additional rule, which is why it's the general case.
That additional rule would have to specify that a duplicate doesn't count as a second person. But since that duplicate could subsequently go on to have a separate different life of its own, the grounds for denying it personhood seem quite weak...

If a person were running on an inefficiently designed computer, with transistors and wires much larger than they needed to be, it would be possible to peel away and discard (perhaps) half of the atoms in the computer without affecting its operation or the person. This would be much like ebborian reproduction, but merely a shedding of atoms.

In any sufficiently large information processing device, there are two or more sets of atoms (or whatever it's made of) processing the same information, such that they could operate independently of each other if they weren't spatially intertwined. Why are they one person when spatially intertwined, but two people when they are apart? That they 'could have' gone on independently is a counterfactual in the situation where they are both receiving the same inputs. You 'could' be peeled apart into two people, but both halves of your parts are currently still making up one person.

Personhood is in the pattern, not the atoms or memory or whatever. There's only another person when there is another sufficiently different pattern. A merge is equivalent to 'spatially or logically reintegrate, then shed atoms or memory allocation as desired'.

If you mean that a quantitative merge on a digital computer is generally impossible, you may be right. But the example I gave suggests that merging is death in the general case, and is presumably so even for identical merges, which can be done on a computer.

I fail to see why that is the general case. For that matter, I fail to see why losing some(many, most) of my atoms and having them be quickly replaced by atoms doing the exact same job should be viewed as me dying at all.

When you wake up, you will almost certainly have won (a trillionth of the prize). The subsequent destruction of winners (sort of - see below) reduces your probability of being the surviving winner back to one in a billion.

Merging N people into 1 is the destruction of N-1 people - the process may be symmetrical but each of the N can only contribute 1/N of themself to the outcome.

The idea of being (N-1)/N th killed may seem a little odd at first, but less so if you compare it to the case where half of one person's brain is merged with half of a different p...

This. How does Yudkowsky's careless statement "Just as computer programs or brains can split, they ought to be able to merge" not immediately light up as the weakest link of the entire post? If you think merging ought to work, then why not also think that quantum suicide ought to work?
Suppose that, instead of winning the lottery, you want your friend to win the lottery. (Or you want your random number generator to crack someone's encryption key, or you want a meteor to fall on your hated enemy, etc.) Then each of the trillion people would experience the full satisfaction from whatever random result happened.
In the case where the people are computer programs, none of that works.

The reason all these problems are so tricky is that they assume there's a "you" (or a "that guy") who has a view of both possible outcomes. But since there aren't the same number of people for both outcomes, it isn't possible to match up each person on one side with one on the other to make such a "you".
Compensating for this should be easy enough, and will make the people-counting parts of the problems explicit, rather than mysterious.

I suspect this is also why the doomsday argument fails. Since it's not possible to define a...

Isn't the quantum part of Quantum Russian Roulette a red herring, in that the only part it plays is to make copies of the money ? All the other parts of the thought-experiment work just as well in a single world where people-copiers exist.

To make the situations similar, suppose our life insurance company has been careless, and we get a payout for each copy that dies. Do you have someone press [COPY], then kill all but one of the copies before they wake ?

Doesn't "harm", to a consequentialist, consist of every circumstance in which things could be better, but aren't ? If a speck in the eye counts, then why not, for example, being insufficiently entertained ?

If you accept consequentialism, isn't it morally right to torture someone to death so long as enough people find it funny ?

I'm picking on this comment because it prompted this thought, but really, this is a pervasive problem: consequentialism is a gigantic family of theories, not just one. They are all still wrong, but for any single counterexample, such as "it's okay to torture people if lots of people would be thereby amused", there is generally at least one theory or subfamily of theories that have that counterexample covered.

Perhaps the problem here is that you're assuming that utility(probability, outcome) is the same as probability*utility(outcome). If you don't assume this, and calculate as if the utility of extra life decreased with the chance of getting it, the problem goes away, since no amount of life will drive the probability down below a certain point. This matches intuition better, for me at least.

EDIT: What's with the downvotes ?

In circumstances where the law of large numbers doesn't apply, the utility of a probability of an outcome cannot be calculated from jus...

I for one don't follow your math; those 2 figures do look the same to me. Could you give some examples of how they give different answers?
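One way the two figures can come apart, as a toy illustration (the functional form here is my own invention, purely for demonstration, not something from the thread):

```python
def separable_u(p, years):
    # The standard form: probability times the utility of the outcome,
    # so any probability, however small, can be offset by enough years.
    return p * years

def joint_u(p, years):
    # A toy utility(probability, outcome) that is NOT of the form
    # p * U(years): the value of extra life shrinks as the chance of
    # getting it does. Its supremum over years is p / (1 - p), so no
    # lifespan, however long, compensates for a small enough probability.
    return p * years / (1 + years * (1 - p))

# The two figures agree when the outcome is certain...
assert separable_u(1.0, 100) == joint_u(1.0, 100) == 100.0

# ...but differ as soon as it isn't:
print(separable_u(0.5, 200))  # 100.0
print(joint_u(0.5, 200))      # ~0.99: capped by p/(1-p) = 1

# So under the joint rule, a gamble at p = 0.9 can never beat a sure
# 100 years, no matter how many years are on offer:
assert all(joint_u(0.9, y) < joint_u(1.0, 100) for y in (1e3, 1e9, 1e300))
```

This matches the claim that "no amount of life will drive the probability down below a certain point": under the toy form, offers below a probability floor are refused regardless of the payoff.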

I think you may be confusing the microstate and macrostate here - the microstate may branch every-which-way, but the macrostate, i.e. the computer and its electronic state (or whatever it is the deciding system is), is very highly conserved across branching, and can be considered classically deterministic (the non-conserving paths appear as "thermodynamic" misbehaviour on the macro scale, and are hopefully rare). Since it is this macrostate which represents the decision process, impossible things don't become possible just because branching is occurring.

For the other perspective: small fluctuations are often rapidly magnified into macroscopic fluctuations. Computers sometimes contain elements designed to accelerate this process, in the form of entropy generators, which are used to seed random number generators.

I don't think anyone is talking about impossible things becoming possible. The topic is whether the paths considered in a decision can legitimately be regarded as possibilities - or whether they are actually impossible.

Whether you consider this as sabotage or not depends on what you think the goal of the site's authors was. It certainly wasn't to help find inconsistencies in people's thinking, given the obvious effort that went into constructing questions that had multiple conflicting interpretations.

there are plausible interpretations under which I would disagree and plausible interpretations under which I would agree.


Also, almost every question is so broken as to make answering it completely futile. So much so that it's hard to believe it was an accident.

I find it hard to believe that you could really think the most likely explanation of the flaws you perceive is that Aaronson and the students who implemented this purposely introduced flaws and are trying to sabotage the work. So why do you utter such nonsense? And did it not occur to you that disagreeing that children should have the vote could be resolved by being neutral on everybody having the vote? That is what I did, after realizing that there are plausible interpretations under which I would disagree and plausible interpretations under which I would agree.

"Mostly agree" is a higher degree of agreement than "Agree"?

To "Somewhat agree" that everyone should have the vote and "Disagree" that children should have the vote is inconsistent?

Obviously this is the work of the Skrull "Scott Aaronson", whose thinking is not so clear.

Also, almost every question is so broken as to make answering it completely futile. So much so that it's hard to believe it was an accident.

My metaphor lobes appear to be on fire.

Without objective measures of utility, what could it even mean to speak of someone's utility judgements as being biased or wrong ?

I don't know. What I was referring to was that people's estimates of their future utility of some course of action are not constant. And they often vary in such a way that one choice (dieting, exercising, saving...) appears rational when you are planning for it, and when you evaluate it in retrospect, but is unappealing at the time that you actually do it.

Warrigal gave a good recognition algorithm

Even though no bird, in the history of the world, has ever been recognised using it ?

When someone proposes a new algorithm, "this algorithm has never been used" doesn't sound to me like a valid critique. More substantively, Cuvier proposed similar outlandish-sounding algorithms tuned to recognizing animals by teeth and bone fragments, which have enjoyed widespread use ever since. A small anecdote: one of Cuvier's students once dressed in a devil's costume and entered his room at night to scare him. Cuvier opened his eyes, said "Horns? Hooves? You can't eat me, you're a herbivore" and went back to sleep.

Can you give a concrete example of someone screwing up due to hyperbolic accounting in a case where there's an objective measure of utility to compare the person's estimates against ?

There are no objective measures of utility. But just about everyone who has failed a diet or exercise schedule could be seen as failing because of hyperbolic discounting.

But if agent X will (deterministically) choose action a_1, then when he asks what would happen “if” he takes alternative action a­_2, he’s asking what would happen if something impossible happens.


would happen if something impossible happens.

But since this is the decision process that produces the "happens", both "happens" are the same "happens". In other words, it reduces to:

asking if something impossible happens.

Which is correct. Because the CSA deterministically chooses the best option, checking each action to s...

I've only just heard of PCT, so I don't know if this is familiar to everyone already, or whether it's what the PCT people had in mind all along and I'm just the last to find out, but it seems to me that PCT explains, if not the how, then at least the why of consciousness. If all actions arise from errors against a model, then the upper layers of human decision-making would consist of a simulated person living in a simulated world, which is indeed what we seem to be.

I had assumed that microscopic reversibility and a large set of measurements were all that was required. Could you explain where my assumption is wrong ?

Eliezer Yudkowsky (14y):
Quantum mechanics (in any interpretation, not just many-worlds) makes this impossible even in principle; the necessary information can't be retrieved, and may not even be present in any one quantum outcome. Even under classical mechanics, you need exact measurements of essentially the whole universe, including photons on their way to infinity, meaning that you need sensors and computers that are larger than and outside of the universe.
Yes, that's it! Thank you so much. It's definitely from that 50's pulp anthology, which I'm sure is packed away in a box somewhere. The 50's were great for science fiction when you consider the magnitude of the ideas they loved to deal with... often far more sophisticated and penetrating than the military SF of today or even the time travel or alien encounters of the 80s and 90s.

The thing I love about lesswrong is that you're never more than one step away from an epistemological landmine, and even a simple ordinary question like "can we raise the dead" ends up as "is a person the same person just because you have no way of knowing that they aren't the same person ?".

What's the current view on whether there's enough information available to reconstruct the dead ?

(by which I mean the unfrozen dead)

Get hold of their DNA, and you might be able to get somewhere fairly soon. E.g. see my "Celebrity cloning" video.
You mean without brute-forcing it by creating every conceivable person? (I'm sure I read something in Deutsch to the effect that infinite computing power might be available in certain universes...)
Eliezer Yudkowsky (14y):
Resurrecting the ancient dead requires that our models of physics be wrong in character, not just detail.
To actually retrieve the computation they embodied? That which made them, well, them? My personal guess is that, barring the possibility of "it turns out something like time travel is possible after all", and even then, the trickiness of somehow reaching back to before the machine was built, or some other improbable major physics revolution that would let us retrieve information from the past, well, sadly, I'd have to guess no. Of course, I'd be ecstatic if it turned out there was a way. I'm just not expecting it. :(
No consensus. Many people think we will eventually invent tech to relaunch cryonically frozen people, and many other people disagree, but pretty much no one sees any hope for reconstructing people whose brains have already decomposed.

It's a little worrying that the people trying to save us from the Robopocalypse don't have a website that can spot double-posting....

I can't think of a good explanation for anyone picking the $500

For a person who doesn't expect to get many more similar betting chances, the expectation value of the big win is unphysical.

Yes. That's probably one of the reasons. I also read somewhere that humans generally don't distinguish much between probabilities lower than 5%. That is, everything below 5% is treated as a low-probability event. Even I, with good mathematical training, guess I would prefer $100K at 100% to $1B at 1%. Although the second alternative has 100X the "expected" payoff, I don't "expect" myself to be lucky enough to get it. :) And although I'd definitely prefer $1M@15% to $500@100%, if you multiplied it by a thousand I think I'd take $500K@100% rather than $1B@15% (in this case, of course, Bill Gates would laugh at me... :) )
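The arithmetic being weighed here, as a quick sketch (figures as in the comment; the helper function is illustrative):

```python
def expected_value(prob_percent, amount):
    """Linear expectation of a one-shot gamble, in whole dollars."""
    return prob_percent * amount // 100

# $100K guaranteed vs $1B at 1%: the gamble has 100x the expectation,
# yet a one-shot player may reasonably decline it.
assert expected_value(1, 10**9) == 100 * expected_value(100, 100_000)

# $500 guaranteed vs $1M at 15%: expectations of $500 vs $150,000.
print(expected_value(100, 500), expected_value(15, 1_000_000))
# Scaling both by a thousand ($500K guaranteed vs $1B at 15%) leaves the
# expectation ratio untouched, but reverses the stated preference.
print(expected_value(100, 500_000), expected_value(15, 10**9))
```

Since the expectation ratio is scale-invariant, preferring $1M@15% but refusing $1B@15% can't be explained by expected dollars alone - which is the point about the big win being "unphysical" for a one-shot bettor.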