All of Cyan's Comments + Replies

This is a field in which the discoverer of the theorem that rational agents cannot disagree was given the highest possible honours...

I can't say I disagree.

Of course experimental design is very important in general. But VAuroch and I agree that when two designs give rise to the same likelihood function, the information that comes in from the data is equivalent. We disagree about the weight to give to the information that comes in from what the choice of experimental design tells us about the experimenter's prior state of knowledge.
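The "same likelihood function" point can be illustrated numerically (a toy sketch invented for illustration, not part of the original thread): observing 3 successes in 12 trials yields likelihood functions that differ only by a constant factor, whether the design was "stop after 12 trials" (binomial) or "stop at the 3rd success" (negative binomial).

```python
import math

def binom_lik(p, n=12, k=3):
    # fixed-sample design: stop after n trials, observe k successes
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def negbinom_lik(p, r=3, n=12):
    # sequential design: stop at the r-th success, which fell on trial n
    return math.comb(n - 1, r - 1) * p**r * (1 - p)**(n - r)

# the ratio is constant in p, so the two designs carry the same
# information about p through the data
ratios = [binom_lik(p) / negbinom_lik(p) for p in (0.1, 0.3, 0.5, 0.7)]
```

Since the ratio does not depend on p, any inference that uses only the likelihood function (Bayesian updating in particular) gives the same answer under either stopping rule.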

you're ignoring critical information

No, in practical terms it's negligible. There's a reason that double-blind trials are the gold standard -- it's because doctors are as prone to cognitive biases as anyone else.

Let me put it this way: recently a pair of doctors looked at the available evidence and concluded (foolishly!) that putting fecal bacteria in the brains of brain cancer patients was such a promising experimental treatment that they did an end-run around the ethics review process -- and after leaving that job under a cloud, one of them was still ... (read more)

Double-blind trials aren't the gold standard, they're the best available standard. They still fail to replicate far too often, because they don't remove bias (and I'm not just referring to publication bias). Which is why, when considering how to interpret a study, you look at the history of what scientific positions the experimenter has supported in the past, and then update away from that to compensate for bias which you have good reason to think will show up in their data. In the example, past results suggest that, even if the trial was double-blind, someone who is committed to achieving a good result for the treatment will get more favorable data than some other experimenter with no involvement. And that's on top of the trivial fact that someone with an interest in getting a successful trial is more likely to use a directionally-slanted stopping rule if they have doubts about the efficacy than if they are confident it will work, which is not explicitly relevant in Eliezer's example.
Hold on -- aren't you saying the choice of experimental rule is VERY important (i.e., double-blind vs. not double-blind, etc.)? If so you are agreeing with VAuroch. You have to include the details of the experiment somewhere. The data does not speak for itself.

You're going to have a hard time convincing me that... vectors are a necessary precursor for regression analysis...

So you're fitting a straight line. Parameter estimates don't require linear algebra (that is, vectors and matrices). Super. But the immediate next step in any worthwhile analysis of data is calculating a confidence set (or credible set, if you're a Bayesian) for the parameter estimates; good luck teaching that if your students don't know basic linear algebra. In fact, all of regression analysis, from the most basic least squares estimator ... (read more)
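To make the point concrete, here is a minimal sketch (invented for illustration; the synthetic data and NumPy usage are my own, not from the original comment) of why interval estimation pulls in linear algebra: the covariance of the least-squares estimate is s²(XᵀX)⁻¹, which is a matrix computation through and through.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
X = np.column_stack([np.ones_like(x), x])      # design matrix
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, size=x.size)

# parameter estimates: solve the normal equations (X'X) b = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

resid = y - X @ beta_hat
s2 = resid @ resid / (X.shape[0] - X.shape[1])  # error-variance estimate
cov = s2 * np.linalg.inv(X.T @ X)               # needs a matrix inverse
se = np.sqrt(np.diag(cov))                      # standard errors
ci = [(b - 1.96 * s, b + 1.96 * s) for b, s in zip(beta_hat, se)]
```

Even this simplest case -- an approximate 95% interval for an intercept and slope -- can't be stated, let alone taught, without the design matrix, its transpose, and an inverse.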

Consciousness is the most recent module, and that does mean [that drawing causal arrows from consciousness to other modules of human mind design is ruled out, evolutionarily speaking.]

The causes of the fixation of a genotype in a population are distinct from the causal structures of the resulting phenotype instantiated in actual organisms.

Sure, I agree with all of that. I was just trying to get at the root of why "nobody asked [you] to take either vow".

The fact that I haven't taken a literal vow is true, but the meaning of what I was saying goes beyond that point. The root is that nobody asked me in a metaphorical way to take a vow either. Eliezer asked for curiosity instead of a solemn vow in the talk about rationalist virtues. There are reasons why that's the case.

I also hadn't heard anybody speak before about taking those kinds of vows to oneself.

It's not literal. It's an attempt at poetic language, like The Twelve Virtues of Rationality.

I think "The Twelve Virtues of Rationality" actually makes an argument that those things are virtues. Its start is also quite fitting: "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth." It argues against the frame of vows. Withdrawing into mysticism where anything goes is bad. Obfuscating is bad. It's quite easy to say something that produces rationalist applause lights. Critical thinking, and actually thinking through the implications of using the frame of a vow, is harder. Getting less wrong about what one happens to think is rational is hard. Mystic writing that's too vague to be questioned doesn't really have a place here.

I don't disagree with this. A lot of the kind of math Scott lacks is just rather complicated bookkeeping.

(Apropos of nothing, the word "bookkeeping" has the unusual property of containing three consecutive sets of doubled letters: oo, kk, ee.)

I have the sort of math skills that Scott claims to lack. I lack his skill at writing, and I stand in awe (and envy) at how far Scott's variety of intelligence takes him down the path of rationality. I currently believe that the sort of reasoning he does (which does require careful thinking) does not cluster with mathy things in intelligence-space.

Look at his latest post: "hey wait a second, there is bias by censoring!" The "hard/conceptual part" is structuring the problem in the right way to notice something is wrong, the "bookkeeping" part is e.g. Kaplan-Meier / censoring-adjustment-via-truncation.

Scott's technique for shredding papers' conclusions seems to me to consist mostly of finding alternative stories that account for the data and that the authors have overlooked or downplayed. That's not really a math thing, and it plays right to his strengths.

Causal stories in particular.

I actually disagree that having a good intuitive grasp of "stories" of this type is not a math thing, or a part of the descriptive statistics magisterium (unless you think graphical models are descriptive statistics). "Oh but maybe there is confounder X" quickly becomes a maze of twisty passages where it is easy to get lost.

"Math things" is thinking carefully.

I think equating lots of derivation mistakes or whatever with poor math ability is: (a) toxic and (b) wrong. I think the innate ability... (read more)

Maybe for the bit about signalling in the last paragraph...? Just guessing here; perhaps Kawoomba will fill us in.

I like it when I can just point folks to something I've already written.

The upshot is that there are two things going on here that interact to produce the shattering phenomenon. First, the notion of closeness permits some very pathological models to be considered close to sensible models. Second, the optimization to find the worst-case model close to the assumed model is done in a post-data way, not in prior expectation. So what you get is this: for any possible observed data and any model, there is a model "close" to the assumed one that predict... (read more)

Thanks for your link!

It's a rather confusing way of referring to a "biased point of view". Saying that "Person A has privilege" wrt. some issue is a claim that A's overall observations and experiences are unrepresentative, and so she should rely on others' experiences as much as on her own.

That's not quite correct; I think it's best to start with the concept of systematic oppression. Suppose for the sake of argument that some group of people is systematically oppressed, that is, on account of their group identity, the system in which they find themselves... (read more)

As far as internet distractions go, lesswrong is hardly the worst offender. Although more than 10 minutes a day on LW is probably too much.

I'm a SSC fan and highly sympathetic to SJ goals and ideals. One of the core LW meetup members in my city can't stand to read SSC on account of what he perceives to be constant bashing of SJ. (I've already checked and verified that his perception of the proportion of SJ bashing in SSC posts is a massive overestimate, probably caused by selection bias.) As a specific example of verbiage that he considers typical of SSC he cited:

And the people who talk about “Nice Guys” – and the people who enable them, praise them, and link to them – are blurring the alre

... (read more)

Embarrassingly, I didn't have the "who feeds Paris" realization until last year -- well after I thought I had achieved a correct understanding of and appreciation for basic microeconomic thought.

Nice choice of username. :-)

Same special-snowflake level credible limits, but for different reasons. Swimmer963 has an innate drive to seek out and destroy (whatever she judges to be) her personal inadequacies. She wasn't very strategic about it in teenager-hood, but now she has the tools to wield it like a scalpel in the hands of a skilled surgeon. Since she seems to have decided that a standard NPC job is not for her, I predict she'll become a PC shortly.

You're already a PC; your strengths are a refusal to tolerate mediocrity in the long-term (or let us say, in the "indefinite" term, in multiple senses) and your vision for controlling and eradicating disease.

FWIW, in my estimation your special-snowflake-nature is somewhere between "more than slightly, less than somewhat" and "potential world-beater". Those are wide limits, but they exclude zero.

Ooh ooh, do mine!

Hikikomori no more? If so (as seems likely what with the girlfriend and all), it gladdens me to hear it.

It's a little bit complicated; I'm a night owl and my girlfriend has insomnia, so we spend a lot of evenings at home together. My sleep schedule is seriously out of sync with the rest of the world right now. A typical day might consist of getting out of bed around 4 PM or so, lounging around a bit, taking my mom out to "dinner" some time around 9-10 PM, going home, visiting with my girlfriend at her place, going back home at around 2 AM to start putting my mom to bed, finally finishing that around 4 AM, playing video games until 6 AM, and then falling asleep. (My mom has multiple sclerosis and is in a wheelchair; she needs a lot of help and doing anything with her takes a very long time.)

In the biz we call this selection bias. The most fun example of this is the tale of Abraham Wald and the Surviving Bombers.

While maybe not essential, the "anti-" aspect of the correlations induced by anthropic selection bias at least seems important. Obviously, the appropriate changes of variables can make any particular correlation go either positive or negative. But when the events all measure the same sort of thing (e.g., flooding in 2014, flooding in 2015, etc.), the selection bias seems like it would manifest as anti-correlation. Stretching an analogy beyond its breaking point, I can imagine these strange anti-correlations inducing something like anti-ferromagnetism.
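The selection-induced anti-correlation can be shown in a tiny simulation (an illustrative toy of my own, not from the original comment): take two independent fair events and condition on the observer never seeing both occur, and the conditioned events become negatively correlated.

```python
import itertools

# all worlds with two independent binary events, keeping only those
# the observer survives to see (never "both happened")
worlds = [(a, b) for a, b in itertools.product([0, 1], repeat=2)
          if not (a == 1 and b == 1)]
xs = [a for a, _ in worlds]
ys = [b for _, b in worlds]

# Pearson correlation over the surviving worlds, computed by hand
n = len(worlds)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in worlds) / n
vx = sum((x - mx) ** 2 for x in xs) / n
vy = sum((y - my) ** 2 for y in ys) / n
corr = cov / (vx * vy) ** 0.5   # negative: selection induces anti-correlation
```

Independent events go in; a correlation of -1/2 comes out, purely from discarding the "both occurred" world.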

I was working in protein structure prediction.

I confess to being a bit envious of this. My academic path after undergrad biochemistry took me elsewhere, alas.

Try it -- the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (which is the first one to present extensive probability calculations per se) and you understand probability densities for continuous random variables then you'll be able to understand the rest of the book without formal training.


The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

Kinda... more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses but who never finds out about FAI research at all.

Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.

But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.

Since we're just bouncing short comments off each other at this point, I'm going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:

Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. ...If th

... (read more)

Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities -- being able to run those computations in places pink goo can't go and at speeds pink goo can't manage is already a huge leap.

Even if it is a huge leap to achieve that, until you run the computations, it is unclear to me how they could have contributed to that leap.

I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.

Mental processes inside someone's mind actually happen in physical reality.

Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances... (read more)

Because the solution has an immediate impact on the exercise of intelligence, I guess? I'm a little unclear on what other problems you have in mind.

The impact on exercise of intelligence doesn't seem to come until the ems are already discontinuously better (if I understand), so can't seem to explain the discontinuous progress.

That's because we live in a world where... it's not great, but better than speculating on other people's psychological states.

I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.

I wasn't talking about faster progress as such, just about a predictable single large discontinuity in our capabilities at the point in time when the em approach first bears fruit. It's not a continual feedback, just an application of intelligence to the problem of making biological computations (including those that implement intelligence) run on simulated physics instead of the real thing.

I see. In that case, why would you expect applying intelligence to that problem to bring about a predictable discontinuity, but applying intelligence to other problems not to?

I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world. I'll make a weaker claim -- when I'm engaging conscious effort in trying to figure out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection.

Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.

That's a pretty good example of the Fallacy of Gray right there.

How do you know? Especially since falsely holding that belief would be an example.

Hmm.. let me think...

The materialist thesis implies that a biological computation can be split into two parts: (i) a specification of a brain-state; (ii) a set of rules for brain-state time evolution, i.e., physics. When biological computations run in base reality, brain-state maps to program state and physics is the interpreter, pushing brain-states through the abstract computation. Creating an em then becomes analogous to using Futamura's first projection to build in the static part of the computation -- physics -- thereby making the resulting program s... (read more)
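The analogy can be made concrete with a toy sketch (entirely invented names; a closure over the interpreter rather than a true unfolding partial evaluator, but it shows the shape of Futamura's first projection -- baking the static input into a residual program):

```python
def interpret(program, state):
    """A tiny 'physics': apply each rule of the static program
    to the dynamic state, one step at a time."""
    for op, arg in program:
        if op == "add":
            state += arg
        elif op == "mul":
            state *= arg
    return state

def specialize(program):
    """First projection: fix the static part (the program / 'physics'),
    leaving a residual function of the dynamic part alone
    (the state / 'brain-state')."""
    def residual(state):
        return interpret(program, state)
    return residual

physics = [("add", 3), ("mul", 2)]   # the static rules
em = specialize(physics)             # 'em': physics built into the program
```

Here `em(s)` computes the same thing as `interpret(physics, s)`, but physics is no longer a run-time input -- which is the sense in which an em has physics "built in" rather than supplied by base reality.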

I'm not sure I followed that correctly, but I take it you are saying that making brain emulations involves biological intelligence (the emulation makers) acting on biological intelligence (the emulations). Which is quite right, but it seems like intelligence acting on intelligence should only (as far as I know) produce faster progress if there is some kind of feedback - if the latter intelligence goes on to make more intelligence etc. Which may happen in the emulation case, but after the period in which we might expect particularly fast growth from copying technology from nature. Apologies if I misunderstand you.

It won't have source code per se, but one can posit the existence of a halting oracle without generating an inconsistency.

My intuition -- and it's a Good one -- is that the discontinuity is produced by intelligence acting to increase itself. It's built into the structure of the thing acted upon that it will feed back to the thing doing the acting. (Not that unique an insight around these parts, eh?)

Okay, here's a metaphor(?) to put some meat on the bones of this comment. Suppose you have an interpreter for some computer language and you have a program written in that language that implements partial evaluation. With just these tools, you can make the partial evaluator (i) act... (read more)

In the emulation case, how does intelligence acting on itself come into the picture? (I agree it might do after there are emulations, but I'm talking about the jump from capabilities prior to the first good emulation to those of emulations).

Fungible. The term is still current within economics, I believe. If something is fungible, it stands to reason that one can funge it, nu?

As Vaniver mentioned, it relates to exploring trade-offs among the various goals one has / things one values. A certain amount of it arises naturally in the planning of any complex project, but it seems like the deliberate practice of introspecting on how one's goals decompose into subgoals and on how they might be traded off against one another to achieve a more satisfactory state of things is an idea that is novel, distinct, and conceptually intricate enough to deserve its own label.

Yeesh. These people shouldn't let feelings or appearances influence their opinions of EY's trustworthiness -- or "morally repulsive" ideas like justifications for genocide. That's why I feel it's perfectly rational to dismiss their criticisms -- that and the fact that there's no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, the... (read more)

That was pretty subtle, actually. You had my blood boiling at the end of the first paragraph and I was about to downvote. Luckily I decided to read the rest.

He had doubts, he extinguished them, and that's what makes him guilty.

This is not the whole story. In the quote

He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.

you're paying too much heed to the final clause and not enough to the clause that precedes it. The shipowner had doubts that, we are to understand, were reasonable on the available information. The key to the shipowner's... I prefer not to use the word "guilt", with its connotations of legal or celestial judgment -- let us s... (read more)

Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.
The next passage confirms that this is the author's interpretation as well: And clearly what he is guilty of (or if you prefer, blameworthy) is rationalizing away doubts that he was obligated to act on. Given the evidence available to him, he should have believed the ship might sink, and he should have acted on that belief (either to collect more information which might change it, or to fix the ship). Even if he'd gotten lucky, he would have acted in a way that, had he been updating on evidence reasonably, he would have believed would lead to the deaths of innocents. The Ethics of Belief is an argument that it is a moral obligation to seek accuracy in beliefs, to be uncertain when the evidence does not justify certainty, to avoid rationalization, and to help other people in the same endeavor. One of his key points is that 'real' beliefs are necessarily entangled with reality. I am actually surprised he isn't quoted here more.

tl;dr: No, the subject of the site is wider than that.

Long version: IIRC, EY originally conceived of rationality as comprising two relatively distinct domains: epistemic rationality, the art and science of ensuring the map reflects the territory, and instrumental rationality, the art and science of making decisions and taking actions that constrain the future state of the universe according to one's goals. Around the time of the fork of CFAR off of SIAI-that-was, EY had expanded his conception of rationality to include a third domain: human rationality, th... (read more)

Nice capsule summary of LW. One minor suggestion about a personal hobby-horse: Might a simple but less jargon-y word/phrase replace "funging" here? (I'm actually not 100% sure what it means here since I'm used to always seeing "against" after "funge"...) [Edited to delete an extra "to".]

clearly advertising propaganda

It's not clear to me -- I'm not even sure what you think it's advertising!

( ETA: I wrote a bunch of irrelevant stuff, but then I scrolled up and saw (again, but it somehow slipped my mind even though I friggin' quoted it in the grandparent, I'm going senile at the tender age of 36) that you specifically think it's advertising for CFAR, so I've deleted the irrelevant stuff. )

Advertising for CFAR seems like a stretch, because -- although very nice things are said about Anna Salamon -- the actual product CFAR sells isn't mentioned at all.

My conclusion: there might be an interesting and useful post to be written about how epistemic rationality and techniques for coping with ape-brain intersect, and ShannonFriedman might be capable of writing it. Not there yet, though.

Entire subject of this site, surely?

...a long advertisement for CFAR...

...containing an immediately useful (or at least, immediately practicable) suggestion, as, er, advertised.

Yes. I will come back to this and fill in the missing piece, as I said to hairyfigment when they brought it to my attention. To me the conclusion is obvious, but I can see how it is not to people who are not me, now that this has been pointed out to me. I want to take my time to figure out how to word it properly, and have been very busy with work. I will be getting to it either later tonight or tomorrow. That said, I personally find it laughable that hairyfigment linked a piece that is clearly advertising propaganda IMHO after claiming that my post sounded like advertisement. Perhaps if I call myself an executive director this would not bother people? :) I had better be careful or I'm going to get this post entirely deleted... ;)

Meh. That's only a problem in practice, not in principle. In principle, all prediction problems can be reduced to binary sequence prediction. (By which I mean, in principle there's only one "area".)

And is thinking in terms of that principle leading us astray in practice? After all, humans don't learn social interactions by reducing them to bit sequence prediction...

I invite you to spell out the prediction that you drew about the evolution of human intelligence from your theory of humor and how the recently published neurology research verified it.

Sure, from page 22 of the 2nd paper: "... Now, the reason for this, and what makes it especially interesting, is in what it reveals about the origins of our humor instinct, which is that it clearly evolved not only before language, but also before we had higher brain functions that allowed hypothetical scenarios or sophisticated deception." Our humor instinct comes from a part of our brain that was evolutionarily programmed in a time when our intellect expressed itself in terms of do "A," then "B," and get "C." Put the animal in the fire, wait until it smells good, then eat it. Grab the stick, hit the branch, and the fruit will fall. This study was announced a few days ago in Harvard magazine, discussing the basic parts of the brain compared to the more "advanced" ones in humans: "...the neurons in the sensory and motor areas seem to be playing a game of telephone, in which information follows serial paths, the cells in the association areas use a communications strategy more like the Internet—with lots of simultaneous connections and pathways. Buckner and Krienen looked for a simple way to explain this phenomenon. Association areas not only evolved later in humans, they also form later in an individual's development."

The functioning of our "misplacement" instinct, as I found it in my study of humor, seemed to indicate precisely that our early form of intelligence, and the laughter that came from it, functioned only in terms of A-B-C sequences and recognizing errors in those types of sequences, and that our advanced functions allowing more abstract and hypothetical thinking must have arrived later. The study seems to have used different methods to arrive at the same conclusion.

What if it was very hard to produce an intelligence that was of high performance across many domains?... There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists...

In fact, we already know the minimax optimal algorithm for combining "expert" predictions (here "expert" denotes an online sequence prediction algorithm of any variety); it's the weighted majority algorithm.
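A minimal sketch of the deterministic weighted majority algorithm (Littlestone & Warmuth); the toy experts and round data below are invented for illustration:

```python
def weighted_majority(rounds, beta=0.5):
    """Combine binary expert predictions by weighted vote.

    rounds: list of (predictions, outcome) pairs, where predictions is a
    tuple with one 0/1 guess per expert. Each expert that errs has its
    weight multiplied by beta; the master's mistake count is within a
    constant factor (depending on beta) of the best single expert's.
    """
    n_experts = len(rounds[0][0])
    weights = [1.0] * n_experts
    mistakes = 0
    for preds, outcome in rounds:
        vote1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote1 >= vote0 else 0
        mistakes += guess != outcome
        # demote every expert that got this round wrong
        weights = [w * (beta if p != outcome else 1.0)
                   for w, p in zip(weights, preds)]
    return mistakes, weights

# three experts: always right, always wrong, right 3 rounds out of 5
rounds = [((1, 0, 1), 1), ((0, 1, 1), 0), ((1, 0, 0), 1),
          ((1, 0, 1), 1), ((0, 1, 0), 0)]
mistakes, weights = weighted_majority(rounds)
```

After five rounds the always-right expert keeps full weight, the always-wrong expert's weight has been halved five times, and the master has made only one mistake -- it tracks the best expert without knowing in advance which one that is.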

The challenge is not to combine different algorithms in the same area, but in different areas. A social bot and a stock market predictor - how should they interface best? And how would you automate the construction of interfaces?