I apologize. I don't know how to make links in comments.
Where can I read Eliezer's old non-Sequence blog posts? I recently read Eliezer on The Weighted Majority Algorithm and found it very useful, but this post isn't even contained in The Original Sequences, as far as I can tell.
Is there a way to access his old writings which is more efficient than scrolling down on his user page for a very long time?
I don't think Aumann's agreement theorem is a good way to motivate your normative judgments, though I basically agree with your conclusions. I read Duncan's post as well and did not really understand why he called you out. You both seem non-malevolent to me.
Bayesianism generalizes logical reasoning to uncertain claims, subject to certain consistency assumptions. Obviously humans are not ideal Bayesians. But in a deeper sense, maybe we're not supposed to be. Not in an instrumental sense where being Bayesian is incompatible with some kind of good life, but rather in an epistemic sense. Maybe there is some mathematical theory of reasoning (call it Glorpism) of which humans are an approximation, such that it is easier for humans to become more Glorpish than to become more Bayesian, and becoming more Glorpish is powerful and general in the sense we expect epistemic rationality to be. Glorpism may not have agreement guarantees in the way that Bayesianism does.
Sam Eisenstat's Condensation is an example of something like this, although I don't think it's The Thing. Importantly, Condensation only has the translation theorem to the extent that models are hierarchically organized in a nice way, which does not always hold. (Apologies for any errors, feel free to correct me.)
I also think a purely functionalist account of reasoning error deletes a lot of information. For example, a Ruby that says, "oh, my bad" upon being confronted with evidence from computer analysis of photographs that the different images are all grey is different from a Ruby who changes the topic or flies into a rage. Among the first type of Ruby, those that systematically downgrade or restructure how they assign credence to their color-intuitions after admitting their error are different from those who "bounce back" to their original epistemic state. The best one of these is well-modelled by mistake theory. The worst two, conflict theory.
In real life, I think honest humans often agree to disagree. I do not fully understand why this is, and I consider it an important problem in the theory of powerful reasoners. I think part of it is that humans perform reasoning using words. Honest words correspond to natural categories, but natural categories have an intrinsic misgeneralization problem. If you have two objects, korgs and spangs, each of which has exactly half the properties of a blegg and half the properties of a rube, but different halves, then honest people might categorize them differently as bleggs and rubes. But this process happens below the level of introspective access, so dissolving the question / debucketing has to be done "out loud" in the chamber of consciousness. The act of debucketing / rectifying definitions is a constraint problem, with the constraints supplied by one's introspection on hypotheticals. In general this can take time exponential in the number of traits used to define bleggs and rubes. (I do not have a proof of this, and expect the answer is sensitive to the formulation of the problem. This last claim is purely mathematical intuition.)
Also, our equivalent of Bayesian evidence is our sense-data, which is stored in an extremely unreliable compression system.
Seconding this. I am very smart but not as smart as famous 20th-century scientists.
Modernity is full of so much crystallized intelligence that the gains you receive from fluid intelligence are largely disguised gains-from-trade or straight up gifts-from-benevolent-people. Hanging out with and learning from people smarter than me has revealed how I am objectively similar to stupid people, and then I can treat people stupider than me the way I want to be treated by people smarter than me: I want to be taught, but not condescended to.
"The odds of a Cyberbuddhist Rationalist ending up in this situation with the Anthropic Principle at work are pretty good."
This is textbook hindsight bias (the textbook is the Sequences).
Making deductions based on anthropics is intrinsically a small-data exercise (more accurately, the data is strongly self-correlated: garbage in, garbage out), because we do not have empirical access to other possible worlds. With small or GIGO data, your conclusions come mostly from your priors / sense of beauty / sense of parsimony.
Human sense of beauty / parsimony predictably errs towards anthropomorphization, means-end conflation, and wishful thinking. You are human. Being enlightened may help your priors in this matter but not sufficiently to overcome whatever facts about neuroscience consistently produce those errors. Materialism trumps spiritual revelation, e.g. brain damage influencing spiritual attainment.
This post isn't in the reference class [probability theory], [futurism], or [analytic philosophy]. It's in the reference class [religious doctrine].
Honestly, I hope the value proposition of this post is to examine whether the LessWrong community will call out bullshit from respected posters.
My prediction that Maitreya is lsusr was correct :).
All methodology is from the first section of the appendix of the linked paper. The paper cites pages 81-87 of Genetics and Analysis of Quantitative Traits; I read from chapter 4 up to those pages to understand the method conceptually. Every niceness assumption is made except for "no shared environment"; for example, "no assortative mating" is assumed.
Changing some notation: $V_G = V_A + V_N = 1$, i.e. we normalize so that the total "variance due to genes" is 1. We assume that the variance due to shared environment, $V_C$, is the same for twins and non-twins. This is a standard assumption in ACE, and it seems reasonable. $V_N$ will represent the "nonlinear part of the effect due to genes," i.e. that due to epistasis and dominance. $V_A$ is the effect due to alleles, the additive part.
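To keep the symbols straight, here is the full setup I have in mind; the labels $V_P$, $V_C$, $V_E$, $h^2_{\mathrm{PGS}}$, $h^2_{\mathrm{ACE}}$ are mine, I take the polygenic score to be perfect (it captures all of $V_A$), and I take the ACE estimate in its Falconer form:

$$V_P = V_A + V_N + V_C + V_E = 1 + V_C + V_E, \qquad h^2_{\mathrm{PGS}} = \frac{V_A}{V_P}, \qquad h^2_{\mathrm{ACE}} = 2\,(r_{MZ} - r_{DZ}).$$

Here $r_{MZ}$ and $r_{DZ}$ are the MZ and DZ twin phenotype correlations.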
Facts:
$1 + \tfrac{3}{2}\,\tfrac{V_N}{V_A} \;\le\; \tfrac{h^2_{\mathrm{ACE}}}{h^2_{\mathrm{PGS}}} \;\le\; 1 + 2\,\tfrac{V_N}{V_A}$ always. (1)
When $V_N > 0$ (some of the genetic variance is nonlinear):
$\dfrac{h^2_{\mathrm{ACE}} - h^2_{\mathrm{PGS}}}{h^2_{\mathrm{ACE}} + h^2_{\mathrm{PGS}}} \;<\; V_N \;\le\; \dfrac{2\,(h^2_{\mathrm{ACE}} - h^2_{\mathrm{PGS}})}{2\,h^2_{\mathrm{ACE}} + h^2_{\mathrm{PGS}}}$. (2)
Note that $V_N \le 1$ by construction, so the upper bound becomes trivial when either score is $0$.
Explanation:
We can decompose $V_G = V_A + V_N$. We can decompose this further into $V_N = V_D + V_{AA} + V_{AD} + V_{DD} + \dots = \sum V_{A^nD^m}$, the sum running over all $(n, m) \ne (1, 0)$ with $n + m \ge 1$. That is, we're taking the nonlinear part and decomposing it into interactions involving alleles across loci and dominance effects within loci.
To understand dominance effects, note that a locus can have 0, 1, or 2 copies of an allele. The respective phenotype values for these three genotypes might not be produced by any linear function of allele count, because not every three points are collinear. The dominance term is the residual from that linear regression. If we were haploid, we wouldn't have to deal with this.
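Here is a toy single-locus illustration of that regression picture, with made-up numbers: allele frequency $\tfrac{1}{2}$, Hardy-Weinberg genotype frequencies, and complete dominance, so the genotype value $g$ for $c = 0, 1, 2$ copies is $0, 1, 1$.

$$\Pr(c = 0, 1, 2) = \tfrac{1}{4}, \tfrac{1}{2}, \tfrac{1}{4}, \qquad \alpha = \frac{\mathrm{Cov}(g, c)}{\mathrm{Var}(c)} = \frac{1/4}{1/2} = \tfrac{1}{2},$$
$$\text{fitted values } \tfrac{1}{4}, \tfrac{3}{4}, \tfrac{5}{4}, \qquad \text{residuals } -\tfrac{1}{4}, +\tfrac{1}{4}, -\tfrac{1}{4},$$
$$V_A = \alpha^2\,\mathrm{Var}(c) = \tfrac{1}{8}, \qquad V_D = \mathbb{E}[\text{residual}^2] = \tfrac{1}{16}, \qquad \mathrm{Var}(g) = \tfrac{3}{16} = V_A + V_D.$$

The residuals are exactly the dominance deviations; in a haploid model $c$ could only be $0$ or $1$, two points always lie on a line, and $V_D$ would vanish.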
So for example, $V_{A^5D^3}$ refers to phenotype effects that only appear when there is a specific combination of two alleles at three separate loci, and are multilinear in the alleles occurring at five other loci.
Interpreting this in context, $r_{MZ} = \frac{1 + V_C}{V_P}$ and $r_{DZ} = \frac{\frac{1}{2}V_A + V_C + \sum 2^{-n}4^{-m}\,V_{A^nD^m}}{V_P}$, with the sum running over the nonlinear components.
Now we can justify our conclusions. Note that the third term is at most $\frac{1}{4}V_N$ but has no nontrivial lower bound. When we have to write it out, we'll call it $\epsilon$.
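The identity behind those two correlations (my paraphrase of the standard covariance-between-relatives decomposition, valid under the niceness assumptions above; $\rho$ and $\delta$ are my labels) is

$$\mathrm{Cov}(\text{relatives}) = \sum_{n + m \ge 1} \rho^n\, \delta^m\, V_{A^nD^m} + V_C,$$

where $\rho$ is the coefficient of relationship and $\delta$ is the probability of sharing both alleles identical-by-descent at a locus. For MZ twins $(\rho, \delta) = (1, 1)$, so the genetic part is all of $V_G = 1$; for DZ twins $(\rho, \delta) = (\tfrac{1}{2}, \tfrac{1}{4})$, so each nonlinear component gets a coefficient $2^{-n}4^{-m} \le \tfrac{1}{4}$, which is where the bound on $\epsilon$ comes from.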
Remember that $V_N$ is exactly the proportion of variance due to genes that cannot be captured by a polygenic score, the "phantom heritability." The paper is concerned with the fact that substantial $V_N$ makes $h^2_{\mathrm{ACE}}$ much larger than $h^2_{\mathrm{PGS}}$, so that if the polygenic score is close to its ceiling $V_A/V_P$ people will assume there is missing heritability when in reality the polygenic score is perfect and the heritability is simply nonlinear.
The ACE Estimate:
$h^2_{\mathrm{ACE}} = 2\,(r_{MZ} - r_{DZ}) = \frac{V_A + 2V_N - 2\epsilon}{V_P}$. This disagrees with the figure in the appendix of the paper. I believe they made an arithmetic error, but it is possible I made a conceptual error.
Recalling $h^2_{\mathrm{PGS}} = \frac{V_A}{V_P}$, we can also write $h^2_{\mathrm{ACE}} = \frac{V_A + \sum \left(2 - 2^{1-n}4^{-m}\right)V_{A^nD^m}}{V_P}$, with the sum over the nonlinear components. Those constant terms in the sum go as low as $\frac{3}{2}$ and get arbitrarily close to $2$, so by taking the bounds and dividing we recover (1).
The Rule of Thumb:
Start again from $h^2_{\mathrm{ACE}} = \frac{V_A + 2V_N - 2\epsilon}{V_P}$. Remember the bounds on $\epsilon$: we can write $2\epsilon = \gamma V_N$, where $\gamma \in (0, \tfrac{1}{2}]$. Combining this with the middle term we have $h^2_{\mathrm{ACE}} = \frac{V_A + (2 - \gamma)V_N}{V_P}$, where $2 - \gamma \in [\tfrac{3}{2}, 2)$. Doing the arithmetic,
$V_N = \dfrac{h^2_{\mathrm{ACE}} - h^2_{\mathrm{PGS}}}{(1 - \gamma)\,h^2_{\mathrm{PGS}} + h^2_{\mathrm{ACE}}}$. Picking the endpoints $\gamma \to 0$ and $\gamma = \tfrac{1}{2}$ yields (2).
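A quick sanity check of (2) with made-up numbers: take $V_A = 0.4$, $V_N = 0.6$, $V_C = 0$, $V_E = 1$, so $V_P = 2$ and $h^2_{\mathrm{PGS}} = 0.2$. If all the nonlinear variance sits in pure dominance or two-locus additive epistasis (so $\gamma = \tfrac{1}{2}$), then

$$h^2_{\mathrm{ACE}} = \frac{0.4 + \tfrac{3}{2}(0.6)}{2} = 0.65, \qquad \frac{2\,(0.65 - 0.2)}{2(0.65) + 0.2} = 0.6 = V_N, \qquad \frac{0.65 - 0.2}{0.65 + 0.2} \approx 0.53 < V_N,$$

so the upper bound is attained and the lower bound holds strictly, as it should. A perfect score explaining 20% of the variance sitting next to a twin estimate of 65% looks like a lot of "missing" heritability even though nothing is missing.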
Comments:
I don't yet rigorously understand how $V_N$ is decomposed into epistasis and dominance. The book gives only an intuition and not a proof. It is very ad hoc.
Edit: As of yesterday, I now understand.
Thanks.