Comments

Did evolution need to understand information encoding in the brain before it achieved full general intelligence?

I don't think a high replication rate necessarily implies the experiments were boring. Suppose you run 10 experiments, all of them speculative long shots: say only one of them is looking at a true effect, BUT your sample sizes are enormous and your significance cutoff is low. So you detect the one true effect and get 9 nulls on the others. When people try to replicate them, they have a 100% success rate on both the positive and the negative results.
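
A minimal simulation of this scenario (my own sketch; the per-group n of 10,000, true effect of 0.1 SD, and α = 0.001 are all illustrative, not from the comment):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, alpha = 10_000, 0.001            # enormous samples, strict cutoff
effects = [0.1] + [0.0] * 9         # only 1 of the 10 hypotheses is true

def significant(true_effect):
    """Two-sample t-test on fresh data; True if p < alpha."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    return stats.ttest_ind(a, b).pvalue < alpha

originals = [significant(e) for e in effects]
replications = [significant(e) for e in effects]
agree = sum(o == r for o, r in zip(originals, replications))
print(f"results that replicate: {agree}/10")   # typically 10/10
```

At this sample size the power on the true effect is ~99.99%, so original and replication almost always agree on all 10 results, even though 9 of the 10 hypotheses were false.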

The fraction of replication attempts that fail due to random chance depends on statistical power, and replicators tend to aim for very high power, so typically you'd see only about 5% false negatives in the replications.
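
For instance, a study powered at 95% for the true effect still fails to replicate it 5% of the time. A quick look at what that level of power costs in sample size (the effect size d = 0.4 and α = 0.05 here are assumed for illustration):

```python
from statsmodels.stats.power import TTestIndPower

# Per-group n needed for 95% power at an assumed effect size of d = 0.4:
n_per_group = TTestIndPower().solve_power(effect_size=0.4, power=0.95, alpha=0.05)
print(round(n_per_group))  # ~163 per group; true effects then fail ~5% of the time
```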

There's also a new contest on Hypermind called "The Long Fork Project," predicting the impact of Trump vs. Biden. $20k in prize money.

I don't think that paper supports any such estimate, because it's based on published results, which are heavily biased toward "significant" findings. This is why, for example, psychology meta-analyses report effect sizes 3x larger than those of registered replications. For an estimate of the replicability of a field you need something like the Many Labs project (~54% replication rate, median effect size 1/4 of the original studies).
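
The mechanism behind that inflation is the winner's curse: condition on significance and the published estimates overshoot the truth. A toy simulation (my parameters, purely illustrative: true effect of 0.2 SD, n = 50 per group, only p < 0.05 results get "published"):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n, alpha = 0.2, 50, 0.05    # small true effect, modest samples

published = []
for _ in range(5_000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_d, 1.0, n)
    if stats.ttest_ind(a, b).pvalue < alpha:        # publication filter
        published.append(b.mean() - a.mean())       # observed effect, in SDs

print(f"true effect: {true_d}, mean published effect: {np.mean(published):.2f}")
# Prints roughly 0.45-0.50: the published average more than doubles the truth.
```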

There is no real question about whether most published research findings are false: we know that's the case from replication attempts. Ioannidis's paper isn't really _about_ plugging in specific numbers, or about showing that this must be the case a priori, so I think you're coming at it from a slightly wrong angle.
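
For reference, the quantity the "plugging in numbers" reading fixates on is the paper's positive predictive value, where $R$ is the pre-study odds that a tested relationship is true, $\beta$ the type II error rate, and $\alpha$ the significance level:

$$\mathrm{PPV} = \frac{(1-\beta)\,R}{R - \beta R + \alpha}$$

The paper's argument is about the structural factors (bias, many teams testing the same hypotheses) that push this quantity below 1/2 across realistic regimes, not about any one substitution of values.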

Great links, thanks.


The Augur launch has unfortunately been a complete catastrophe, as transaction costs on Ethereum are currently so high that simply making a trade costs about $30. I hope they manage to come up with some sort of solution.

Could you point out where he does that exactly? Here's the transcript: https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/

The whole thing hangs on footnote #4, and you don't seem to understand what realists actually believe. Of course they would dispute it, and not just "some" but most philosophers.

> If we were fully rational (and fully honest), then we would always eventually reach consensus on questions of fact.

The things you cite right before this sentence say the exact opposite. This is only possible given equal priors, and there's no reason to assume rational and honest people would have equal priors about...anything.
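
A toy Bayesian illustration of the problem (mine, not the poster's): shared evidence means a common likelihood ratio, so posterior odds differ by exactly the same factor as the prior odds did:

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

shared_evidence = 1_000.0            # combined likelihood ratio both agents see
for prior in (0.01, 100.0):          # honest, rational, but different priors
    print(posterior_odds(prior, shared_evidence))
# 10.0 vs 100000.0 -- the 10^4 gap in odds persists however large the shared
# likelihood ratio gets, so finite evidence never forces consensus by itself.
```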

>Lee Kuan Yew gained very strong individual power over a small country, and unlike the hundreds of times in the history of Earth when that went horribly wrong, Lee Kuan Yew happened to know some economics.

Actually, this isn't a one-off. Monarchies in general achieve superior economic results, at least where executive constraints are weak (https://twin.sci-hub.cc/6b4aea0cae94d2f4fd6c2e459dab6881/besley2017.pdf):

>We assemble a unique dataset on leaders between 1874 and 2004 in which we classify them as hereditary leaders based on their family history. The core empirical finding is that economic growth is higher in polities with hereditary leaders but only if executive constraints are weak. Moreover, this holds across a range of specifications. The finding is also mirrored in policy outcomes which affect growth. [...] The logic that we have exploited is essentially that put forward in Olson (1993) who emphasized that hereditary rule can provide a means of improving inter-temporal incentives in government.
