All of Thrasymachus's Comments + Replies

REVISED: A drowning child is hard to find

The malaria story has fair face validity if one observes the wider time series (e.g.). Further, the typical EA 'picks' for net distribution are generally seen as filling around the edges of the mega-distributors.

FWIW: I think this discussion would be clearer if framed in last-dollar terms.

If Gates et al. are doing something like last dollar optimisation, trying to save as many lives as they can allocating across opportunities both now and in the future, leaving the right now best marginal interventions on the table would imply they expect to ex... (read more)

Please Critique Things for the Review!

I also buy the econ story here (and, per Ruby, I'm somewhat pleasantly surprised by the amount of reviewing activity given this).

General observation suggests that people won't find writing reviews that intrinsically motivating (compare to just writing posts, which all the authors are doing 'for free' with scant chance of reward, also compare to academia - I don't think many academics find peer review/refereeing one of the highlights of their job). With apologies for the classic classical econ joke, if reviewing was so valuable, ho... (read more)

Raemon (4 points, 1y): Helpful thoughts, thanks! I definitely don't expect the money to be directly rewarding in a standard monetary sense. (In general I think prizes do a bad job of providing expected monetary value.) My hope for the prize was more to be a strong signal of the magnitude of how much this mattered, and how much recognition reviews would get.

It's entirely plausible that reviewing is sufficiently "not sufficiently motivating" that actually, the thing to do is pay people directly for it. It's also possible that the prizes should be lopsided in favor of reviews. (This year the whole process was a bit of an experiment so we didn't want to spend too much money on it, but it might be that just adding more funding to subsidize things is the answer.)

But I had some reason to think "actually things are mostly fine, it's just that the Review was a new thing and not well understood, and communicating more clearly about it might help." My current sense is:

  • There have been some critical reviews, so there is at least some latent motivation to do so.
  • There are people on the site who seem to be generally interested in giving critical feedback, and I was kinda hoping that they'd be up for doing so as part of a broader project. (Some of them have, but not as many as I'd hoped. To be fair, I think the job being asked for the 2018 Review is harder than what they normally do.)
  • One source of motivation I'd expected to tap into (which I do think has happened a bit) is "geez, that might be going into the official Community Recognized Good Posts Book? Okay, before it wasn't worth worrying about Someone Being Wrong On the Internet, but now the stakes are raised and it is worth it."
Polio and the controversy over randomized clinical trials

Sure - there's a fair bit of literature on 'optimal stopping' rules for interim results in clinical trials to try and strike the right balance.

It probably wouldn't have helped much for Salk's dilemma: Polio is seasonal and the outcome of interest is substantially lagged from the intervention - which has to precede the exposure, and so the 'window of opportunity' is quickly lost; I doubt the statistical methods for conducting this were well-developed in the 50s; and the polio studies were already some of the largest trial... (read more)
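As an illustration of why such stopping rules are needed at all: a toy simulation (with entirely made-up trial sizes, nothing to do with Salk's actual numbers) shows that naively testing accumulating data at every interim look inflates the false-positive rate well beyond the nominal 5%, which is exactly what formal 'optimal stopping' corrections guard against.

```python
import math
import random

def looks_significant(a, b, n):
    """Two-proportion z-test on equal-sized arms; True if nominal p < 0.05."""
    p_pool = (a + b) / (2 * n)
    if p_pool in (0.0, 1.0):
        return False
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    return abs(a / n - b / n) / se > 1.96

def trial(looks, per_look, rng):
    """One null trial (both arms identical coin flips); True if ANY interim
    look declares significance -- the 'peeking' false positive."""
    a = b = n = 0
    for _ in range(looks):
        a += sum(rng.random() < 0.5 for _ in range(per_look))
        b += sum(rng.random() < 0.5 for _ in range(per_look))
        n += per_look
        if looks_significant(a, b, n):
            return True
    return False

rng = random.Random(0)
# Same total sample size (500 per arm), but one analysis vs ten interim looks:
single_look = sum(trial(1, 500, rng) for _ in range(2000)) / 2000
ten_looks = sum(trial(10, 50, rng) for _ in range(2000)) / 2000
```

With one look the false-positive rate stays near the nominal 5%; with ten uncorrected looks it climbs to several times that, which is the imbalance interim-analysis rules are designed to restore.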

Polio and the controversy over randomized clinical trials
Mostly I just find it fascinating that as late as the 1950s, the need for proper randomized blind placebo controls in clinical trials was not universally accepted, even among scientific researchers. Cultural norms matter, especially epistemic norms.

This seems to misunderstand the dispute. Salk may have had an overly optimistic view of the efficacy of his vaccine (among other foibles your source demonstrates), but I don't recall him being a general disbeliever in the value of RCTs.

Rather, his objection is consonant with consensus guidelines for medical... (read more)

Pattern (3 points, 1y): There are ways of handling that epistemically, although they're more complicated - if enough evidence is acquired quickly, the harmful part of the trial is stopped - whichever that is.
Neural Annealing: Toward a Neural Theory of Everything (crosspost)

I'm afraid I couldn't follow most of this, but do you actually mean 'high energy' brain states in terms of aggregate neural activity (i.e. the parentheticals which equate energy to 'firing rates' or 'neural activity')? If so, this seems relatively easy to assess for proposed 'annealing prompts' - whether psychedelics/meditation/music/etc. tend to provoke greater aggregate activity than not seems open to direct calorimetry, let alone proxy indicators.

Yet the steers on this tend to be very equivocal (e.g. the ev... (read more)

lsusr (4 points, 1y): I think this post is referring to "high energy" not in terms of electrochemical neural activity but instead as a metaphor for optimization in machine learning.

Machine learning is the process of minimizing an error function. We can conceptualize this error function as a potential gradient such as a gravity well or electrostatic potential. Minimizing the energy of a particle in this potential gradient is mathematically equivalent to minimizing the error function. The advantage of referring to this as "energy" instead of "error" is it lets you borrow other terms like kinetic energy (in both the classical and quantum sense) which makes search algorithms intuitively easy to understand. The post is referring to this kind of entropic energy.
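For readers who want that metaphor made concrete: below is a minimal simulated-annealing sketch (the toy 'energy' function and cooling schedule are invented here, nothing from the post itself), showing how treating an objective as an energy lets a high 'temperature' early on shake the search out of local minima.

```python
import math
import random

def energy(x):
    """Toy double-well 'energy landscape'; global minimum near x = -1."""
    return (x ** 2 - 1) ** 2 + 0.3 * x

def anneal(steps=20000, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    for step in range(steps):
        t = max(1e-3, 1.0 - step / steps)    # temperature: hot early, cold late
        candidate = x + rng.gauss(0, 1) * t  # propose a nearby state
        delta = energy(candidate) - energy(x)
        # Always accept downhill moves; accept uphill moves with Boltzmann
        # probability, which lets the walker escape local minima while hot.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = candidate
    return x

x_final = anneal()
```

By the time the temperature has cooled, the walker has settled into the bottom of one of the two wells, i.e. a low-'energy' (low-error) state.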
The unexpected difficulty of comparing AlphaStar to humans

Thanks for this excellent write-up!

I don't have relevant expertise in either AI or SC2, but I was wondering whether precision might still be a bigger mechanical advantage than the write-up notes. Even if humans can (say) max out at 150 'combat' actions per minute, they might misclick, not be able to pick out the right unit in a busy and fast battle to focus fire/trigger abilities/etc, and so on. The AI presumably won't have this problem. So even with similar EAPM (and subdividing out 'non-combat' EAPM which need not be... (read more)

maximkazhenkov (5 points, 2y): I think that's where the central issue lies with games like Starcraft or Dota; their strategy space is perhaps not as rich and complex as we initially expected. Which might be a good reason to update towards believing that the real world is less exploitable (i.e. technonormality?) as well? I don't know.

However, I think it would be a mistake to write off these RTS games as "solved" in the AI community the same way chess/Go are and move on to other problem domains. AlphaStar/OpenAI5 require hundreds of years of training time to reach the level of human top professionals, and I don't think it's an "efficiency" problem at all. Additionally, in both cases there is implicit domain knowledge integrated into the training process.

In the case of AlphaStar, the AI was first trained on human game data and, as the post mentions, competing agents are subdivided into strategy spaces defined by human experts.

In the case of OpenAI5, the AI is still constrained to a small pool of heroes, the item choices are hard-coded by human experts, and it would have never discovered relatively straightforward strategies (defeating Roshan to receive a power-up, if you're familiar with the game) were it not for the programmers' incentivizing in the training process. It also received the same skepticism in the gaming community (in fact, I'd say the mechanical advantage of OpenAI5 was even more blatant than with AlphaStar).

This is not to belittle the achievements of the researchers, it's just that I believe these games still provide fantastic testing grounds for future AI research, including paradigms outside deep reinforcement learning. In Dota, for example, one could change the game mode to single draft to force the AI out of a narrow strategy-space that might have been optimal in the normal game. In fact, I believe (~75% confidence) the combinatorial space of heroes in a single draft Dota game (and the corresponding optimal-strategy-space) to be so large that, without a paradigm sh
ErickBall (1 point, 2y): I wonder if you could get around this problem by giving it a game interface more similar to the one humans use. Like, give it actual screen images instead of lists of objects, and have it move a mouse cursor using something equivalent to the dynamics of an arm, where the mouse has momentum and the AI has to apply forces to it. It still might have precision advantages, with enough training, but I bet it would even the playing field a bit.

Combining the two doesn't solve the 'biggest problems of utilitarianism':

1) We know from Arrhenius's impossibility theorems you cannot get an axiology which can avoid the repugnant conclusion without incurring other large costs (e.g. violations of transitivity, dependence on irrelevant alternatives). Although you don't spell out 'balance utilitarianism' enough to tell what it violates, we know it - like any other population axiology - will have very large drawbacks.

2) 'Balance utilitarianism' seems a long way fr... (read more)

Asymmetric Weapons Aren't Always on Your Side

(Very minor inexpert points on military history, I agree with the overall point there can be various asymmetries, not all of which are good - although, in fairness, I don't think Scott had intended to make this generalisation.)

1) I think you're right the German army was considered one of the most effective fighting forces on a 'man for man' basis (I recall pretty contemporaneous criticism from allied commanders on facing them in combat, and I think the consensus of military historians is they tended to outfight American, British, and Ru... (read more)

The Schelling Choice is "Rabbit", not "Stag"

It's perhaps worth noting that if you add in some chance of failure (e.g. even if everyone goes stag, there's a 5% chance of ending up -5, so Elliott might be risk-averse enough to decline even if they knew everyone else was going for sure), or some unevenness in allocation (e.g. maybe you can keep rabbits to yourself, or the stag-hunt-proposer gets more of the spoils), this further strengthens the suggested takeaways. People often aren't defecting/being insufficiently public spirited/heroic/cooperative if they aren't 'going to hun... (read more)
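The comment's point can be put in numbers. Below is a minimal sketch with illustrative payoffs (the 5% failure chance and the -5 figure come from the comment above; the stag payoff of 10, the rabbit payoff of 3, and the utility function are invented for illustration):

```python
import math

def expected_value(p_success, payoff_success, payoff_failure):
    return p_success * payoff_success + (1 - p_success) * payoff_failure

# Illustrative payoffs: stag pays 10 on success, rabbit pays a safe 3.
# Even with everyone committed, suppose a 5% chance of ending up at -5.
ev_stag = expected_value(0.95, 10, -5)  # 9.25: still beats rabbit in raw EV...
ev_rabbit = 3.0

# ...but a sufficiently risk-averse Elliott evaluates utility, not raw payoff.
# With u(x) = -exp(-0.5 x) (a made-up constant-absolute-risk-aversion utility),
# the certainty of rabbit beats the stag gamble.
def u(x, a=0.5):
    return -math.exp(-a * x)

eu_stag = 0.95 * u(10) + 0.05 * u(-5)
eu_rabbit = u(3)
```

So declining the hunt can be a rational response to risk rather than a defection, which is the suggested takeaway.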

Ratheka (3 points, 2y): Agree - talking things out, making everything as common knowledge as possible, and people who strongly value the harder path and who have resources committing some to fence off the worst cases of failure, seem to be necessary prerequisites to staghunting.
Drowning children are rare
A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.

Happily, this factor has not been missed by either my profile or 80k's work here more generally. Among other things, we looked at:

  • Variance in impact between specialties and (intranational) location (1) (as well as variance in earnings for E2G reasons) (2, also, cf.)
  • Areas within medicine which look particularly promising (3)
  • Why 'direct' clinical impact (ei
... (read more)
Drowning children are rare

[I wrote the 80k medical careers page]

I don't see there as being a 'fundamental confusion' here, and not even that much of a fundamental disagreement.

When I crunched the numbers on 'how much good do doctors do' it was meant to provide a rough handle on a plausible upper bound: even if we beg the question against critics of medicine (of which there are many), and even if we presume any observational marginal response is purely causal (and purely mediated by doctors), the numbers aren't (in EA terms) that exciting in terms of di... (read more)

Douglas_Knight (2 points, 1y): I just want to register disagreement.

Something that nets out to a small or no effect because large benefits and harms cancel out is very different (with different potential for impact) than something like, say, faith healing, where you can’t outperform just by killing fewer patients. A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.

What are the advantages and disadvantages of knowing your own IQ?

It looks generally redundant in most cases to me: Given how pervasive IQ-correlations are, I think most people can get a reasonable estimate of their IQ by observing their life history so far. E.g.

  • Educational achievement
  • Performance on other standardised tests
  • Job type and professional success
  • Peer esteem/reputation

Obviously, none of these are perfect signals, but I think taking them together usually gives a reasonable steer to a credible range not dramatically larger than test-retest correlations of an IQ test. An IQ test would still provide additional info... (read more)
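The claim that several imperfect signals jointly give a reasonably tight range can be sketched with standard precision-weighted averaging of independent Gaussian estimates (all numbers below are hypothetical):

```python
def combine(estimates):
    """Precision-weighted combination of independent Gaussian estimates.
    Each estimate is (mean, standard_error); returns (mean, standard_error)."""
    total_precision = sum(1 / se ** 2 for _, se in estimates)
    mean = sum(m / se ** 2 for m, se in estimates) / total_precision
    return mean, total_precision ** -0.5

# Hypothetical proxy readings of one person's IQ, each with a wide error bar:
proxies = [
    (115, 10),  # educational attainment
    (120, 12),  # other standardised tests
    (110, 15),  # professional success
]
mean, se = combine(proxies)
```

The combined standard error comes out smaller than that of any single proxy (here roughly 7 points versus 10-15), which is the sense in which imperfect signals, taken together, give a usable steer.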

How good is a human's gut judgement at guessing someone's IQ?

Googling around, phrases like 'perception of intelligence' seem to be keywords for a relevant literature. On a very cursory skim (i.e. no more than what you see here) it seems to suggest "people can estimate intelligence of strangers better than chance (but with plenty of room for error and bias), even with limited exposure". E.g.:

Perceived Intelligence Is Associated with Measured Intelligence in Men but Not Women (Note in this study the assessment was done purely on looking at a photograph of someone's face)

Accurate Intelligence ... (read more)

Epistemic Tenure

As you say, Bob's good epistemic reputation should count when he says something that appears wild, especially if he has a track record that endorses him in these cases ("We've thought he was crazy before, but he proved us wrong"). Maybe one should think of Bob as an epistemic 'venture capitalist', making (seemingly) wild epistemic bets which are right more often than chance (and often illuminating even if wrong), even if they aren't right more often than not, and this might be enough to warrant further attention ("we... (read more)

What are the open problems in Human Rationality?

FWIW: I'm not sure I've spent >100 hours on a 'serious study of rationality'. Although I have been around a while, I am at best sporadically active. If I understand the karma mechanics, the great majority of my ~1400 karma comes from a single highly upvoted top level post I wrote a few years ago. I have pretty sceptical reflexes re. rationality, the rationality community, etc., and this is reflected in that (I think) the modal post/comment I make is critical.

On the topic 'under the hood' here:

I sympathise with the desire to... (read more)

ChristianKl (4 points, 2y): Given that the OP counts the Good Judgment project as part of the movement, I think that certainly qualifies. It's my understanding that while the Good Judgment project made progress on the question of how to think about the right probability, we still lack ways for people to integrate the making of regular forecasts into their personal and professional lives.

I know of a lot of people who continued studying and being interested in the forecasting perspective. I think the primary reason why there has been less writing from that is just that LessWrong was dead for a while, and so we've seen less writeups in general. (I also think there were some secondary factors that also contributed, but that the absence of a publishing platform was the biggest)

What are the open problems in Human Rationality?

There seem to be some foundational questions for the 'Rationality project' which (reprising my role as querulous critic) look oddly neglected in the 5-10 year history of the rationalist community: conspicuously, I find the best insight into these questions comes from psychology academia.

Is rationality best thought of as a single construct?

It roughly makes sense to talk of 'intelligence' or 'physical fitness' because performance in sub-components positively correlate: although it is hard to say which of an elite ultramarathoner, Judoka,... (read more)
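The 'single construct' question turns on whether sub-scores positively correlate (the 'positive manifold'). A deliberately simplified one-factor toy model (shared ability plus independent test noise, invented purely for illustration) shows how such a manifold arises:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
n_people, n_tests = 2000, 4
# Each person's score on each test = shared ability + test-specific noise.
g = [rng.gauss(0, 1) for _ in range(n_people)]
tests = [[g[i] + rng.gauss(0, 1) for i in range(n_people)]
         for _ in range(n_tests)]

# Every pairwise correlation between tests comes out positive (around 0.5
# here, since signal and noise have equal variance).
correlations = [pearson(tests[a], tests[b])
                for a in range(n_tests) for b in range(a + 1, n_tests)]
```

Whether rationality sub-skills actually exhibit such a manifold is exactly the empirical question; the sketch only shows what a single-construct world would look like in the data.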

Rudi C (1 point, 1y): My personal experience is that I had most of what enables me to be epistemically rational from childhood (so probably genetic), but that the exposure to behavioral economics, science and other rationality-adjacent memes early in my life significantly boosted that genetic seed. Another personal observation: I have never felt someone I know has improved their rationality. Though I also don't know almost anyone who even cares about becoming more rational.
Richard_Ngo (2 points, 2y): This point seems absolutely crucial; and I really appreciate the cited evidence.
Open Thread January 2019

On Functional Decision Theory (Wolfgang Schwarz)

I recently refereed Eliezer Yudkowsky and Nate Soares's "Functional Decision Theory" for a philosophy journal. My recommendation was to accept resubmission with major revisions, but since the article had already undergone a previous round of revisions and still had serious problems, the editors (understandably) decided to reject it. I normally don't publish my referee reports, but this time I'll make an exception because the authors are well-known figures from outside academia, and I
... (read more)
ryan_b (2 points, 2y): I feel like rejection-with-explanation is still an improvement over the norm. Maybe pulling back and attacking the wrong intuitions Schwarz is using directly and generally would be worthwhile.

Relevant excerpt for why exactly it was rejected:

The standards for deserving publication in academic philosophy are relatively simple and self-explanatory. A paper should make a significant point, it should be clearly written, it should correctly position itself in the existing literature, and it should support its main claims by coherent arguments. The paper I read sadly fell short on all these points, except the first. (It does make a significant point.) [...]
I still think the paper could probably have been published after a few rounds of major revisions
... (read more)

I'm not gonna go comment on his blog because his confusion about the theory (supposedly) isn't related to his rejection of the paper, and also because I think talking to a judge about the theory out of band would bias their judgement of the clarity of the writing in future (it would come to seem more clear and readable to them than it is, just as it would to me) and is probably bad civics, but I just have to let this out because someone is wrong on the internet, damnit

FDT says you should not pay because, if you were the kind of person who doesn't
... (read more)
Why is so much discussion happening in private Google Docs?

I'm someone who both prefers and practises the 'status quo'.

My impression is the key feature of this is limited (and author controlled) sharing. (There are other nifty features for things like gdocs - e.g. commenting 'on a line' - but this practice predates gdocs). The key benefits for 'me as author' are these:

1. I can target the best critics: I usually have a good idea of who is likely to help make my work better. If I broadcast, the mean quality of feedback almost certainly goes down.

2. I can leverage existing relatio... (read more)

if Alice sees Bob make good remarks etc., she’s more interested in ‘running a draft by him’ next time, or to respond positively if Bob asks her to look something over

This dynamic contributes to anxiety for me to comment in Google Docs, and makes it less fun than public commenting (apparently the opposite of many other people). I feel like if I fail to make a good contribution, or worse, make a dumb comment, I won't be invited to future drafts and will end up missing a lot of good arguments, or entire articles because many drafts don't get published unti

... (read more)
I don't see 'comments going to waste' issue as the greatest challenge

I think this underestimates the challenge. Empirically, people don't crosspost those comments. Periodically saying "hey it'd be good if you crossposted those private comments" won't change the underlying incentive structure.

(Similarly, the fact that one 'could' keep an eye out for posts and comments from outsiders won't change the fact that people generally don't)

Genetically Modified Humans Born (Allegedly)

Once again I plead that when you see that an expert community looks like they don't know what they're doing, it is usually more accurate to 'reduce confidence' in your understanding rather than their competence. The questions were patently not 'about forms', and covered pretty well the things I would have in mind (I'm a doctor, and I have fairly extensive knowledge of medical ethics).

To explain:

  • Although 'institutional oversight' in medicine is often derided (IRB creep, regulatory burden, and so on and so forth), one o
... (read more)
ryan_b (4 points, 2y): Nothing you are saying comes as a surprise, but my confidence in the process remains reduced. The problem here is that by having a procedure for establishing whether a patient is informed, all of the weight rests on the procedure, and virtually none on the practitioner. This is the same for all fields of expertise.

I have read many of these kinds of forms as a patient. What we want them to be for is informing the patient; what they are actually for is defending against the accusation that the patient was not informed.

The audience was very much preoccupied with how the procedure for informing the patient was conducted, and seemed to consider this the biggest red flag. I find the fact that the lead scientist kept referring to a form to be the biggest red flag, because it suggests he didn't engage the ethical issues directly.

Suppose for a moment that he did a much better job informing the patients - proper training, third-party verified composition, etc. I don't think this would have any implications at all for how He Jiankui engaged with the question of whether it was right to do this, but I do expect the audience to have been largely mollified. I see this as a problem.
No Really, Why Aren't Rationalists Winning?

I don't see the 'why aren't you winning?' critique as that powerful, and I'm someone who tends to be critical of rationality writ-large.

High-IQ societies and superforecasters select for demonstrable performance at being smart/epistemically rational. Yet on surveying these groups you see things like, "People generally do better-than-average by commonsense metrics, some are doing great, but it isn't like everyone is a millionaire". Given the barrier to entry to the rationalist community is more, "sincere interest" ... (read more)

No standard metric for CFAR workshops?

Another thing I'd be particularly interested in is longer term follow-up. It would be impressive if the changes to conscientiousness etc. observed in the 2015 study persist now.

I'd be hesitant to defend Great Man theory (and so would apply similar caution) but I think it can go some way, especially for defending a fragility of history hypothesis.

In precis (more here):

1. Conception of any given person seems very fragile. If parents decide to conceive an hour earlier or later (or have done different things earlier in the day, etc. etc.), it seems likely a different one of the 100 million available sperm fuses than the one which did. The counterpart seems naturally modelled by a sibling, and siblings are considerably different from... (read more)

Historical mathematicians exhibit a birth order effect too

I'm not sure t-tests are the best approach to take compared to something non-parametric, given the smallish sample, considerable skew, etc. (this paper's statistical methods section is pretty handy). Nonetheless I'm confident the considerable effect size (in relative terms, almost a doubling) is not an artefact of statistical technique: when I plugged the numbers into a chi-squared calculator I got P < 0.001, and I'm confident a permutation technique or similar would find much the same.
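For anyone wanting to reproduce this kind of check, here is a minimal chi-squared goodness-of-fit sketch. The counts are placeholders, not the post's actual data; the point is just the mechanics:

```python
def chi_squared_gof(observed, expected):
    """Pearson chi-squared goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Placeholder counts (NOT the post's data): 100 mathematicians from
# two-child families; the null hypothesis puts half of them firstborn.
observed = [70, 30]   # firstborn, laterborn
expected = [50, 50]
stat = chi_squared_gof(observed, expected)  # = 16.0 for these counts

# Critical value of chi-squared with 1 degree of freedom at p = 0.001:
CRITICAL_1DF_P001 = 10.828
significant = stat > CRITICAL_1DF_P001
```

With a real dataset one would feed in the actual firstborn/laterborn counts (or use a permutation test over birth ranks, as the comment suggests), but any statistic this far past the 1-df critical value gives P < 0.001.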

Informational hazards and the cost-effectiveness of open discussion of catastrophic risks

0: We agree potentially hazardous information should only be disclosed (or potentially discovered) when the benefits of disclosure (or discovery) outweigh the downsides. Heuristics can make principles concrete, and a rule of thumb I try to follow is to have a clear objective in mind for gathering or disclosing such information (and being wary of vague justifications like ‘improving background knowledge’ or ‘better epistemic commons’) and incur the least possible information hazard in achieving this.

A further heuristic which seems right to me is one shoul... (read more)

Informational hazards and the cost-effectiveness of open discussion of catastrophic risks

Thanks for writing this. How best to manage hazardous information is fraught, and although I have some work in draft and under review, much remains unclear - as you say, almost anything could have some downside risk, and never discussing anything seems a poor approach.

Yet I strongly disagree with the conclusion that the default should be to discuss potentially hazardous (but non-technical) information publicly, and I think your proposals of how to manage these dangers (e.g. talk to one scientist first) are generally too lax. I provide the substance of... (read more)

Thanks for this and subsequent comment which generally helped me to update my views on the problem and become even more cautious in discussing things.

Some thoughts appeared in my mind while reading, maybe I will have more thoughts later:

1. It looks like all the talk about infohazards could be boiled down to just one thesis: "biorisk is much more serious x-risk than AI safety, but we decided not to acknowledge it, as it could be harmful".

2. Almost all work in AI safety is based on "red-teaming": someone comes with an idea X how to... (read more)

Thrasymachus (4 points, 3y): 0: We agree potentially hazardous information should only be disclosed (or potentially discovered) when the benefits of disclosure (or discovery) outweigh the downsides. Heuristics can make principles concrete, and a rule of thumb I try to follow is to have a clear objective in mind for gathering or disclosing such information (and being wary of vague justifications like ‘improving background knowledge’ or ‘better epistemic commons’) and incur the least possible information hazard in achieving this.

A further heuristic which seems right to me is one should disclose information in the way that maximally disadvantages bad actors versus good ones. There are a wide spectrum of approaches that could be taken that lie between ‘try to forget about it’ and ‘broadcast publicly’, and I think one of the intermediate options is often best.

1: I disagree with many of the considerations which push towards more open disclosure and discussion.

1.1: I don’t think we should be confident there is little downside in disclosing dangers a sophisticated bad actor would likely rediscover themselves. Not all plausible bad actors are sophisticated: a typical criminal or terrorist is no mastermind, and so may not make (to us) relatively straightforward insights, but could still ‘pick them up’ from elsewhere.

1.2: Although a big fan of epistemic modesty (and generally a detractor of ‘EA exceptionalism’), EAs do have an impressive track record in coming up with novel and important ideas. So there is some chance of coming up with something novel and dangerous even without exceptional effort.

1.3: I emphatically disagree we are at ‘infohazard saturation’ where the situation re. infohazards ‘can’t get any worse’. I also find it unfathomable ever being confident enough in this claim to base strategy upon its assumption (cf. eukaryote’s comment).

1.4: There are some
Societal Growth Requires Rehabilitation

This seems right to me, and at least the 'motte' version of growth mindset accepts that innate ability may set pretty hard envelopes on what you can accomplish regardless of how energetic/agently you pursue self improvement (and this can apply across a range of ability - although it seems cruel and ludicrous to suggest someone with severe cognitive impairment can master calculus, it also seems misguided to suggest someone in middle age can become a sports star if they really go for it). As you say, taking growth mindset 'too far' has a ... (read more)

The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo
A healthy topology of the field should have approximately power-law distribution of hub sizes. This should be true also for related research fields we are trying to advance, like AI alignment or x-risk. If the structure is very far from that (e.g. one or two very big hubs, then nothing, then a lot of two orders of magnitude smaller groups fighting for mere existence), the movement should try to re-balance, supporting growth of medium-tier hubs.

Although my understanding of network science is abecedarian, I'm unsure of both whether this feature is diag... (read more)

Jan_Kulveit (6 points, 3y): 1) Thanks for the pointer to the data. I have to agree that if the surveys are representative of the EA / rationalist community, then actually there are enough medium-sized hubs. When plotting it, the data seem to look reasonably power-lawy (an argument for greater centralization could have the form of arguing for a different exponent).

I'm unsure about what the data actually show - at least my intuitive impression is much more activity is going on in the Bay Area than suggested by the surveys. A possible reason may be that the surveys count equally everybody above some relatively low level of engagement (willingness to fill in a survey), and if we had data weighted by engagement/work effort/... it would look very different. If the complaint is that hubs are "sucking in" the most active people from smaller hubs, then big differences between "population size" and "results produced" can be a consequence (effectively wasting the potential of some medium-sized hubs, because some key core people left, damaging the local social structure of the hub).

2) Yes, there are many effects leading to power laws (and influencing their exponents). In my opinion, rather than trying to argue from first principles which of these effects are good and bad, it may be more useful to find comparable examples (e.g. of young research fields, or successful social movements), and compare their structures. My feel is the rationality/EA/AI safety communities are getting it somewhat wrong. Certain 'jobs' seem to have this property: a technical AI researcher in (say) Japan probably can have greater EV working in an existing group (most of which are in the Bay) rather than trying to seed a new AI safety group in Japan. This certainly seems to be the prevalent intuition in the field, based on EV guesstimates, etc., and IMO could be wrong. Or, speculation, possibly isn't wrong _per se_, but does not take into acc
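The aside about 'arguing for a different exponent' can be made concrete: the standard maximum-likelihood (Hill-type) estimator recovers the exponent of a continuous Pareto tail. Synthetic 'hub sizes' are used below, since the survey data isn't reproduced here:

```python
import math
import random

def pareto_mle_alpha(sizes, x_min):
    """MLE of the exponent alpha for a continuous Pareto tail above x_min:
    alpha_hat = n / sum(ln(x_i / x_min))."""
    tail = [s for s in sizes if s >= x_min]
    return len(tail) / sum(math.log(s / x_min) for s in tail)

rng = random.Random(0)
true_alpha, x_min = 1.5, 10.0
# Draw synthetic 'hub sizes' from Pareto(alpha) via inverse-CDF sampling.
sizes = [x_min * (1 - rng.random()) ** (-1 / true_alpha) for _ in range(5000)]
alpha_hat = pareto_mle_alpha(sizes, x_min)
```

On real hub-size data the estimate (and its error bar) is what one would compare between communities, rather than eyeballing a log-log plot.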


It also risks a backfire effect. If one is in essence a troll happy to sneer at what rationalists do regardless of merit (e.g. "LOL, look at those losers trying to LARP Ender's Game!"), seeing things like Duncan's snarky parenthetical remarks would just spur me on, as it implies I'm successfully 'getting a rise' out of the target of my abuse.

It seems responses to criticism that is unpleasant or uncharitable are best addressed specifically to the offending remarks (if they're on LW2, this seems like pointing out the fall... (read more)

I also think I got things about right, but I think anyone else taking an outside view would've expected roughly the same thing.

I think you might be doing yourself a disservice. I took it that the majority of contemporary criticism was directed more towards (in caricature) 'this is going to turn into a nasty cult!' than (what I took your key insight to be) 'it will peter out because the commander won't actually have the required authority'.

So perhaps the typical 'anyone else' would have alighted on the wrong outside view, or ... (read more)

Bravo - I didn't look at the initial discussion, or I would have linked your pretty accurate looking analysis (on re-skimming, Deluks also had points along similar lines). My ex ante scepticism was more a general sense than a precise pre-mortem I had in mind.

Although I was sufficiently sceptical of this idea to doubt it was 'worth a shot' ex ante,(1) I was looking forward to being pleasantly surprised ex post. I'm sorry to hear it didn't turn out as well as hoped. This careful and candid write-up should definitely be included on the 'plus' side of the ledger for this project.

With the twin benefits of no skin in the game and hindsight, I'd like to float another account which may synthesize a large part of 'why it didn't work'.

Although I understand DAB wasn'... (read more)

4adifferentface3yThis was roughly my ex ante analysis.
I assume the legal 'fact on the ground' is that the participants of DAB were co-signatories on a lease, making significant financial contributions, with no mechanism for the designated 'commander' to kick people out unilaterally.

This is approximately correct--not all of us were on the lease, and not all of us were making significant financial contributions. But someone who was on the lease and was making a significant financial contribution could have made it highly difficult to evict them, even if everyone else in the house wanted them... (read more)

Comments on Power Law Distribution of Individual Impact

This new paper may be of relevance (H/T Steve Hsu). The abstract:

The largely dominant meritocratic paradigm of highly competitive Western cultures is rooted on the belief that success is due mainly, if not exclusively, to personal qualities such as talent, intelligence, skills, efforts or risk taking. Sometimes, we are willing to admit that a certain degree of luck could also play a role in achieving significant material success. But, as a matter of fact, it is rather common to underestimate the importance of external forces in individual successful storie
... (read more)
4habryka3yHuh, I am surprised that this got published. The model proposed seems almost completely equivalent to the O-ring paper that has a ton of literature on it, that had roughly the same results. And it doesn't have any empirical backing, so that's even more confusing. I mean, it's a decent illustration, but it does really seem to not be saying anything new in this space. They also weirdly overstate their point. The correlation between luck and talent heavily depends on the number of iterations and initial distribution parameters their model assumes, and they seem to just have arbitrarily fixed them for their abstract, and later in the paper they basically say "if you change these parameters, the correlation of talent with success goes up drastically, and the resulting distribution still fits the data". I.e. the only interesting thing that they've shown is that if you have repeated trials with probabilities drawn from a normal distribution, you get a heavy-tailed distribution, which is a trivial statistical fact addressed in hundreds of papers.
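The 'trivial statistical fact' habryka points to - that repeated trials with success probabilities drawn from a normal distribution yield a heavy-tailed outcome distribution - can be illustrated with a toy simulation. This is a deliberately simplified sketch, not the paper's actual model; all parameter values (talent mean/SD, event probability, step count) are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(0)

def simulate(n_agents=10_000, n_steps=40, event_prob=0.5):
    """Each agent gets a normally distributed 'talent' in [0, 1]. At each
    step an event may occur; it doubles the agent's capital with probability
    equal to their talent, and halves it otherwise."""
    outcomes = []
    for _ in range(n_agents):
        talent = min(max(random.gauss(0.6, 0.1), 0.0), 1.0)
        capital = 1.0
        for _ in range(n_steps):
            if random.random() < event_prob:
                capital = capital * 2 if random.random() < talent else capital / 2
        outcomes.append(capital)
    return outcomes

caps = simulate()
# Heavy-tail diagnostics: the mean dwarfs the median, and the top 1% of
# agents hold a large share of the total.
ratio = statistics.mean(caps) / statistics.median(caps)
top_share = sum(sorted(caps)[-100:]) / sum(caps)
```

Even though talent is normally distributed and every agent starts identically, the multiplicative dynamics concentrate most of the total 'capital' in a small fraction of agents - which is all the model's headline result amounts to.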
Meta-tations on Moderation: Towards Public Archipelago

I endorse Said's view, and I've written a couple of frontpage posts.

I would also add that I think Said is a particularly able and shrewd critic, and I think LW2 would be much poorer if there were a chilling effect on his contributions.

[Meta] New moderation tools and moderation guidelines

I'm also mystified as to why traceless deletion/banning are desirable properties to have on a forum like this. But (with apologies to the moderators) I think consulting the realpolitik will spare us the futile task of litigating these issues on the merits. Consider it instead a fait accompli with the objective of attracting a particular writer LW2 wants by catering to his whims.

For whatever reason, Eliezer Yudkowsky wants the ability to block commenters and to do traceless deletion on his own work, and he's been quite clear t... (read more)

-1Ben Pace3yAs usual Greg, I will always come to you first if I ever need to deliver well-articulated sick burn that my victim needs to read twice before they can understand ;-) Edit: Added a smiley to clarify this was meant as a joke.

Yeah, I didn't want to make this a thread about discussing Eliezer's opinion, so I didn't put that front and center, but Eliezer only being happy to crosspost things if he has the ability to delete things was definitely a big consideration.

Here is my rough summary of how this plays into my current perspective on things:

1. Allowing users to moderate their own posts and set their own moderation policies on their personal blogs is something I wanted before we even talked to Eliezer about LW2 the first time.

2. Allowing users to moderate ... (read more)

How the LW2.0 front page could be better at incentivizing good content

FWIW, I struggle to navigate the front page to find good posts (I struggle to explain why - I think I found the 'frontpage etc.' of earlier versions easier). What I do instead is look at the comments feed and click through to articles that way, which seems suboptimal, as lots of comments may not be a very precise indicator of quality.


FWIW, this aptly describes my own adverse reaction to the OP. "I have this great insight, but not only can't I explain it to you, I'm going to spend the balance of my time explaining why you couldn't understand it if I tried" sounds awfully close to Bulveristic stories like, "If only you weren't blinded by sin, you too would see the glory of the coming of the Lord".

That the object-level benefits offered seem to be idiographic self-exaltations augurs still poorer (i.e. I cut through confusion so m... (read more)

Comments on Power Law Distribution of Individual Impact

I was unaware of the range restriction, which could well compress SD. That said, if you take the '9' scorers as '9 or more', then you get something like this (using 20-25)

The mean value is around 7 (6.8), and 7% get 9 or more, suggesting 9 sits at around +1.5SD assuming normality. So when you get a sample size in the thousands, you should start seeing scores of 11 or so (+3SD) - I wouldn't be startled to find Ben has this level of ability. But scores of (say) 15 or higher (+6SD) should be seen only extraordinarily rarely.

If you use ... (read more)
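For readers who want to check the arithmetic above, the normality reasoning can be sketched in a few lines. The inputs (mean 6.8, ~7% scoring 9 or more) are the figures from the comment; `NormalDist` is in the Python standard library:

```python
from statistics import NormalDist

# Figures from the comment: mean digit span ~6.8, ~7% score 9 or more.
mean = 6.8
z_9 = NormalDist().inv_cdf(1 - 0.07)     # z-score placing 9 at the 93rd percentile (~1.48)
sd = (9 - mean) / z_9                    # implied SD under normality (~1.5)

score_3sd = mean + 3 * sd                # ~11: expected to appear in samples of thousands
p_15 = 1 - NormalDist(mean, sd).cdf(15)  # normal tail probability of scoring 15+
```

With these inputs the implied SD comes out around 1.5, +3SD lands near 11, and the tail probability of a 15+ score is on the order of 10^-8 - consistent with the claim that such scores should be vanishingly rare if the distribution were normal.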

4Ben Pace3yQuick sanity check: 4.5SD = roughly 1 in 300,000 (according to Wikipedia). UK population = roughly 50 million. So there'd be ~50 * 3 = 150 people in the UK who should be able to get scores at ~21 or more, which seems quite plausible to me. Also I know a few IMO people; I bet we could test this.
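Ben's sanity check is easy to reproduce (the population figure is the rough one given in his comment; the ~21 score threshold is taken on trust):

```python
from statistics import NormalDist

p_tail = 1 - NormalDist().cdf(4.5)       # P(Z > 4.5) ~ 3.4e-6, i.e. ~1 in 294,000
uk_population = 50_000_000               # rough figure from the comment
expected_count = uk_population * p_tail  # ~170 people, same order as Ben's ~150
```

The exact count comes out nearer 170 than 150, but the order-of-magnitude conclusion is unchanged.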
Comments on Power Law Distribution of Individual Impact

I'm aware of normalisation, hence I chose things which have some sort of 'natural cardinal scale' (i.e. 'how many Raven's do you get right' doesn't really work, but 'how many things can you keep in mind at once' is better, albeit imperfect).

Not all skew entails a log-normal (or some similar, assumedly heavy-tailed) distribution. This applies to the graph for digit span you cite here. The mean of the data is around 5, and the SD is around 2. Having ~11% at +1SD (7) and about 3% at +2SD (9) is a lot closer to n... (read more)

5habryka3yHuh, I notice that I am confused about no one in the sample having a larger digit span than 9. Do we know whether they didn't just stop measuring after 9?
In defence of epistemic modesty

Sorry you disliked the post so much. But you might have liked it more had you looked at the bit titled 'community benefits to immodesty', where I talk about the benefits of people striking out beyond expert consensus (though even if they should act 'as if' their contra-expert take were correct, they should nonetheless defer to the consensus for 'all things considered' views).

In defence of epistemic modesty

No. I chose him as a mark of self-effacement. When I was younger I went around discussion forums about philosophy, and people commonly named themselves after the greats like Socrates, Hume, etc. Given Thrasymachus's claim to fame is being rude, making some not-great arguments, and getting spanked by Socrates before the real discussion started (although I think most experts consider Socrates's techne-based reply pretty weak), I thought he would be a more accurate pseudonym.

1[deleted]4yGood answer :)
Contra double crux

Sorry for misreading your original remark. Happy to offer the bet in conditional, i.e.:

Conditional on CFAR producing results of sufficient quality for academic publication (as judged by someone like Christiano or Karnofsky) these will fail to demonstrate benefit on a pre-specified objective outcome measure

Contra double crux

Thanks for your reply. Given my own time constraints I'll decline your kind offer to discuss this further (I would be interested in reading some future synthesis). As consolation, I'd happily take you up on the modified bet. Something like:

Within the next 24 months CFAR will not produce results of sufficient quality for academic publication (as judged by someone like Christiano or Karnofsky) that demonstrate benefit on a pre-specified objective outcome measure

I guess 'demonstrate benefit' could be stipulated as 'p<0.05 on some a

... (read more)
2habryka4yAh, the 4-1 to one bet was a conditional one: I don't know CFAR's current plans well enough to judge whether they will synthesize the relevant evidence. I am only betting that if they do, the result will be positive. I am still on the fence of taking a 4-1 bet on this, but the vast majority of my uncertainty here comes from what CFAR is planning to do, not what the result would be. I would probably take a 5-1 bet on the statement as you proposed it.
Contra double crux

I hope readers will forgive a 'top level' reply from me, its length, and that I plan to 'tap out' after making it (save for betting). As pleasant as this discussion is, other demands pull me elsewhere. I offer a summary of my thoughts below - a mix of dredging up points I made better 3-4 replies deep than I managed in the OP, and of replies to various folks at CFAR. I'd also like to bet (I regret to decline Eli's offer for reasons that will become apparent, but I hope to make some agreeable counter-offers).

I persist in thr

... (read more)
-3Conor Moreton4yThis comment combines into one bucket several different major threads that probably each deserve their own bucket (e.g. the last part seems like strong bets about CFAR's competence that are unrelated to "is double crux good"). Personally I don't like that, though it doesn't seem objectively objectionable.
5habryka4yI agree with the gist of the critique of double crux as presented here, and have had similar worries. I don't endorse everything in this comment, but think taking it seriously will positively contribute to developing an art of productive disagreement. I think the bet at the end feels a bit fake to me, since I think it is currently reasonable to assume that publishing a study in a prestigious psychology journal is associated with something around 300 person hours of completely useless bureaucratic labor [], and I don't think it is currently worth it for CFAR to go through that effort (and neither I think is it for almost anyone else). However, if we relax the constraints to only reaching the data quality necessary to publish in a journal (verified by Carl Shulman or Paul Christiano or Holden Karnofsky, or whoever we can find who we would both trust to assess this), I am happy to take you up on your 4-to-1 bet (as long as we are measuring the effect of the current set of CFAR instructors teaching, not some external party trying to teach the same techniques, which I expect to fail). I sadly currently don't have the time to write a larger response about the parts of your comment I disagree with, but think this is an important enough topic that I might end up writing a synthesis on things in this general vicinity, drawing from both yours and other people's writing. For now, I will leave a quick bullet list of things I think this response/argument is getting wrong: * While your critique is pointing out true faults in the protocol of double crux, I think it has not yet really engaged with some of the core benefits I think it brings. You somewhat responded to this by saying that you think other people don't agree what double crux is about, which is indeed evidence of the lack of a coherent benefit, however I claim that if you would dig deeper into those people's opinion, you will find that the core b
Contra double crux

I also notice that I can't predict whether you'll look at the "prioritize discussion based on the slope of your possible update combined with the other party's belief" version that I give here and say "okay, but that's not double crux" or "okay, but the motion of double crux doesn't point there as efficiently as something else" or "that doesn't seem like the right step in the dance, tho."

I regret it is unclear what I would say given what I have written, but it is the former ("okay,

... (read more)
Contra double crux

Hello Dan,

I'm not sure whether these remarks are addressed 'as a reply' to me in particular. That you use the 'marginal tax rate in the UK' example I do suggests this might be meant as a response. On the other hand, I struggle to locate the particular loci of disagreement - or rather, I see in your remarks an explanation of double crux which includes various elements I believe I both understand and object to, but not reasons that argue against this belief (e.g. "you think double crux involves X, but actually it is X*, and thus

... (read more)
5Unnamed4yI agree that it would be good to look at some real examples of beliefs rather than continuing with hypothetical examples and abstract arguments. Your suggestion for what hard data to get isn't something that we can do right now (and I'm also not sure if I disagree with your prediction). We do have some real examples of beliefs and (first attempts at stating) cruxes near at hand, in this comment from Duncan and in this post from gjm (under the heading "So, what would change my mind?") and Raemon (under the heading "So: My Actual Cruxes"). And I'd recommend that anyone who cares about cruxes or double crux enough to be reading this three-layers-deep comment, and who has never had a double crux conversation, pick a belief of theirs, set a 5 minute timer, and spend that time looking for cruxes. (I recommend picking a belief that is near the level of actions, not something on the level of a philosophical doctrine.) In response to your question about whether my comments were aimed at you: They were partly aimed at you, partly aimed at other LWers (taking you as one data point of how LWers are thinking about cruxes). My impression is that your model of cruxes and double crux is different from the thing that folks around CFAR actually do, and I was trying to close that gap for you and for other folks who don't have direct experience with double crux at CFAR. For my first comment: the OP had several phrases like "traced to a single underlying consideration" which I would not use when talking about cruxes. Melissa's current belief that she should start a gizmo company isn't based on a single consideration, it's a result of the fact that several different factors line up in a way that makes that specific plan look like an es
Contra double crux

Thanks for presenting this helpful data. If you'll forgive the (somewhat off-topic) question: I understand both that you are responsible for evaluation of CFAR, and that you are working on a new evaluation. I'd be eager to know what this is likely to comprise, and especially (see various comments) what evidence (if any) is expected to be released 'for public consumption'.

Contra double crux

Thank you for your gracious reply. I read in it a couple of overarching themes, in which I would like to frame my own: the first is the 'performance issue' (i.e. 'how good is double crux at resolving disagreement/getting closer to the truth?'); the second the 'pedagogical issue' (i.e. 'how good is double crux at the second-order task of getting people better at resolving disagreement/getting closer to the truth?'). I now better understand that you take the main support for double crux to draw upon the latter issue, but I

... (read more)
Contra double crux

I guess my overall impression is that including the cases you specify in a double-cruxy style looks, by my lights, more like adding epicycles than helpfully augmenting the concept of double crux.

Non-common knowledge cruxes

I had a sentence in the OP on crux asymmetry along the lines of 'another case may be that X believes they have a crux for B which Y is unaware of'. One may frame this along the lines of an implicit crux of 'there's no decisive consideration that changes my mind about B', for which a proposed 'silver bullet

... (read more)
Contra double crux

I guess there might be a selection effect: 'mature philosophers' might have spent a lot of time hashing out their views at earlier stages (e.g. undergrad, graduate school). So it may not be that surprising to find that, in the subject of their expertise, their credences on the issues are highly resilient, such that they change their mind rarely, and only after considerable amounts of evidence gathered over a long time.

Good data would be whether, outside of this, these people are good at hashing out cases where they have less resilient credences, bu

... (read more)