Mass_Driver

Comments

The Technique Taboo

I agree with this post. I'd add that from what I've seen of medical school (and other high-status vocational programs like law school, business school, etc.), there is still a disproportionate emphasis on talking about the theory of the subject matter vs. building skill at the ultimate task. Is it helpful to memorize the names of thousands of arteries and syndromes and drugs in order to be a doctor? Of course. Is that *more* helpful than doing mock patient interviews and mock chart reviews and live exercises where you try to diagnose a tumor or a fracture or a particular kind of pus? Is it *so* much more helpful that it makes sense to spend 40x more hours on biochemistry than on clinical practice? My impression of medical school is that you do go on clinical rounds and do internships and things, but that the practical side is mostly a trial by fire where you are expected to improvise many of your techniques, often after seeing them demonstrated only once or twice, often with minimal supervision, and usually with little or no coaching or after-the-fact feedback. The point of the internships and residencies seems to be primarily to accomplish low-prestige medical labor, not to help medical students improve their skills.

I'd be curious to hear from anyone who disagrees with me about medical school. I'm not super-confident about this assessment of medical school; I'm much more confident that an analogous critique applies well to law school and business school. Lawyers learn the theory of appellate decision-making, not how to prepare a case for trial or negotiate a settlement or draft a contract. MBAs learn economics and financial theory, not how to motivate or recruit or evaluate their employees.

As far as *why* we don't see more discussion about how to improve technique, I think part of it is just honest ignorance. Most people aren't very self-reflective and don't think very much about whether they're good at their jobs or what it means to be good at their jobs or how they could become better. Even when people do take time to reflect on what makes a good [profession], they may not have the relevant background to draw useful conclusions. Academic authorities often have little or no professional work experience; the median law professor has tried zero lawsuits; the median dean of a business school has never launched a startup; the median medical school lecturer has never worked as a primary care physician in the suburbs.

Some of it may be, as Isnasene points out, a desire to avoid unwanted competition. If people are lazy and want to enjoy high status that they earned a long time ago without putting in further effort, they might not want to encourage comparisons of skill levels.

Finally, as Isusr suggests, some of the taboo probably comes from an effort to preserve a fragile social hierarchy, but I don't think the threat is "awareness of internal contradictions"; I think the threat is simply a common-sense idea of fairness or equity. If authorities or elites are no more objectively skillful than a typical member of their profession, then there is little reason for them to have more power, more money, or easier work. Keeping the conversation firmly fixed on discussion *about* the profession (rather than discussion about *how to do* the profession) helps obscure the fact that the status of elites is unwarranted.

The abruptness of nuclear weapons

I like the style of your analysis. I think your conclusion is wrong because of wonky details about World War 2. 4 years of technical progress at anything important, delivered for free on a silver platter, would have flipped the outcome of the war. 4 years of progress in fighter airplanes means you have total air superiority and can use enemy tanks for target practice. 4 years of progress in tanks means your tanks are effectively invulnerable against their opponents, and slice through enemy divisions with ease. 4 years of progress in manufacturing means you outproduce your opponent 2:1 at the front lines and overwhelm them with numbers. 4 years of progress in cryptography means you know your opponent's every move and they are blind to your strategy.

Meanwhile, the kiloton bombs were only able to cripple cities "in a single mission" because nobody was watching out for them. Early nukes were so heavy that it's doubtful whether the slow, clumsy planes that carried them could have arrived at their targets against determined opposition.

There is an important sense in which fission energy is discontinuously better than chemical energy, but it's not obvious that this translates into a discontinuity in strategic value per year of technological progress.

1) I agree with the very high-level point that there are lots of rationalist group houses with flat / egalitarian structures, and so it might make sense to try one that's more authoritarian to see how that works. Sincere kudos to you for forming a concrete experimental plan and discussing it in public.

2) I don't think I've met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc. The main reason this makes me uncomfortable is that I don't see you owning this desire anywhere in your long post. Like, if you had said, just once, "I think I would enjoy being a leader, and I think you might enjoy being led by me," I would feel calmer. Instead I'm worried that you have convinced yourself that you are grudgingly stepping up as a leader because it's necessary and no one else will. If you're not being fully honest about your motivations for nominating yourself to be an authoritarian leader, what else are you hiding?

3) Your post has a very high ratio of detailed proposals to literature review. I would have liked to see you discuss other group houses in more detail, make reference to articles or books or blog posts about the theory of cohousing and of utopian communities more generally, or otherwise demonstrate that you have done your homework to find out what has worked, what has not worked, and why. None of your proposals sound obviously bad to me, and you've clearly put some thought and care into articulating them, but it's not clear whether your proposals are backed up by research, or whether you're just reasoning from your armchair.

4) Why should anyone follow you on an epic journey to improve their time management skills if you're sleep-deprived and behind schedule on writing a blog post? Don't you need to be more or less in control of your own lifestyle before you can lead others to improve theirs?

Expecting Short Inferential Distances

And if you think you can explain the concept of "systematically underestimated inferential distances" briefly, in just a few words, I've got some sad news for you...

"I know [evolution] sounds crazy -- it didn't make sense to me at first either. I can explain how it works if you're curious, but it will take me a long time, because it's a complicated idea with lots of moving parts that you probably haven't seen before. Sometimes even simple questions like 'where did the first humans come from?' turn out to have complicated answers."

Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

I am always trying to cultivate a little more sympathy for people who work hard and have good intentions! CFAR staff definitely fit in that basket. If your heart's calling is reducing AI risk, then work on that! Despite my disappointment, I would not urge anyone who's longing to work on reducing AI risk to put that dream aside and teach general-purpose rationality classes.

That said, I honestly believe that there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. I think each of those worthy goals is best pursued separately.

Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

Yeah, that pretty much sums it up: do you think it's more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it's more important for rationalists to broaden their network so that rationalists have more examples to learn from?

Shockingly, as a lawyer who's working on homelessness and donating to universal income experiments, I prefer a more general focus. Just as shockingly, the mathematicians and engineers who have been focusing on AI for the last several years prefer a more specialized focus. I don't see a good way for us to resolve our disagreement, because the disagreement is rooted primarily in differences in personal identity.

I think the evidence is undeniable that rationality memes can help young, awkward engineers build a satisfying social life and increase their productivity by 10% to 20%. As an alum of one of CFAR's first minicamps back in 2011, I'd hoped that rationality would amount to much more than that. I was looking forward to seeing rationalist tycoons, rationalist Olympians, rationalist professors, rationalist mayors, rationalist DJs. I assumed that learning how to think clearly and act accordingly would fuel a wave of conspicuous success, which would in turn attract more resources for the project of learning how to think clearly, in a rapidly expanding virtuous cycle.

Instead, five years later, we've got a handful of reasonably happy rationalist families, an annual holiday party, and a couple of research institutes dedicated to pursuing problems that, by definition, will provide no reliable indicia of their success until it is too late. I feel very disappointed.

Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

Well, like I said, AI risk is a very important cause, and working on a specific problem can help focus the mind, so running a series of AI-researcher-specific rationality seminars would offer the benefits of (a) reducing AI risk, (b) improving morale, and (c) encouraging rationality researchers to test their theories using a real-world example. That's why I think it's a good idea for CFAR to run a series of AI-specific seminars.

What is the marginal benefit gained by moving further along the road to specialization, from "roughly half our efforts these days happen to go to running an AI research seminar series" to "our mission is to enlighten AI researchers"? The only marginal benefit I would expect is the potential for an even more rapid reduction in AI risk, caused by being able to run, e.g., 4 seminars a quarter for AI researchers, instead of 2 for AI researchers and 2 for the general public. I would expect any such potential to be seriously outweighed by the costs I describe in my main post (e.g., losing out on rationality techniques that would be invented by people who are interested in other issues), such that the marginal effect of moving from 50% specialization to 100% specialization would be to increase AI risk. That's why I don't want CFAR to specialize in educating AI researchers to the exclusion of all other groups.

Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

I dislike CFAR's new focus, and I will probably stop my modest annual donations as a result.

In my opinion, the most important benefit of cause-neutrality is that it safeguards the integrity of the young and still-evolving methods of rationality. If it is official CFAR policy that reducing AI risk is the most important cause, and CFAR staff do almost all of their work with people who are actively involved with AI risk, and then go and do almost all of their socializing with rationalists (most of whom also place a high value on reducing AI risk), then there will be an enormous temptation to discover, promote, and discuss only those methods of reasoning that support the viewpoint that reducing AI risk is the most important value. This is bad partly because it might stop CFAR from changing its mind in the face of new evidence, but mostly because the methods that CFAR will discover (and share with the world) will be stunted -- students will not receive the best-available cognitive tools; they will only receive the best-available cognitive tools that encourage people to reduce AI risk. You might also lose out on discovering methods of (teaching) rationality that would only be found by people with different sorts of brains -- it might turn out that the sort of people who strongly prioritize friendly AI think in certain similar ways, and if you surround yourself with only those people, then you limit yourself to learning only what those people have to teach, even if you somehow maintain perfect intellectual honesty.

Another problem with focusing exclusively on AI risk is that it is such a Black Swan-type problem that it is extremely difficult to measure progress, which in turn makes it difficult to assess the value or success of any new cognitive tools. If you work on reducing global warming, you can check the global average temperature. More importantly, so can any layperson, and you can all evaluate your success together. If you work on reducing nuclear proliferation for ten years, and you haven't secured or prevented a single nuclear warhead, then you know you're not doing a good job. But how do you know if you're failing to reduce AI risk? Even if you think you have good evidence that you're making progress, how could anyone who's not already a technical expert possibly assess that progress? And if you propose to train all of the best experts in your methods, so that they learn to see you as a source of wisdom, then how many of them will retain the capacity to accuse you of failure?

I would not object to CFAR rolling out a new line of seminars that are specifically intended for people working on AI risk -- it is a very important cause, and there's something to be gained in working on a specific problem, and as you say, CFAR is small enough that it can't do it all. But what I hear you saying is that the mission is now going to focus exclusively on reducing AI risk. I hear you saying that if all of CFAR's top leadership is obsessed with AI risk, then the solution is not to aggressively recruit some leaders who care about other topics, but rather to just be honest about that obsession and redirect the institution's policies accordingly. That sounds bad. I appreciate your transparency, but transparency alone won't be enough to save the CFAR/MIRI community from the consequences of deliberately retreating into a bubble of AI researchers.

Rationality Quotes Thread February 2016

Does anyone know what happened to TC Chamberlin's proposal? In other words, shortly after 1897, did he in fact manage to spread better intellectual habits to other people? Why or why not?

Help Build a Landing Page for Existential Risk?

Thank you! I see that some people voted you down without explaining why. If you don't like someone's blurb, please either contribute a better one or leave a comment to specifically explain how the blurb could be improved.
