I appreciate Zoe Curzi's revelations of her experience with Leverage. I know how hard it is to speak up when few or no others do, and when people are trying to keep things under wraps.
I haven't posted much publicly about my experiences working as a researcher at MIRI (2015-2017) or around CFAR events, to a large degree because I've been afraid. Now that Zoe has posted about her experience, I find it easier to do so, especially after the post was generally well-received by LessWrong.
I felt moved to write this, not just because of Zoe's post, but also because of Aella's commentary:
I've found established rationalist communities to have excellent norms that prevent stuff like what happened at Leverage. The times when it gets weird are typically when
There's an idea in computer science that the maximum theoretical speedup achievable with an arbitrary number of processors is limited by the fraction of the program that can be parallelized. If a program has two segments that take equal time on one CPU core, where the first segment can't be parallelized at all and the second is perfectly parallelizable, we can only run the program twice as fast, no matter how many CPU cores we have.
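The two-segment example above is an instance of Amdahl's law; here is a minimal sketch of the arithmetic (the function name `speedup` is my own, not from the original):

```python
def speedup(p, n):
    """Amdahl's law: maximum speedup with n processors when a
    fraction p of the single-core runtime is perfectly
    parallelizable and the rest is strictly serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Half the runtime parallelizable (p = 0.5), as in the example:
print(speedup(0.5, 4))        # 4 cores gives only 1.6x
print(speedup(0.5, 10**9))    # even a billion cores approaches, never exceeds, 2x
```

No matter how large `n` grows, the serial half still costs half the original runtime, so the speedup is capped at 2.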
There's a similar idea in economics. It seems like the most powerful and civilizationally relevant feature controlling the medium to long term change in the price of goods is the extent to which the production of that good can be decoupled...
This is an essay about one of those "once you see it, you will see it everywhere" phenomena. It is a psychological and interpersonal dynamic roughly as common, and almost as destructive, as motte-and-bailey, and at least in my own personal experience it's been quite valuable to have it reified, so that I can quickly recognize the commonality between what I had previously thought of as completely unrelated situations.
The original quote referenced in the title is "There are three kinds of lies: lies, damned lies, and statistics."
Gyroscopes are weird.
Except they're not. They're quite normal and mundane and straightforward. The weirdness of gyroscopes is a map-territory confusion—gyroscopes seem weird because my map is poorly made, and predicts that they will do something other than their normal,...
[Epistemic status: The authors of this book make many factual claims that I'm not equipped to conclusively verify. Much of the publicly available information on the food industry comes from agribusinesses themselves or from activists who bitterly oppose them. In this review I've tried to summarize the authors' claims as they've presented them, with the occasional corroborating link, but as a layman I can't offer a much more complex perspective on these topics beyond what I learned from this book. The value judgments expressed in this review are my attempt to capture the authors' point of view, except where otherwise noted. I've absorbed many convincing arguments against factory farming from Effective Altruists over the years though, and as of this writing I've drastically cut back my meat...
(Content warning: self-harm, parts of this post may be actively counterproductive for readers with certain mental illnesses or idiosyncrasies.)
What doesn't kill you makes you stronger. ~ Kelly Clarkson.
No pain, no gain. ~ Exercise motto.
The more bitterness you swallow, the higher you'll go. ~ Chinese proverb.
I noticed recently that, at least in my social bubble, pain is the unit of effort. In other words, how hard you are trying is explicitly measured by how much suffering you put yourself through. In this post, I will share some anecdotes of how damaging and pervasive this belief is, and propose some counterbalancing ideas that might help rectify this problem.
1. As a child, I spent most of my evenings studying mathematics under some amount of supervision from my mother. While...
Many people have recommended the book The Beginning of Infinity: Explanations that Transform the World by David Deutsch to me. I don’t know how, because I can’t imagine any of them actually finished it. Previously on my blog I’ve reviewed books and been critical of aspects of them. But this post is more of a summary of The Beginning of Infinity. I decided to write it this way because this book is very complicated, reasonably long and frequently misunderstood. Deutsch is a physicist at Oxford and a pioneer of quantum computing, but his interests are wide-ranging.
In this book I argue that all progress,
In [Prediction] We are in an Algorithmic Overhang I made technical predictions without much explanation. In this post I explain my reasoning. This prediction is contingent on there not being a WWIII or equivalent disaster disrupting semiconductor fabrication.
I wouldn't be surprised if an AI takes over the world in my lifetime. The idea makes me uncomfortable. I question my own sanity. At first I think "no way could the world change that quickly". Then I remember that technology is advancing exponentially. The world is changing faster than it ever has before, and the pace is accelerating.
Superintelligence is possible. The laws of physics demand it. If superintelligence is possible then it is inevitable. Why haven't we built one yet? There are four candidate limitations:
Yesterday* I talked about a potential treatment for Long Covid, and referenced an informal study I’d analyzed that tried to test it, which had seemed promising but was ultimately a letdown. That analysis was too long for its own post, so it’s going here instead.
Gez Medinger ran an excellent-for-its-type study of interventions for long covid, with a focus on niacin, the center of the stack I took. I want to emphasize both how very good for its type this study was, and how limited the type is. Surveys of people in support groups who chose their own interventions are not a great way to determine anything. But really rigorous information will take a long time and some of us have to make decisions now, so I...
tl;dr: The LessWrong team is re-organizing as Lightcone Infrastructure. LessWrong is one of several projects we are working on to ensure the future of humanity goes well. We are looking to hire software engineers as well as generalist entrepreneurs in Berkeley who are excited to build infrastructure to ensure a good future.
I founded the LessWrong 2.0 team in 2017, with the goal of reviving LessWrong.com and reinvigorating the intellectual culture of the rationality community. I believed the community had great potential for affecting the long term future, but that the failing website was a key bottleneck to community health and growth.
Four years later, the website still seems very important. But when I step back and ask “what are the key bottlenecks for improving the longterm future?”, just...