A beautifully designed collection of books, each small enough to fit in your pocket. The book set contains over forty chapters by more than twenty authors, including Eliezer Yudkowsky and Scott Alexander. This is a collection of opinionated essays exploring argument, aesthetics, game theory, artificial intelligence, introspection, markets, and more, as part of LessWrong's mission to understand the laws that govern reasoning and decision-making, and build a map that reflects the territory.
Boxed Topics, Jenga Towers, and the Spacing Effect
An undergraduate class on molecular biology teaches you about DNA transcription, the Golgi apparatus, cancer, and integral membrane proteins. Sometimes, these sub-topics are connected. But most often, they're presented in separate chapters, each in its own little box. So let's call these Boxed Topics.
The well-known Stewart calculus textbook teaches you about functions in chapter 1, limits and the definition of derivatives in chapter 2, rules for derivatives in chapter 3, and the relationship of derivatives with graphs in chapter 4. Woe betide you if you weren't entirely clear on the definition of a derivative when it gets used, over and over again, in next week's proofs of derivative rules.
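To make the dependency concrete (my illustration, not Stewart's): the limit definition from chapter 2,

$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},$$

is exactly what chapter 3's proofs lean on. The product rule, for instance, falls out of adding and subtracting $f(x+h)g(x)$ inside that limit:

$$(fg)'(x) = \lim_{h \to 0} \left[ f(x+h)\,\frac{g(x+h) - g(x)}{h} + g(x)\,\frac{f(x+h) - f(x)}{h} \right] = f(x)g'(x) + g(x)f'(x).$$

If the chapter 2 definition never solidified, every one of those proofs is built on a piece you don't have.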
Taking a calculus class can be like building a Jenga Tower. If...
I am pretty sure the author mentions that you only learn one third of what you would normally study in each discipline, so your invested time would actually end up the same. If you just wanted to learn calculus, then you'd have a more valid point.
(Even though, as I'm writing this, it seems to make sense to me to combine a Jenga-tower topic like calculus with boxed topics rather than with other Jenga-tower topics, if you're not planning on learning any other Jenga-tower topics.)
Rent control is a type of policy in which a maximum cap is put on what a landlord may charge tenants.
I've seen two sources that suggest that there is an academic consensus against rent control:
I'm not sure how much faith to put in these, and how non-controversial this topic is in practice (perhaps there are important subcases where it is a good policy).
Are there strong claims for rent control policies in relevant cases that are supported by a non-trivial number of economists?
(Yonatan Cale thinks that there is a consensus against rent control. Help me prove him wrong and give him Bayes points!)
I don't think that operationalizing exactly what I mean by a consensus would help a lot. My goal here is to really understand how certain I should be that rent control is a bad policy (and what the important cases are where it might not be, such as the examples ChristianKl gave below).
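For intuition on where the standard objection comes from, here's a minimal supply-and-demand sketch of a binding price ceiling. The linear curves and all the numbers are invented for illustration; they aren't from either of the sources above.

```python
# Minimal illustration of a binding price ceiling in a linear
# supply/demand model. All numbers are made up for illustration.

def demand(price):      # renters want fewer units as rent rises
    return 1000 - 40 * price

def supply(price):      # landlords offer more units as rent rises
    return 200 + 40 * price

equilibrium_price = 10           # demand(10) == supply(10) == 600 units
ceiling = 6                      # rent cap set below equilibrium

supplied = supply(ceiling)       # 440 units offered at the capped rent
demanded = demand(ceiling)       # 760 units wanted at the capped rent
shortage = demanded - supplied   # 320 units short: the textbook prediction

print(f"Shortage under the cap: {shortage} units")
```

The textbook claim is just this: if the cap binds (sits below the market-clearing rent), quantity supplied falls and quantity demanded rises, so some renters who would have found housing at the market rent now can't. Whether that model fits the important subcases is exactly the question above.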
Science aims to come up with good theories about the world - but what makes a theory good? The standard view is that the key traits are predictive accuracy and simplicity. Deutsch focuses instead on the concepts of explanation and understanding: a good theory is an explanation which enhances our understanding of the world. This is already a substantive claim, because various schools of instrumentalism have been fairly influential in the philosophy of science. I do think that this perspective has a lot of potential, and later in this essay I explore some ways to extend it. First, though, I discuss a few of Deutsch's arguments which I don't think succeed, in particular when compared to the Bayesian rationalist position defended by Yudkowsky.
To start, Deutsch says that good...
You haven't shown that programmes are hypotheses. And what an SI is doing is assigning different non-zero prior probabilities, not a uniform one, and it is doing so based on programme length, although we don't know that reality is a programme. And so on.
SI only works for computable universes; otherwise you're out of luck. If you're in an uncomputable universe... I'm not sure what your options are, actually. [If you are in a computable universe, then there must be a program that corresponds to it, because otherwise it would be uncomputable!]
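To make the length-weighting concrete, here's a toy sketch of a Solomonoff-style prior. The three "programs" and their bit-lengths are hypothetical stand-ins; actual Solomonoff induction ranges over all programs for a universal Turing machine.

```python
# Toy sketch of a length-weighted (Solomonoff-style) prior: each
# hypothesis is a program, and its prior probability is proportional
# to 2^(-length). Shorter programs get strictly more weight; nothing
# here is uniform.

programs = {   # hypothetical program names and bit-lengths
    "p1": 5,
    "p2": 8,
    "p3": 12,
}

weights = {name: 2.0 ** -length for name, length in programs.items()}
total = sum(weights.values())
prior = {name: w / total for name, w in weights.items()}

for name, p in prior.items():
    print(f"{name}: prior = {p:.4f}")   # p1 dominates at ~0.88
```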
You can't assign... (read more)
Aside from worries over the new strains, I would be saying this was an exceptionally good week.
Both deaths and positive test percentages took a dramatic turn downwards, and likely will continue that trend for at least several weeks. Things are still quite bad in the short term in many places, but they are starting to improve. Even hospitalizations are slightly down.
It is noticeably safer out there than it was a few weeks ago, and a few weeks from now will be noticeably safer than it is today.
Studies came out that confirmed that being previously infected confers strong immunity for as long as we have been able to measure it. As usual, the findings were misrepresented, but the news is good. I put my analysis here in a distinct post, so...
lol imagining Very Serious People telling us to eat out. Like someone named Colonel Angus.
I mean... are other 80s/90s kids laughing at 'eat out to help out' and then feeling old? Because I am.
[Epistemic status: Strong opinions lightly held, this time with a cool graph.]
I argue that an entire class of common arguments against short timelines is bogus, and provide weak evidence that anchoring to the human-brain-human-lifetime milestone is reasonable.
In a sentence, my argument is that the complexity and mysteriousness and efficiency of the human brain (compared to artificial neural nets) is almost zero evidence that building TAI will be difficult, because evolution typically makes things complex and mysterious and efficient, even when there are simple, easily understood, inefficient designs that work almost as well (or even better!) for human purposes.
In slogan form: If all we had to do to get TAI was make a simple neural net 10x the size of my brain, my brain would still look the...
Sure!
As my timelines have been shortening, I've been rethinking my priorities. As have many of my colleagues. It occurs to us that there are probably general considerations that should cause us to weight towards short-timelines plans or long-timelines plans (besides, of course, the probabilities of short and long timelines). For example, if timelines are short then maybe AI safety is more neglected, and therefore higher-EV for me to work on, so maybe I should be systematically more inclined to act as if timelines are short.
We are at this point very unsure what the most important considerations are, and how they balance. So I'm polling the hive mind!
How much influence and ability you expect to have as an individual in that timeline.
For example, I don't expect to have much influence/ability in extremely short timelines, so I should focus on timelines longer than 4 years, with more weight to longer timelines and some tapering off starting around when I expect to die.
How relevant thoughts and planning now will be.
If timelines are late in my life or after my death, thoughts, research, and planning now will be much less relevant to the AI trajectory going well, so at this moment in time I should weight timelines in the 4-25 year range more.
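A minimal sketch of how that weighting could be operationalized; every probability and leverage number below is a made-up placeholder, not an actual estimate.

```python
# Toy sketch: weight each timeline bucket by (subjective probability
# of that timeline) x (expected personal leverage if it holds).
# All numbers are invented placeholders.

buckets = {
    # years until TAI: (probability, leverage)
    "0-4":   (0.10, 0.1),   # too short for my work to matter much
    "5-15":  (0.40, 1.0),   # enough runway for research to land
    "16-25": (0.30, 0.8),   # still relevant, with some decay
    "26+":   (0.20, 0.3),   # current plans mostly superseded by then
}

weights = {k: p * lev for k, (p, lev) in buckets.items()}
total = sum(weights.values())

for k, w in weights.items():
    print(f"{k} years: plan weight {w / total:.2f}")
```

On these placeholder numbers, the 5-15 year bucket gets over half of the planning weight despite holding only 40% of the probability, which is the general shape of the argument above.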
I.
This was a triumph
I'm making a note here, huge success

No, seriously, it was awful. I deleted my blog of 1,557 posts. I wanted to protect my privacy, but I ended up with articles about me in The New Yorker, Reason, and The Daily Beast. I wanted to protect my anonymity, but I Streisand-Effected myself, and a bunch of trolls went around posting my real name everywhere they could find. I wanted to avoid losing my day job, but ended up quitting so they wouldn't be affected by the fallout. I lost a five-digit sum in advertising and Patreon fees. I accidentally sent about three hundred emails to each of five thousand people in the process of trying to put my blog back up.
I had, not to mince words about it, a really weird year.
The first post on Scott Alexander's new blog on Substack, Astral Codex Ten.
We never had automatic crossposting with SlateStarCodex, so it's not trivial to say that we should have it now with the new website.
One of the motivations for You have about five words was the post Politics is the Mindkiller. That post essentially makes four claims:
But, not everyone read the post. And not everyone who read the post stored all the nuance for easy reference in their brain. The thing they remembered, and told their friends about, was "Politics is the mindkiller." Some...
There is something we can frame in two different ways, either "What is it that the mods make exceptions for?" or "What are the real rules?" I assume this comes down to the same question, but the second version is more explicit.
I think the implicit rule that I perceived was, more or less: "Posts should be about important/useful insights (whatever that means). They should try to explain, be based on and provide evidence when talking about the real world, be written in a level-headed way, avoid sneery comments about outgroups (and be timeless, even thou... (read more)
I keep finding cause to discuss the problem of the criterion, so I figured I'd try my hand at writing up a post explaining it. I don't have a great track record on writing clear explanations, but I'll do my best and include lots of links you can follow to make up for any inadequacy on my part.
Before we get to the problem itself, let's talk about why it matters.
Let's say you want to know something. Doesn't really matter what. Maybe you just want to know something seemingly benign, like what is a sandwich?
At first this might seem pretty easy: you know a sandwich when you see it! But just to be sure you ask a bunch of people what they think a sandwich is and if...
Sounds like it's time to become a caveman.
I enjoyed C. S. Lewis's The Inner Ring, and recommend you read it. It basically claims that much of human effort is directed at being admitted to whatever the local in-group is, that this happens easily to people, and that it is a bad thing to be drawn into.
Some quotes, though I also recommend reading the whole thing:
...In the passage I have just read from Tolstoy, the young second lieutenant Boris Dubretskoi discovers that there exist in the army two different systems or hierarchies. The one is printed in some little red book and anyone can easily read it up. It also remains constant. A general is always superior to a colonel, and a colonel to a captain. The other is not printed anywhere. Nor is
I'm not sure in which category you would put it, but as a counterpoint, Team Cohesion and Exclusionary Egalitarianism argues that for some groups, exclusion is at least partially essential and that they are better off for it:
...you find this pattern across nearly all elite American Special Forces-type units — (1) an exceedingly difficult bar to get in, followed by (2) incredibly loose, informal, collegial norms with nearly-infinitely less emphasis on hierarchy and bureaucracy compared to all other military units.
To even "try out" for a Special Forces grou