
A Map that Reflects the Territory

The best LessWrong essays from 2018, in a set of physical books

A beautifully designed collection of books, each small enough to fit in your pocket. The book set contains over forty chapters by more than twenty authors including Eliezer Yudkowsky and Scott Alexander. This is a collection of opinionated essays exploring argument, aesthetics, game theory, artificial intelligence, introspection, markets, and more, as part of LessWrong's mission to understand the laws that govern reasoning and decision-making, and build a map that reflects the territory.


Recent Discussion

Boxed Topics, Jenga Towers, And The Spacing Effect.

An undergraduate class on molecular biology teaches you about DNA transcription, the Golgi apparatus, cancer, and integral membrane proteins. Sometimes, these sub-topics are connected. But most often, they're presented in separate chapters, each in its own little box. So let's call these Boxed Topics.

The well-known Stewart calculus textbook teaches you about functions in chapter 1, limits and the definition of derivatives in chapter 2, rules for derivatives in chapter 3, and the relationship of derivatives with graphs in chapter 4. Woe betide you if you weren't entirely clear on the definition of a derivative when it gets used, over and over again, in next week's proofs of derivative rules.

Taking a calculus class can be like building a Jenga Tower. If...

flodorner (2h): "Beginners in college-level math would learn about functions, the basics of linear systems, and the difference between quantitative and qualitative data, all at the same time." This seems to be the standard approach for undergraduate-level mathematics at university, at least in Europe.
Pattern (17h):

I am pretty sure the author says you only learn one third of each discipline compared to what you would normally study, so your invested time would actually end up the same. If you just wanted to learn calculus, then your point would be more valid.
(Even though as I'm writing this, it seems to make sense to me to combine a Jenga-tower topic like calculus with boxed topics rather than with other Jenga-tower topics, if you're not planning on learning any other Jenga-tower topics.)

Rent control is a policy that puts a maximum cap on what a landlord may charge tenants.

I've seen two sources that suggest that there is an academic consensus against rent control:

  1. A 2009 review.
  2. A poll in which 81% of economists oppose rent control.

I'm not sure how much faith to put in these, and how non-controversial this topic is in practice (perhaps there are important subcases where it is a good policy). 

Are there strong claims for rent control policies in relevant cases that are supported by a non-trivial number of economists?

(Yonatan Cale thinks that there is a consensus against rent control. Help me prove him wrong and give him Bayes points!)

I don't think that operationalizing exactly what I mean by a consensus would help a lot. My goal here is to really understand how certain I should be about whether rent control is a bad policy (and what the important cases are where it might not be a bad policy, such as the examples ChristianKl gave below).

edoarad (14m): That's right, and a poor framing on my part 😊 I am interested in a consensus among academic economists, or in economic arguments for rent control, specifically because I'm mostly interested in utilitarian reasoning; but I'd also be curious about what other disciplines have to say.
ChristianKl (31m): One aspect of rent control is that it incentivizes people to invest in the flats they rent, because they can expect that the landlord won't simply raise the price to drive them out of the flat and make their investment void. There's also an imbalance: it's more costly for a tenant to move between flats than it is for a landlord to change tenants. That imbalance can be used to extract higher rents than could be extracted in the absence of switching costs. A good economic analysis should compare that value against the reduced motivation to create more supply. It might still be that the reduced motivation to create more supply is more important, but it's not a slam dunk.
Fillipe Feitosa (1h): From a Rawlsian "justice as fairness" perspective, it would be reasonable to cripple landlords' profits, considering that they are wealthier, and therefore this inequality would be "just" according to Rawls's second principle of justice. This reasoning would only be valid IF the landlord is wealthier than the person who rents.

Science aims to come up with good theories about the world - but what makes a theory good? The standard view is that the key traits are predictive accuracy and simplicity. Deutsch focuses instead on the concepts of explanation and understanding: a good theory is an explanation which enhances our understanding of the world. This is already a substantive claim, because various schools of instrumentalism have been fairly influential in the philosophy of science. I do think that this perspective has a lot of potential, and later in this essay I explore some ways to extend it. First, though, I discuss a few of Deutsch's arguments which I don't think succeed, in particular when compared to the Bayesian rationalist position defended by Yudkowsky.

To start, Deutsch says that good...

TAG (1h): What are the hard and easy problems? Realism and instrumentalism? I haven't said that SI is incapable of instrumentalism (prediction). Indeed, that might be the only thing it can do. I think the mathematical constraints are clearly insufficient to show that something is a probability, even if they are necessary. If I have a cake of 1 m^2 and I cut it up, then the pieces sum to 1. But pieces of cake aren't probabilities. So every hypothesis has the same probability of "not impossible". Well, no, several times over. You haven't shown that programmes are hypotheses, and what an SI is doing is assigning different non-zero prior probabilities, not a uniform one, and it is doing so based on programme length, although we don't know that reality is a programme, and so on. Do you think scientists are equally troubled? Even if I no longer have an instrumental need for something, I can terminally value it. But it isn't about me. The rationalist sphere in general values realism, and makes realist claims. Yudkowsky has made claims about God not existing, and MWI being true, that are explicitly based on SI-style reasoning. So the cat is out of the bag... SI cannot be defended as something that was only ever intended as an instrumentalist predictor without walking back those claims. You're saying realism is an illusion? Maybe that's your philosophy, but it's not the Less Wrong philosophy. It's obvious that it could, but so what?

You haven't shown that programmes are hypotheses, and what an SI is doing is assigning different non-zero prior probabilities, not a uniform one, and it is doing so based on programme length, although we don't know that reality is a programme, and so on.

SI only works for computable universes; otherwise you're out of luck. If you're in an uncomputable universe... I'm not sure what your options are, actually. [If you are in a computable universe, then there must be a program that corresponds to it, because otherwise it would be uncomputable!]
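For concreteness, here is my gloss of the length-based prior under discussion (the textbook formulation, not something either commenter spelled out here): on a universal prefix machine $U$, each program $p$ gets weight $2^{-\ell(p)}$, where $\ell(p)$ is its length in bits, and the weight assigned to an observation string $x$ is

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

summing over programs whose output begins with $x$. The weights are non-zero for every such program but far from uniform, and over a prefix-free set of programs they sum to at most 1, which is what licenses treating them as (semi)probabilities.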

You can't assign... (read more)

Aside from worries over the new strains, I would be saying this was an exceptionally good week.

Both deaths and positive test percentages took a dramatic turn downwards, and likely will continue that trend for at least several weeks. Things are still quite bad in the short term in many places, but they are starting to improve. Even hospitalizations are slightly down.

It is noticeably safer out there than it was a few weeks ago, and a few weeks from now will be noticeably safer than it is today. 

Studies came out that confirmed that being previously infected confers strong immunity for as long as we have been able to measure it. As usual, the findings were misrepresented, but the news is good. I put my analysis here in a distinct post, so...

lol imagining Very Serious People telling us to eat out. Like someone named Colonel Angus.

I mean... are other 80s/90s kids laughing at 'eat out to help out' and then feeling old? Because I am.

mingyuan (2h): Yes.
bardstale (7h): Wearing a mask after vaccination would reduce the spread of other diseases such as the flu, thus freeing additional healthcare resources for COVID-19 patients.
TheMajor (10h): It was pointed out to me that it is really not accurate to consider the UK daily COVID numbers as a single data-point. There could be any number of possible explanations for the decrease in the numbers. Some possible explanations include:

  1. The current lockdown and measures are sufficient to bring the English variant to R<1.
  2. The current measures bring the English variant to an R slightly above 1, and the wild variants to R well below 1, and because nationally the English variant is not dominant yet (even though it is in certain regions) this gives a national R<1.
  3. The English strain has spread so aggressively regionally that herd immunity effects in the London area have significantly slowed the spread, while it has not spread as quickly geographically.

Most notably, hypotheses 2 and 3 predict that the stagnation will soon reverse back into acceleration (with hypothesis 3 predicting a far higher rate than 2), as the English variant becomes more prevalent throughout the rest of the UK. Let's hope the answer is door number 1?
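To make hypothesis 2 concrete, here is a toy projection (all case counts and R values below are invented purely for illustration, not actual UK data) showing how a national R below 1 can mask a variant with R above 1 until that variant dominates:

```python
# Illustrative only: two variants with different reproduction numbers.
# The blended "effective R" starts below 1 and drifts upwards as the
# faster-growing variant takes over, so total cases stagnate and then rise.
R_WILD, R_ENGLISH = 0.8, 1.1        # assumed per-generation growth factors
wild, english = 9000.0, 1000.0      # assumed current daily cases per variant

for generation in range(8):         # one "generation" is roughly 5 days
    total = wild + english
    effective_r = (wild * R_WILD + english * R_ENGLISH) / total
    print(f"gen {generation}: total={total:,.0f}  "
          f"english share={english / total:.0%}  effective R={effective_r:.2f}")
    wild *= R_WILD
    english *= R_ENGLISH
```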

[Epistemic status: Strong opinions lightly held, this time with a cool graph.]

I argue that an entire class of common arguments against short timelines is bogus, and provide weak evidence that anchoring to the human-brain-human-lifetime milestone is reasonable. 

In a sentence, my argument is that the complexity and mysteriousness and efficiency of the human brain (compared to artificial neural nets) is almost zero evidence that building TAI will be difficult, because evolution typically makes things complex and mysterious and efficient, even when there are simple, easily understood, inefficient designs that work almost as well (or even better!) for human purposes.

In slogan form: If all we had to do to get TAI was make a simple neural net 10x the size of my brain, my brain would still look the...

Bucky (12h): Probably nowadays what Shorty missed [http://www.engineeringchallenges.org/challenges/fusion.aspx] was the difficulty in dealing with the energetic neutrons being created and the associated radiation, then the associated maintenance costs etc. and therefore price-competitiveness. I chose nuclear fusion purely because it was the most salient example of a project-that-always-misses-its-deadlines. (I did my university placement year in nuclear fusion research but still don't feel like I properly understand it! I'm pretty sure you're right, though, about temperature, pressure and control.) In theory a steelman Shorty could have thought of all of these things, but in practice it's hard to think of everything.

I find myself in the weird position of agreeing with you but arguing in the opposite direction. For a random large project X, which is more likely to be true:

  • Project X took longer than expert estimates because of failure to account for Y
  • Project X was delivered approximately on time

In general I suspect that it is the former (1). In that case the burden of evidence is on Shorty to show why project X is outside of the reference class of typical-large-projects and maybe in some subclass where accurate predictions of timelines are more achievable. Maybe what is required is to justify TAI as being in the subclass

  • projects-that-are-mainly-determined-by-a-single-limiting-factor or
  • projects-whose-key-variables-are-reliably-identifiable-in-advance

I think this is essentially the argument the OP is making in Analysis Part 1?

***

I notice in the above I've probably gone beyond the original argument - the OP was arguing specifically against using the fact that natural systems have such properties to say that they're required. I'm talking about something more general - systems generally have more complexity than we realize. I think this is importantly different. It may be the case that Longs' argument about brains having such properties is based on an intuition fr
Daniel Kokotajlo (9h): I still prefer my analysis above: Fusion is not a case of Shorty being wrong, because a steelman Shorty wouldn't have predicted that we'd get fusion soon. Why? Because we don't have the key variables. Why? Because controlling the plasma is one of the key variables, and the sun has near-perfect control, whereas we are trying to substitute with various designs which may or may not work. Shorty is actually arguing for TAI much sooner than 20 years from now; if TAI comes around the HBHL milestone then it could happen any day now, it's just a matter of spending a billion dollars on compute and then iterating a few times to work out the details, Wright-brothers style. Of course we shouldn't think Shorty is probably correct here; the truth is probably somewhere in between. (Unless we do more historical analyses and find that the case of flight is truly representative of the reference class AI fits in, in which case ho boy, singularity here we come.)

And yeah, the main purpose of the OP was to argue that certain anti-short-timelines arguments are bogus; this issue of whether timelines are actually short or long is secondary, and the case of flight is just one case study, of limited evidential import. I do take your point that maybe Longs' argument was drawing on intuitions of the sort you are sketching out. In other words, maybe there's a steelman of the arguments I think are bogus, such that they become non-bogus. I already agree this is true in at least one way (see Part 3).

I like your point about large projects -- insofar as we think of AI in that reference class, it seems like our timelines should be "Take whatever the experts say and then double it." But if we had done this for flight we would have been disastrously wrong. I definitely want to think, talk, and hear more about these issues... I'd like to have a model of what sorts of technologies are like fusion and what sort are like flight, and why. I like your suggestions: My own (hinted at in the OP) was going to

As my timelines have been shortening, I've been rethinking my priorities. As have many of my colleagues. It occurs to us that there are probably general considerations that should cause us to weight towards short-timelines plans or long-timelines plans (besides, of course, the probability of short and long timelines). For example, if timelines are short then maybe AI safety is more neglected, and therefore higher EV for me to work on, so maybe I should be systematically more inclined to act as if timelines are short.

We are at this point very unsure what the most important considerations are, and how they balance. So I'm polling the hive mind!

How much influence and ability you expect to have as an individual in that timeline.

For example, I don't expect to have much influence/ability in extremely short timelines, so I should focus on timelines longer than 4 years, with more weight to longer timelines and some tapering off starting around when I expect to die.

How relevant thoughts and planning now will be.

If timelines are late in my life or after my death, thoughts, research, and planning now will be much less relevant to AI trajectory going well, so at this moment in time I should weight timelines in the 4-25 year range more.
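One back-of-the-envelope way to combine these two considerations with timeline probabilities: weight each timeline bucket by probability times expected personal influence times how relevant planning now would be. A minimal sketch, with every number invented purely for illustration:

```python
# Toy allocation of effort across timeline buckets.
# Weight = P(bucket) * expected personal influence * relevance of planning now.
# All numbers are made up for illustration, not anyone's actual estimates.
buckets = {
    "0-4 years":   (0.10, 0.2, 1.0),
    "4-10 years":  (0.25, 0.7, 0.9),
    "10-25 years": (0.40, 1.0, 0.6),
    "25+ years":   (0.25, 0.8, 0.2),
}

scores = {name: p * influence * relevance
          for name, (p, influence, relevance) in buckets.items()}
total = sum(scores.values())

for name, score in scores.items():
    print(f"{name}: {score / total:.0%} of effort")
```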

I.

This was a triumph
I'm making a note here, huge success

No, seriously, it was awful. I deleted my blog of 1,557 posts. I wanted to protect my privacy, but I ended up with articles about me in New Yorker, Reason, and The Daily Beast. I wanted to protect my anonymity, but I Streisand-Effected myself, and a bunch of trolls went around posting my real name everywhere they could find. I wanted to avoid losing my day job, but ended up quitting so they wouldn't be affected by the fallout. I lost a five-digit sum in advertising and Patreon fees. I accidentally sent about three hundred emails to each of five thousand people in the process of trying to put my blog back up.

I had, not to mince words about it, a really weird year.

The first post on Scott Alexander's new blog on Substack, Astral Codex Ten.

Dirichlet-to-Neumann (2h): Good news: Slate Star Codex is up again. Bad news: I've been singing "Still Alive" since this morning and it's driving me crazy.
Baisius (3h): Does Scott's contract with Substack prevent automatic cross-posting here? I really do loathe Substack as a UI.

We never had automatic crossposting with SlateStarCodex, so it's not trivial to say that we should have it now with the new website.

Dagon (2h): Inoreader lets me subscribe to the feed (URL https://astralcodexten.substack.com/feed/, which looks like standard RSS to me), so it doesn't seem that Substack is intentionally limiting access to their site.
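For what it's worth, pulling that feed programmatically is also straightforward; a minimal sketch, assuming the third-party feedparser package is installed and the feed URL above is still live:

```python
import feedparser  # third-party package: pip install feedparser

# Fetch the Astral Codex Ten RSS feed (URL taken from the comment above).
feed = feedparser.parse("https://astralcodexten.substack.com/feed")

# Print the titles and links of the most recent posts.
for entry in feed.entries[:5]:
    print(entry.title, "-", entry.link)
```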

One of the motivations for You have about five words was the post Politics is the Mindkiller. That post essentially makes four claims:

  • Politics is the mindkiller. Therefore:
  • If you're not making a point about politics, avoid needlessly political examples.
  • If you are trying to make a point about general politics, try to use an older example that people don't have strong feelings about.
  • If you're making a current political point, try not to make it unnecessarily political by throwing in digs that tar the entire outgroup, if that's not actually a key point.

But, not everyone read the post. And not everyone who read the post stored all the nuance for easy reference in their brain. The thing they remembered, and told their friends about, was "Politics is the mindkiller." Some...

There is something we can frame in two different ways, either "What is it that the mods make exceptions for?" or "What are the real rules?" I assume this comes down to the same question, but the second version is more explicit. 

I think the implicit rule that I perceived was, more or less: "Posts should be about important/useful insights (whatever that means). They should try to explain, be based on and provide evidence when talking about the real world, be written in a level-headed way, avoid sneery comments about outgroups (and be timeless, even thou... (read more)

Rob Bensinger (4h): I like the idea of front-paging Zvi's weekly updates. (No opinion on whether other COVID-19 content should be barred from the frontpage.)
ChristianKl (10h): habryka thinks that the value Zvi's posts provide means that the utilitarian value of making an exception for them from the general rules is positive.
Sherrinford (2h): I still don't fully understand what you are saying, so:

  1. What does the word "utilitarian" add to this explanation [https://www.lesswrong.com/posts/bNAEBRWiKHG6mtQX8/avoid-unnecessarily-political-examples?commentId=NGFNfvjqAca4YpEtF]?
  2. What would LessWrong run by "consequentialist calculus" look like, in contrast to "run by utilitarian calculus"?
  3. Do you equate "habryka thinks" with the utilitarian calculus that is supposed to run LW?

I keep finding cause to discuss the problem of the criterion, so I figured I'd try my hand at writing up a post explaining it. I don't have a great track record on writing clear explanations, but I'll do my best and include lots of links you can follow to make up for any inadequacy on my part.

Motivation

Before we get to the problem itself, let's talk about why it matters.

Let's say you want to know something. Doesn't really matter what. Maybe you just want to know something seemingly benign, like what is a sandwich?

At first this might seem pretty easy: you know a sandwich when you see it! But just to be sure you ask a bunch of people what they think a sandwich is and if...

Sounds like it's time to become a caveman.

Chris Hibbert (3h): Bartley is very explicit that you stop claiming to "know" the right way. "This is my current best understanding. These are the reasons it seems to work well for distinguishing good beliefs from unhelpful ones. When I use these approaches to evaluate the current proposal, I find them to be lacking in the following way." If you want to argue that I'm using an inferior method, you can appeal to authority or cite scientific studies, or bully me, and I evaluate your argument. No faith, no commitment, no knowledge.
Said Achmiz (4h): This approach seems wrongheaded to me, from the start. Perhaps I am misunderstanding something. But let’s take your example: HALT! Proceed no further. Before even attempting to answer this question, ask: why the heck do you care [https://www.readthesequences.com/Disguised-Queries]? Why do you want to know the answer to this strange question, “what is a sandwich”? What do you plan to do with the answer? [https://www.gwern.net/Research-criticism#beliefs-are-for-actions] In the absence of any purpose [https://www.readthesequences.com/Lost-Purposes] to that initial question, the rest of that entire section of the post is unmotivated. The sandwich alignment chart? Pointless and meaningless. Attempting to precisely define a “sandwich-like object”? Total waste of time. And so on. On the other hand, if you do have a purpose in mind, then the right answer to “what is a sandwich” depends on that purpose. And the way you would judge whether you got the right answer or not, is by whether, having acquired said answer, you were then able to use the answer to accomplish that purpose. Now, you could then ask: “Ah, but how do you know you accomplished your purpose? What if you’re a brain in a jar? What if an infinitely powerful demon deceived you?”, and so on. Well, one answer to that might be “friend, by all means contemplate these questions; I, in the meantime—having accomplished my chosen purpose—will be contemplating my massive pile of utility, while perched atop same”. But we needn’t go that far; Eliezer has already addressed [https://www.readthesequences.com/Where-Recursive-Justification-Hits-Bottom] the question of “recursive justification”. It does not seem to me as if there remains anything left to say.
gwern (4h): https://en.wikipedia.org/wiki/Semantic_satiation

I enjoyed C. S. Lewis's The Inner Ring, and recommend you read it. It basically claims that much of human effort is directed at being admitted to whatever the local in-group is, that this happens easily to people, and that it is a bad thing to be drawn in to.

Some quotes, though I also recommend reading the whole thing:

In the passage I have just read from Tolstoy, the young second lieutenant Boris Dubretskoi discovers that there exist in the army two different systems or hierarchies. The one is printed in some little red book and anyone can easily read it up. It also remains constant. A general is always superior to a colonel, and a colonel to a captain. The other is not printed anywhere. Nor is

...

I'm not sure in which category you would put it, but as a counterpoint, Team Cohesion and Exclusionary Egalitarianism argues that for some groups, exclusion is at least partially essential and that they are better off for it:

... you find this pattern across nearly all elite American Special Forces type units — (1) an exceedingly difficult bar to get in, followed by (2) incredibly loose, informal, collegial norms with nearly-infinitely less emphasis on hierarchy and bureaucracy compared to all other military units.

To even "try out" for a Special Forces grou

... (read more)
Kaj_Sotala (1h): Relevant paper [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.456.9770&rep=rep1&type=pdf].
Pongo (4h): Oh, and I also notice that a social manoeuvring game (the game that governs who is admitted) is a task where performance is correlated with performance on (1) and (2).
Daniel Kokotajlo (5h): Man, that's a very important bit of info which I had heard before but which it helps to be reminded of again. The implications for my own line of work are disturbing!