A Map that Reflects the Territory

The best LessWrong essays from 2018, in a set of physical books

A beautifully designed collection of books, each small enough to fit in your pocket. The book set contains over forty chapters by more than twenty authors including Eliezer Yudkowsky and Scott Alexander. This is a collection of opinionated essays exploring argument, aesthetics, game theory, artificial intelligence, introspection, markets, and more, as part of LessWrong's mission to understand the laws that govern reasoning and decision-making, and build a map that reflects the territory.

Learn More

Recent Discussion

I'm an infovore. The failure mode for an infovore is spending too much time reading and not enough time doing. I was bitten especially hard by this a few days ago: I hadn't caught up on LW posts in a while and had 100+ in my RSS reader. I wanted to read them all, but I knew I shouldn't.

I had the same issue with Hacker News for many years. I broke the cycle by signing up for a daily "best of HN" feed, but even that proved to be too much content. I replaced it with a weekly "best of" digest, and that's been working well for me: the digest includes the top 50 posts by votes, so that's the maximum number of enticing nuggets...

Similarly to Hacker Newsletter, there is a weekly digest of LessWrong posts: the Rational Newsletter.

“We wanted flying cars, instead we got 140 characters,” says Peter Thiel’s Founders Fund, expressing a sort of jaded disappointment with technological progress. (The fact that the 140 characters have become 280, a 100% increase, does not seem to have impressed him.)

Thiel, along with economists such as Tyler Cowen (The Great Stagnation) and Robert Gordon (The Rise and Fall of American Growth), promotes a “stagnation hypothesis”: that there has been a significant slowdown in scientific, technological, and economic progress in recent decades—say, for a round number, since about 1970, or the last ~50 years.

When I first heard the stagnation hypothesis, I was skeptical. The arguments weren’t convincing to me. But as I studied the history of progress (and looked at the numbers), I slowly came around, and...

Something missing from the top-level post: why stagnation.

I'll just point out that one of the tiny things that most gave me a sense of "fuck" in relation to stagnation was reading an essay written in 1972 lamenting the "publish or perish" phenomenon. I had previously assumed that the term was way more recent, and that people were trying to fix it but it would just take a few years. To realize it was 50 years old was kinda crushing, honestly.

Here's Google Ngrams showing how common the phrase "publish or perish" has been in books over the last 200 years... (read more)

MalcolmOcean (36m): I don't have the detailed knowledge needed to flesh this out, but it occurred to me that there might be a structure of an argument someone could make, shaped something like "we got a lot of meaningful changes in the last 70 years, but they didn't create as many nonlinear tipping points as the previous industrial revolutions did."

Fwiw, flying cars probably wouldn't hit any such tipping point, though self-driving cars probably would. Widespread nuclear energy might've meant little concern about global warming at this point, but solar & wind have been trucking along slowly enough that there's tons of concern. I think the internet is doing something important for the possibility of running your own 1-2 person business, which is a meaningful tipping point. There are various other tipping points happening as a result of computers and the internet, which is why I think they stand out among the revolutionary technologies @jasoncrawford names.

Anyway, hoping someone can steelman this for me: consider the nonlinear cascades in each era & from each technology, and see whether there's indeed something different about pre-1970 and after. I'm not confident there is, to be clear, but I have some intuition that says this might be part of what people are seeing.
Aaro Salosensaari (14h): I am going to push back a little on this one, and ask for context and numbers. As some of my older relatives commented when Wolt became popular here, before people started going to supermarkets it was common for shops to have a delivery / errand boy (this would have been the 1950s, and more prevalent before WW2). It's something that strikes you when reading biographies: teenage Harpo Marx dropped out of school and did odd jobs as an errand boy; errand boys are a ubiquitous part of the background in Anne Frank's diary; and so on. Maybe it was proportionally more expensive (relative to the cost of the purchase), but on the other hand, from the descriptions it looks like the deliveries were done by teenage/young men who were paid peanuts.
lsusr (14h): When I think about home delivery, my reference point is the dao xiao mian 刀削面 knife I bought in 2020 from AliExpress for $3.57 including shipping and delivery to my door. In the 1990s, the simplest way to get an exotic product like that was to fly to China. I'm not just thinking about the ease of sending something from one house to another within my city. I'm thinking about the ease of sending something from an arbitrary residence on Earth to an arbitrary residence on Earth.

What is up with spirituality? I mean, from an atheistic perspective?

In my experience, atheists tend to focus on the empirical question of whether there is an all-powerful supernatural creature behind all that we observe. And yeah, there probably isn’t.

But having won that point, what does one make of the extreme popularity of religion? I think the usual answer given is something like ‘well, we used to be very ignorant and not have good explanations of natural phenomena, plus we tend to see agents in everything because our agent detection software is oversensitive’.

Which might explain the question 'Why would people think a supernatural agent controls things?'. But that seems like only a corner of religion.

Another big part of religion—and a thing that also occurs outside religion—seems to be...

From what I've read, the hormone oxytocin appears to be behind many of the emotions people generally describe as "spiritual". While the hormone is still being studied, there is evidence indicating it can increase feelings of connection to entities larger than the self, increase feelings of love and trust toward others, and promote feelings of belonging in groups.

The emotion of elevation, which appears to be linked to oxytocin, is most often caused by witnessing other people do altruistic or morally agreeable actions. This may explain the tendency for man... (read more)

Stuart Anderson (1h): TL;DR - Religion exists because Cthulhu.

People exist in a universe greater and more hostile than they could ever imagine. Everyone can be snuffed out in a heartbeat for no reason at all. There is no meaning or purpose. Humanity is nothing more than insignificant bacteria on the surface of some irrelevant rock. All of us will suffer and die and be forgotten as the nothings we are. Now, you can choose to contemplate that existential horror and risk insanity, or you can cede to your culture and your biology (because we can induce religious elements with brain stimulation and drugs to some degree) and pretend that you matter and that there is order and justice in the world. Most people would rather spend an eternity burning in Hell than a second considering that their consciousness is a temporary accident of chemistry that will end very soon.
Sam Charles Norton (1h): I'm newish here (six months or so), so if this comment takes things in a bad direction or is otherwise inappropriate please delete it. It might also be too long. With that clearing of the throat, I would like to suggest the following:

* That the atheistic perspectives become more informed about the actual nature of the philosophical/theological tradition, especially the dominant (Thomist) tradition of Western Christianity. If this site is seeking to become 'less wrong' through exploration and respectful dialogue, then please hear a representative of that tradition when they say 'the existence of God is not an empirical question' and 'God is not a supernatural agent'. I suppose this is a way of saying 'don't assume that pre-Enlightenment thinkers were stupid', which is good advice whether what I am specifically saying here is true or not. Denys Turner (a top philosophical/theological professor, ex-Cambridge UK) put it like this: the atheists who make such arguments haven't even reached the 'theologically necessary level of denial'. It's not that the specific claim being made by the atheist is incorrect; it's that the implication believed to follow does not actually follow. (Turner's paper is available here if anyone wants to read it: https://www.jstor.org/stable/43249944)

* That an aspect of spirituality (or 'wisdom traditions') missed in that otherwise estimable list is that it involves techniques to enable a closer appropriation of the truth. That is, in order to discern the truth correctly, it is necessary to address internal questions involving matters of character and will. This requires time spent in reflection, and most especially it involves the cultivation of the virtue of apatheia, or detachment (I believe there to be significant overlap across different wisdom traditions on this point). Most especially the...
Unnamed (4h): One of the standard stories is that it's about social cohesion, especially with rituals done as a group, and other features like visibly taking on costly restrictions in a way that demonstrates buy-in. See Sosis & Alcorta (2003), "Signaling, Solidarity, and the Sacred: The Evolution of Religious Behavior": https://scholar.google.com/scholar?cluster=12628699934355608629

Pardon me while I make my way to the rooftops.

So I’m sure it’s not that simple especially because of regulatory issues, but… did you hear the one where humanity could have produced enough mRNA vaccine for the entire world by early this year, and could still decide to do it by the end of this year, but decided we would rather save between four and twelve billion dollars? 

If not, there’s a section on that.

Meanwhile, we also can’t figure out how to put the vaccine doses we already have into people’s arms in any reasonable fashion. New policies are helping with that, and we are seeing signs that things are accelerating, but wow is this a huge disaster.

We took some steps this week towards sane policy. Everyone over...

Prompted by your comment, when I wrote more stuff last night, I made it standalone: 

Covid Canada Jan25: low & slow

🇨🇦 People liked my Canada comment on Zvi's post on Jan 14th, so here's another update as a top-level post. I thought I wouldn't have much to say but apparently I wrote some stuff!

(I want to underscore that this is a rambly summary from someone who does not have the same thorough researchy energy or rigorous models as Zvi or many other LWers in many situations. If you have major decisions to make, use this summary as at most a jumping off point. Slightly BC-heavy because I moved to BC a few months ago and have been getting more news here. Also some of my rambles involve info that is probably common-knowledge to most Canadians who are informed whatsoever, I guess because I'm imagining people from other...

Appreciate you chiming in. That's a great point about how different rural communities are faring differently. I had the impression some rural areas in the prairies were doing badly, but I didn't offhand have a sense of where or why. Your rough sketch with vague notions is helpful on that front.

I drove across the country on the way out to BC a couple months ago, and it's indeed hard to imagine the farming areas in the south half of the prairies having much covid spread, whereas it makes sense that resource-extraction areas would for the 2 reasons you describe. That plus exponentials/nonlinearities seems sufficient to explain most of the discrepancy, maybe.

MalcolmOcean (1h): Huh yeah, weird. It's like, what are they waiting for with AstraZeneca?

It is worth noting that ~40,000 doses per day is, I think, according to plan at this phase; the plan calls for something like a million doses a week as of the start of April. Which sounds like a lot but is still way too slow! (A million a day would be awesome.) But the failure to ramp up continues to be a failure of intention, it seems. I'll be quite concerned if we fail to ramp up even to the unambitious levels planned for April. I don't know to what extent useful prep is happening to ensure that we're ready to go hard once we get more doses.
lsusr (13h): I am always excited to read anything that examines a global situation from a non-USA-centric perspective. Thank you for writing a concise, quality post based on facts.
William_S (19h): Here is a regularly updated version of the vaccine chart: https://covid19tracker.ca/vaccinegap.html

Reminder of the rules of Stag Hunt:

  • Each player chooses to hunt either Rabbit or Stag
  • Players who choose Rabbit receive a small reward regardless of what everyone else chooses
  • Players who choose Stag receive a large reward if-and-only-if everyone else chooses Stag. If even a single player chooses Rabbit, then all the Stag-hunters receive zero reward.
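The rules above can be sketched as a small payoff function. This is a minimal illustration, not from the post itself; the specific reward values (1 and 5) are arbitrary — the game only requires that the Rabbit reward be small but certain, and the Stag reward large but fragile:

```python
def stag_hunt_payoffs(choices, rabbit_reward=1, stag_reward=5):
    """Return each player's payoff for one round of Stag Hunt.

    choices: list of "stag" or "rabbit", one entry per player.
    Rabbit-hunters always get rabbit_reward; Stag-hunters get
    stag_reward only if *everyone* chose stag, else nothing.
    """
    all_stag = all(c == "stag" for c in choices)
    return [
        rabbit_reward if c == "rabbit" else (stag_reward if all_stag else 0)
        for c in choices
    ]

# Everyone coordinates on Stag: big payoff for all.
print(stag_hunt_payoffs(["stag", "stag", "stag", "stag"]))      # [5, 5, 5, 5]

# A single defector zeroes out every Stag-hunter.
print(stag_hunt_payoffs(["stag", "stag", "rabbit", "stag"]))    # [0, 0, 1, 0]
```

Note how the second case captures the fragility that makes Rabbit the Schelling choice: one player's doubt destroys the payoff for everyone who committed.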

From the outside, the obvious choice is for everyone to hunt Stag. But in real-world situations, there’s lots of noise and uncertainty, and not everyone sees the game the same way, so the Schelling choice is Rabbit.

How does one make a Stag hunt happen, rather than a Rabbit hunt, even though the Schelling choice is Rabbit?

If one were utterly unscrupulous, one strategy would be to try to trick everyone into thinking that Stag is...

I haven't seen a strong argument that "stag hunt" is a good model for reality. If you need seven people to hunt stag, the answer isn't to have seven totally committed people who never get ill, never have other things to do, and never just don't feel like it. I'd rather have ten people who are 90% committed, and be ready to switch to rabbit on the few days when only six show up.

DonyChristie (9h): What do you think we are missing?
G Gordon Worley III (10h): As a society, we've actually codified some of these ideas in advice idioms like "fake it till you make it". We've also codified the equal and opposite advice, e.g. "you can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time" (https://abrahamlincolnassociation.org/you-can-fool-all-of-the-people-lincoln-never-said-that/). This means we can not only engage in this kind of coordination behavior without understanding the mechanism; we have generalized the coordination mechanism to apply across domains such that no deeper understanding is needed. That is, you don't even need to notice that the strategy works, only to have adopted a norm, already applied in multiple domains, that allows you to coordinate without realizing it.
Vanessa Kosoy (11h): I think that all of ethics works like this: we pretend to be more altruistic / intrinsically pro-social than we actually are, even to ourselves (https://www.lesswrong.com/posts/YcdArE79SDxwWAuyF/the-treacherous-path-to-rationality?commentId=7tkhp6rwv2BpmkHj3#comments). And then there are situations like battle of the sexes, where we negotiate the Nash equilibrium while pretending it is a debate about something objective that we call "morality".

[Epistemic status: Strong opinions lightly held, this time with a cool graph.]

I argue that an entire class of common arguments against short timelines is bogus, and provide weak evidence that anchoring to the human-brain-human-lifetime milestone is reasonable. 

In a sentence, my argument is that the complexity and mysteriousness and efficiency of the human brain (compared to artificial neural nets) is almost zero evidence that building TAI will be difficult, because evolution typically makes things complex and mysterious and efficient, even when there are simple, easily understood, inefficient designs that work almost as well (or even better!) for human purposes.

In slogan form: If all we had to do to get TAI was make a simple neural net 10x the size of my brain, my brain would still look the...

They define "one data point" as "one token," which is fine.  But it seems equally defensible to define "one data point" as "what the model can process in one forward pass," which is ~1e3 tokens.  If the authors had chosen that definition in their paper, I would be showing you a picture that looked identical except with different numbers on the data axis, and you would conclude from the picture that the brain should have around 1e12 data points to match its 1e15 params!
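The shift described here is purely a unit conversion on the data axis. A toy sketch of the arithmetic, where the ~1e3 tokens-per-forward-pass figure comes from the paragraph above and the 1e15-token starting value is a hypothetical chosen to match the post's 1e12 conclusion:

```python
TOKENS_PER_FORWARD_PASS = 1_000  # ~1e3 tokens processed in one forward pass

# Hypothetical data requirement read off the chart under the "one data
# point = one token" definition.
data_in_tokens = 1e15

# Redefining "one data point" as one forward pass divides the data axis
# by ~1e3; the picture is otherwise unchanged.
data_in_forward_passes = data_in_tokens / TOKENS_PER_FORWARD_PASS
print(f"{data_in_forward_passes:.0e}")  # 1e+12
```

Same chart, same curves — only the labels on the data axis move by three orders of magnitude, which is the crux of the definitional objection.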

Holy shit, mind blown! Then... how ar... (read more)

Daniel Kokotajlo (3h): Thanks, and I look forward to seeing your reply!

I'm partly responding to things people have said in conversation with me. For example, the thing Longs says is a direct quote from one of my friends commenting on an early draft! I've been hearing things like this pretty often from a bunch of different people.

I'm also partly responding to Ajeya Cotra's epic timelines report. It's IMO the best piece of work on the topic there is, and it's also the thing that bigshot AI safety people (like OpenPhil, Paul, Rohin, etc.) seem to take most seriously. I think it's right about most things, but one major disagreement I have with it is that it seems to put too much probability mass on "lots of special sauce needed" hypotheses. Shorty's position — the "not very much special sauce" position — applied to AI seems to be that we should anchor on the Human Lifetime anchor. If you think there's probably a little special sauce but that it can be compensated for via e.g. longer training times and bigger NNs, then that's something like the Short-Horizon NN hypothesis. I consider Genome Anchor, Medium- and Long-Horizon NN Anchor, and of course Evolution Anchor to be "lots of special sauce needed" views. In particular, all of these views involve, according to Ajeya, "learning to learn." I'll quote her in full:

I interpret her as making the non-bogus version of the argument from efficiency here. However (and I worry that I'm being uncharitable?), I also suspect that the bogus version of the argument is sneaking in a little bit; she keeps talking about how evolution took millions of generations to do stuff, as if that's relevant... I certainly think that even if she isn't falling for the bogus arguments herself, it's easy for people to fall for them, and this would make her conclusions seem much more reasonable than they are.

In particular, she assigns only 5% weight to the human lifetime anchor (the hypothesis that Shorty is promoting) and only 20% weight to the short-horizon NN anchor...
Veedrac (12h): Thanks, I think I pretty much understand your framing now. I think the only thing I really disagree with is that "'can use compute to automate search for special sauce' is pretty self-explanatory." I think this heavily depends on what sort of variable you expect the special sauce to be. E.g. for useful, self-replicating nanoscale robots, my hypothetical atomic manufacturing technology would enable rapid automated iteration, but it's unclear how you could use that to automatically search for a solution in practice. It's an enabler for research, more so than a substitute. Personally I'm not sure how I'd justify that claim for AI without importing a whole bunch of background knowledge about the generality of optimization procedures! IIUC this is mostly outside the scope of what your article was about, and we don't disagree on the meat of the matter, so I'm happy to leave this here.
Daniel Kokotajlo (3h): I think I agree that it's not clear compute can be used to search for special sauce in general, but in the case of AI it seems pretty clear to me: AIs themselves run in computers, and the capabilities we are interested in (some of them, at least) can be detected on AIs in simulations (no need for e.g. robotic bodies), and so we can do trial-and-error on our AI designs in proportion to how much compute we have. More compute, more trial-and-error. (Except it's more efficient than mere trial-and-error: we have access to all sorts of learning and meta-learning and architecture search algorithms, not to mention human insight.) If you had enough compute, you could just simulate the entire history of life evolving on an earth-sized planet for a billion years, in a very detailed and realistic physics environment!

Epistemic status: highly confident (99%+) this is an issue for optimal play with human consequentialist judges. Thoughts on practical implications are more speculative, and involve much hand-waving (70% sure I’m not overlooking a trivial fix, and that this can’t be safely ignored).

Note: I fully expect some readers to find the core of this post almost trivially obvious. If you’re such a reader, please read as “I think [obvious thing] is important”, rather than “I’ve discovered [obvious thing]!!”.


In broad terms, this post concerns human-approval-directed systems generally: there’s a tension between [human approves of solving narrow task X] and [human approves of many other short-term things], such that we can’t say much about what an approval-directed system will do about X, even if you think you’re training an X...

I'm unsure what the judge's incentive is to select the result that was more useful, given that they still have access to both answers. Is it just that the judge will want to be the kind of judge the debaters expect to select the useful answer, so that the debaters will provide useful answers, and therefore will choose the useful answer?

If that's the reason, I don't think you would need a committed deontologist to get them to choose a correct answer over a useful answer, you could instead just pick someone who doesn't think very hard about cert... (read more)

This is post 5 of 10 in my cryonics signup guide, and the second of five posts on life insurance.

In this post, I'll cover the different types of life insurance policies you might want to use to fund your cryopreservation. This is the most complicated part of this entire sequence and it's taken me many, many hours of confusion to reach even the tenuous understanding I'm presenting here. Please bear with me and let me know if you spot any errors or have any questions.

Note that in addition to being labyrinthine, the life insurance landscape changes fairly often, such that the options that were available to you when you signed up for cryonics ten years ago might no longer be offered. They're always adding new types of...

Buying a cheap term policy makes sense if you expect to be able to self fund later.

G Gordon Worley III (10h): Something I'm just now thinking of: I wonder if cryonics could be funded using a retirement account? It has similar protections to life insurance in terms of being paid to a beneficiary instead of your estate, and I actually have Alcor listed as the beneficiary of my retirement accounts in case I die, since that seems like the best use of my money at that point if I'm not going to be using it to fund a retirement.

Maybe combine a retirement account with term life insurance to get more cost-efficient coverage? Possibly this requires creating a trust to put those two things together. I'm sure someone has already thought about this and may know reasons why it would or wouldn't work. For example, maybe the funds aren't as protected as with life insurance, which goes directly to your beneficiary without having to first pay off debts in your estate.
Ericf (14h): Isn't anything with a cash accumulation just combining Insurance with Investment, and charging you ?unknown? extra fees compared to having Insurance from an Insurer, and Investing the money saved in a low-cost dedicated Investment option? So, GUL or Term, depending on your relative $ position?
lincolnquirk (15h): Great analysis! I would strengthen the term+self-fund recommendation for readers of LW. You say it only makes sense if you "expect to be very wealthy"; however, it seems to me that it is pretty easy, over the course of 20 years or so, to plan to save up a few hundred thousand dollars to self-fund after that point. If that doesn't sound easy, then it is not so clear that cryonics is for you; and IUL isn't solving the problem, because you still have to pay the money in expectation. It seems like IUL only makes sense if the term insurance gets expensive before you can conceive of a way to save the money on your own. I guess it also makes sense if you aren't conscientious about saving or making term payments. But it seems to me like you pay a lot for this convenience.

Three can keep a secret, if two of them are dead

Benjamin Franklin

First you say “Someone needs to hang for this” as a turn of phrase, and of course you don’t mean it literally. That would be horrific; it’s just a turn of phrase. Indeed, you are genuinely horrified. Next it becomes “I wish we could just shoot him”, but of course you weren’t serious and you’d never actually do it. Again, you completely believe this. But before you know it, the palace is in flames and you’re getting ready to string up the king in his pajamas, and despite the illumination you’re still blind to your tendency to deceive yourself.


I've started a (free) Substack, in case anyone is interested.