A Map that Reflects the Territory

The best LessWrong essays from 2018, in a set of physical books

A beautifully designed collection of books, each small enough to fit in your pocket. The book set contains over forty chapters by more than twenty authors including Eliezer Yudkowsky and Scott Alexander. This is a collection of opinionated essays exploring argument, aesthetics, game theory, artificial intelligence, introspection, markets, and more, as part of LessWrong's mission to understand the laws that govern reasoning and decision-making, and build a map that reflects the territory.

Learn More

Recent Discussion

As of today, I've been in full-on, hardcore lockdown for an entire year. I have a lot of feelings – both about the personal social impacts of lockdown and about society being broken – that I won't go into in this public space. What I want to figure out in this post is what rationality-relevant lessons I can draw from what happened in my life this past year. 

(Meta: This post is not well-written and is mostly bullet points, because the first few versions I wrote were unusable but I still wanted to publish it today.)


Some facts about my lockdown:

  • I have spent 99.9% of the year within 1 mile of my house
  • Up until last month I had spent the entire year within 10 miles of my house
  • Between February

A year of lockdown also carries a lot of tail risk, and the person who died of sepsis died to a tail risk of the lockdown.

Not consuming health care services and mental health consequences of reduced social interactions both have dangerous tail risks. 

2Vika7mAs a data point, I found it to be a net positive to live in a smallish group house (~5 people) during the pandemic. The negotiations around covid protocols were time-consuming and annoying at times, but still manageable because of the small number of people, and seemed worth it for the benefits of socializing in person to my mental well-being. It also helped that we had been living together for a few years and knew each other pretty well. I can see how this would quickly become overwhelming with more people involved, and result in nothing being allowed if anyone can veto any given activity.
2ChristianKl1hWhile risk of death is clearly relatively low (especially when it gets people to consume medical services that might also reduce risk of death), the risk of long COVID isn't clearly very low.
1emanuele ascani5hBerkeley people have it good. At least they are doing this together. Imagine being a Berkeley person at heart and being in a completely anti-Berkeley environment.

Taffix is a nasal powder spray that builds up a protective mechanical barrier against viruses and allergens in the nasal cavity. The EMA allowed them to advertise its clinical effects on the packaging insert by saying:

Taffix was found highly effective in blocking several respiratory viruses including SARS-CoV-2 in laboratory studies. 

The idea was conceived in March, and they ran a study during the Jewish New Year, which was, as expected, a superspreader event (Orthodox Jewish people gathering in close proximity while many of them were infected). Among the 83 people who received the intervention, only the two who reported not using the spray consistently (you have to apply it every 5 hours) got infected, while in the control group 16 out of 160...
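As a rough back-of-envelope check of the numbers as quoted above (not the study's own analysis, which may adjust for the incomplete control figure), the attack rates work out like this:

```python
# Back-of-envelope check of the Taffix study numbers as reported above.
# These counts are taken from the quote, not from the published paper.
treated_infected, treated_total = 2, 83
control_infected, control_total = 16, 160

treated_rate = treated_infected / treated_total   # roughly 2.4%
control_rate = control_infected / control_total   # 10.0%
relative_risk = treated_rate / control_rate       # roughly 0.24

print(f"treated: {treated_rate:.1%}, control: {control_rate:.1%}, "
      f"relative risk: {relative_risk:.2f}")
```

That is, on these raw counts, infection in the treated group was about a quarter as common as in the control group.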

2Richard_Kennaway41mI notice that Taffix is no longer available on Amazon.co.uk. It's still available on eBay.co.uk for around twice what it was on Amazon.

On Amazon.de it rose 20% in price since I bought it but is still available: https://www.amazon.de/-/en/Taffix-Nasal-Powder-Spray-Milligram/dp/B08KHR5B4M/ref=sr_1_3?crid=E2NN4USFR7WF&dchild=1&keywords=taffix+nasenspray&qid=1614695012&sprefix=taffix+na%2Caps%2C182&sr=8-3

The phrase "we should raise awareness about X" creeps me out. I had trouble identifying exactly why until I read this summary of simulacra levels.

Level 1: “There’s a lion across the river.” = There’s a lion across the river.

Level 2: “There’s a lion across the river.” = I don’t want to go (or have other people go) across the river.

Level 3: “There’s a lion across the river.” = I’m with the popular kids who are too cool to go across the river.

Level 4: “There’s a lion across the river.” = A firm stance against trans-river expansionism focus grouped well with undecided voters in my constituency.

Level 1 states truth about reality. Level 2 manipulates reality. Level 3 states truth about social reality. Level 4 manipulates social reality.

The transition...

I'm not convinced that your new "levels" are actual levels.

The structure of levels 1-4 is: 1 is base reality, 2 is when you see how people react to 1 and try to manipulate it, 3 is when everyone sees people doing 2 and adjusts, 4 is when you see how people react to 3 and try to manipulate it. (Kinda; level 4 is a bit nebulous.)

But your level 5 (news / saying "X" means "X is interesting") doesn't seem like it's built on top of 4 in the same way as 2,3,4 are built on 1,2,3. Exactly what it's built on will vary from story to story, which suggests to me that n... (read more)

3Dale Udall14hEver since reading the book "Virus of the Mind" (Brodie, 1996), I've been wary of holding any opinion I don't fully endorse or otherwise remember why I started believing in the first place. (This hasn't been true of my opinion about politics in general until I typed that sentence, full disclosure.) I've been especially wary of using someone else's phrasing to spread really sticky ideas. I guess I've got a good memetic immune system? I think we should raise awareness of the concept of a memetic immune system.
2Dagon16hI'm not sure I buy this. In many uses, "raise awareness about X" is just a shorthand (or euphemism, since talking about status is low-status) for "increase the status of addressing X". You can certainly argue that addressing X directly is more effective than indirectly in this way. Or that X is appropriately-statused already and you shouldn't try to change it. But I don't think it's a separate class of epistemic or equilibrium mistake.
2ChristianKl42mYes, it's a shorthand. That, however, doesn't mean the argument doesn't still stand: "increase the status of addressing X" is a higher simulacra level than "addressing X". It also tends to lead to people addressing X in ways that are less effective for actually solving X than when the discussion is more directly about "addressing X".

Summary: Deflation (or so I heard) is considered harmful because it stifles growth. Central banks have been fighting it to keep economies healthy. Cryptocurrencies can be designed to be deflationary. If the market capitalization of cryptocurrencies becomes big, central banks may have no way to contain the deflation. This may be catastrophic in many ways – but might it also slow AGI development and buy safety research more time?

I’ve been wondering what the “crypto endgame” may look like. Crypto may just turn out to have been a bubble or may continue for decades unchanged at its current limited level of significance. But we’re probably sufficiently prepared for those scenarios already.

Instead, the crypto market capitalization might take off at a superlinear pace. Bitcoin is currently at rank 14...

Away from proof of work. :-)

1Gerald Monroe12hI don't see what problem is being solved by smart contracts here; at the end of the day you have to interact with the real world to enforce your contracts. The smart contract would specify the identities of other network accounts who will be able to vote on the outcome of the contract. So there could be, say, 10 third party accounts who represent "observers" who vote on whether or not they believe the target was killed. (from reading the paper). These "observers" would have a reputation built up over past predictions and might not actually be human beings who can be arrested. (or they might be living in countries where this action is not considered a crime). This means the hitman can either hunt down each anonymous observer and force them at gunpoint to vote the hitman's way (which other mechanisms would need to zero out their reputation score for this false observation) or kill the target or forfeit the money back to the buyer when the contract expires.
1maximkazhenkov11hYou're just delegating the problem away to an observer reputation system that has the same problem one level deeper. Who actually has incentive to align reputations of observers with what actually happened?
1Gerald Monroe11hThis is a thorny problem, and I'm not working in this space. Having thought a bit about the problem and rejected many other possibilities, what I arrived at is this: on day 0, no one has a reputation, but n accounts "volunteer" to be judges. By day n, each "judge" has a history log of (evidence, decision) pairs. Automated tools detect a corrupt judge by looking through the log for decisions not justified by the evidence; then the buyer and the seller agree on a list of non-corrupt judges, and a random sampling of them is chosen. (The simplest way is to look for a judge making a different decision from another judge, but determining who is "right" when the majority is wrong is a difficult unsolved problem.) There are some difficulties with this, namely that a judge can only make decisions on publicly available information. For example, you could in theory use it to place a bet on an event that will later happen, and these judges vote on whether or not your bet is good. The incentive for the judges is that the longer their history log of correct decisions, the more that judge is "worth" and the larger the fee they will get.

I love asking children (and adults in some cases) the following question:

Five birds are sitting in a tree. A hunter takes a rifle and shoots one of them. How many birds are left? (If your answer is 'four' - try again!)

This is a System 1/System 2 trap, akin to "which weighs more, a pound of feathers or a pound of gold?" In my experience kids (and adults) usually get this wrong the first time, but kids get a special kick out of something that sounds like a math problem they do for homework but turns out to be a bit more. I've also used the 2, 4, 8 puzzle for impromptu demos of confirmation bias. These are fun and engaging ways to teach kids about cognitive biases...

One for older / more interested kids - the Monty Hall problem.

I remember my uncle spending a long time going through this with me and having to actually run the scenario a few times for me to believe he was right!
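Running the scenario, as the uncle did, is also easy to do in code. A minimal simulation (my own sketch, not from the original comment):

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game and return the contestant's win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # ~0.667
print(monty_hall(switch=False))  # ~0.333
```

Switching wins about two-thirds of the time, matching the counterintuitive answer.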

5Answer by Vanilla_cabs4hOne I haven't seen anywhere: I go hiking on a mountain. When I start, the water makes up half the total weight of my backpack. When I reach the summit, I have drunk half the water. What proportion of the backpack weight does it make up now?
2Yoav Ravid3hNice one! Took me a moment :)
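For anyone who wants to check their answer to the hiking puzzle (spoiler below), the arithmetic is quick, and the answer doesn't depend on the starting weight you pick:

```python
# Backpack puzzle check. Pick any starting backpack weight; the result
# is the same. Say the pack starts at 10 kg.
start_total = 10.0
water = start_total / 2          # water is half the total: 5 kg
other = start_total - water      # everything else: 5 kg

water_at_summit = water / 2      # half the water has been drunk: 2.5 kg
total_at_summit = other + water_at_summit  # 7.5 kg

fraction = water_at_summit / total_at_summit
print(fraction)  # 1/3
```

The intuitive System-1 answer is "a quarter", but the non-water weight stays fixed while the total shrinks, so the water ends up at a third.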
3Answer by Yoav Ravid5hYou can use the questions from the Cognitive Reflection Test (https://en.wikipedia.org/wiki/Cognitive_reflection_test).

Many people have wondered about the problem of evil: why evil (suffering is probably the most obvious example) exists in a world created by an omnibenevolent God. You can find theodicies trying to "justify" the existence of a benevolent God. Relatively recently, Dr. Stephen Law popularized the conception of a perfectly evil God, an entirely malevolent being. You can read about it here:  https://en.wikipedia.org/wiki/Evil_God_Challenge
There are also a few videos made by Alex O'Connor on his YouTube channel CosmicSkeptic, https://www.youtube.com/watch?v=xLnsY5io964
Both Good and Evil God challenges seem to be symmetric.

Any line of argument we can use to make one of these positions more probable can equally well be used for the other.

It may follow from this that we should attribute equal...

Commercial fit-testing for masks needs fairly fancy equipment, like the Allegro Saccharin Fit Test Kit 2040, which combines the undesirable features of being currently unavailable to order, expensive, and having to take up room in my apartment after being ordered.

It seems to me that there should be a way to get fit-testing done without such fancy equipment, since it's just about exposing oneself to the smell of one of the substances that are suitable for fit-testing (that have the right molecular size). Has anybody here found a way to do reliable fit-testing for themselves without the commercial equipment?

I tested it and you are right, the mist produced had no effect. 

1Florin9hThe "paranoia" can be justified for several reasons:

  • You live with high-risk people.
  • You want to avoid long covid.
  • You want to wait for better vaccines.
  • You want to avoid vaccines altogether, since you're going to be wearing a face covering even after you get vaccinated.
  • You're embarrassed by the fact that you should have but didn't prepare for a pandemic (you're a "rationalist" after all and knew about this x-risk stuff!) and don't intend to make that mistake again. You think that much worse pandemics could happen in your lifetime, so you might as well get used to wearing the right gear today. Practice makes perfect.

Lugging around oxygen tanks is not as practical or necessary.
1Florin10hTrying to detect leaks by smelling stuff is useless, since smells aren't affected by particulate filters. This is how to do a fit test correctly without using fancy equipment: https://www.youtube.com/watch?v=-5zbj3_ezqE

This post is a response to SimonM's post, Kelly isn’t (just) about logarithmic utility. It's an edited and extended version of some of my comments there.

To summarize the whole idea of this post: I'm going to argue that any argument in favor of the Kelly formula has to go through an implication that your utility is logarithmic in money, at some point. If it seems not to, it's either:

  • mistaken
  • cleverly hiding the implication
  • some mind-blowing argument I haven't seen before.

Actually, the post I'm responding to already mentioned one argument in this third category, which I'll mention later. But for the most part I think the point still stands: the best reasons to suppose Kelly is a good heuristic go through arguing logarithmic utility.
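As a concrete anchor for that claim (my own sketch, not from either post): for a repeated even-money bet won with probability p, the betting fraction that maximizes expected log wealth per bet is exactly the Kelly fraction 2p − 1, which a simple grid search confirms:

```python
import math

def expected_log_growth(f, p):
    """Expected log growth per even-money bet when betting fraction f of wealth."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.6            # chance of winning the even-money bet
kelly = 2 * p - 1  # Kelly fraction for this bet: 0.2

# Grid search over betting fractions; the maximum lands on the Kelly fraction.
fractions = [i / 1000 for i in range(0, 1000)]
best = max(fractions, key=lambda f: expected_log_growth(f, p))
print(best)  # 0.2
```

This is only the "forward" direction (log utility implies Kelly); the post's argument runs the other way, that arguments for Kelly smuggle in the log somewhere.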

The main point of this post is...

One possible way to get at the hack of ignoring unlikely possibilities in a reasonable way might be to do something similar to the "typical set" found in information theory. Especially as utility function maximization can be reformulated as relative entropy minimization. 
(Epistemic status: my brain saw a possible connection; I have not spent much time on this idea)

1Vitor3hGreat post, I find it really valuable to engage in this type of meta-modeling, i.e., deriving when and why models are appropriate. I think you're making a mistake in Section 2 though. You argue that a mode optimizer can be pretty terrible (agreed). Then, you argue that any other quantile optimizer can also be pretty terrible (also agreed). However, Kelly doesn't only optimize the mode, or 2% quantile, or whatever other quantile: it maximizes all those quantiles simultaneously! So, is there any distribution for which Kelly itself fails to optimize between meaningfully different states (as in your 2%-quantile with 10% bad outcome example)? I don't think such a distribution exists. (Note: maybe I'm misunderstanding what johnswentworth said here (https://www.lesswrong.com/posts/zmpYKwqfMkWtywkKZ/kelly-isn-t-just-about-logarithmic-utility?commentId=ogmGDhrwBCLkS6dMH#3TYzMCQFbHYxhQqEi), but if solving for any x%-quantile maximizer always yields Kelly, then Kelly maximizes for all quantiles, correct?)
1bluefalcon5hYou're leaving out geometric growth of successive bets. Kelly maximizes expected geometric growth rate. Therefore over enough bets Kelly maximizes expected, i.e. mean, wealth, not merely median wealth.
3SimonM16hFor sure - both my titles were clickbait compared to what I was saying. I think if I was trying to explain Kelly, I would definitely talk in terms of time-averaging and maximising returns. I (hope) I wouldn't do this as an "argument for" Kelly. I think if I was to make an argument for Kelly which is trying to persuade people it would be something close to my post. (Whereby I would say "Here are a bunch of nice properties Kelly has + it's simple + there are easy modifications if it seems too aggressive" and try to gauge from their reactions what I need to talk about). I will definitely be more careful about how I phrase this stuff though. I think if I wrote both posts again I would think harder about which bits were an "argument" and which bits were guides for intuition. I actually wouldn't make very much of a defence for the Peters stuff. I (personally) put little stock in it. (At least, I haven't found the "Aha!" moment where what they seem to be selling clicks for me). I think the most interesting thing about Kelly (which has definitely come through over our posts) is that Kelly is a very useful lens into preferences and utilities. (Regardless of which perspective you come from).

As a hacker and cryptocurrency liker, I have been hearing for a while about "DeFi" stuff going on in Ethereum without really knowing what it was. I own a bunch of ETH, so I finally decided that enough was enough and spent a few evenings figuring out what was going on. To my pleasant surprise, a lot of it was fascinating, and I thought I would share it with LW in the hopes that other people will be interested too and share their thoughts.

Throughout this post I will assume that the reader has a basic mental model of how Ethereum works. If you don't, you might find this intro & reference useful.

Why should I care about this?

For one thing, it's the coolest, most cypherpunk thing going. Remember...

Solid primer. It is hard to really simplify this subject but I think you did a decent job.

This Sunday, Anna Salamon and Oliver Habryka will discuss whether people who care about x-risk should have children.

A short summary of Oliver's position is:

If we want to reduce existential risk and protect the cosmic commons, we have some really tough challenges ahead and need to really succeed at some ambitious and world-scale plans. I have a sense that once people have children, they tend to stop trying to do ambitious things. It also dramatically reduces the flexibility of what plans or experiments they can try.

And a short summary of Anna's position is:

Most human activity is fake (pretends to be about one thing while being micro-governed by a different and unaligned process, e.g. somebody tries to "work on AI risk" while almost all of the micropulls come from

2Raemon18hThere is not – it was a kinda delicate conversation and we decided it was better for Oli and Anna and everyone else to feel free to speak out loud. I also think much of the value was sort of a meandering sussing out of fuzzy positions that's just actually kinda hard to get if you weren't there at the time. That said, one participant said they might write up some followup thoughts, that serve as something of a distillation.

Oh darn :( I was really looking forward to this event but ended up being busy then. One of the benefits of doing stuff like this on Zoom is to engage a broader community -- why stop at only people who can be there at the appointed time?

Would it have caused you to record it if I had asked nicely ahead of time? (I saw the announcement and knew I would likely be busy then. I could have thought to ask.)