Best of LessWrong 2020

You've probably heard the advice "to be a good listener, reflect back what people tell you." Ben Kuhn argues this is cargo cult advice that misses the point. The real key to good listening is intense curiosity about the details of the other person's situation. 

That no one rebuilt old OkCupid updates me a lot about how much the startup world actually makes the world better

The prevailing ideology of San Francisco, Silicon Valley, and the broader tech world is that startups are an engine (maybe even the engine) that drives progress towards a future that's better than the past, by creating new products that add value to people's lives. I now think this is true in a limited way. Software is eating the world, and lots of bureaucracy is being replaced by automation which is generally cheaper, faster, and a better UX. But I now think that this narrative is largely propaganda.

That it's been 8 years since Match bought and ruined OkCupid, and no one in the whole tech ecosystem has stepped up to make a dating app even as good as old OkC, is a huge black mark against the whole SV ideology of technology changing the world for the better. Finding a partner is such a huge, real pain point for millions of people. The existing solutions are so bad and extractive. A good solution has already been demonstrated. And yet not a single competent founder wanted to solve that problem for planet earth, instead of doing something else that (arguably) would have been more profitable. At minimum, someone could have forgone venture funding and built this as a cashflow business.

It's true that this is a market that depends on economies of scale, because the quality of your product is proportional to the size of your matching pool. But I don't buy that this is insurmountable. Just like with any startup, you start by serving a niche market really well, and then expand outward from there. (The first niche I would try for is building an amazing match-making experience for female grad students at a particular top university. If you create a great experience for the women, the men will come, and I'd rather build an initial product for relatively smart customers. But there are dozens of niches one could try for.) But it seems like no one tried to recreate
quila
nothing short of death can stop me from trying to do good. the world could destroy or corrupt EA, but i'd remain an altruist. it could imprison me, but i'd stay focused on alignment, as long as i could communicate to at least one on the outside. even if it tried to kill me, i'd continue in the paths through time where i survived.
It seems the pro-Trump Polymarket whale may have had a real edge after all. Wall Street Journal reports (paywalled link, screenshot) that he’s a former professional trader, who commissioned his own polls from a major polling firm using an alternate methodology—the neighbor method, i.e. asking respondents who they expect their neighbors will vote for—he thought would be less biased by preference falsification. I didn't bet against him, though I strongly considered it; feeling glad this morning that I didn't.
Thomas Kwa
What's the most important technical question in AI safety right now?
How to build a stockpile of testosterone for HRT, as a buffer in case of emergency:

* Get your provider to prescribe "single use" vials and make sure they're marking them as "single use" in their prescription notes. These usually contain 200mg, which for most people is two doses or more. By design, you're meant to throw away the leftovers when you don't use all 200mg in a single dose, and that is what pharmacies and insurance companies expect you to do.
* Use each vial more than once; just make sure you alcohol-swab the top.
* Organize your vials by expiration date. This will not necessarily line up with the order in which you receive the vials, so you have to actually check. Use vials with earlier expiration dates first.

This will result in access to testosterone at twice the rate you use it, and you'll be able to build up at least a bit of a buffer. I'm not telling anybody to do this. Just that if I were scared of losing access to HRT, this is what I'd do.

Edit to add: A friend suggested a couple more details to keep in mind. The "single use" vials lack a bacteriostatic additive like benzyl alcohol, so they pose a greater risk of infection. Extra important to be meticulous about sterilization! Also, sterile syringe filters can be used to save a cored or otherwise problematic vial—either filtering entirely to a new container, or just filtering when drawing.

Popular Comments

Recent Discussion

Epistemic status: splitting hairs. Originally published as a shortform; thanks @Arjun Panickssery for telling me to publish this as a full post.

There’s been a lot of recent work on memory. This is great, but popular communication of that progress consistently mixes up active recall and spaced repetition. That consistently bugged me — hence this piece.

If you already have a good understanding of active recall and spaced repetition, skim sections I and II, then skip to section III.

Note: this piece doesn’t meticulously cite sources, and will probably be slightly out of date in a few years. I link to some great posts that have far more technical substance at the end, if you’re interested in learning more & actually reading the literature.

I. Active Recall

When you want to learn...
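To keep the two ideas separate while reading on: here's a minimal sketch in Python (mine, not from the post; the 2.5× multiplier is an assumed, Anki-like default). The `review` function is the active-recall part (retrieve the answer before you look), and `next_interval` is the spaced-repetition part (decide when to see the card again).

```python
from datetime import date, timedelta

def next_interval(prev_days: float, recalled: bool, ease: float = 2.5) -> float:
    """Spaced repetition: grow the gap after each successful recall, reset on a lapse."""
    if not recalled:
        return 1.0                      # forgot it: review again tomorrow
    return max(1.0, prev_days * ease)   # remembered it: wait ~2.5x longer

def review(question: str, answer: str) -> bool:
    """Active recall: retrieve the answer from memory *before* looking at it."""
    guess = input(f"{question}\nYour answer: ")
    print(f"Correct answer: {answer}")
    return guess.strip().lower() == answer.strip().lower()

if __name__ == "__main__":
    interval = 1.0
    recalled = review("Capital of Australia?", "Canberra")
    interval = next_interval(interval, recalled)
    due = date.today() + timedelta(days=round(interval))
    print(f"Next review: {due} ({interval:.0f} day(s) from now)")
```

You can do either without the other: rereading notes on an expanding schedule is spacing without recall, and quizzing yourself on everything every day is recall without spacing.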

Claim: memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers. We’ll call this the “median researcher problem”.

Prototypical example: imagine a scientific field in which the large majority of practitioners have a very poor understanding of statistics, p-hacking, etc. Then lots of work in that field will be highly memetic despite trash statistics, blatant p-hacking, etc. Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.

(Defending that claim isn’t really the main focus of this post, but a couple pieces of legible evidence which are weakly in favor:

...
jimmy

Claim: memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers. [...] Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.

 

This assumes the median researchers can't recognize who the competent researchers are, or otherwise don't look to them as thought leaders.

I'm not arguing that this isn't often the case, just that it isn't alw... (read more)

jimmy
There's no norm saying you can't be ignorant of stats and read, or even post about things not requiring an understanding of stats, but there's still a critical mass of people who do understand the topic well enough to enforce norms against actively contributing with that illiteracy. (E.g. how do you expect it to go over if someone makes a post claiming that p=0.05 means that there's a 95% chance that the hypothesis is true?)

Taking it a step further, I'd say my household "has norms which basically require everyone to speak English", but that doesn't mean the little one is quite there yet or that we're gonna boot her for not already meeting the bar. It just means that she has to work hard to learn how to talk if she wants to be part of what's going on. LessWrong feels like that to me, in that I would feel comfortable posting about things which require statistical literacy to understand, knowing that engagement which fails to meet that bar will be downvoted, rather than me getting downvoted for expecting to find a statistically literate audience here.
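Riffing on the parenthetical example: a minimal worked illustration (my own assumed numbers, not jimmy's) of why p < 0.05 does not mean a 95% chance the hypothesis is true. The posterior depends on the base rate of true hypotheses and on statistical power, neither of which a p-value alone tells you.

```python
# Toy Bayes calculation with assumed numbers, purely for illustration.
prior_true = 0.10   # assumed fraction of tested hypotheses that are actually true
power      = 0.80   # assumed P(p < 0.05 | hypothesis true)
alpha      = 0.05   # P(p < 0.05 | hypothesis false), the false-positive rate

p_significant = power * prior_true + alpha * (1 - prior_true)
p_true_given_significant = power * prior_true / p_significant

print(f"P(hypothesis true | p < 0.05) = {p_true_given_significant:.2f}")  # 0.64, not 0.95
```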
Ben Pace
Curated. I think this model is pretty useful and well-compressed, and I'm glad to be able to concisely link to it. The policy implications are still very much open to debate, both here on LessWrong and in other ecosystems in the world.
ryan_b
To me, "memetic" normally reads as something like "has a high propensity to become a meme" or "is meme-like," and I had no trouble interpreting the post on this basis.

I push back against trying to hew closely to usages from the field of genetics. Fundamentally I feel like that is not what talking about memes is for; it was an analogy from the start, not meant for the same level of rigor. Further, memes and how meme-like things are is much more widely talked about than genetics, so insofar as we privilege usage considerations, I claim switching to a usage matching genetics would require more inferential work from readers overall, because the population of readers conversant with genetics is smaller.

I also feel like the value of speaking in terms of memes in the post is that the replication crisis is largely the fault of non-rigorous treatment; that is to say, in many fields the statistical analysis parts really were/are more of a meme inside the field than a rigorous practice. People just read other people's published papers' analysis sections and write something shaped like that, replicability be damned.

(Btw, everything I write here about orcas also applies, to a slightly lesser extent, to pilot whales (especially long-finned ones)[1].)

(I'm very very far from an orca expert - basically everything I know about them I learned today.)

I always thought that bigger animals might have bigger brains than humans but not actually more neurons in their neocortex (like elephants), and that the number of neurons in the neocortex or prefrontal cortex might be a good inter-species indicator of intelligence for mammalian brains.[2] Yesterday I discovered from this Wikipedia list that orcas actually have 2.05 times as many neurons in their neocortex[3] as humans. Interestingly though, given my pretty bad model of how intelligent some species are, "number of neurons in the neocortex" still seems like a proxy that doesn't perform...

Answer by Towards_Keeperhood
Some of my own observations and considerations:

Anecdotal evidence for orca intelligence

1. Intimate cooperation between native Australian hunter-gatherers and orcas for whale hunting: https://en.wikipedia.org/wiki/Killer_whales_of_Eden,_New_South_Wales
2. Orcas being skillful at turning boats around and even sinking a few vessels[1][2]: https://en.wikipedia.org/wiki/Iberian_orca_attacks
3. Orcas have a wide variety of cool hunting strategies (e.g. see videos (1, 2)). I don't know how this compares to human hunter-gatherers. (EDIT: Ok, I just read Scott Alexander's Book review of "The Secret of Our Success", and some anecdotes on hunter-gatherers there seem much more impressive. (But it's also plausible to me that other orca hunting techniques are more sophisticated than these examples, in ways that might not be legible to us.)) (ADDED: To be clear, while this is more advanced than I'd a priori expected from animals, the absence of observations of even more clearly stunning techniques is some counterevidence against orcas being smarter than humans. Though I also can't quite point to an example of what I'd expect to see if orcas were actually 250 IQ but don't in fact observe; but I also didn't think about it for long, and maybe there would be something.)

Orca language

(Warning: Low confidence. What I say might be wrong.)

I didn't look deep into research on orca language (not much more than watching this documentary); my impression is that we don't know much yet. Some observations:

1. Orca language seems to be learned, not innate. Different regions have different languages and dialects. Scientists seem to analogize it to how humans speak different languages in different countries.
2. For some orca groups that were studied, scientists were able to cluster their calls into 23 or 24 different call clusters, but still with significant variation of calls within a call cluster.
   1. (I do not know how tightly calls are clustered, or whether there often are outliers.)
3. Orcas commun

A few more thoughts:

It's plausible that for both humans and orcas the relevant selection pressure mostly came from social dynamics, and it's plausible that there were different environmental pressures.

Actually, my guess would be that it's because intelligence was environmentally adaptive: my intuitive guess is that group selection is significant enough over long timescales that it would disincentivize intelligence if it's not already (almost) useful enough to warrant the metabolic cost, unless the species has a lot of slack.

So an important quest... (read more)

Introduction

Recently we (Elizabeth Van Nostrand and Alex Altair) started a project investigating chaos theory as an example of field formation.[1] The number one question you get when you tell people you are studying the history of chaos theory is “does that matter in any way?”.[2] Books and articles will list applications, but the same few seem to come up a lot, and when you dig in, application often means “wrote some papers about it” rather than “achieved commercial success”. 

In this post we checked a few commonly cited applications to see if they pan out. We didn’t do deep dives to prove the mathematical dependencies, just sanity checks.

Our findings: Big Chaos has a very good PR team, but the hype isn’t unmerited either. Most of the commonly touted applications never...

The claim that uploaded brains won't work because of chaos turns out not to hold up well, because it's usually easier to control divergence than to predict it: you can use strategies like fast-feedback control to prevent the system from ever getting into the chaotic region. More generally, a lot of misapplication of chaos theory starts by incorrectly assuming that hardness of prediction equals hardness of control, without other assumptions (a toy sketch follows below):

  • I think I might have also once seen this exact example of repeated-bouncing-ba
... (read more)
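A toy sketch of the control-versus-prediction point (entirely my own illustration; the system, gain, and thresholds are assumptions in the spirit of OGY-style stabilization of an unstable fixed point, not anything from the post or the comment above):

```python
# Chaotic system: logistic map x_{n+1} = 4*x*(1-x). Unstable fixed point x* = 0.75.
R = 4.0
X_STAR = 1 - 1 / R      # f(x*) = x*, but |f'(x*)| = 2 > 1, so the point is unstable

def step(x, u=0.0):
    return R * x * (1 - x) + u

# 1) Prediction is hard: two states starting 1e-10 apart become uncorrelated.
a, b = 0.3, 0.3 + 1e-10
for _ in range(50):
    a, b = step(a), step(b)
print(f"prediction: after 50 steps, {a:.4f} vs {b:.4f} (initial gap was 1e-10)")

# 2) Control is easy: wait for the orbit to wander near x* (chaos guarantees it will),
#    then cancel the local instability with a tiny nudge u = -f'(x*) * (x - x*).
x = 0.3
for _ in range(2000):
    if abs(x - X_STAR) < 0.01:          # act only when close; |u| stays under 0.02
        u = 2.0 * (x - X_STAR)          # -f'(x*) = 2 zeroes out the linear divergence
    else:
        u = 0.0
    x = step(x, u)
print(f"control: after 2000 steps of tiny feedback, x = {x:.6f} (target {X_STAR})")
```

The same tiny actuation budget that is useless for forecasting the free-running orbit is enough to pin it to the unstable fixed point once it wanders close.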

The cleanest argument that current-day AI models will not cause a catastrophe is probably that they lack the capability to do so.  However, as capabilities improve, we’ll need new tools for ensuring that AI models won’t cause a catastrophe even if we can’t rule out the capability. Anthropic’s Responsible Scaling Policy (RSP) categorizes levels of risk of AI systems into different AI Safety Levels (ASL), and each level has associated commitments aimed at mitigating the risks. Some of these commitments take the form of affirmative safety cases, which are structured arguments that the system is safe to deploy in a given environment. Unfortunately, it is not yet obvious how to make a safety case to rule out certain threats that arise once AIs have sophisticated strategic abilities. The goal

...
Daniel Kokotajlo
Good point re 2. Re 1, meh, still seems like a meta-argument to me, because when I roll out my mental simulations of the ways the future could go, it really does seem like my If... condition obtaining would cut out about half of the loss-of-control ones.

Re 3, point by point:

1. AISIs existing vs. not: Less important; I feel like this changes my p(doom) by more like 10-20% rather than 50%.
2. Big names coming out: idk, this also feels like maybe 10-20% rather than 50%.
3. I think Anthropic winning the race would be a 40% thing maybe, but being a runner-up doesn't help so much; but yeah, p(anthropicwins) has gradually gone up over the last three years...
4. Trump winning seems like a smaller deal to me.
5. Ditto for Elon.
6. Not sure how to think about logical updates, but yeah, probably this should have swung my credence around more than it did.
7. ? This was on the mainline path basically and it happened roughly on schedule.
8. Takeoff speeds matter a ton; I've made various updates, but nothing big and confident enough to swing my credence by 50% or anywhere close. Hmm. But yeah, I agree that takeoff speeds matter more.
9. Picture here hasn't changed much in three years.
10. Ditto.

OK, so I think I directionally agree that my p(doom) should have been oscillating more than it in fact did over the last three years (if I take my own estimates seriously). However, I don't go nearly as far as you; most of the things you listed are either (a) imo less important, or (b) things I didn't actually change my mind on over the last three years, such that even though they are very important my p(doom) shouldn't have been changing much.

I agree with everything except the last sentence -- my claim took this into account; I was specifically imagining something like this playing out and thinking 'yep, seems like this kills about half of the loss-of-control worlds.'

I agree that's a stronger claim than I was making. However, part of my view here is that the weaker claim I did make has a
Vlad Mikulik
The reasoning for the indistinguishability criterion is that it's a precaution against the model sandbagging when used as the red-team attack model. If the model can tell it's in a control eval, it could pretend to be worse at attacking, making us think the control measures are better than they actually are. If the red-team elicitation is robust to this, I agree this isn't necessary. (See the note below [P2.4].)
Buck
I don't think this makes any sense. How are you hoping to get the model to attack except by telling it that it's in a control evaluation and you want it to attack? It seems that you are definitely going to have to handle the sandbagging.
Vlad Mikulik

FWIW I agree with you and wouldn't put it the way it is in Roger's post. Not sure what Roger would say in response.

Summary

There should be more people like Mahatma Gandhi in the AI safety community, so that AI safety is a source of inspiration for both future and current generations. Without nonviolence and benevolence, we may be unable to advocate for AI safety.

Introduction

Mohandas Karamchand Gandhi, also known as Mahatma Gandhi, was an Indian activist who used nonviolence to support India's independence from Britain. He is now considered one of the biggest sources of inspiration for people trying to do the most good.

Picture of Gandhi in 1931. (Source: Wikimedia Commons)

Nowadays, it is often argued that Artificial Intelligence is an existential risk. If this were to be correct, we should ensure that AI safety researchers are able to advocate for safety.

The argument of this post is simple: As AI...


As Americans know, the electoral college gives disproportionate influence to swing states, which means a vote in the extremely blue state of California was basically wasted in the 2024 election, as are votes in extremely red states like Texas, Oklahoma, and Louisiana. State legislatures have the Constitutional power to assign their state's electoral votes. So why don't the four states sign a compact to assign all their electoral votes in 2028 and future presidential elections to the winner of the aggregate popular vote in those four states? Would this even be legal?

The population of CA is 39.0M (54 electoral votes), and the population of the three red states is 38.6M (55 electoral votes). The combined bloc would control a massive 109 electoral votes, and would have gone...
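For concreteness, a quick arithmetic sketch of the bloc described above (the 54 and 55 totals are from the post; the per-state split of the red-state votes, TX 40 / OK 7 / LA 8, is my own addition and worth double-checking):

```python
# Hypothetical CA+TX+OK+LA electoral-vote bloc, using 2024 apportionment.
electoral_votes = {"CA": 54, "TX": 40, "OK": 7, "LA": 8}
bloc_total = sum(electoral_votes.values())      # 109
needed_to_win = 270                             # majority of the 538 total
print(f"bloc controls {bloc_total} electoral votes, "
      f"{bloc_total / needed_to_win:.0%} of a winning majority")
```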

A real-life use for smart contracts 😆

Measure
The advantage comes from having the parties care about your particular issues rather than those of the current swing states. This would look like both candidates being more favorable to you even if it's still 50-50 which of them wins (and even if they're still in roughly the same places on the left-right axis).
Measure
Although possibly the red candidate would care more about CATXOKLA red issues and the blue about CATXOKLA blue issues, so it just increases variance rather than expected satisfaction?
false
I feel that the goal here is to reproduce the effects of choosing a president by popular vote without abolishing the electoral college. Wouldn't an electoral reform to abolish the electoral college be a more realistic goal?

I just read the Wikipedia article on the evolution of human intelligence, and TBH I wasn't super impressed with the quality of the considerations there.

I currently have 3 main (categories of) hypotheses for what caused selection pressure for intelligence in humans (but please post an answer if you have other hypotheses that seem plausible!):

("H" for "hypothesis")

  • H1: social dynamics
  • H2: ability to deploy more advanced (cooperative) hunting strategies
    • Note: I don't mean group selection
...
Answer by Ape in the coat
I suspect that runaway sexual selection played a huge part.

Thanks. Can you say more about why?

I mean, runaway sexual selection is basically H1, which I updated to being less plausible. See my answer here. (You could comment there on why you think my update might be wrong.)

Answer by Towards_Keeperhood
Actually, I changed my mind.

Why I thought this before: H1 seems like a potential runaway process and is clearly about individual selection, which has stronger effects than group selection (and it was mentioned in HPMoR).

Why I don't think this anymore:

* It would be an incredibly huge coincidence if intelligence mostly evolved because of social dynamics but happened to be useful for all sorts of other survival techniques hunters and gatherers use. See e.g. Scott Alexander's Book review of "The Secret of Our Success".
* If there were only individual benefits to intelligence, and it was not very useful otherwise, then over long timescales group selection would actually select against smarter humans, because their neurons would use up more metabolic energy.

I think this is not a coincidence but rather that tool use let humans fall into an attractor basin where payoffs of intelligence were more significant.
TsviBT
Intelligence also has costs and has components that have to be invented, which explains why not all species are already human-level smart. One of the questions here is which selection pressures were so especially and exceptionally strong in the case of humans, that humans fell off the cliff.

Claude dot Ai did a remarkable job of summarizing my thoughts about the current state of things:

https://medium.com/@tanjent/unedited-chat-with-claude-dot-ai-about-politics-and-acting-against-ones-own-self-interest-b3b805d85015