You've probably heard the advice "to be a good listener, reflect back what people tell you." Ben Kuhn argues this is cargo cult advice that misses the point. The real key to good listening is intense curiosity about the details of the other person's situation.
Epistemic status: splitting hairs. Originally published as a shortform; thanks @Arjun Panickssery for telling me to publish this as a full post.
There’s been a lot of recent work on memory. This is great, but popular communication of that progress consistently mixes up active recall and spaced repetition. That has bugged me for a while; hence this piece.
If you already have a good understanding of active recall and spaced repetition, skim sections I and II, then skip to section III.
Note: this piece doesn’t meticulously cite sources and will probably be slightly out of date in a few years. At the end, I link to some great posts with far more technical substance, if you’re interested in learning more and actually reading the literature.
When you want to learn...
Claim: memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers. We’ll call this the “median researcher problem”.
Prototypical example: imagine a scientific field in which the large majority of practitioners have a very poor understanding of statistics, p-hacking, etc. Then lots of work in that field will be highly memetic despite trash statistics, blatant p-hacking, etc. Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.
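To make the p-hacking example concrete, here is a toy simulation (mine, not the author's; the sample size, number of predictors, and significance threshold are arbitrary assumptions) of how uncorrected multiple testing manufactures "findings" from pure noise:

```python
# A toy illustration of how poor statistics produce publishable-looking results:
# correlate enough pure-noise predictors with a pure-noise outcome and some will
# come out "significant" at p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_predictors = 40, 100

outcome = rng.normal(size=n_subjects)                      # pure noise
predictors = rng.normal(size=(n_predictors, n_subjects))   # also pure noise

false_positives = 0
for x in predictors:
    r, p = stats.pearsonr(x, outcome)   # naive test, no multiple-comparison correction
    if p < 0.05:
        false_positives += 1

# With 100 independent tests at alpha = 0.05, roughly five spurious "findings"
# are expected; report only those and the result looks like real science.
print(f"'significant' correlations found in pure noise: {false_positives}/{n_predictors}")
```

A field whose median researcher doesn't notice the missing correction will happily pass such results along.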
(Defending that claim isn’t really the main focus of this post, but here are a couple pieces of legible evidence that weigh weakly in its favor:
Claim: memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers. [...] Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.
This assumes the median researchers can't recognize who the competent researchers are, or otherwise don't look to them as thought leaders.
I'm not arguing that this isn't often the case, just that it isn't alw...
(Btw, everything I write here about orcas also applies, to a slightly lesser extent, to pilot whales (especially long-finned ones)[1].)
(I'm very very far from an orca expert - basically everything I know about them I learned today.)
I had always thought that bigger animals might have bigger brains than humans without actually having more neurons in their neocortex (like elephants), and that the number of neurons in the neocortex or prefrontal cortex might be a good inter-species indicator of intelligence for mammalian brains.[2] Yesterday I discovered, from this Wikipedia list, that orcas actually have 2.05 times as many neurons in their neocortex[3] as humans. Interestingly though, given my pretty bad model of how intelligent some species are, the "number of neurons in neocortex" still seems like a proxy that doesn't perform...
A few more thoughts:
It's plausible that for both humans and orcas the relevant selection pressure mostly came from social dynamics, and it's plausible that there were different environmental pressures.
Actually, my guess would be that it's because intelligence was environmentally adaptive: my intuition is that group selection is significant enough over long timescales that it would disincentivize intelligence that isn't already (almost) useful enough to warrant the metabolic cost, unless the species has a lot of slack.
So an important quest...
Recently we (Elizabeth Van Nostrand and Alex Altair) started a project investigating chaos theory as an example of field formation.[1] The number one question you get when you tell people you are studying the history of chaos theory is “does that matter in any way?”.[2] Books and articles will list applications, but the same few seem to come up a lot, and when you dig in, application often means “wrote some papers about it” rather than “achieved commercial success”.
In this post we checked a few commonly cited applications to see if they pan out. We didn’t do deep dives to prove the mathematical dependencies, just sanity checks.
Our findings: Big Chaos has a very good PR team, but the hype isn’t unmerited either. Most of the commonly touted applications never...
The claim that uploaded brains won't work because of chaos doesn't hold up well, because it's usually easier to control divergence than to predict it: you can use strategies like fast-feedback control to prevent the system from ever entering the chaotic region. More generally, a lot of misapplications of chaos theory start by incorrectly assuming that hardness of prediction implies hardness of control, without further assumptions (a toy sketch below illustrates the control point):
...
- I think I might have also once seen this exact example of repeated-bouncing-ba
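Here is that sketch: a minimal illustration (mine, not the commenter's; the logistic map, feedback gain, and thresholds are arbitrary assumptions) of why controlling a chaotic system can be easy even when predicting its free-running trajectory is hopeless.

```python
# Toy example: the logistic map in its chaotic regime. Prediction fails fast,
# but small feedback nudges easily hold the state at an (unstable) fixed point.
# All parameters here are illustrative assumptions, not anything from the post.
R = 3.9                       # chaotic regime of x -> R*x*(1-x)
X_STAR = 1 - 1 / R            # unstable fixed point (~0.744)
SLOPE = R * (1 - 2 * X_STAR)  # local derivative at the fixed point (= 2 - R)

def step(x, control=False, window=0.02, max_u=0.04):
    """One map iteration, optionally with a small feedback correction."""
    u = 0.0
    if control and abs(x - X_STAR) < window:
        # Cancel the local linearized dynamics so the state stays pinned.
        u = max(-max_u, min(max_u, -SLOPE * (x - X_STAR)))
    return R * x * (1 - x) + u

# Prediction: two trajectories starting 1e-9 apart diverge to order ~0.1 or more.
a, b, gap = 0.3, 0.3 + 1e-9, 0.0
for _ in range(60):
    a, b = step(a), step(b)
    gap = max(gap, abs(a - b))
print(f"max gap between nearly identical uncontrolled runs: {gap:.3f}")

# Control: tiny nudges capture the orbit once it wanders near the target,
# even though we never predicted when or where it would wander.
x = 0.3
for _ in range(500):
    x = step(x, control=True)
print(f"controlled state after 500 steps: {x:.4f} (target {X_STAR:.4f})")
```

The point of the design: the controller never forecasts the orbit; it waits for the orbit to wander near the target and then cancels the local dynamics with small corrections.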
...The cleanest argument that current-day AI models will not cause a catastrophe is probably that they lack the capability to do so. However, as capabilities improve, we’ll need new tools for ensuring that AI models won’t cause a catastrophe even if we can’t rule out the capability. Anthropic’s Responsible Scaling Policy (RSP) categorizes levels of risk of AI systems into different AI Safety Levels (ASL), and each level has associated commitments aimed at mitigating the risks. Some of these commitments take the form of affirmative safety cases, which are structured arguments that the system is safe to deploy in a given environment. Unfortunately, it is not yet obvious how to make a safety case to rule out certain threats that arise once AIs have sophisticated strategic abilities. The goal
FWIW I agree with you and wouldn't put it the way it is in Roger's post. Not sure what Roger would say in response.
There should be more people like Mahatma Gandhi in the AI safety community, so that AI safety can be a source of inspiration for both current and future generations. Without nonviolence and benevolence, we may be unable to advocate effectively for AI safety.
Mohandas Karamchand Gandhi, also known as Mahatma Gandhi, was an Indian activist who used nonviolence to support India's independence from Britain. He is now considered one of the biggest sources of inspiration for people trying to do the most good.
Nowadays, it is often argued that Artificial Intelligence is an existential risk. If this is correct, we should ensure that AI safety researchers are able to advocate for safety.
The argument of this post is simple: As AI...
As Americans know, the Electoral College gives disproportionate influence to swing states, which means a vote in the extremely blue state of California was basically wasted in the 2024 election, as were votes in extremely red states like Texas, Oklahoma, and Louisiana. State legislatures have the Constitutional power to assign their state's electoral votes. So why don't the four states sign a compact to assign all their electoral votes, in 2028 and future presidential elections, to the winner of the aggregate popular vote in those four states? Would this even be legal?
The population of CA is 39.0M (54 electoral votes), and the population of the three red states is 38.6M (55 electoral votes). The combined bloc would control a massive 109 electoral votes, and would have gone...
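As a quick arithmetic check, here's the bloc's electoral math (the per-state electoral-vote counts are the 2024 apportionment; the population figures above are taken from the post as given):

```python
# Electoral votes under the 2024 apportionment for the four states named above.
electoral_votes = {"California": 54, "Texas": 40, "Oklahoma": 7, "Louisiana": 8}

bloc = sum(electoral_votes.values())
print(f"bloc total: {bloc} electoral votes")        # 109, matching the figure above
print(f"share of the Electoral College: {bloc / 538:.1%}")
print(f"share of the 270 needed to win: {bloc / 270:.1%}")
```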
A real-life use for smart contracts 😆
I just read the Wikipedia article on the evolution of human intelligence, and TBH I wasn't super impressed with the quality of the considerations there.
I currently have 3 main (categories of) hypotheses for what caused selection pressure for intelligence in humans. (But please post an answer if you have other hypotheses that seem plausible!):
("H" for "hypothesis")
Thanks. Can you say more about why?
I mean, runaway sexual selection is basically H1, which I've updated toward being less plausible. See my answer here. (You could comment there if you think my update might be wrong.)