Since August 2020, I've been recording conversations with brilliant and insightful rationalists, effective altruists, and people adjacent (or otherwise connected) to those communities. If you're an avid reader of this site, I suspect you will recognize many of the names of those I've spoken to. Today's post contains a whole new selection of the LessWrong-relevant recordings that have come out since my last post about this. I hope you enjoy them! Since last time, we’ve also started adding transcripts for a number of the episodes (thanks to those of you who commented on my last post encouraging us to do this).
The curated list below is organized according to the LessWrong-relevant topics we cover in each conversation. All of these conversations can also be found on our podcast website or by searching for "Clearer Thinking" in just about any podcast app.
If there are other people you'd like to see me record conversations with, please nominate them in the comments! The format of each episode is that I invite each guest to bring four or five "ideas that matter" that they are excited to talk about, and then the aim is to have a fun, intellectual discussion exploring those ideas (rather than an interview).
Rationality and decision-making
How can we apply the theory of measurement accuracy to human judgments? How can cognitive biases affect both the bias term and the noise term in measurement error? How much noise should we expect in judgments of various kinds? Is there reason to think that machines will eventually make better decisions than humans in all domains? How does machine decision-making differ (if at all) from human decision-making? In what domains should we work to reduce variance in decision-making? If machines learn to use human decisions as training data, then to what extent will human biases become "baked into" machine decisions? And can such biases be compensated for? Are there any domains where human judgment will always be preferable to machine judgment? What does the "fragile families" study tell us about the limits of predicting life outcomes? What does good decision "hygiene" look like? Why do people focus more on bias than noise when trying to reduce error? To what extent can people improve their decision-making abilities? How can we recognize good ideas when we have them? Humans aren't fully rational, but are they irrational?
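The bias/noise distinction discussed above can be made concrete with the standard decomposition of mean squared error into a squared bias term plus a variance (noise) term. Here's a minimal sketch; the underwriting scenario and all of the numbers are invented for illustration:

```python
import statistics

def error_decomposition(judgments, true_value):
    """Decompose mean squared error into a bias term and a noise term.

    MSE = bias^2 + noise: bias is the deviation of the average judgment
    from the truth; noise is the judgment-to-judgment variability.
    """
    mean_judgment = statistics.mean(judgments)
    bias = mean_judgment - true_value
    noise = statistics.pvariance(judgments)  # population variance
    mse = statistics.mean((j - true_value) ** 2 for j in judgments)
    return bias, noise, mse

# Hypothetical underwriters quoting a premium whose "true" value is 100:
quotes = [90.0, 95.0, 85.0, 100.0, 80.0]
bias, noise, mse = error_decomposition(quotes, 100.0)
# The identity MSE == bias^2 + noise holds exactly:
assert abs(mse - (bias ** 2 + noise)) < 1e-9
```

One practical upshot the episode touches on: even a panel of judges with zero average bias can still produce large total error if their judgments are noisy, since the variance term enters the MSE on equal footing with squared bias.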
This particular episode is unique in that we’ve also made a Thought Saver deck of flashcards to help you to learn or consolidate the key insights from it. You can see a sample of these below, but you can get the full deck by creating a Thought Saver account here or by clicking through to the end of the sample. And if you want to embed your own flashcards in LessWrong posts (like we did below), here’s a link to a post that describes how to do that.
What is the Great Rationality Debate? What are axiomatic rationality and ecological rationality? How irrational are people anyway? What's the connection between rationality and wisdom? What are some of the paradigms in cognitive science? Why do visual representations of information often communicate their meaning much more effectively than other kinds of representations?
What are "shed" and "cake" projects? And how can you avoid "shed" projects? What is the "jobs to be done" framework? What is the "theory of change" framework? How can people use statistics (or statistical intuition) in everyday life? How accurate are climate change models? How much certainty do scientists have about climate change outcomes? What are some promising strategies for mitigating and reversing climate change?
What is a "wamb"? What are the differences between wambs and nerds? When is it appropriate (or not) to decouple concepts from their context? What are some common characteristics of miscommunications between journalists and writers/thinkers in the EA and Rationalist communities? What are "crony" beliefs? How can you approach discussions of controversial topics without immediately getting labeled as being on one team or another? What sorts of quirks do members of the EA and Rationalist communities typically exhibit in social contexts?
What are "forward-chaining" and "backward-chaining," and how do they connect with theory of change? What sorts of mental habits and heuristics prevent you from brainstorming ideas effectively? How can you harness feedback effectively to sharpen your ideas? From whom should you solicit feedback? How can you view your own products with fresh eyes? What are some common struggles people encounter when starting or changing careers, and how can they be overcome? Why are small experiments so under-used? How can we construct a sustainable work life? What are the best ways to rest and recover from overwork and burnout?
How can we encourage people to increase their critical thinking and reliance on evidence in the current information climate? What types of evidence "count" as valid, useful, or demonstrative? And what are the relative strengths and weaknesses of those types? Could someone reasonably come to believe just about anything, provided that they live through very specific sets of experiences? What does it mean to have a "naturalistic" epistemology? How does a philosophical disorder differ from a moral failure? Historically speaking, where does morality come from? Is moral circle expansion always good or praiseworthy? What sorts of entities deserve moral consideration?
Consciousness and subjective experiences
When (if ever) can suffering be good? Is there an optimal ratio of pleasure to pain? What is motivational pluralism? Can large, positive incentives be coercive? (For example, is it coercive to offer to pay someone enormous amounts of money to do something relatively benign or even painful or immoral?) How can moving from making judgments about a person's actions to making judgments about their character solve certain moral puzzles? Why do we sometimes make seemingly irrational judgments about the relative badness of certain actions? How does the level of controversy around an action factor into how much we publicly disapprove of it? What are the differences between compassion and empathy? Is antisocial personality disorder (AKA psychopathy or sociopathy) defined only by a lack of empathy? How have humans evolved (or not) to detect and mitigate the effects of others who feel no remorse? Is altruism especially vulnerable to remorseless people? What are the differences between narcissists and sociopaths?
What are the advantages of viewing the mind through the multi-agent model as opposed to (say) the rational/optimizing agent model? What is the "global workspace" theory of consciousness? What's going on during concentration meditation according to the global workspace theory? If our brains are composed of multiple sub-agents, then what does it mean when I say, "I believe such-and-such"? Are beliefs context-dependent (i.e., you believe P in one context and not-P in a different context)? What effects do the various therapeutic modalities and meditation practices have on our beliefs? What are the advantages of transformational therapy over other approaches?
What is "awakening"? What is a "stateless" state? What is nonduality? Why and how do some spiritual practitioners experience a dissolution of their sense of self? Do these altered or enlightened states require thousands of hours of practice to achieve, or are they always inside us, waiting to be noticed and accessed at any time? Can these states be accessed through a variety of paths and methods? Is there a certain kind of person that does better or worse at achieving these states?
What is the Internal Family Systems model? What kinds of information do our emotions give us? How many agents live in our heads? And, if there's more than one, how well do those agents cooperate? What is operant conditioning? What is attachment theory? How does parenting differ from animal training? Is decision theory able to unify many different psychological theories?
What is metamodernism? How does metamodernism relate to spiral dynamics? What does it look like to apply a metamodern approach to large-scale problems? What are shadow traits, and what is shadow projection? What do our reactions to others' behavior tell us about ourselves? What's going on psychologically and physiologically when we relive past traumas? What dosages of psychedelics are most effective in a therapeutic context? How soon will psychedelic substances likely be decriminalized or legalized at the state and/or federal level in the United States? How can we enter into blissful, ecstatic, intense, or other less common psychological states without drugs or alcohol? What are the pros and cons of (especially intergenerational) co-living?
Should pleasure and pain be measured on logarithmic scales? How might such a scale affect utilitarian calculations? How do harmonic energy waves in the brain correspond to states of (or intensities of) consciousness? What sorts of conclusions can we draw about brain states given the resolutions and sampling rates of tools like fMRI, EEG, and MEG? What is the symmetry theory of homeostatic regulation, and how does it connect to pleasure and pain? Are uncomfortable or confused mental states computationally useful to us? To what extent can the concepts of musical consonance and dissonance map onto energy states in the brain?
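To see why the choice of scale matters for utilitarian calculations: if self-reported intensities are actually logarithmic, then a "7" is orders of magnitude worse than a "5," and summing or averaging the raw ratings badly understates extreme suffering. A toy sketch, where the base-10 assumption and the ratings themselves are invented for illustration:

```python
ratings = [2, 5, 7]  # hypothetical 0-10 self-reported pain ratings

# Treating the scale as linear, the three experiences look comparable:
linear_total = sum(ratings)  # 14

# Treating each step as a tenfold increase (log10 reports), the worst
# experience dominates the total almost entirely:
log_total = sum(10 ** r for r in ratings)
worst_share = 10 ** max(ratings) / log_total
assert worst_share > 0.99  # the single "7" accounts for >99% of the sum
```

Under the linear reading, the "7" contributes half the total badness; under the logarithmic reading, it contributes nearly all of it, which is the kind of shift in priorities the episode explores.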
What is "the precipice"? Which kinds of risks (natural or technological) pose the greatest threats to humanity specifically or to life on Earth generally in the near future? What other kinds of existential risks exist beyond mere extinction? What are the differences between catastrophic risks and existential risks? How serious is the threat of climate change on an existential scale? What are the most promising lines of research into the mitigation of existential risks? How should funds be distributed to various projects or organizations working on this front? What would a world with existential security look like? What is differential technological development? What is longtermism? Why should we care about what happens in the very far future?
What is 80,000 Hours, and why is it so important? Does doing the most good in the world require being completely selfless and altruistic? What are the career factors that contribute to impactfulness? How should people choose among the various problem areas on which they could work? What sorts of long-term AI outcomes are possible (besides merely apocalyptic scenarios), and why is it so important to get AI right? How much should we value future generations? How much should we be worried about catastrophic and/or existential risks? Has the 80,000 Hours organization shifted its emphasis over time to longer-term causes? How many resources should we devote to meta-research into discovering and rating the relative importance of various problems? How important is personal fit in considering a career?
How does GiveWell's approach to charity differ from other charitable organizations? Why does GiveWell list such a small number of recommended charities? How does GiveWell handle the fact that different moral frameworks measure causes differently? Why has GiveWell increased its preference for health-related causes over time? How does GiveWell weight QALYs and DALYs? How much does GiveWell rely on a priori moral philosophy versus people's actual moral intuitions? Why does GiveWell have such low levels of confidence in some of its most highly-recommended charities or interventions? What should someone do if they want to be more confident that their giving is actually having a positive impact? Why do expected values tend to drop as more information is gathered? How does GiveWell think about second-order effects? How much good does the median charity do? Why is it so hard to determine how impactful charities are? Many charities report on the effectiveness of individual projects, but why don't more of them report on their effectiveness overall as an organization? Venture capitalists often diversify their portfolios as much as possible because they know that, even though most startups will fail, one unicorn can repay their investments many times over. In a similar way, why doesn't GiveWell fund as many projects as possible rather than focusing on a few high performers? Why doesn't GiveWell recommend more animal charities? Does quantification sometimes go too far?
How can people be more effective in their altruism? Is it better for people to give to good causes in urgent situations or on a regular basis? What causes people to donate to less effective charities even when presented with evidence that other charities might be more effective? We can make geographically distant events seem salient locally by (for example) showing them on TV, but how can we make possible future events seem more salient? How much more effective are the most effective charities than the average? How do altruists avoid being exploited (in a game-theoretic sense)? What sorts of norms are common in the EA community?
What kinds of catastrophic risks could drastically impact global food supply or large-scale electricity supply? What kinds of strategies could help mitigate or recover from such outcomes? How can we plan for and incentivize cooperation in catastrophic scenarios? How can catastrophic and existential risks be communicated more effectively to the average person? What factors cause people to cooperate or not in disaster scenarios? Where should we be spending resources right now to prepare for catastrophe? Why does it seem that governments are largely uninterested in these questions?
Researchers in the Effective Altruism movement often view their work through a utilitarian lens, so why haven't they traditionally paid much attention to the psychological research into subjective wellbeing (i.e., people's self-reported levels of happiness, life satisfaction, feelings of purpose and meaning in life, etc.)? Are such subjective measures reliable and accurate? Or rather, which such measures are the most reliable and accurate? What are the pros and cons of using QALYs and DALYs to quantify wellbeing? Why is there sometimes a disconnect between the projected level of subjective wellbeing of a health condition and its actual level (e.g., some people can learn to manage and cope with "major" diseases, but some people with "minor" conditions like depression or anxiety might be in a constant state of agony)? What are some new and promising approaches to quantifying wellbeing? The EA movement typically uses the criteria of scale, neglectedness, and tractability for prioritizing cause areas; is that framework still relevant and useful? How do those criteria apply on a personal level? And how do those criteria taken together differ conceptually from cost-effectiveness? How effective are psychological interventions at improving subjective wellbeing? How well do such interventions work in different cultures? How can subjective wellbeing measures be improved? How can philosophers help us do good better?
The future of society
What is "long" history? Why don't historians usually focus on what happened before recorded human history? What (if anything) is special about agriculture when it comes to the development of civilization? How far back does human civilization go, and why should we care? Have humans always been gardeners? What factors cause civilizations to crumble or thrive? Should we reboot standardized tests and college admissions every few decades so that measures don't become targets? Which destructive factors are particularly salient to modern human civilization? Why is there such a disconnect between our intuition that progress is inevitable and our knowledge that virtually all civilizations have collapsed in the past? In other words, what makes us think that we'll succeed where others have failed? How does a functional social institution differ from a failing one? What is the "great founder" theory?
What is machine learning? What are neural networks? How can humans interpret the meaning or functionality of the various layers of a neural network? What is a transformer, and how does it build on the idea of a neural network? Does a transformer have a conceptual advantage over neural nets, or is a transformer basically the equivalent of neural nets plus a lot of computing power? Why have we started hearing so much about neural nets in just the last few years, even though they've existed conceptually for many decades? What kind of ML model is GPT-3? What learning sub-tasks are encapsulated in the process of learning how to autocomplete text? What is "few-shot" learning? What is the difference between GPT-2 and GPT-3? How big of a deal is GPT-3? Right now, GPT-3's responses are not guaranteed to contain true statements; is there a way to train future GPT or similar models to say only true things (or to indicate levels of confidence in the truthfulness of its statements)? Should people whose jobs revolve around writing or summarizing text be worried about being replaced by GPT-3? What are the relevant copyright issues related to text generation models? A website's "robots.txt" file or a "noindex" HTML attribute in its pages' meta tags tells web crawlers which content they can and cannot access; could a similar solution exist for writers, programmers, and others who want to limit or prevent their text from being used as training data for models like GPT-3? What are some of the scarier features of text generation models? What does the creation of models like GPT-3 tell us (if anything) about how and when we might create artificial general intelligence?
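The robots.txt convention mentioned above is simple enough to demonstrate directly with Python's built-in parser; the rules and bot name below are a made-up example, and (as the episode notes) nothing analogous currently governs the use of text as training data:

```python
from urllib import robotparser

# Parse a hypothetical robots.txt telling crawlers to stay out of /private/:
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# The parser answers "may this bot fetch this URL?" per the rules above:
assert rp.can_fetch("ExampleBot", "https://example.com/public/page")
assert not rp.can_fetch("ExampleBot", "https://example.com/private/page")
```

The mechanism is purely advisory, which is part of why the analogous "don't train on my text" proposal raised in the episode would depend on model builders choosing to honor it.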
What is progress? How do we (and should we) measure progress? What are the most important questions to ask in progress studies? What are the factors that lead to progress? Why has large-scale progress taken so long (i.e., why did we not see much progress until the Industrial Revolution)? Why did the Industrial Revolution, the Scientific Revolution, and the democratic revolutions all seem to start within a relatively short time of one another? How can we prevent progress from slowing down, stopping, or even reversing? What factors have contributed to the slowing of progress in the last 50 years? What's the state of progress in nuclear energy? What is the history of attitudes towards progress? And why is it important for people to believe that progress is good?
What are some of the challenges of defining utopia? What should a utopia look like? What are concrete versus sublime utopias? What are some of the failure modes related to various conceptions of utopia? Is it really that hard to create a shared, positive vision of the future? What is the value (or disvalue) of creating new people, especially in relation to the utopic or dystopic state of the world? What is "whole-hearted morality" versus "morality-as-taxes"? How can we encourage people to be more moral without harming them psychologically (e.g., by loading them down with guilt)? Which sorts of worldview changes are reversible? Where does clinging fit into the constellation of concepts like valuing, caring, envying, etc.? How does non-attachment differ from indifference? Is clinging always bad? Is philosophy making tangible progress as a field? Is philosophy's primary function to show us how our questions are confused rather than to give us direct answers to our questions? Has philosophy given us a clearer picture of what consciousness is or isn't?
How can we accelerate learning? Is spaced repetition the best way to absorb information over the long term? Do we always read non-fiction works with the goal of learning? What are some less common but perhaps more valuable types of information that can be put on flashcards? What sorts of things are worth remembering anyway? Why is it important to commit some ideas to memory when so much information is easily findable on the internet? What benefits are derived from being involved in all stages of a project pipeline from concept to execution (as opposed to being involved only in one part, like the research phase)? Why should more researchers be involved in para-academic projects? Where can one find funding for para-academic research?
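The spaced-repetition idea raised above is usually implemented with expanding review intervals: each successful recall pushes the next review further out, and a lapse resets the card. Here is a minimal Leitner-style sketch; the multiplier and schedule are invented for illustration and don't reproduce any particular app's algorithm:

```python
def next_interval(current_interval_days, recalled, multiplier=2.5):
    """Expand the review interval after a success; reset after a lapse.

    A simplified Leitner/SM-2-flavored rule: each successful recall
    multiplies the gap, so well-known cards are reviewed ever less often.
    """
    if recalled:
        return max(1, round(current_interval_days * multiplier))
    return 1  # forgot the card: start over with a one-day interval

# A card recalled successfully four times in a row:
interval = 1
schedule = []
for _ in range(4):
    interval = next_interval(interval, recalled=True)
    schedule.append(interval)
# schedule == [2, 5, 12, 30]: review gaps grow roughly geometrically
```

The geometric growth is the whole trick: a handful of reviews, spaced this way, can maintain a card for months at a marginal cost of seconds per day.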
How can you live your best life? What's a good definition of "wisdom"? What are some possible taxonomies of life outcomes? What are some low-hanging fruit in the realm of self-improvement? What are some useful behavior change frameworks and techniques?
How can we become better leaders? How can we give better feedback to others? How can we be better listeners? How can we give good advice? How do startups (or even existing companies) build great products? What sorts of things do experts actually know? When is it useful to poll customers for feedback?
What is risk-driven development? How should we weigh advice, best practices, and common sense in a domain? What makes some feedback loops better than others? What's the best way to take System 2 knowledge and convert it to System 1 intuition? What are forward-chaining and backward-chaining? When is it best to use one over the other? What are the advantages and disadvantages of centralization and decentralization?
Why might it be the case that "all propositions about real interest rates are wrong"? What, if anything, are most economists wrong about? Does political correctness affect what economists are willing to write about? What are the biggest open questions in economics right now? Is there too much math in economics? How has the loss of the assumption that humans are perfectly rational agents shaped economics? Is Tyler's worldview unusual? Should people hold opinions (even loosely) on topics about which they're relatively ignorant? Why is there "something wrong with everything" (according to Cowen's First Law)? How can we learn how to learn from those who offend us? What does it mean to be a mentor? What do we know and not know about success? What is lookism? Why is raising someone else's aspirations a high-return activity?
How is the economy like a differential equation? Can the economy grow indefinitely? Are there economic attractor states? Or are economic outcomes chaotic and/or extremely sensitive to certain variables? What should we know about progress in genetic engineering? Can you (and should you) do genetic engineering in your garage? What are some common mistakes people make when thinking about AI? Should we expect AI abilities to converge in some domains and diverge in others? Why do we sometimes collectively forget important ideas? Have we as a species grown wiser over the course of our history? How can we form high-trust communities on the internet? In the context of social media, is ease of access at cross-purposes with membership screening and/or costs, or is it possible to have both? What should we make of ephemeral communities that appear briefly, do something huge, and then disappear (like the WallStreetBets subreddit phenomenon)? What are the various types of misinformation being used in the US, Russia, China, and elsewhere?
Is it okay for anyone to have opinions about marginalized communities even if they're not a part of those communities? Do people in marginalized groups have special knowledge (especially tacit knowledge) about their groups that can't be known or experienced from the outside? To what extent can we know and empathize with others' experiences regardless of differences in race, socioeconomic status, gender, sexual orientation, etc.? Do oppression and discrimination tend to be caused more by active bigotry or by mere lack of care and awareness? What information (if any) does intersectionality fail to capture about people? Is describing someone intersectionally an end in itself, or is it just a way of correcting (or over-correcting) for the suppression of marginalized voices? Should ideas be discussed absent their context or implications (see: decoupling norms vs. contextualizing norms)? To what extent should we focus on individuals versus groups when attempting to fix inequities? Are individuals or groups responsible for redressing the atrocities of their ancestors? Should people be "canceled" for their views (including their past views, even if their current views are different)? To what extent is the shifting of moral ground around social justice issues unpredictable and/or disorienting? How can democratic societies balance the need to debate difficult ideas with the risk of giving reprehensible ideas a platform? Should rules about offensiveness be enforced from the top down (e.g., by a government, a school administration, a company's board of directors, or even parents)? Is offense only "in the eye of the beholder"?
What is cryonics? And how does it work? What do we know right now about reversing death? And what would we have to learn to make resurrection from a cryogenically frozen state feasible? How much does cryonics cost? What incentives would future people have for reviving a cryo-frozen person? How likely is it that a cryo-frozen person will be brought back in the future? Why do people (even pro-cryonics people) "cryocrastinate" and put off considering cryonics for a later time? What sorts of risks are involved in being frozen and later revived? What philosophical and ethical issues are at stake with cryonics? Would a revived person be able to integrate into a future society? Why is there a stigma around cryonics in some cultures?
Some other relevant people that I've already recorded with, but haven't yet released the episodes for, include Matt Goldenberg, Habiba Islam, Leah Edgerton, Katja Grace, Literal Banana, Eric Schwitzgebel, Buck Shlegeris, Cate Hall, Alene Anello, Peter Wildeford, Elizabeth Edwards-Appell, and Aaron Hamlin. Please let me know in the comments who else you think I should record with! :)