Supported by Rethink Priorities

This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

If you'd like to receive these summaries via email, you can subscribe here.

Podcast version: prefer your summaries in podcast form? A big thanks to Coleman Snell for producing these! Subscribe on your favorite podcast app by searching for 'Effective Altruism Forum Podcast'.

Author's note: this week is a double-week, with an increased karma bar of 70. We'll be back on the regular schedule next week. FTX-related posts are also now separated into their own section, which you can see at the end of the post.

 

Top / Curated Readings

Designed for those without the time to read all the summaries. Everything here is also within the relevant sections later on so feel free to skip if you’re planning to read it all. Posts are picked by the summarizer, and don't reflect the forum 'curated' section.

 

First FDA Cultured Meat Safety Approval

by Ben_West

Linkpost for this article. “In a major first, the U.S. Food and Drug Administration just offered its safety blessing to a cultivated meat product startup. It completed its first pre-market consultation with Upside Foods to examine human food made from the cultured cells of animals, and it concluded that it had ‘no further questions’ related to the way Upside is producing its chicken.”
 

 

Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest

by Jason Schukraft

Open Philanthropy will run an AI worldviews contest in early 2023. Prizes, judging, and other details will be different from the Future Fund competition, but they expect it will be easy to adapt entries for this contest.
 

 

Short Research Summary: Can insects feel pain? A review of the neural and behavioural evidence by Gibbons et al. 2022

by Meghan Barrett

A short summary of the review by Gibbons et al. (2022), published in Advances in Insect Physiology, which reviews over 350 scientific studies to assess the evidence for pain across six orders of insects. It finds strong or substantial evidence for pain in adult insects of five orders.

Trillions of insects are directly impacted by humans each year (farmed, managed, killed, etc.). Significant welfare concerns have been identified as a result of human activities; however, insect welfare is completely unregulated and infrequently researched.


 

EA Forum

Philosophy and Methodologies

The elephant in the bednet: the importance of philosophy when choosing between extending and improving lives

by MichaelPlant, JoelMcGuire, Samuel Dupret

Report by the Happier Lives Institute on how to compare the value of extending and improving lives, and implications for resource distribution.

By adjusting (A) the ‘badness’ of death and the relative value of deaths at different ages, and (B) the neutral point on the wellbeing scale (where a life is neither good nor bad), they show large differences in WELLBYs* per dollar for different global health charities. *A WELLBY is a one-point change in life satisfaction on a 0-10 scale, per person per year.

For instance, AMF is ~1.3x more cost-effective than StrongMinds in terms of WELLBYs if you assume the neutral point is <1/10 on the life satisfaction scale, and that we should prioritize the lives of the youngest. However, StrongMinds is ~12x more cost-effective than AMF if we assume a higher neutral point of ~5/10 on the life satisfaction scale. AMF cost-effectiveness also drops if we morally prioritize older children over infants.
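
As a toy illustration of why the neutral point matters so much (all numbers invented here, not HLI's model): in WELLBY terms, extending a life is worth roughly (average life satisfaction - neutral point) × years gained, so raising the neutral point can flip the sign of the value of averting a death.

```python
# Toy WELLBY arithmetic with invented numbers (not HLI's model).
def wellbys_from_extending_life(life_satisfaction: float,
                                neutral_point: float,
                                years_gained: float) -> float:
    """WELLBYs from averting a death: satisfaction above neutral, per year gained."""
    return (life_satisfaction - neutral_point) * years_gained

life_satisfaction = 4.5  # hypothetical average life satisfaction (0-10 scale)
years_gained = 60        # hypothetical years of life gained

for neutral_point in (0.5, 5.0):
    value = wellbys_from_extending_life(life_satisfaction, neutral_point, years_gained)
    print(f"neutral point {neutral_point}: {value:+.0f} WELLBYs per death averted")

# neutral point 0.5: +240 WELLBYs; neutral point 5.0: -30 WELLBYs.
# The same life-saving intervention flips from very valuable to
# net-negative, which is why the AMF vs StrongMinds ranking swings.
```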

 

Theories of Welfare and Welfare Range Estimates

by Bob Fischer

Summary from the Animal Advocacy Bi-Weekly Digest: "The third piece in the Moral Weights Project by Rethink Priorities. Since the Moral Weights Project assumes hedonism (i.e. that well-being consists entirely of positive and negative conscious experiences), this post explains how their welfare range estimates might change if they assumed another theory of welfare.

They argue that even though hedonic welfare might not be all of welfare, it’s likely to be a significant portion of it. They use the example of “Tortured Tim”, someone experiencing intense physical suffering, who is unlikely to have an overall positive life even with flourishing relationships, strong friendships, and growing knowledge. The end result is that they believe their welfare ranges would only change by a moderate amount (less than 10x) under a different theory of welfare, such as desire satisfaction or objective list theory."


 

Short Research Summary: Can insects feel pain? A review of the neural and behavioural evidence by Gibbons et al. 2022

by Meghan Barrett

A short summary of the review by Gibbons et al. (2022), published in Advances in Insect Physiology, which reviews over 350 scientific studies to assess the evidence for pain across six orders of insects. It finds strong or substantial evidence for pain in adult insects of five orders.

Trillions of insects are directly impacted by humans each year (farmed, managed, killed, etc.). Significant welfare concerns have been identified as a result of human activities; however, insect welfare is completely unregulated and infrequently researched.


 

Object Level Interventions / Reviews

Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt

by Jeremy

Linkpost and key takeaways for the paper in the post title. The paper argues that “current trends suggest that within a decade, tens of thousands of skilled individuals will be able to access the information required for them to single-handedly cause new pandemics” and discusses how to defend against this. Suggestions include:

Delay - secure and universal DNA synthesis screening, liability and insurance for catastrophic outcomes, and a pandemic test-ban treaty.

Detect - untargeted sequencing to detect exponentially spreading biological threats.

Defend - pandemic-proof PPE, durable and comfortable respirators, resilient supply chains, individualized early warning systems, and germicidal low-wavelength lighting.


 

Mass media interventions probably deserve more attention (Founders Pledge)

by Rosie_Bettle

Mass media campaigns promote behavior change via media programming (eg. radio, TV) but have not been prioritized within global health, in part due to mixed RCT evidence on their effectiveness at reducing mortality. The author uses power analyses to show that previous RCTs have been underpowered to detect plausible mortality effects. Drawing on alternative evidence, including RCTs of media’s effects on behavior change more broadly, the author argues mass media campaigns are a risky but highly promising area for philanthropic investment.
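
To see the underpowering problem concretely, here is a rough sketch (hypothetical rates, not the report's figures): detecting a 10% relative reduction in a 5% mortality rate at conventional thresholds (alpha = 0.05, 80% power) requires on the order of 14,000 participants per arm.

```python
# Rough power calculation with hypothetical rates (not the report's
# figures): sample size per arm to detect a 10% relative reduction
# in a 5% mortality rate with a two-sided test.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.05   # hypothetical control-group mortality rate
treated = 0.045   # hypothetical rate after a 10% relative reduction

effect = proportion_effectsize(baseline, treated)  # Cohen's h
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, ratio=1.0
)
print(f"~{n_per_arm:,.0f} participants needed per arm")  # ~14,000
```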


 

First FDA Cultured Meat Safety Approval

by Ben_West

Linkpost for this article. “In a major first, the U.S. Food and Drug Administration just offered its safety blessing to a cultivated meat product startup. It completed its first pre-market consultation with Upside Foods to examine human food made from the cultured cells of animals, and it concluded that it had ‘no further questions’ related to the way Upside is producing its chicken.”


 

Friendship Forever (new EA cause area?)

by rogersbacon1

In the 19th century, same-sex friendships were often considered the best relationships in one’s life outside of one’s parents. Photos from the time show clear comfort and affection between friends (in comparison to photos of men with their wives, which often depict more functional arrangements).

Close friendships are in significant decline, with one survey (N=2,019) finding 32% of Americans have 0-2 close friends. Smaller families and remote work may be contributors.

The author argues this area has been almost completely ignored by EA, despite close relationships being often regarded as one of life’s highest intrinsic goods. Possible interventions include ‘friendship benches’, intergenerational housing, or legally recognizing friendships similarly to how we do romantic relationships.


 

Disagreement with bio anchors that lead to shorter timelines

by mariushobbhahn

The author disagrees with some assumptions in the bio anchors report. After adjusting for these, their median estimate for when the compute needed to train transformative AI (TAI) will be available is 2036. (Note this is not an estimate of when AI could be dangerous.) The three largest changes were (a toy version of the anchor-style arithmetic follows the list):

  • Lowering the FLOP/s (floating point operations per second) needed for TAI compared to human FLOP/s
    • Because humans were ‘trained’ inefficiently, and may have needed less compute if we could learn from more data or use parallelization.
  • Lowering the doubling time of algorithmic progress
    • Because progress in transformers has been faster than in vision models, and the current model doesn’t capture some important components.
  • Changing the weighting of some anchors
    • The genome anchor and evolution anchor seem too heavily weighted: translating bytes in the genome into parameters in neural nets seems implausible to the author, SGD is more efficient than evolutionary algorithms, and ML systems can use human knowledge to ‘skip’ parts of evolution.
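
To make the shape of this calculation concrete, here is a toy sketch (all numbers invented, not the author's or the report's estimates): given an assumed FLOP requirement for TAI and a doubling time for affordable training compute, the arrival date falls out of a log ratio.

```python
# Toy bio-anchors-style arithmetic with invented numbers (not the
# author's estimates): years until TAI-scale training compute is
# affordable, given a FLOP requirement and a compute doubling time.
import math

flop_needed = 1e32         # hypothetical training FLOP required for TAI
flop_today = 1e25          # hypothetical largest training run today
doubling_time_years = 1.0  # hypothetical doubling time of affordable
                           # compute (hardware, spending, and algorithms)

doublings = math.log2(flop_needed / flop_today)
years = doublings * doubling_time_years
print(f"~{years:.0f} years until TAI-scale compute")  # ~23 years here
```

Adjustments like the three above shift the assumed FLOP requirement and the effective doubling time, which is how the post's median lands on 2036.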


 

Assessing the case for population growth as a priority

by Charlotte, LuisMota

“Recently, population growth as a cause area has been receiving more attention (MacAskill, 2022, PWI, Jones, 2022a, Bricker and Ibbitson, 2019)."

The authors consider three value propositions of population growth, and argue that it falls short of being a top cause area under the longtermism paradigm.

  1. Long-run population size is likely determined by factors apart from biological population growth rates. (eg. biological reproduction will likely be replaced in large futures)
  2. Population size may impact economic growth. This is the most compelling case, but its effects are still orders of magnitude smaller than top cause areas.
  3. Population size has negligible effects on humanity’s resilience to catastrophes.


 

Does putting kids in school now put money in their pockets later? Revisiting a natural experiment in Indonesia

by droodman

Reanalysis of a 2001 study by Esther Duflo on the effects of the 1970s primary school expansion in Indonesia. The original study finds the expansion caused boys to attend school for an average of 0.25-0.4 additional years over their childhood, and boosted their wages as young adults by 6.8-10.6% per extra year of schooling.

The reanalysis includes some technical changes, fresh tests, and thoughts on what could be generating the patterns in the data. It concludes that the additional schools probably led to more kids finishing primary school, but didn’t necessarily lift wages in adulthood.


 

Giving Recommendations

You can now discuss this topic in the Effective Giving subforum.

 

Don’t just give well, give WELLBYs: HLI’s 2022 charity recommendation

by MichaelPlant

Happier Lives Institute recommends StrongMinds as the most cost-effective intervention they know of for increasing subjective well-being, after comparing it to GiveWell’s top charities. Next, they plan to analyze a broader range of interventions through this well-being lens. They also highlight two opportunities to get donations to StrongMinds matched before Dec 31st.


 

Announcing our 2022 charity recommendations

by Animal Charity Evaluators

Animal Charity Evaluators (ACE) evaluated 12 animal advocacy organisations in 2022. The Good Food Institute became a Top recommendation. The Humane League, Wild Animal Initiative, and Faunalytics are carried over as Top recommendations from 2021 and will be re-evaluated next year.

They also have 11 ‘stand-out’ recommendations, 3 of which are new this year. The post includes overviews of all top and standout charities, and a link to comprehensive reviews.


 

Our recommendations for giving in 2022

by GiveWell

GiveWell’s new top recommendation is their ‘All Grants Fund’, which is allocated to any need that meets their bar of 10x the cost-effectiveness of cash transfers. They are funding-constrained, with room for $900M in funding and a target of $600M that they are unsure they will meet. They describe where they expect this funding to go and give examples of its impact. They also celebrate that funding directed by GiveWell from their inception through the end of 2022 will likely save at least 200,000 lives.


 

"Evaluating the evaluators": GWWC's research direction

by SjirH, Giving What We Can, Michael Townsend

There are over 40 organisations / projects that try to identify effective charities or fundraise for them, but little information on how to choose which charity evaluators to rely on. The new GWWC research team will focus on connecting evaluators and donors/fundraisers in the effective giving ecosystem in a more effective (higher-quality recommendations) and efficient (lower transaction costs) way. Questions and feedback on these plans are welcomed.


 

Opportunities

Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest

by Jason Schukraft

Open Philanthropy will run an AI worldviews contest in early 2023. Prizes, judging, and other details will be different from the Future Fund competition, but they expect it will be easy to adapt entries for this contest.


 

Stop Thinking about FTX. Think About Getting Zika Instead.

by jeberts, Daphne Hansell

Human challenge trials can significantly speed up vaccine deployment, but recruitment is often a bottleneck. Johns Hopkins University is currently recruiting for a Zika human challenge trial in the DC-Baltimore area, open to females aged 18-40 - sign up for screening here. A 2021 review estimated Zika causes 10K - 80K DALYs per year. The post overviews the personal risks and benefits of participation, as well as the public health and biosecurity benefits.

For those interested who don’t fall in the target group for this trial, upcoming trials elsewhere on other infectious diseases (eg. malaria) are also available here.


 

Some research ideas in forecasting

by Jaime Sevilla

The author has been involved in forecasting research, and accumulated a list of forecasting research projects which they share here. They likely won’t get to these for months / years and would love for others to take them on.

Project ideas include comparing aggregation and base rate prediction methods, making accessible intros to key theorems, literature reviews of related concepts, theoretical study, and improving existing aggregation methods. They vary substantially in difficulty.


 

AI Forecasting Research Ideas

by Jaime Sevilla, lennart, anson

Linked doc of interesting / valuable AI forecasting research ideas, primarily prepared by Epoch employees. These include historical analysis (eg. what have been the major algorithmic breakthroughs? how are chips replaced over time?), extending or reviewing key papers / theories (eg. bio anchors, brain emulation, the predictability of AI progress on a task), and others. Most could be tackled by research interns or students.


 

Want advice on management/organization-building?

by Ben_Kuhn

Ben Kuhn, CTO of Wave, who helped scale the company to ~2K employees, is offering 1-1 advice to leaders of organisations experiencing, or expecting to experience, substantial growth. This could include input on hiring, people management, and organizational structure.

 

Community & Media

Take the EA Forum Survey and Help us Improve the Forum

by Sharang Phadke, Ryan Fugate

If you use the forum (even rarely, without an account, or only via these summaries) consider taking this 10 minute survey to help the Forum team understand user needs and adjust their 2023 strategy.


 

Rethink Priorities’ 2022 Impact, 2023 Strategy, and Funding Gaps

by kierangreig

In 2022, Rethink Priorities (RP) worked on ~60 different research projects across animal welfare, global health and development, longtermism (including AI governance and safety), surveys, and EA movement research. Other efforts included incubating projects, message testing, and running coordination forums. The post covers notable accomplishments in each department. Some research is pioneering in its area and may shift huge amounts of funding eg. the Moral Weights Project, which studies different species' capacities for welfare.

In 2023 RP intends to focus on insights to increase the effectiveness of others’ efforts on global priorities, driving progress on promising ways to address global priorities (eg. via accelerating priority projects), strengthening reputation and relations, and scaling significantly to increase impact. They also intend to launch a Worldview Investigations team. Vision, values, area-specific strategies, and reasons for cause area diversification are discussed in the post.

With the discontinuation of the FTX Future Fund, there is a need for new donors. The most urgent funding need is for unrestricted donations, which give flexibility to react to new opportunities. Funding gaps by area and scenario (no, moderate, or high growth) are shared, in addition to reasons why this funding is likely to be high impact.


 

Jeff Bezos announces donation plans (in response to question)

by david_reinstein

Linkpost for this article. In an interview with CNN, Jeff Bezos stated he plans to give away the majority of his $124 billion net worth during his lifetime - primarily to fighting climate change (~$10B is already committed to the Bezos Earth Fund) and supporting people who can unify humanity in the face of social and political divisions. He also mentions trying to do so in a leveraged way, thinking about it carefully, and avoiding ineffective methods.


 

Introducing the Animal Advocacy Bi-Weekly Digest (Nov 4 - Nov 18)

by James Ozden, Sharang Phadke

This new digest will collate the best research, news, and updates in animal advocacy on a bi-weekly cadence. To start, it’ll run as a 3-month experiment and focus only on content posted on the EA Forum. Sign up for the emails here, or read this post for summaries of the top posts from Nov 4 - Nov 18.


 

Eirik Mofoss named "Young leader of the year" in Norway

by Jorgen_Ljones

Author’s summary (lightly edited): Eirik Mofoss was recently named “Young leader of the year” by the largest business newspaper in Norway. He is the co-founder of the Norwegian effective altruism community and—as a member of Giving What We Can—donates 20 percent of his income. The award provided lots of valuable attention and traffic to EA Norway and Gi Effektivt, our donation platform. We wanted to share and celebrate this, not only as a recognition of Eirik, but also for all the Norwegian EAs who made this possible.


 

Brainstorming ways to make EA safer and more inclusive

by richard_ngo

Discussion thread (including anonymous feedback form and its results) for how to make EA spaces safer, more comfortable, and more inclusive for women.

 

EA Organization Updates: November 2022

by Lizka

Monthly summary post including job listings and short updates on successes, recently released work, and plans from 20+ EA organisations.


 

Introducing new leadership in Animal Charity Evaluators’ Research team

by Animal Charity Evaluators

Introduces Elisabeth Ormandy, new Director of Research, and Vince Mak, new Evaluations Program Manager.

Animal Charity Evaluators (ACE) will be publishing their updated list of recommended charities next week. They intend to step up transparency and interaction with forum users - publishing blog posts on their evaluation criteria and what has changed, following up on existing suggestions for improvements (some already implemented), and inviting further feedback on their evaluation methods as they update them over the coming months.


 

Training for Good - Update & Plans for 2023

by Cillian Crosson, Training for Good, SteveThompson, Jan-WillemvanPutten

Training for Good has narrowed their focus to supporting altruistic and talented early-career professionals into the first stage of high-impact careers that are unusually difficult to enter. For Sep 2022 - Aug 2023, they will run an EU Tech Policy Fellowship, a Tarbell Fellowship (journalism), and a third program still under development.

This decision follows a first year spent experimenting with 7 different programmes, 6 of which are now discontinued. The most promising was the EU Tech Policy Fellowship, which successfully placed 7 fellows into relevant European think tanks focused on emerging tech policy.


 

Announcing the first issue of Asterisk

by Clara Collier

Author’s summary: “Asterisk is a new quarterly journal of clear writing and clear thinking about things that matter.” The first issue is out now.


 

On EA messaging - being a doctor in a poorer country

by Luke Eure

As EA attracts people from all over the world, we should be more careful of “western-as-default” messaging. An example is the ‘don’t be a doctor if you want to help people’ advice - which applies much less in countries that lack doctors, and where other opportunities are more limited.


 

Review: What We Owe The Future

by Kelsey Piper

Linkpost to the author’s review of What We Owe the Future, published in Asterisk magazine. Summary from Fermi–Dirac Distribution in the comments: “In What We Owe the Future, MacAskill agrees with other longtermists about the moral importance of the long-term future, but disagrees with most of them about how best to affect it. Relative to other longtermists, MacAskill thinks that affecting societal values is more important and preventing AI-triggered extinction is less important. Also, MacAskill’s recommendations for how to influence the long-term future seem to have been researched less thoroughly than other parts of the book.”


 

A socialist's view on liberal progressive criticisms of EA

by freedomandutility

The author outlines liberal progressive criticisms of EA that they, as a socialist, disagree with. The responses to the criticisms are indented below them.

  1. EA explicitly prioritizes some issues over global health or prioritizes different things to them in general.
    1. Liberal progressives implicitly prioritize when they choose where to focus attention, and neglectedness as a criterion means EA’s prioritization will usually differ from other movements’.
  2. EA doesn’t embrace localism (local resources used for local problems).
    1. This would exacerbate inequality and devalue the lives of foreigners. International wealth inequality is also bigger than most critics realize.
  3. EA isn’t liberal progressive, so it must be conservative. Also see associations with Peter Thiel and Elon Musk.
    1. Not the case, and guilt by association is generally a bad argument.
  4. EA is white saviorism.
    1. Some see localism as a solution to this, but the author thinks EA style evidence-based development is a better solution.


 

Didn’t Summarize

Where are you donating this year, and why? (Open thread) by Lizka, MichaelA

AMA: Sean Mayberry, Founder & CEO of StrongMinds by Sean Mayberry (questions will be answered on November 28th)

Free Cloud Automation for your EA Org by JaimeRV, Georg Wind, VPetukhov

 

LW Forum

Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue)

by Jacy Reese Anthis

Cicero is the first AI agent to achieve human-level performance in Diplomacy, a strategy game that emphasizes natural language negotiation and tactical coordination between seven players. Cicero integrates a language model with planning and reinforcement learning algorithms by inferring players' beliefs and intentions from its conversations and generating dialogue in pursuit of its plans. Across 40 games of an anonymous online Diplomacy league, Cicero ranked in the top 10% of participants who played more than one game.


 

When AI solves a game, focus on the game's mechanics, not its theme.

by strawberry calm

When AI solves a game, people often focus on its theme rather than its mechanics. For instance, Diplomacy has a war theme, but its mechanics could apply just as easily to gardeners negotiating which plants to plant. The mechanics determine which other domains the result can transfer to. The author gives a list of mechanics to consider, including whether and how players can communicate, how random the environment is, and whether the game is cooperative or adversarial.


 

Conjecture: a retrospective after 8 months of work

by Connor Leahy, Sid Black, Gabriel Alfour, Chris Scammell

Conjecture formed in March 2022. Since then they’ve built infrastructure to deploy large language models and do bespoke interpretability research, identified polytopes as a potentially fundamental unit of neural networks (as opposed to neurons), written the popular post Simulators on a theoretical framing for understanding GPT-like models, and run a pilot of an incubator for independent alignment researchers, among other activities covered in the post.

The authors believe the organization is much stronger now, but that there was no meaningful progress on the alignment problem. After reflection, they’re narrowing their research agenda to areas with the most alignment potential, shifting from deep dives to faster OODA (observe-orient-decide-act) loops, publishing more quickly (not spending time on polish), and making more time for coordination efforts.


 

What I Learned Running Refine

by adamShimi

Refine was an incubator run by Conjecture that aimed to create conceptual alignment researchers with their own radically different agendas (vs. established approaches). The first cohort finished a few weeks ago. No new cohorts are planned, primarily because SERI MATS already covers the area (and is open to suggestions), and other focus areas of the Conjecture epistemology team are more fundamental and neglected.

The program successfully helped participants build a deep model of the alignment problem, but only 2 of 5 are now working on their own research agendas, and those use fairly established approaches. The author suggests changes to the populations advertised to, the selection criteria, and the program itself, to focus more on radical ideas.


 

Planes are still decades away from displacing most bird jobs

by guzey

Uses planes’ inability to accomplish bird-toddler-level tasks (eg. flying without refueling, or ejecting eggs out of nests) as an analogy to arguments that AI won’t be able to fundamentally change the world if it can’t accomplish all human-toddler-level tasks.


 

Current themes in mechanistic interpretability research

by Lee Sharkey, Sid Black, beren

Themes of mechanistic interpretability research, summarized from discussion with several researchers. Four themes are discussed:

  1. Object-level research topics eg. solving superposition, describing learning dynamics in terms of circuits, deep learning theory questions, and automating mechanistic interpretability
  2. Research practices and tools eg. study simpler models, study model systems in depth, or approaches grounded in the theory of causality
  3. Field building and research coordination eg. hiring independent researchers, open source tooling, skill-building programmes
  4. Theories of impact


 

Results from the interpretability hackathon

by Esben Kran, Neel Nanda

25 projects were submitted by ~70 people, with positive feedback on the experience. The four winning projects were:

  1. An algorithm to automatically make the activations of a neuron in a Transformer much more interpretable.
  2. Backup name mover heads from “Interpretability in the Wild” themselves have backup heads, and all of these are robust to the ablation distribution.
  3. The specificity benchmark in the ROME and MEMIT memory editing papers does not represent specificity well. A simple modulation shows that factual association editing bleeds into related texts, representing "loud facts".
  4. Applying TCAV to an RL agent playing Connect Four, the agent’s neural activations can be compared to the provably optimal solution - a pilot for comparing learned activations to human-made solutions more generally.

     

Will we run out of ML data? Evidence from projecting dataset size trends

by Pablo Villalobos

Based on trends in dataset size and estimates of the total stock of available unlabeled data, the author estimates that we will have exhausted the stock of low-quality language data by 2030 to 2050, high-quality language data before 2026, and vision data by 2030 to 2060. This might slow down ML progress.


 

AI will change the world, but won’t take it over by playing “3-dimensional chess”.

by boazbarak, benedelman

Systems and agents can have short-term goals (eg. build a piece of software, set pricing to maximize revenue, create an artwork) and/or long-term goals (eg. high-level strategy). The authors argue that there are diminishing returns on information processing with longer time horizons, and there is a ‘sweet spot’ of a not-too-long horizon in which AI has the biggest comparative advantage. For instance, they expect AI engineers to dominate human engineers, but not for AI CEOs to dominate human CEOs (particularly if the human is assisted by short-term AIs). 

This makes the “loss of control” scenario where AI systems act in pursuit of long-term goals not aligned to humanity’s interests less likely, and suggests changes to where we focus attention within AI Safety. The authors lay out six claims that underlie these beliefs, and the evidence for them.


 

ARC paper: Formalizing the presumption of independence

by Erik Jenner

Linkpost for Alignment Research Center’s latest report. The report is about finding good formalizations of “heuristic arguments”, i.e. arguments that don’t have formal proofs. It briefly mentions alignment: heuristic arguments could help us better estimate the probability of rare failures or elicit latent knowledge. The post author suggests it might be a useful read for those working on formal verification, ELK, or conceptual interpretability research.

 

 

Tyranny of the Epistemic Majority

by Scott Garrabrant

Kelly betting means maximizing the expected logarithm of your wealth. When deciding how to allocate resources across different scenarios, you can pretend each outcome is a different version of you, owning a proportion of your resources equal to your probability that that version happens. Eg. if Kelly believes a coin has a 90% chance of coming up tails, and a correct bet is doubled while an incorrect bet is lost, they’ll bet 80 dollars (of 100) on tails (90 dollars on tails, partially nullified by 10 dollars on heads).

This post explains Kelly betting via different scenarios, how it models Bayesian updating, how to adjust when payouts aren’t equal, the link to proportional representation, and more risk-averse versions of Kelly betting.
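
As a minimal sketch of the coin example above (assuming the double-or-nothing payout described): maximizing expected log wealth over the bet fraction recovers the Kelly fraction f* = 2p - 1, i.e. betting $80 of $100 at p = 0.9.

```python
# Numerically recover the Kelly fraction for the 90%-tails coin example:
# bet fraction f of wealth; a correct bet is doubled, an incorrect bet lost.
import numpy as np

def expected_log_wealth(f: float, p: float, wealth: float = 100.0) -> float:
    """Expected log of final wealth when betting fraction f on tails."""
    win = wealth * (1 + f)   # tails: the bet is doubled
    lose = wealth * (1 - f)  # heads: the bet is lost
    return p * np.log(win) + (1 - p) * np.log(lose)

p = 0.9  # believed probability of tails
fractions = np.linspace(0, 0.99, 1000)
best = max(fractions, key=lambda f: expected_log_wealth(f, p))
print(f"optimal bet fraction: {best:.2f}")   # ~0.80, i.e. $80 of $100
print(f"Kelly formula 2p - 1:  {2 * p - 1:.2f}")
```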


 

Elastic Productivity Tools

by Simon Berens

The author finds the most effective productivity tools for them have some elasticity. Eg. an inelastic tool is Blocklist - there is no way to view a blocked site other than disabling the tool entirely. An elastic version would let you visit a blocked site only after staring at a blank screen for a minute.


 

Other

LW Beta Feature: Side-Comments

by jimrandomh

Side-comments can now be turned on in your user settings. Comments are automatically placed alongside the relevant part of the post, matched by the block quotes in regular comments.
 

Didn’t Summarize

Here's the exit. by Valentine

The Geometric Expectation by Scott Garrabrant

 

 

FTX-Related Posts

FTX filed for bankruptcy on Nov 11th. For more background on what happened, see this FAQ by Hamish Doodles.

A collection of support and resources exists here, including funding opportunities, job matching, information on legal concerns, and advice on responding to journalists.

The below categorizes the 70+ karma FTX-related posts of the past 2 weeks, including some short summaries of posts or groups of posts:
 

Summaries / compilations

Sadly, FTX by Zvi

Covers what happened during and before the FTX collapse, other people’s thoughts and explanations, media coverage, and some of the key lessons and suggestions put forward. Includes many links to, summaries of, and commentary on other posts.

FTX FAQ by Hamish Doodles

FAQ on the FTX situation, written on 13th November.

What Happened at Alameda Research by Jonathan Yan

Linkpost for a compilation of known information on FTX / Alameda from public & private sources.
 

Media

NY Times on the FTX implosion's impact on EA by AllAmericanBreakfast

The author thinks the article is fair and straightforward about EA, and not overly critical.

Kelsey Piper's recent interview of SBF by Agustín Covarrubias

Twitter interview between Kelsey Piper of Vox’s Future Perfect and SBF.

[Linkpost] Sam Harris on "The Fall of Sam Bankman-Fried" by michel

20-minute podcast by Sam Harris. He had previously had SBF on his show and did not suspect any wrongdoing. He defends EA principles (as distinct from the EA community) in this situation.

Effective Altruism: Not as bad as you think by James Ozden

Link to James’ blog post, written for a non-EA audience to counter misleading opinion pieces.

A Letter to the Bulletin of Atomic Scientists by John G. Halstead

A letter addressing Emile Torres’ latest piece criticizing EA, calling out that a false claim was published even after it had been disproved.

Media attention on EA (again) by Julia_Wise

Short list of what to expect and advice as media attention ramps up.

Clarifications on diminishing returns and risk aversion in giving by Robert_Wiblin

Clarifications to an episode description of an interview with SBF in April, which explained SBF’s views on risk aversion and expected value in a confusing way.

 

Resources

Open Phil is seeking applications from grantees impacted by recent events by Bastian_Stern

For FTX grantees whose funding was affected by recent events. 

Announcing Nonlinear Emergency Funding by Kat Woods, Emerson Spartz, Drew Spartz

For FTX grantees where <$10K of bridge funding would be of substantial help.

AI Safety Microgrant Round by Chris Leong, Damola Morenikeji, David_Kristoffersson

Up to $2K USD grants, with total available funding of $6K.

Effective Peer Support Network in FTX crisis (Update) by Emily, Inga

Includes a table of supporters you can contact for free, as well as a peer support network slack.

Thoughts on legal concerns surrounding the FTX situation by Molly

Thoughts on legal concerns surrounding the FTX situation: document preservation and communications by Molly

The above two posts share thoughts from Open Phil’s managing counsel on the likelihood of clawbacks, the likelihood of your documents becoming public, and the factors that affect both.

 

Organization and personal statements

Why you’re not hearing as much from EA orgs as you’d like by Shakeel Hashim

The relative silence on FTX/SBF is likely the result of sound legal advice by Tyler Whitmer

The above two posts suggest we should expect few statements on FTX, primarily because of legal implications (other reasons include time costs and lack of information).

Some important questions for the EA Leadership by Gideon Futerman

My takes on the FTX situation will (mostly) be cold, not hot by Holden Karnofsky

A personal statement on FTX by William_MacAskill

Rethink Priorities’ Leadership Statement on the FTX situation by abrahamrowe, Peter Wildeford, Marcus_A_Davis

 

Proposals

The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance by Fods12

Argues the FTX collapse is part of a broader pattern of governance failures within EA, and suggests better norms around accountability, consideration of stakeholders, conflicts of interest, decision-making procedures, and power dynamics.

CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX by Tyrone-Jay Barugh

EA should blurt by RobBensinger

Argues that the processes required to catch bad actors are often similar to those for correcting innocent errors by good actors; therefore, we should ‘blurt out objections’ whenever something seems false.

EA is a global community - but should it be? by Davidmanheim

Suggests considering whether EA should be a community, or just a philosophy / plan of action.

What might FTX mean for effective giving and EA funding by Jack Lewars

Suggests more focus on funding diversity, always considering optics, and avoiding oversimplifying the funding situation (eg. “EA is overfunded”).

 

Meta: how we should discuss the situation

Proposals for reform should come with detailed stories by Eric Neyman

Proposals should be realistic, have benefits that outweigh their costs, and come with a plausible story for how they could have led to better outcomes in the FTX situation.

The FTX Situation: Wait for more information before proposing solutions by D0TheMath

Before proposing solutions or takeaways, EA should discuss the object-level events and gather more information, so we’re working from an accurate narrative of what the problems to solve are.

In favour of compassion, and against bandwagons of outrage by Emrik

Argues EA should be compassionate and avoid bandwagons of outrage / instincts of mob justice, while still condemning unethical behavior.

Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely. by GoodEAGoneBad

Argues that EA at its best is a supportive community that learns together, and recently there has been too much airing of grievances with EA only weakly related to the FTX situation.

Wrong lessons from the FTX catastrophe by burner

These include assuming that ambition or earning to give is a mistake, and updating too heavily on pieces from longtime EA critics or on philosophical concepts of ethics.
 

Could EA have prevented this?

These posts discuss whether EA should take any blame for the FTX situation, whether issues could have been noticed ahead of time, and whether SBF’s motives were driven by EA.

If Professional Investors Missed This… by Jeff Kaufman

How could we have avoided this? by Nathan Young

Who's at fault for FTX's wrongdoing by EliezerYudkowsky

Noting an unsubstantiated belief about the FTX disaster by Yitz

SBF, extreme risk-taking, expected value, and effective altruism by vipulnaik
 

Other: reactions, experiences, and advice

Trying to keep my head on straight by ChanaMessinger

Some feelings, and what’s keeping me going by Michelle_Hutchinson

Hubris and coldness within EA (my experience) by James Gough

Selective truth-telling: concerns about EA leadership communication. by tcelferact

A long-termist perspective on EA's current PR crisis by Geoffrey Miller

Moderator Appreciation Thread by Ben_West
