This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

If you'd like to receive these summaries via email, you can subscribe here.

Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!

 

Philosophy and Methodologies

Who regulates the regulators? We need to go beyond the review-and-approval paradigm

by jasoncrawford

Linkpost for this blog post.

Institutional Review Boards (IRBs) were put in place to review the ethics of medical trials, and initially worked well. However, after a study participant's death, they became more stringent and over-reached (eg. requiring heart attack study participants to read and sign long consent forms during a heart attack). A similar pattern occurred with the FDA, NEPA and NRC. This is due to lopsided incentives - regulators are blamed for anything that goes wrong, but neither blamed nor rewarded for how much they slow down or speed up progress. It's also harder to remove regulations than to add them. The same pattern can be seen as corporations grow, eg. Google is now very risk-averse and can require 15+ approvals for minor changes.

The author believes this is evidence the review-and-approval model is broken, and we need better ways to mitigate risk and create safety (eg. liability laws).
 

How much do you believe your results?

by Eric Neyman

The performance of an intervention in a trial or study is a combination of its actual effect and random noise. This means that when comparing multiple interventions, you should expect the top-performing ones to be a combination of good and lucky, and therefore discount for the luck portion (eg. if a study estimates 4 lives saved per $X, you might expect closer to 2). The author suggests keeping this in mind when considering a study, and working hard to reduce the noise in your measurements when conducting one (eg. by increasing sample size).

A top comment by Karthik Tadepalli notes that these results depend on whether the true spread of intervention quality is the same order of magnitude as the spread of experimental noise. If intervention quality is instead fat-tailed, the noise becomes negligible by comparison and we don't need to discount much.
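A minimal simulation of this effect (a sketch only, using hypothetical numbers: 100 interventions whose true effects and measurement noise are both drawn from standard normal distributions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 interventions whose true effects and experimental
# noise are both standard normal, i.e. of similar spread.
n = 100
true_effect = rng.normal(0, 1, n)   # actual quality of each intervention
noise = rng.normal(0, 1, n)         # random error in the study's estimate
measured = true_effect + noise      # what the study reports

best = np.argmax(measured)
print(f"Top performer's measured effect: {measured[best]:.2f}")
print(f"Top performer's true effect:     {true_effect[best]:.2f}")
# The true effect is typically well below the measured one: the "winner" is
# partly good and partly lucky, so its reported result should be discounted.

# If true quality is fat-tailed (per the comment above), noise of this size
# matters much less and little discounting is needed.
true_fat = rng.standard_cauchy(n)
measured_fat = true_fat + rng.normal(0, 1, n)
best_fat = np.argmax(measured_fat)
print(f"Fat-tailed case - measured: {measured_fat[best_fat]:.2f}, "
      f"true: {true_fat[best_fat]:.2f}")
```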
 

Object Level Interventions / Reviews

AI

[Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I.

by Rockwell

Linkpost for this article, which covers an announcement by The White House on 4th May about its new initiatives aimed at AI risk (factsheet here).

These include:

  • $140M in funding to launch seven new National AI Research Institutes (bringing the total to 25). These aim to “pursue transformative AI advances that [...] serve the public good”.
  • A pledge to release draft guidelines for government agencies to ensure their use of AI safeguards “the American people’s rights and safety”. These will be open for public comment this summer.
  • Several AI companies, including Anthropic, Google, Microsoft, and OpenAI, agreed to participate in a public evaluation of their AI systems at DEF CON 31.

 

[Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

by Darius1

Linkpost for this article, which shares that neural networks pioneer Geoffrey Hinton has left Google in order to be able to “talk about the impacts of AI without considering how this impacts Google”. He notes that while Google has been responsible, the tech giants are “locked in a competition that might be impossible to stop” and “will not stop without [...] global regulation”. He is “worried that future versions of the technology pose a threat to humanity” and believes AI smarter than humans is coming sooner than he previously thought (his earlier estimate was 30-50 years or more away).
 

AGI safety career advice

by richard_ngo

Career advice the author commonly gives to those interested in AGI Safety:

  1. To have a big impact, you need a big lever (eg. AGI Safety). How you pull this lever (research, eng, ops, comms etc.) should be based primarily on personal fit.
    1. Try to find fast feedback loops and get hands on to test your fit.
    2. It’s a young field - if you notice something important not happening, try doing it.
  2. If you’re interested in technical alignment research
    1. Prioritize finding a mentor. Consider programs like MLAB and ARENA (more options in the post).
    2. Get hands-on experience quickly, including with neural nets.
    3. Scalable oversight, mechanistic interpretability, and alignment theory are some promising directions. See post for more details and other possible topics.
  3. If you’re interested in governance work
    1. Consider: governance research, lab governance, or policy jobs.
    2. Governance research: become an expert, focus on proposals over analysis. 
    3. Labs: are often amenable to concrete proposals that don’t strongly trade off against their capabilities work. Particularly good fit for people-oriented people with corporate experience.
    4. Policy: see others' advice such as here, then consider how to do it faster.
    5. The post also includes a list of governance topics.

 

How MATS addresses “mass movement building” concerns

by Ryan Kidd

MATS is a program which aims to find and train talented individuals to work on AI alignment. They use this post to address some objections to this approach:

  1. Not enough jobs / funding for all alumni to get hired.
    1.  Some alumni projects are attracting funding and hiring further researchers.
    2. They expect both funding and the number of organizations to grow.
    3. Alumni who return to academia or industry can still be impactful (now or later).
  2. The program gets more people involved in AI/ML, and therefore potentially accelerates capabilities and AI hype.
    1. Most of their participants are PhD / Master's students in related fields (only 10% are undergrads), so would probably have been involved regardless.
    2. Their outreach and selection process focuses on AI risk, and the program is made intentionally less attractive than AI industry programs.
  3. Participants might defer to their mentors, decreasing average epistemic integrity.
    1. Scholars are encouraged to own their projects and not unnecessarily defer. They're required to submit a plan detailing the threat models and theory of change they wish to tackle.
    2. They encourage an atmosphere of friendly disagreement and academic rigor.

 

The costs of caution

by Kelsey Piper

The author thinks we should be moving slower on developing powerful AI. However, they also believe a strong objection to this is that AI systems could speed up scientific and economic progress that saves and improves lives; delaying AI therefore costs those lives.

 

Clarifying and predicting AGI

by richard_ngo

As we get closer to AGI, it becomes less appropriate to treat it as a binary threshold. The author suggests a framework where a system is ‘t-AGI’ if, on most cognitive tasks, it beats most human experts who are given time t to perform that task. Eg. a 1-second AGI should beat humans at tasks like basic physics intuitions and recognizing objects. A 1-month AGI would need to beat them at tasks like carrying out medium-term plans (eg. founding a startup) or supervising large projects. The author makes some predictions for 2025 using this framework.

 

Discussion about AI Safety funding (FB transcript)

by Akash

Summary of a Facebook discussion about Nonlinear's new AI safety funding program.

Discussion centered around whether having more individual funders increases the likelihood of missing downside risks:

  • Claire Zabel notes that in their grantmaking experience, a substantial fraction of rejected applications in the longtermist space are harmful in expectation.
  • Caleb Parikh from EA funds also guesses something like this is the case and is interested in seeing examples of good AI projects failing to get funding. (To which Thomas Larsen responds with things like funding CAIS more, funding another evals org, and increasing alignment researcher salaries).
  • Kat Woods (Nonlinear) mentions that they'll create a discussion forum where people can discuss the downside risks of specific applications, that big funders can also make these mistakes, and that you should need high confidence that something is harmful before preventing others from even considering it.
  • Akash mentions the cost of missed opportunities from barriers to applying, and the role of active grantmaking and lowering barriers to get more people to apply in the first place.

     

Other Existential Risks (eg. Bio, Nuclear)

First clean water, now clean air

by finm

In 1858, the stink from London's River Thames, combined with the emerging germ theory of disease, spurred the creation of a modern sewage system to ensure clean drinking water. A similar story unfolded nearly everywhere in the developed world, which the author estimates has saved at least 130 million lives since 1973 alone.

The author suggests it's now time to do the same for air. Unclean air has major costs:

  • The US spends double-digit billions on direct healthcare costs and foregone wages because of airborne diseases like flu.
  • Some studies show double-digit percentage increases in productivity from getting rid of CO2-rich air.
  • It increases the risk of a catastrophic pandemic with airborne transmission pathways.

Currently almost nowhere adequately treats and monitors air. Possible interventions include:

  • Technologies that either block or slow the spread of pathogens (eg. ventilation, filtration, ultraviolet germicidal irradiation).
  • Standards, monitoring and regulations to capture externalities from unclean air.
  • Major R&D initiatives like prizes, focused research organizations (FROs), or advance market commitments to speed up the rollout of safety-promoting technologies.
     

Air Safety to Combat Global Catastrophic Biorisks [REVISED]

by Gavriel Kleinwaks, Jam Kraprayoon, Alastair Fraser-Urquhart, joshcmorrison

Linkpost for this report by researchers from 1Day Sooner and Rethink Priorities. The report has been revised from its previous version after expert review.

Key points:

  • Most efforts to address indoor air quality (IAQ) do not address airborne pathogen levels.
  • Ideal adoption of indoor air quality interventions like ventilation, filtration, and germicidal ultraviolet (GUV) irradiation in all public buildings in the US would reduce transmission of respiratory illnesses by an estimated ~30-75%.
  • Bottlenecks inhibiting the mass deployment of these technologies include a lack of clear standards, cost of implementation, and difficulty changing regulation/public attitudes.
  • Potential interventions:
    • Funders can support advocacy efforts, reduce cost and manufacturing issues, and support efficacy research for different interventions with contributions ranging from $25,000-$200M.
    • Businesses and nonprofits can become early adopters of GUV technology.
    • Researchers can develop models of population-level effects, conduct further GUV safety testing and manufacturing research, and carry out applied research on ventilation, filtration, and GUV applications in real-world settings.

       

Animal Welfare

Introducing Animal Policy International

by Rainer Kravets, Mandy Carter

Animal Policy International is a new organization launched via Charity Entrepreneurship and focused on ensuring that animal welfare standards are upheld in international trade policy. They will initially focus on New Zealand, where differences between local animal welfare requirements and the lower requirements for animal product imports result in over 8 million fish, 330K pigs and 380K chickens suffering inhumane living conditions each year.

They're looking for: a) people with expertise in international trade, policy work, or WTO laws to answer specific questions, b) to hire a part-time NZ-based expert, c) funding, d) partnerships with other NGOs in the animal policy space, e) volunteers knowledgeable in trade law or politics, and f) feedback. You can subscribe to their newsletter here.
 

Introducing Stanford’s new Humane & Sustainable Food Lab

by MMathur

Stanford University’s new Humane & Sustainable Food Lab launched in March 2023 and aims to end factory farming via scientific research.

Their approach involves:

  • Conducting studies on interventions to reduce animal product consumption.
  • Building the academic field of animal welfare.
  • Learning from and collaborating with EA-aligned nonprofits such as Rethink Priorities, Faunalytics, and Sentience Institute.

Previous research (some before official launch) includes:

  • A meta-analysis of 100 studies on interventions designed to reduce meat consumption by appealing to animal welfare. Results showed consistent and meaningful success.
  • Randomized controlled trials on the effects of a professionally-produced documentary about reasons to reduce animal product consumption. The documentary was not effective, but the trials identified methodological pitfalls that could have made it appear effective.

Upcoming research questions include:

  • Do modern plant-based analogs (eg. the Impossible Burger) replace animal-based foods, or plant-based foods like tofu, in people's diets?
  • Have existing large-scale interventions like documentaries and news items reduced consumption or purchase of animal-based products?

They are looking for additional funding to hire / support PhD students or early-career researchers for their lab - you can donate here.

 

Getting Cats Vegan is Possible and Imperative

by Karthik Sekar

Domesticated cats in the US eat almost as much meat per year as humans in Canada do (~3B kg). It's already possible to turn plants into microbial protein that carnivores can eat, but vegan cat food is expensive and hard to find, and may cause health issues due to lower acidity. Getting more ingredients approved for use in cat food could change this. The author suggests the following interventions:

  • Funding more ingredients and formulations to be tested.
  • Developing a more streamlined, expedited ingredient approval process.
  • Funding long-term studies on the health of vegan cats.
  • Correcting the assumption that obligate carnivores cannot both have a vegan diet and be healthy, and advocating on this with vets.
  • Rearing vegan cats as an example for others.

A top comment by Elizabeth suggests the studies linked as evidence that vegan diets are sufficiently healthy for cats are of poor quality and mostly cover vegetarian rather than vegan diets, and that more rigorous RCTs are needed.

 

Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms

by Omnizoid

Linkpost for this blog post, which provides details of different forms of harm in factory farms for each of pigs, broiler chickens, egg-laying hens, turkeys, beef cows, and dairy cows.
 

Opportunities

Prizes for matrix completion problems

by paulfchristiano

The Alignment Research Center (ARC) is offering $5K prizes for solving either of two self-contained algorithmic problems that have come up in their research. These center on a) the existence of PSD (positive semidefinite) matrix completions and b) fast “approximate squaring”. The prizes are open for three months or until a problem is solved.
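For readers unfamiliar with the terminology, here is a toy sketch of what a PSD completion is in general; it illustrates the notion only and is not ARC's specific problem:

```python
import numpy as np

def is_psd(m: np.ndarray, tol: float = 1e-9) -> bool:
    """A symmetric matrix is PSD iff all its eigenvalues are >= 0."""
    return bool(np.all(np.linalg.eigvalsh(m) >= -tol))

# Toy completion question: the diagonal entries are fixed at 1, and we ask
# whether the unknown off-diagonal entry x can be chosen so the matrix is PSD.
for x in [0.5, 1.0, 1.5]:
    candidate = np.array([[1.0, x],
                          [x, 1.0]])
    print(f"x = {x}: PSD? {is_psd(candidate)}")
# Prints True, True, False - a PSD completion exists only when |x| <= 1.
```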
 

Upcoming EA conferences in 2023

by OllieBase, Eli_Nathan

Including:

  • EAG London: May 19 - 21 (applications closed)
  • EAG Boston: October 27 - 29 (applications open)
  • EAGx Warsaw: June 9-11 (applications open)
  • EAGxNYC: August 18-20 (applications open)
  • EAGxBerlin: September 8-10
  • EAGxAustralia: September 22-24 (provisional dates)
  • EAGxPhilippines: October 20-22 (provisional dates)
  • EAGxVirtual: November 17-19 (provisional dates)
     

Rationality, Productivity & Life Advice

Test fit for roles / job types / work types, not cause areas

by freedomandutility

Suggests fit should be evaluated on role type, and cause area picked by impact potential. For instance, if you dislike wet-lab research in biosecurity, you’ll probably dislike it in alternative proteins as well. Similarly with other cross-cause roles and tasks like entrepreneurship, operations, and types of research (eg. literature reviews, qualitative, quantitative, clinical trials).
 

Advice for interacting with busy people

by Severin

Suggests that the time of people who are central information nodes is valuable, so it's worth:

  1. Making requests concise and clear.
  2. Leaning toward asking for resources and introductions over opinions.
  3. Preparing ahead of time.
  4. Making it easy to say no, and not taking it personally if they do / if a response is delayed.

If you're doing all of this, lean towards asking and letting the busy person decide whether to respond - lots of value can be lost by under-communicating.
 

Community & Media

Legal Priorities Project – Annual Report 2022

by Legal Priorities Project, Alfredo_Parra, Christoph_Winter

In 2022, the Legal Priorities Project had 3.6 FTE researchers and spent ~$1.1M. They:

  • Produced 10 peer-reviewed papers either accepted for publication or under review, plus one book contract, as well as some working papers and blog posts.
  • Analyzed ongoing policy efforts, receiving positive feedback on their research from policymakers.
  • Ran events for students and academics at top institutions, eg. the Legal Priorities Summer Institute, a writing competition, and a multidisciplinary forum on longtermism and law. These received positive feedback and attracted hundreds of applications.

In 2023 they plan to:

  • Shift their research agenda to focus more on risks from AI.
  • Increase non-academic publications (eg. policy / tech reports and blog posts) to make their research more accessible.
  • Run 1-2 field-building programmes such as a summer research fellowship.
  • Raise a minimum of $1.1M to maintain their current level of operations for another year, with an ideal goal of hiring 1-3 additional FTEs.

You can donate here, or subscribe to their newsletter here.
 

If you’d like to do something about sexual misconduct and don’t know what to do…

by Habiba

A guide for those who want to do something about sexual misconduct and harassment in EA but don’t know where to start. Key suggestions include:

  1. Remember we’re all within the system. Identify “paths of least resistance” that lead to harm in social situations, and don’t take them. Reflect on your own behavior.
  2. Learn about the issue (especially before suggesting improvements).
  3. Act with compassion - assume anyone in the discussion may have been personally affected. Listen to community members empathetically. Support them, and respect their preferences on confidentiality and autonomy over what to do next.
  4. Interject when you see harmful behavior. Take action regarding people who have harmed others (though consult with others first).
  5. Participate in the discussion, in community wide initiatives, and via personal actions like donating / volunteering / raising awareness.
     

Review of The Good It Promises, the Harm It Does

by Richard Y Chappell

Review of The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism. The reviewer didn’t find much value in the book. Their thoughts included:

  • The book focuses on social justice perspectives and EA’s lack of alignment with these.
    Example quote from the book:
    • “Normative Whiteness is cooked into [EA’s] ideological foundation, because it focuses on maximizing the effectiveness of donors’ resources.”
  • The costs of EA that the book identifies aren't weighed against EA's benefits.
  • It focuses on animal advocacy (and has a couple of good points there, eg. that cage-free campaigns can miss the chance to push away from industrial farming altogether).

Top commenters suggest that, regardless of whether some essays are poor quality, it's important to understand the perspectives and challenges the book offers. David Thorstad shares their blog, where they've had a go at breaking this down. Dr. David Mathers suggests the key challenge presented is asking why EA hasn't found more of value in rights movements, or worked to collaborate with them, given their historical successes.
 

Please don’t vote brigade

by Lizka

Requests that forum users don’t ask others to upvote or downvote specific posts. This messes up the ranking of posts, and can result in being banned. If you suspect vote brigading, let forum moderators know.
 

What is effective altruism? How could it be improved?

by MichaelPlant

The author suggests Effective Altruism is like a market where people buy and sell goods for how best to help others, and Centre for Effective Altruism (CEA) staff are the market's administrators. The issues the author identifies are:

  • CEA is also a market participant, promoting particular ideas and running key orgs.
  • There is primarily one major buyer in the market (Open Philanthropy).

They suggest that CEA should have its trustees elected by the community and should strive to be impartial rather than take a stand on priorities, and that EA should be run as an impartial market to attract more large ‘buyers’.

Several top comments disagree with the market analogy / argument, but find some sub-points useful. Commenters discuss ways to increase the voice of the community (eg. AMAs with CEA), and possible distinctions that could or should exist between object-level organizations focusing on cause areas and central organizations supporting them.