This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
If you'd like to receive these summaries via email, you can subscribe here.
Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!
Who regulates the regulators? We need to go beyond the review-and-approval paradigm
by jasoncrawford
Linkpost for this blog post.
Institutional Review Boards (IRBs) were put in place to review the ethics of medical trials, and initially worked well. However, after a study participant's death, they became more stringent and overreached (eg. requiring heart attack study participants to read and sign long consent forms during a heart attack). A similar pattern occurred with the FDA, NEPA, and the NRC. This is due to lopsided incentives: regulators are blamed for anything that goes wrong, but neither blamed nor rewarded for how much they slow down or speed up progress. It’s also harder to remove regulations than to add them. The same pattern can be seen as corporations grow, eg. Google is now very risk-averse and can require 15+ approvals for minor changes.
The author believes this is evidence the review-and-approval model is broken, and we need better ways to mitigate risk and create safety (eg. liability laws).
How much do you believe your results?
by Eric Neyman
The performance of an intervention in a trial or study is a combination of its actual effect and random noise. This means that when comparing multiple interventions, you should expect the top-performing ones to be a combination of good and lucky, and therefore discount for the luck portion (eg. if a study estimates 4 lives saved per $X, you might expect the true figure to be closer to 2). The author suggests keeping this in mind when considering a study, and working hard to reduce the noise in your measurements when conducting one (eg. by increasing sample size).
A top comment by Karthik Tadepalli notes these results depend on whether the true spread of intervention quality is the same order of magnitude as the spread of experimental noise. When intervention quality is fat-tailed, the noise becomes negligible by comparison and we don’t need to discount much.
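As a minimal sketch of both points (not from the original post; the distributions, noise scale, and top-k cutoff are illustrative assumptions), a quick simulation shows the discount appearing when quality and noise have similar spread, and vanishing when quality is fat-tailed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of candidate interventions

def winners_gap(true_effects, noise_sd=1.0, top_k=100):
    """Mean measured vs. mean true quality among the top-measured interventions."""
    measured = true_effects + rng.normal(0, noise_sd, true_effects.size)
    top = np.argsort(measured)[-top_k:]  # indices of the best-looking interventions
    return measured[top].mean(), true_effects[top].mean()

# Case 1: quality spread and noise on the same scale -> a large discount.
m, t = winners_gap(rng.normal(0, 1, n))
print(f"normal quality:     measured {m:.2f} vs true {t:.2f}")

# Case 2: fat-tailed quality spread dwarfs the noise -> little discount needed.
m, t = winners_gap(rng.lognormal(0, 2, n))
print(f"fat-tailed quality: measured {m:.2f} vs true {t:.2f}")
```

In the first case the true quality of the winners comes out at roughly half their measured score (the "estimate 4, expect 2" intuition); in the second, measured and true values nearly coincide.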
[Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I.
by Rockwell
Linkpost for this article, which covers an announcement by The White House on 4th May about its new initiatives aimed at AI risk (factsheet here).
These include:
[Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
by Darius1
Linkpost for this article, which shares that neural networks pioneer Geoffrey Hinton has left Google in order to be able to “talk about the impacts of AI without considering how this impacts Google”. He notes that while Google has been responsible, the tech giants are “locked in a competition that might be impossible to stop” and “will not stop without [...] global regulation”. He is “worried that future versions of the technology pose a threat to humanity” and believes AI smarter than humans is coming sooner than he previously thought (his earlier estimate was 30-50 years or more).
by richard_ngo
Career advice the author commonly gives to those interested in AGI Safety:
How MATS addresses “mass movement building” concerns
by Ryan Kidd
MATS is a program which aims to find and train talented individuals to work on AI alignment. They use this post to address some objections to this approach:
by Kelsey Piper
The author thinks we should be moving slower on developing powerful AI. However, they also believe a strong objection to this is that AI systems could speed up scientific and economic progress which saves and improves lives. Delaying therefore costs these lives.
by richard_ngo
As we get closer to AGI, it becomes less appropriate to treat it as a binary threshold. The author suggests a framework where a system is ‘t-AGI’ if, on most cognitive tasks, it beats most human experts who are given time t to perform the task. Eg. a 1-second AGI should beat humans at tasks like basic physics intuitions and recognizing objects; a 1-month AGI would need to beat them at tasks like carrying out medium-term plans (eg. founding a startup) or supervising large projects. The author makes some predictions for 2025 using this framework.
Discussion about AI Safety funding (FB transcript)
by Akash
Summary of a Facebook discussion about Nonlinear’s new AI safety funding program.
Discussion centered around whether having more individual funders increases the likelihood of missing downside risks:
First clean water, now clean air
by finm
In 1858, the stink from London’s Thames river, together with the emerging germ theory of disease, spurred the creation of a modern sewage system to ensure clean drinking water. A similar story unfolded nearly everywhere in the developed world, which the author estimates has saved at least 130 million lives even counting only from 1973 onward.
The author suggests it’s now time to do the same for air. Unclean air has major costs:
Currently almost nowhere adequately treats and monitors air. Possible interventions include:
Air Safety to Combat Global Catastrophic Biorisks [REVISED]
by Gavriel Kleinwaks, Jam Kraprayoon, Alastair Fraser-Urquhart, joshcmorrison
Linkpost for this report by researchers from 1Day Sooner and Rethink Priorities. The report has been revised from its previous version after expert review.
Key points:
Introducing Animal Policy International
by Rainer Kravets, Mandy Carter
Animal Policy International is a new organization launched via Charity Entrepreneurship and focused on ensuring that animal welfare standards are upheld in international trade policy. They will initially focus on New Zealand, where animal product imports are held to lower welfare standards than domestic production, resulting in over 8 million fish, 330K pigs, and 380K chickens suffering inhumane living conditions each year.
They’re looking for: a) people with expertise in international trade, policy work, or WTO law to answer specific questions, b) a part-time NZ-based expert to hire, c) funding, d) partnerships with other NGOs in the animal policy space, e) volunteers knowledgeable in trade law or politics, and f) feedback. You can subscribe to their newsletter here.
Introducing Stanford’s new Humane & Sustainable Food Lab
by MMathur
Stanford University’s new Humane & Sustainable Food Lab launched in March 2023 and aims to end factory farming via scientific research.
Their approach involves:
Previous research (some before official launch) includes:
Upcoming research questions include:
They are looking for additional funding to hire / support PhD students or early-career researchers for their lab - you can donate here.
Getting Cats Vegan is Possible and Imperative
by Karthik Sekar
Domesticated cats in the US eat almost as much meat per year as humans do in Canada (~3B kg). It’s already possible to turn plants into microbial protein that carnivores can eat, but vegan cat food is expensive and hard to find, and may cause health issues due to its lower acidity. Getting more ingredients approved for use in cat food could change this. The author suggests the following interventions:
A top comment by Elizabeth suggests the studies cited as evidence that vegan diets are sufficiently healthy for cats are of poor quality and mostly cover vegetarian rather than vegan diets, and that more rigorous RCTs are needed.
Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms
by Omnizoid
Linkpost for this blog post, which provides details of different forms of harm in factory farms for each of pigs, broiler chickens, egg-laying hens, turkeys, beef cows, and dairy cows.
Prizes for matrix completion problems
by paulfchristiano
The Alignment Research Center (ARC) is offering $5K prizes for solving either of two self-contained algorithmic problems that have come up in its research: a) the existence of PSD (positive semidefinite) completions, and b) fast “approximate squaring”. The prizes are open for three months or until a problem is solved.
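To make the first problem concrete: a PSD completion asks whether the unknown entries of a partially specified symmetric matrix can be filled in so that the result is positive semidefinite. The toy sketch below is an illustration of the concept only, not ARC’s actual problem statement (see the linked post for that); the 2x2 matrix and grid search are assumptions chosen for simplicity:

```python
import numpy as np

def is_psd(m, tol=1e-9):
    # A symmetric matrix is PSD iff all its eigenvalues are non-negative.
    return np.all(np.linalg.eigvalsh(m) >= -tol)

# Toy completion question: for [[1, x], [x, 1]], which values of the
# unknown entry x yield a PSD matrix? (Analytically: -1 <= x <= 1,
# since the eigenvalues are 1 + x and 1 - x.)
feasible = [x for x in np.linspace(-2, 2, 401)
            if is_psd(np.array([[1.0, x], [x, 1.0]]))]
print(f"PSD completions exist for x in [{min(feasible):.2f}, {max(feasible):.2f}]")
```

ARC’s prize problem concerns existence results for much larger, structured patterns of known entries, but the eigenvalue test above is the basic notion of feasibility involved.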
Upcoming EA conferences in 2023
by OllieBase, Eli_Nathan
Including:
Test fit for roles / job types / work types, not cause areas
by freedomandutility
Suggests fit should be evaluated on role type, and cause area picked by impact potential. For instance, if you dislike wet-lab research in biosecurity, you’ll probably dislike it in alternative proteins as well. Similarly with other cross-cause roles and tasks like entrepreneurship, operations, and types of research (eg. literature reviews, qualitative, quantitative, clinical trials).
Advice for interacting with busy people
by Severin
Suggests the time of central information nodes is valuable, so it’s worth:
If in doubt after doing all this, lean towards asking and letting the busy person decide whether to respond - lots of value can be lost by under-communicating.
Legal Priorities Project – Annual Report 2022
by Legal Priorities Project, Alfredo_Parra, Christoph_Winter
In 2022, the Legal Priorities Project had 3.6 FTE researchers and spent ~$1.1M. They produced:
In 2023 they plan to:
You can donate here, or subscribe to their newsletter here.
If you’d like to do something about sexual misconduct and don’t know what to do…
by Habiba
A guide for those who want to do something about sexual misconduct and harassment in EA but don’t know where to start. Key suggestions include:
Review of The Good It Promises, the Harm It Does
by Richard Y Chappell
Review of The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism. The reviewer didn’t find much value in the book. Their thoughts included:
Top commenters suggest that, regardless of whether some articles are poor quality, it’s important to understand the perspectives and challenges the book offers. David Thorstad shares their blog, where they’ve had a go at breaking this down. Dr. David Mathers suggests the key challenge presented is why EA hasn’t found more of value in rights movements, or worked to collaborate with them, given their historical successes.
by Lizka
Requests that forum users don’t ask others to upvote or downvote specific posts. This distorts post rankings, and can result in a ban. If you suspect vote brigading, let the forum moderators know.
What is effective altruism? How could it be improved?
by MichaelPlant
The author suggests Effective Altruism is like a market where people can buy and sell goods for how best to help others, with Centre for Effective Altruism (CEA) staff as the market’s administrators. The issues are:
They suggest that CEA should have its trustees elected by the community and strive to be impartial rather than taking a stand on priorities, and that EA should be run as an impartial market to attract more large ‘buyers’.
Several top comments disagree with the market analogy / argument, but find some sub-points useful. Commenters discuss ways to increase the voice of the community (eg. AMAs with CEA), and possible distinctions that could or should exist between object-level organizations focusing on cause areas and central organizations supporting them.