AI Alignment Fieldbuilding

Edited by plex last updated 15th Jun 2022

AI Alignment Fieldbuilding is the effort to improve the alignment ecosystem. Priorities include introducing new people to the importance of AI risk, onboarding them by connecting them with key resources and ideas, educating them on the existing literature and on methods for generating new and valuable research, supporting people who are contributing, and maintaining and improving the funding systems.

There is an invite-only Slack for people working on the alignment ecosystem. If you'd like to join, message plex with an overview of your involvement.

Posts tagged AI Alignment Fieldbuilding
173 · The inordinately slow spread of good AGI conversations in ML — Rob Bensinger (3y, 62 comments)
6 · [Question] Papers to start getting into NLP-focused alignment research — Feraidoon (3y, 0 comments)
82 · [Ω] ML Alignment Theory Program under Evan Hubinger — ozhang, evhub, Victor W (4y, 3 comments)
348 · [Ω] Shallow review of live agendas in alignment & safety — technicalities, Stag (2y, 73 comments)
73 · Takeaways from a survey on AI alignment resources — DanielFilan (3y, 10 comments)
68 · [Ω] Don't Share Information Exfohazardous on Others' AI-Risk Models — Thane Ruthenis (2y, 11 comments)
26 · Talk: AI safety fieldbuilding at MATS — Ryan Kidd (1y, 2 comments)
170 · Transcripts of interviews with AI researchers — Vael Gates (3y, 9 comments)
168 · [Ω] Most People Start With The Same Few Bad Ideas — johnswentworth (3y, 31 comments)
135 · [Ω] Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley — maxnadeau, Xander Davies, Buck, Nate Thomas (3y, 14 comments)
115 · Why I funded PIBBSS — Ryan Kidd (1y, 21 comments)
107 · Demystifying "Alignment" through a Comic — milanrosko (1y, 19 comments)
88 · Qualities that alignment mentors value in junior researchers — Orpheus16 (3y, 14 comments)
87 · [Ω] How to Diversify Conceptual Alignment: the Model Behind Refine — adamShimi (3y, 11 comments)
58 · aisafety.community - A living document of AI safety communities — zeshen, plex (3y, 23 comments)