AI Alignment Fieldbuilding

Edited by plex, last updated 15th Jun 2022

AI Alignment Fieldbuilding is the effort to improve the alignment ecosystem. Some priorities include introducing new people to the importance of AI risk, onboarding them by connecting them with key resources and ideas, educating them on the existing literature and on methods for generating valuable new research, supporting people who are contributing, and maintaining and improving the field's funding systems.

There is an invite-only Slack for people working on the alignment ecosystem. If you'd like to join, message plex with an overview of your involvement.

Posts tagged AI Alignment Fieldbuilding
(15 of 272 posts shown. Ω marks an Alignment Forum crosspost; Q marks a question post.)

Karma · Title · Author(s) · Age · Comments
173 · The inordinately slow spread of good AGI conversations in ML · Rob Bensinger · 3y · 62
6 · Papers to start getting into NLP-focused alignment research (Q) · Feraidoon · 3y · 0
82 · ML Alignment Theory Program under Evan Hubinger (Ω) · ozhang, evhub, Victor W · 4y · 3
348 · Shallow review of live agendas in alignment & safety (Ω) · technicalities, Stag · 2y · 73
73 · Takeaways from a survey on AI alignment resources · DanielFilan · 3y · 10
68 · Don't Share Information Exfohazardous on Others' AI-Risk Models (Ω) · Thane Ruthenis · 2y · 11
26 · Talk: AI safety fieldbuilding at MATS · Ryan Kidd · 1y · 2
170 · Transcripts of interviews with AI researchers · Vael Gates · 4y · 9
168 · Most People Start With The Same Few Bad Ideas (Ω) · johnswentworth · 3y · 31
135 · Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley (Ω) · maxnadeau, Xander Davies, Buck, Nate Thomas · 3y · 14
115 · Why I funded PIBBSS · Ryan Kidd · 1y · 21
108 · Demystifying "Alignment" through a Comic · milanrosko · 1y · 19
88 · Qualities that alignment mentors value in junior researchers · Orpheus16 · 3y · 14
87 · How to Diversify Conceptual Alignment: the Model Behind Refine (Ω) · adamShimi · 3y · 11
58 · aisafety.community - A living document of AI safety communities · zeshen, plex · 3y · 23