
AE Studio

AE Studio is a team of 160+ programmers, product designers, and data scientists focused on increasing human agency through neglected, high-impact approaches. After early success in BCI development and consulting, we're now applying our expertise to AI alignment research, believing that the space of plausible alignment solutions is vast and under-explored.

Our alignment work includes prosociality research on self-modeling in neural systems (attention schema theory in particular), self-other overlap mechanisms, and various neglected technical and policy approaches. We maintain a profitable consulting business that lets us fund and pursue promising but overlooked research directions without pressure to expedite AGI development.

Learn more about us and our mission here: https://ae.studio/ai-alignment

Comments
On "AE Studio is hiring!" · AE Studio · 3mo · 20

Thanks Lucius, yes, this was tongue-in-cheek and we actually decided to remove it shortly thereafter once we realized it might not come across in the right way. Totally grant the point, and thanks for calling it out.

Posts

29 · AE Studio is hiring! · 3mo · 2 comments
81 · Mistral Large 2 (123B) seems to exhibit alignment faking (Ω) · 4mo · 4 comments
156 · Reducing LLM deception at scale with self-other overlap fine-tuning (Ω) · 4mo · 43 comments
68 · Alignment can be the ‘clean energy’ of AI · 5mo · 8 comments
208 · Making a conservative case for alignment · 8mo · 67 comments
100 · Science advances one funeral at a time · 9mo · 9 comments
91 · Self-prediction acts as an emergent regularizer (Ω) · 9mo · 9 comments
77 · The case for a negative alignment tax · 10mo · 20 comments
223 · Self-Other Overlap: A Neglected Approach to AI Alignment (Ω) · 1y · 51 comments
27 · Video Intro to Guaranteed Safe AI · 1y · 0 comments