AE Studio is a team of 160+ programmers, product designers, and data scientists focused on increasing human agency through neglected high-impact approaches. Originally successful in BCI development and consulting, we're now applying our expertise to AI alignment research, believing that the space of plausible alignment solutions is vast and under-explored.
Our alignment work includes prosociality research on self-modeling in neural systems (with a particular focus on attention schema theory), self-other overlap mechanisms, and various neglected technical and policy approaches. We maintain a profitable consulting business that allows us to fund and pursue promising but overlooked research directions without pressure to expedite AGI development.
Learn more about us and our mission here:
https://ae.studio/ai-alignment
Thanks, Lucius. Yes, this was tongue-in-cheek, and we actually decided to remove it shortly thereafter once we realized it might not come across in the right way. Totally grant the point, and thanks for calling it out.
To the degree that worries of this general shape are legitimate (we think they very much are), it seems wise for the alignment community to more seriously pursue and evaluate the many neglected approaches that might solve the fundamental underlying alignment problem, rather than investing the vast majority of resources in things like evals and demos of misalignment failure modes in current LLMs. Those are definitely nice to have, but almost certainly won't themselves directly yield scalable solutions for robustly aligning AGI/ASI.