We ask the AI to help make us smarter
This may make little to no sense if you don't have a lot of context:
Thinking about when I've been trying to bring together Love and Truth - Vyas talked about this already in the Upanishads: "Having renounced (the unreal), enjoy (the real). Do not covet the wealth of any man." Having renounced lies, enjoy the truth. And my recent thing has been trying to do more of exactly that - enjoying. And 'do not covet the wealth of any man' includes ourselves. So not being attached to the outcomes of my work, enjoying it as its own thing - if it succeeds, if it fails, either way I can enjoy the present moment. And this doesn't mean just enjoying things no matter what - I'll enjoy a path more if it brings me to success, since that's closer to Truth - and it's enjoying the Real.
The Real truth: that each time I do something, try something, reach out and explore where I'm uncertain, I'm learning more about what's Real. There are words missing here - from me not saying them and not being able to say them, from me not knowing how to say them, and from me not knowing that they should be here for my Words to be more True. But either way, I have found enjoyment in writing them.
the way you phrased it there seems fine
Too, too much of the current alignment work is not only not useful, but actively bad and making things worse. The most egregious example of this, to me, is capability evals. Capability evals, like any eval, can be useful for seeing which algorithms are more successful at producing optimizers that succeed at tasks - and in a world where it seems like intelligence generalizes, this means that every public capability eval like FrontierMath, EnigmaEval, Humanity's Last Exam, etc. helps AI capability companies figure out which algorithms to invest more compute in and test new algorithms against.
We need a very, very clear differentiation between what precisely is helping solve alignment and what isn't.
But there will be the response of 'we don't know for sure what is or isn't helping alignment since we don't know what exactly solving alignment looks like!!'.
Having ambiguity and unknowns due to an unsolved problem doesn't mean that literally every single thing has an equal possibility of being useful, and I challenge anyone to say so seriously and honestly.
We don't have literally zero information - so we can certainly make some estimates and predictions. And it seems quite a safe prediction to me that capability evals help capabilities much, much more than alignment - and I don't think they give more time for alignment to be solved either; instead, they do the opposite.
To put it bluntly - making a capability eval reduces all of our lifespans.
It should absolutely be possible to make this. Yet it has not been done. We can spend many hours speculating as to why. And I can understand that urge.
But I'd much much rather just solve this.
I will bang my head on the wall again and again and again and again. So help me god, by the end of January, this is going to exist.
I believe it should be obvious why this is useful for alignment being solved and general humanity surviving.
But in case it's not:
Good idea - I advise a higher amount, spread over more people. Up to 8.
- Yep. 'Give good advice to college students and cross subsidize events a bit, plus gentle pressure via norms to be chill about the wealth differences' is my best current answer. Kinda wish I had a better one.
Slight, some, if any, nudges toward politics being something that gives people a safety net, so that everyone has the same foundation to fall back on? So that even if there are wealth differences, there aren't as many large wealth-enabled stresses.
Sometime it would be cool to have a conversation about what you mean by this, because I feel the same way much of the time, but I also feel there's so much going on it's impossible to have a strong grasp on what everyone is working on.
Yes! I'm writing a post on that today!! I want this to become something that people can read and fully understand the alignment problem as best as it's currently known, without needing to read a single thing on lesswrong, arbital, etc. V lucky atm, I'm living with a bunch of experienced alignment researchers and learning a lot.
also, happy to just have a call:
Did you see the Shallow review of technical AI safety, 2025? Even just going through the "white-box" and "theory" sections I'm interested in has months worth of content if I were trying to understand it in reasonable depth.
Not properly yet - I saw that in the China section, DeepSeek's Speciale wasn't mentioned, but it's a safety review, tbf, not a capabilities review. I do like this project a lot in general - thinking of doing more critique-a-thons and reviving the peer review platform, so that we can have more thorough things.
China
DeepSeek-V3.2-Speciale apparently performs at IMO gold level https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Speciale - seems important
Control seems to largely be increasing p(doom), imo, by decreasing the chances of canaries.
Working on a meta plan for solving alignment, I'd appreciate feedback & criticism please - the more precise the better. Feel free to use the emoji reactions if writing a reply you'd be happy with feels taxing.
Diagram for visualization - items in tables are just stand-ins, any ratings and numbers are just for illustration, not actual rankings or scores at this moment.

Red and Blue teaming Alignment evals
Make lots of red teaming methods to reward hack alignment evals
Use this to find actually useful alignment evals, then red team and reward hack those too; find better methods of reward hacking and red teaming, then find better ways of building alignment evals that are resistant to them and that reliably measure the model's actual preferences. Keep doing this constantly, so that both the alignment evals and the reward-hacking methods get better and better (a rough sketch of the loop is below). Also hosting events for other people to do this and learn to do this.
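To make the loop concrete, here's a minimal sketch of what I mean, in Python - all the names here (AlignmentEval, red_team, harden_eval) are placeholders I'm making up for illustration, not an existing framework:

```python
# Minimal sketch of the red-team / harden loop for alignment evals.
# Every name here is a hypothetical placeholder, not an existing library.
from dataclasses import dataclass, field


@dataclass
class AlignmentEval:
    name: str
    version: int = 1
    known_exploits: list = field(default_factory=list)


def red_team(eval_: AlignmentEval, model) -> list[str]:
    """Search for ways a model can game the eval without actually having
    the preferences the eval is supposed to measure.
    Stub: a real red team would return concrete exploit descriptions."""
    return []  # e.g. ["model detects eval prompts by their formatting", ...]


def harden_eval(eval_: AlignmentEval, exploits: list[str]) -> AlignmentEval:
    """Produce a new eval version that resists the exploits just found."""
    return AlignmentEval(
        name=eval_.name,
        version=eval_.version + 1,
        known_exploits=eval_.known_exploits + exploits,
    )


def iterate(eval_: AlignmentEval, model, rounds: int = 5) -> AlignmentEval:
    """Alternate red-teaming and hardening until no new exploits turn up."""
    for _ in range(rounds):
        exploits = red_team(eval_, model)
        if not exploits:
            break  # the current version resists everything we can throw at it
        eval_ = harden_eval(eval_, exploits)
    return eval_
```

The point is that the evals and the reward-hacking methods improve together - neither gets trusted until it has survived the other.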
Alignment methods
then applying these different post-training methods - implementing lots of different alignment methods, seeing which ones score how highly across the alignment evals we know are robust, and using this to find patterns in which types seem to work more/less (toy illustration below),
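A toy illustration of the kind of pattern-finding I mean - the method names, eval names and numbers are all made up, just to show the shape of the comparison:

```python
# Hypothetical score matrix: rows are post-training methods, columns are
# alignment evals we believe are robust. All values are illustrative only.
scores = {
    "dpo":      {"eval_a": 0.71, "eval_b": 0.64, "eval_c": 0.58},
    "rlhf_ppo": {"eval_a": 0.69, "eval_b": 0.70, "eval_c": 0.55},
    "sft_only": {"eval_a": 0.52, "eval_b": 0.48, "eval_c": 0.50},
}

# Average across evals to look for coarse patterns in which method families
# tend to do better or worse, before digging into the theory of why.
for method, per_eval in sorted(
    scores.items(), key=lambda kv: -sum(kv[1].values()) / len(kv[1])
):
    mean = sum(per_eval.values()) / len(per_eval)
    print(f"{method:10s} mean={mean:.2f}  {per_eval}")
```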
Theory
then doing theory work, to try to figure out/guess why those methods are working more or less, and make some hypotheses about new methods that will work better and new methods that will work worse, based on this,
then ranking the theory work based on how minimal its assumptions are, how well it predicts the results of the implementations, and how highly it scores in peer review
End goal:
Alignment evals that no one can find a way to reward hack/red team and that really precisely measure the preferences of the models.
Alignment methods that score highly on these evals.
A very strong theoretical understanding of what makes this alignment method work - *why* it actually learns the preferences - and of how those preferences will/won't scale to result in futures where everyone dies or lives. The theoretical work should have as few assumptions as possible. The aim is a mathematical proof with minimal assumptions, written very clearly and easy to understand, so that lots and lots of people can understand it and criticize it - robustness through obfuscation is a method of deception, intentional or not.
Current Work:
Hosting Alignment Evals hackathons and making Alignment Evals guides, to make better alignment evals and red teaming methods.
Making lots of Qwen model versions, whose only difference is the post-training method.
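Roughly what I mean by that, as a sketch - the specific Qwen checkpoint is just an example, and the post-training functions are hypothetical placeholders for whichever methods end up getting implemented:

```python
# Sketch of the controlled setup: every variant starts from the same Qwen
# checkpoint; only the post-training step differs between them.
# "Qwen/Qwen2.5-0.5B" is just an example base; post_train functions are
# hypothetical placeholders, not an existing API.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen2.5-0.5B"


def load_base():
    """Load a fresh copy of the shared base model and tokenizer."""
    tokenizer = AutoTokenizer.from_pretrained(BASE)
    model = AutoModelForCausalLM.from_pretrained(BASE)
    return model, tokenizer


def make_variants(post_training_methods: dict):
    """post_training_methods maps a name -> fn(model, tokenizer) -> model.
    Each variant gets its own fresh base, so the method is the only
    difference between the resulting models."""
    variants = {}
    for name, post_train in post_training_methods.items():
        model, tokenizer = load_base()
        variants[name] = post_train(model, tokenizer)
    return variants
```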