One obvious problem is that this turns getting funded into a popularity contest, which makes Goodhart's law kick in. It might work fine as a one-off, but in the long run it will predictably get gamed, and will likely have negative effects on the whole LW discussion ecosystem by setting up perverse incentives for engaging with it (and, unless the list of eligible people is frozen forever, attracting new people who are only interested in promoting themselves to get money).
What should the amount be? Thiel gave $200k. Is that too much for two years? Too little?
You should almost certainly have some mechanism for deciding the amount to pay on a case-by-case basis, rather than having it be flat.
Could there be an entirely different approach to finding fellows? How would you do it?
What I would want to experiment with is using prediction markets to "amplify" the judgement of well-known people with unusually good AGI Ruin models who are otherwise too busy to review thousands of mostly-terrible-by-their-lights proposals (e.g., Eliezer or John Wentworth). Fund the top N proposals the market expects the "amplified individual" to consider most promising, subject to their veto.
This would be notably harder to game than a straightforward popularity contest, especially if the amplified individual is high-percentile disagreeable (as my suggested picks are).
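As a minimal sketch, the market-amplified selection rule could look like this (the function name, the probability representation, and the veto-as-a-set convention are illustrative assumptions, not part of the proposal):

```python
def select_grants(market_probs: dict[str, float],
                  vetoed: set[str],
                  n: int = 3) -> list[str]:
    """Fund the top-n proposals the market expects the amplified
    reviewer to endorse, skipping any the reviewer has vetoed.

    market_probs maps each proposal ID to the market's probability
    that the reviewer would consider it promising.
    """
    # Rank proposals by the market's endorsement probability, highest first.
    ranked = sorted(market_probs, key=market_probs.get, reverse=True)
    # Apply the reviewer's veto, then take the top n survivors.
    return [p for p in ranked if p not in vetoed][:n]
```

The key design point is that the reviewer only has to exercise a veto over a short ranked list, rather than read every proposal, while the market does the bulk filtering.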
Unconditional Grants to Worthy Individuals Are Great

The process of applying for grants, raising money, and justifying your existence sucks.
A lot.
It especially sucks for many of the creatives and nerds that do a lot of the best work.
If you have to periodically go through this process, and are forced to continuously worry about making your work legible and how others will judge it, that will substantially hurt your true productivity. At best it is a constant distraction. By default, it is a severe warping effect. A version of this phenomenon is doing huge damage to academic science.
Motivated by this, I'm considering funding ~three people for two years each to work on whatever they see fit, much like the Thiel Fellowship, but with an AI Alignment angle.
I want to find people who are excited to work on existential risk, but are currently spending much of their time working on something else due to financial reasons.
Instead of delegating the choice to some set of grantmakers, I think that aggregating the opinion of the crowd could work better (at least as good at finding talent, but with less overall time spent).
The best system I can think of at the moment would be to give every member of the Alignment Forum one vote, with the ability to delegate it. Let everybody nominate any person in the world, including themselves, and award grants to the top 3.
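The delegable-vote tally could be sketched as follows (the data layout and the rule that cyclic or dangling delegations discard the vote are my assumptions; the proposal itself doesn't specify them):

```python
from collections import Counter

def tally(votes: dict[str, tuple[str, str]], top_n: int = 3) -> list[str]:
    """Resolve delegations and return the top_n nominees.

    votes maps each forum member to either ("nominate", person) or
    ("delegate", other_member). Delegation chains are followed to
    their end; a cycle or a delegation to a non-voter drops the vote.
    """
    counts = Counter()
    for voter, action in votes.items():
        seen = {voter}
        # Follow the delegation chain until it reaches a nomination.
        while action[0] == "delegate":
            target = action[1]
            if target in seen or target not in votes:
                action = None  # cycle, or delegated to someone who didn't vote
                break
            seen.add(target)
            action = votes[target]
        if action is not None:
            counts[action[1]] += 1
    return [name for name, _ in counts.most_common(top_n)]
```

For example, `tally({"a": ("nominate", "X"), "b": ("delegate", "a")})` counts both votes toward X, since b's vote follows the chain to a's nomination.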
I'm asking for feedback and advice: