Preface: My predictions about AI development and alignment carry great uncertainty, and I have great respect for the predictions of others.
However, I thought it might be fun and useful to write a plan based solely on my own predictions, as if they were more accurate than everyone else's (including betting markets).
Lastly, I point out why detailed plans for alignment are difficult and why broader and more flexible strategies are preferable.
Plan Summary:
- Make 100M USD by founding an AI startup
- Pivot into interpretability research and lobbying
- Incentivize AI labs/politicians to aim for creating a "limited AGI" such as an oracle AGI that solves alignment for us, ideally in
...
Let me see if I understand correctly: do you mean that making 100M could cause disruption because:
- I would need to spend almost all my time managing the company, partly due to external pressure
- I would probably have difficulty prioritizing AI Alignment, since there would be other ways to get more short-term gratification
- I would become overconfident due to the success