Edit: Why was this downvoted? I genuinely don't understand.

1 Answer

Daniel Kokotajlo · Aug 13, 2023

I'd say it depends on who you are and what your credentials are.

I expect that there'll be orders of magnitude more funding going to AI safety and alignment research and anything to do with AI, basically. But I expect that funding to mostly go to the people it typically goes to, i.e. people who have prestigious credentials and who are good at writing grant applications etc. The funding from traditional AI safety donors will continue to be more open-minded and discerning, but it'll probably continue at roughly the same levels as today rather than be orders of magnitude bigger.

(Unless... actually, I could see that being false, perhaps OpenPhil will pivot more to AI stuff and also just generally start spending a higher fraction of the money they have, enough to make a big difference?)

One thing I'm attempting to figure out is whether, in that future where government and traditional academic funding become dominant, important subproblems remain significantly neglected because of how those systems tend to operate. I could see an OpenPhil pivot covering some of this, but it'd sure be nice to nail down at least a few more things when choosing between going all in on risky, high-expected-value earning-to-give (ETG) versus direct safety work.

There are some historical examples that might be informative, but it's difficult for me to judge.

[anonymous] · 9 months ago

Thanks for the response.
I've also been wondering if OpenAI might use some of the $10B Microsoft investment to fund external alignment researchers, tbh.

3 comments

My guess about why this was downvoted: the downvoters, somewhat presumptuously, are assuming from the "If we're alive in 5 years" part that you have very short timelines; they think this is foolish and for that reason dislike the post.

FWIW, if that is in fact what happened, I very much disapprove of it. It isn't crazy to have such short timelines. Plenty of reputable people take seriously the idea that timelines can be this short. I think it is reasonable to downvote people for posting about ideas that are incredibly implausible, but I think it is extremely hard to argue that such short timelines are that implausible.

[anonymous] · 7 months ago

I thought so too. I'm actually not sure about timelines; I just wanted to not rule out the possibility.

I don't think the overall EA and tech-venture funding levels are coming back anytime soon; the low-interest-rate environment is over for at least a decade. Whether that environment returns after that is unknown.

There is today, and will likely continue to be, funding available for some number of promising endeavors. As the practical uses of LLMs and AI ramp up, there will be a LOT MORE money in "prosaic AI". But likely never again the large amounts, for speculative and very-long-term reasons, that there were a few years ago.