I'd rather not go into the details about which billionaires are which, so whether it's actually 4 and 3, or 6 and 4, is debatable either way. I'm much more worried about whether MIRI survives the decade.

It seems to me like this is a good place to figure out ways to handle the contingency where the final billionaire donor crashes and burns, not ways to get more billionaires or retain existing ones (and certainly not ways to prevent them from getting bumped off). Signalling unlimited wealth might be valuable for charisma, but at the end of the day it's better to admit that resources are not infinite if that's what it takes to survive longer than 30 years.

I've met some people who were recently funded to start a group house in a rural town in Vermont to do AI safety work, since rents in Vermont are incredibly low, which makes it one of the most cost-effective places in the US to research AI safety. Ultimately, the goal is that people with smaller and smaller amounts of savings could take a sabbatical to a Vermont group house and do research for free for 2-10 years, without working full-time or even part-time at some random software engineering job.
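
To make the cost-effectiveness point a bit more concrete, here is a back-of-the-envelope runway sketch. Every number in it is a hypothetical placeholder I made up for illustration, not an actual figure from the Vermont project:

```python
# Rough runway math with made-up placeholder numbers; swap in real
# Vermont rents and personal expenses to get a meaningful estimate.
monthly_rent_share = 400    # hypothetical per-person share of rent in a rural group house
monthly_other_costs = 600   # hypothetical food, utilities, insurance, misc.
savings = 30_000            # hypothetical personal savings

monthly_burn = monthly_rent_share + monthly_other_costs
runway_years = savings / (monthly_burn * 12)
print(f"~{runway_years:.1f} years of runway on ${savings:,} of savings")
```

The point is just that at rural-Vermont levels of rent, even modest savings translate into multiple years of full-time research, which is the whole appeal.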

The main problem here is network effects. I don't remember the details, but they will have to drive at least 3 hours to Boston once a month (and probably more like 5-6 hours). Otherwise, they will be effectively alone in the middle of nowhere, totally dependent on the internet to exchange and verify ideas with other AI-safety-minded people (with all the risks entailed by filtering most of your human connection through the internet).

A related problem with the Vermont group house is that there are currently only three of them. If there were ten really smart people in Vermont researching existential risk, it would be easier to handle the isolation with, say, shoulder advisors. Plus, if it were up to me, they'd be in rural Virginia (or parts of West Virginia), 5-6 hours away from Washington, D.C., rather than Boston, although the people who picked Vermont and funded it might know things I don't (disclaimer: they had the idea first, not me; I only discovered the brilliance behind it after meeting one of the Vermont people).

Ultimately, though, it's obviously better for AI-safety-affiliated people to be located within the metropolitan areas of San Francisco, New York, Boston, Washington D.C., and London. New people and new conversations are the lifeblood of any organization and endeavor. But the reality of the situation is that we don't live in the kind of world where all the people at MIRI get tech-worker salaries just because they should; that money has to come from someone, and the human tendency to refuse to seriously think about contingencies just because they're "unthinkably horrible" is the entire reason why a bunch of hobbyists from SF are humanity's first line of defense in the first place. We could absolutely end up in a situation where MIRI needs to relocate from Berkeley to rural Vermont. It would be better than having them work part-time training AI for random firms (or, god forbid, working full-time as ordinary software engineers).

So right now seems like the perfect time to start exchanging tips for saving money, for setting up group houses in the best possible places, and for weighing the prioritization tradeoffs between scenarios where everyone becomes much poorer (e.g. from a second Cold War, or a 2008-style economic megafailure that upends economic status quos far more than anything in 2020 or 2022) and scenarios where current living conditions are maintained. Because it can always, always get worse.

10 comments:

I've met some people who were recently funded to start a group house in a rural town in Vermont to do AI safety work, since rents in Vermont are incredibly low, which makes it one of the most cost-effective places in the US to research AI safety. Ultimately, the goal is that people with smaller and smaller amounts of savings could take a sabbatical to a Vermont group house and do research for free for 2-10 years, without working full-time or even part-time at some random software engineering job.

 

Woah, here in Vermont and very interested in this. Would love to hear more.

I understand the current scheme is that funders "commit" money, i.e. promise to donate it in the future. Can't they instead donate the money upfront so it sits somewhere in a bank account / in index funds until it's time to spend it? That way it won't disappear if their business crashes.

This is plausibly worth pursuing, but right now my understanding is that the only billionaires funding AI safety are equity billionaires, meaning that if they tried to sell all their equity at once, the sell offers would outnumber ordinary trading by orders of magnitude, crashing the stock and preventing them from getting much money even if they did manage to sell all of it.

tl;dr unless they're somehow billionaires in cash or really liquid assets, they always need to sell gradually.
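
To put rough numbers on the "sell gradually" point (everything here is hypothetical, including the assumed participation-rate heuristic of staying under roughly 10% of daily volume):

```python
# Hypothetical illustration of why a large equity stake can only be sold slowly.
stake_usd = 5_000_000_000        # hypothetical stake value
daily_volume_usd = 500_000_000   # hypothetical average daily trading volume in the stock
participation_rate = 0.10        # assumed rule of thumb: stay under ~10% of daily volume

days_to_liquidate = stake_usd / (daily_volume_usd * participation_rate)
print(f"~{days_to_liquidate:.0f} trading days to liquidate without dominating the market")
```

So even under friendly assumptions, converting "equity billions" into spendable cash is a months-long process at best.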

I thought that's kinda the point: the money is at this moment fictional... and it depends on luck whether it ever becomes real money (like, possibly yes, but also possibly no)... so the spending strategy should acknowledge this fact.

For example, we should not treat $1M of fictional money as $1M of real money, but maybe as $300K of real money (adjust the ratio based on previous experience).
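
As a minimal sketch of that heuristic (the 0.3 ratio is just the example figure above and should be adjusted from experience):

```python
# Discount pledged ("fictional") money by a haircut before budgeting against it.
def plannable_funds(pledged_usd: float, haircut_ratio: float = 0.3) -> float:
    """Treat only a fraction of pledged money as real for planning purposes."""
    return pledged_usd * haircut_ratio

print(plannable_funds(1_000_000))  # -> 300000.0, i.e. budget $300K against a $1M pledge
```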

If you're at liberty to disclose... I and many others would also be interested in the "extremely-cheap-rent AI safety researcher community in Vermont" thing.

Ultimately, the goal is that people with smaller and smaller amounts of savings could take a sabbatical to a Vermont group house and do research for free for 2-10 years, without working full-time or even part-time at some random software engineering job.

This is precisely the goal, expressed in an efficient number of words. Given that I'm currently in the "full-time... random software engineering job" phase, about to build up savings, I have a particular interest in getting to "work full-time on alignment" quickly.

We've just announced ourselves (Cavendish Labs) here! We plan on having a visiting scholars program that will allow those currently working full-time elsewhere to try out work on alignment for a couple weeks or so; more on that later.

Disclaimer: 

the human tendency to refuse to seriously think about contingencies just because they're "unthinkably horrible" is the entire reason why a bunch of hobbyists from SF are humanity's first line of defense in the first place

This is not necessarily true. There are some very solid alternative explanations for why it ended up like this.

You might want to join this Facebook group: New EA hub search and planning


I'm on the East Coast of the US, somewhat near VT, and would also love to hear more if you can disclose.