TL;DR: I’ve set up a database of domains that are free to a good home, to prevent them from being squatted and blocked from use. You can add domains you control to it using this form.

Since my well-received post on setting up an Anti-squatted AI x-risk domains index, I’ve been picking up more and more domains, talking to other domain holders, and building an interface for viewing them. I’ve also put a[1] few of them[2] to good use[3] already!

Also, a big thanks to Ben West for sharing 215 of CEA's parked domains, which EA projects are welcome to request use of by emailing him. He also offered for CEA to act as custodian of the domains I bought, on the condition that they be pointed at nameservers I control and that CEA return ownership of them for EA-aligned use on request, unless they’re in active use by another EA project. This will save me from having to pay upkeep, which will help make this more sustainable. He's open to extending this offer to holders of other relevant domains.

If you'd like to use one of these domains, message the contact person specified. They each have different policies for handing over domains, but a common standard is that they'll point the domain at your servers on request, and hand it over for free if and when you have built something useful at that location.
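For concreteness, "pointing the domain at your servers" usually just means the holder adds DNS records for you while keeping the registration in their own name. A minimal sketch of what that might look like in a BIND-style zone file (the domain name, TTL, and IP below are placeholders; 203.0.113.10 is from the reserved documentation address range, not a real server):

```
; Hypothetical BIND-style zone fragment: the domain holder keeps the
; registration but points the name at the project's web server.
$ORIGIN aisafety.example.
$TTL 3600
@    IN  A      203.0.113.10        ; apex -> project's server (placeholder IP)
www  IN  CNAME  aisafety.example.   ; www -> same host
```

If the domain is later handed over outright, that step becomes a registrar transfer rather than a DNS change.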

Here are the top 40 domains I’m most excited about, but go check the full list:

| Domain | Possible use | Contact |
|--------|--------------|---------|
| aisafety.global | AI Safety | |
| existential-risks.org | High quality explanation? | |
| ontological.tech | New org? | |
| existential.dev | New org? | |
| epistemic.dev | New org? | |
| aisafety.tools | Directory of resources? | |
| agenty.org | New org? | |
| effectivealtruism.ventures | EA entrepreneurs or impact investing group? | |
| aisafety.degree | AI Safety PhD | |
| alignment.careers | 80k said they were happy for others to join the careers advice | |
| xrisk.fund | x-risk-specific funding organization? | |
| aisafety.careers | 80k said they were happy for others to join the careers advice | |
| aisafety.fund | AIS-specific funding org? | |
| animalwelfare.day | Days to do a coordinated push for cause areas? | |
| globalhealth.day | Days to do a coordinated push for cause areas? | |
| alignment.day | Days to do a coordinated push for cause areas? | |
| aisafety.day | Days to do a coordinated push for cause areas? | |
| cause-x.day | Days to do a coordinated push for cause areas? | |
| biosecurity.day | Days to do a coordinated push for cause areas? Anti-GOF? | |
| rationality.day | Days to do a coordinated push for cause areas? | |
| aisafety.quest | Project Euler for AIS? | |
| aisafety.coach | An org which specializes in coaching AI safety people? | |
| aisafety.institute | Research organization? | |
| aisafety.observer | Articles on news in the AI safety space? | |
| aisafety.ventures | Entrepreneurs org? | |
| aisafety.group | Peer-to-peer study groups for skilling up, maybe? | |
| bountiedrationality.org | Website to pair with the BR Facebook group | |
| aisafety.foundation | AIS-specific funding org? | |
| xrisk.foundation | x-risk-specific funding organization? | |
| alignment.courses | List of all training programs (using Stampy answers as backend)? | |
| aisafety.network | Peer-to-peer researchers something? | |
| aisafety.dev | SWEs for AI safety org? | |

My theory is that, to align incentives well, I should go ahead and build useful things without requesting funding, and hope that someone thinks I’m doing good work and retrofunds me. I’ve spent $544.14 on the domains listed on the index, and $1,542.62[4] on other directly EA-relevant domains for various projects not listed here, along with at least a few dozen hours researching, buying, and setting up domains, as well as building the website.

I’d be encouraged to see my theory confirmed, and to set an example so that other people with the means try the build-first strategy. I’ll list retrofunders who step up in this post and in the footer of the website, if they’re happy with that.

There are more domains I’m excited to add to the collection, but they’re a bit more expensive (e.g. one which I think would make a great org name is $99/y; I’m not naming it publicly to avoid it being squatted, but I’m happy to tell people who want to buy it in private, or to use some retrofunds for it).

  1. ^ - A frontend for AI Safety Support's comprehensive list of training programs, conferences, and other events.

  2. ^ - A database of AI safety communities.

  3. ^ - A collection, soon to be a map, of all notable orgs in the AI x-safety space, in collaboration with Hamish, who will be claiming a Superlinear prize I wrote and Nonlinear funded.

  4. ^ Accounting available on request.
