Nonlinear spoke to dozens of earn-to-givers, and a common sentiment was, "I want to fund good AI safety-related projects, but I don't know where to find them." At the same time, applicants don’t know how to find those funders either. Would-be applicants are often aware of just one or two funders - some think it’s “LTFF or bust” - so many give up before they’ve started, demoralized, because fundraising seems too hard.

As a result, we’re trying an experiment to help folks get in front of donors and vice versa. In brief: 

Looking for funding? 

Why apply to just one funder when you can apply to dozens? 

If you've already applied for EA funding, simply paste your existing application. We’ll share it with relevant funders (~30 so far) in our network. 

You can apply if you’re still waiting to hear from other funders. This way, instead of having to awkwardly ask dozens of people and get rejected dozens of times (if you can even find the funders), you can just send in the application you already made.  

We’re also accepting non-technical projects relevant to AI safety (e.g. meta, forecasting, field-building, etc.).

Application deadline: May 17, 2023. [Edit: new deadline is the 24th, to accommodate the NeurIPS deadline]

Looking for projects to fund?

Apply to join the funding round by May 24, 2023. Soon after, we'll share access to a database of applications relevant to your interests (e.g. interpretability, moonshots, forecasting, field-building, novel research directions, etc.).

If you'd like to fund any projects, you can reach out to applicants directly, or we can help coordinate. This way, you avoid the awkwardness of directly rejecting applicants, and don’t get inundated by people trying to “sell” you.

Inspiration for this project

When the FTX crisis broke, we quickly spun up the Nonlinear Emergency Fund to help provide bridge grants to tide people over until the larger funders could step in. 

Instead of making all the funding decisions ourselves, we put out a call to other funders/earn-to-givers. Scott Alexander connected us with a few dozen funders who reached out to help, and we created a Slack to collaborate.

We shared every application (whose applicant consented) in an Airtable with around 30 other donors. This led to a flurry of activity as funders investigated applications, collaborated on due diligence, and made grants that otherwise wouldn’t have happened.

Some funders, like Scott, after seeing our recommendations, preferred to delegate decisions to us, but others preferred to make their own decisions. Collectively, we rapidly deployed roughly $500,000 - far more than we initially expected.

The biggest lesson we learned: openly sharing applications with funders was high leverage - possibly leading to 4 times as many people receiving funding and 10 times more donations than if we hadn’t shared.

If you’ve been thinking about raising money for your project idea, we encourage you to do it now. Push through your imposter syndrome because, as Leopold Aschenbrenner said, nobody’s on the ball on AGI alignment.

Another reason to apply: we’ve heard from EA funders that they don’t get enough applications, so you should have a low bar for applying - many fund over 50% of the applications they receive (SFF, LTFF, EAIF).

Since the Nonlinear Network is a diverse set of funders, you can apply for a grant of anywhere from single-digit thousands to single-digit millions of dollars.

Note: We’re aware of many valid critiques of this idea, but we’re keeping this post short so we actually ship it. We’re starting with projects related to AI safety because our timelines are short, but if this is successful, we plan to expand to the other cause areas.

Apply here.

Reminder that you can listen to LessWrong and EA Forum posts like this on your podcast player using the Nonlinear Library.

Comments

FWIW: I volunteered for Nonlinear in summer 2021, and the people behind it are pretty on-top-of-things!

Good context - reviews like this, from people whose identity I know, are what make me feel less uncertain after an iffy one!

Edit: I have heard more negative things; my opinion continues to more or less match my other comment - I want to see evidence at every turn that Nonlinear is unable to be a threat, and if I fail to get that evidence at any point, I will consider it "likely enough to be confirmation of the concerns raised that I should respond as if it is definite confirmation" and drop the connection. I would encourage others to do the same until such time as reliable evidence comes out that they're trustworthy.

This is a good idea; unfortunately, based on discussions on the EA Forum, Nonlinear is not an organization I would trust to handle it. (Note, as external evidence, that the Glassdoor reviews of Emerson's previous company frequently mention a toxic upper management culture of exactly the sort that the commenter alleges at Nonlinear, and have a 0% rating of him as CEO.)

[EDITED TO ADD: The second comment quotes reviews written after the Spartz era (although I'm sure many of them were present during it), which is misleading; moreover, the second commenter was banned for having a sockpuppet. My criticisms are thereby reduced but not eliminated.]

Hi, thanks for saying you liked the idea, and also appreciate the chance to clear up some things here. As a reminder, we’re not making funding decisions. We’re just helping funders and applicants find each other. 

Some updates on that thread you might not have seen: the EA Forum moderators investigated and banned two users for creating ~8 fake sockpuppet accounts. This has possibly led to information cascades about things “lots of people are saying.”

Another thing you might not be aware of: the Glassdoor CEO rating of 0% was actually not for Emerson (who left in early 2017), but for his successor. Emerson’s reviews were actually above average for Glassdoor.

Regardless, I don’t think this should matter much anyway: we’re just connecting folks with funders because we think it will help the ecosystem.

It's unclear to me whether there's a credible accusation against y'all. So, in the interest of wanting to not have to worry about such a thing when and if I apply to stuff through nonlinear -

What are your plans for removing yourselves from the equation by providing a mechanically checkable tool that does not permit you to intervene in who can apply to whom? In general, that's what I expect a good networking tool to do. I wouldn't want uncertainty about the validity of the Nonlinear group to compromise an application, especially if this is at risk of turning out to be another scam like FTX turned out to be; I imagine it'd be a smaller issue than that, but of course you wouldn't want to promise it's not a scam, as that promise is vacuous, adding no information to the external view of whether it is one. The only way to verify such a thing is to design for mechanistic auditability - that is, processes that have no step on which a person with a shaky reputation can exert influence, such as an open-source application tool.

With that in mind, I am in fact interested in applying to some sort of funding process. I just don't want to accept unreasonable reputational risk by depending on a challenged reputation with no mechanistic safeguards against the hypothesized behaviors behind the reputational concerns. I'd ask the same of any org I was at.

Thanks for the info about sockpuppeting, will edit my first comment accordingly.

Re: Glassdoor, the most devastating reviews were indeed after 2017, but it's still the case that nobody rated the CEO above average among the ~30 people who worked in the Spartz era.

Thanks for updating! LessWrong at its best :)

I went through and added up all of the reviews from when Emerson was in charge, and the org averaged a 3.9 rating. You can check my math if you’d like: (5+3+5+4+1+4+5+5+5+5+5+5+5+1+5+5+3+5+5+5+3+1+2+4+5+3+1)/27 ≈ 3.9
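If you’d rather not add those up by hand, here’s a quick sanity check of the same arithmetic (ratings copied from the parenthetical above):

```python
# Ratings copied from the parenthetical above; checks the ~3.9 average.
ratings = [5, 3, 5, 4, 1, 4, 5, 5, 5, 5, 5, 5, 5, 1,
           5, 5, 3, 5, 5, 5, 3, 1, 2, 4, 5, 3, 1]
print(len(ratings))                 # 27 reviews
print(sum(ratings))                 # 105
print(sum(ratings) / len(ratings))  # 3.888..., i.e. ~3.9
```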

For reference, Meta has a 4-star rating on Glassdoor and has made Glassdoor’s Best Places to Work list for 12 years straight: 2022 (#47), 2021 (#11), 2020 (#23), 2019 (#7), 2018 (#1), 2017 (#2), 2016 (#5), 2015 (#13), 2014 (#5), 2013 (#1), 2012 (#3), 2011 (#1).

Not diving into it super thoroughly, but when I google “what's the average glassdoor rating”, the first three results I see are 3.5, 3.3, and 3.3. So I think this counts as being above average on Glassdoor.

As for the CEO reviews, it seems Glassdoor added that feature after Emerson was CEO; there’s only one CEO approval rating before 2017. If you read the qualitative reviews and look at the overall rating of the org, however, you’ll find it’s above average.

Just for the record, I currently believe this statement to be true, though not very confidently. It matches what I heard about Dose from a bunch of different sources:

All of these super positive reviews are being commissioned by upper management. That is the first thing you should know about Spartz, and I think that gives a pretty good idea of the company's priorities.

I don't know the exact date that Emerson left, but there are really a lot of negative reviews right at the beginning of 2017, none of them mentioning a major restructuring. I think the highly negative Glassdoor reviews are still quite a major warning flag, even if a lot of them happened after Emerson left (though it does also matter that they were made after he left).

[comment deleted]

I'm a smart undergrad in comp sci with published research in machine learning. I have a few related ideas for making LLMs existentially safer that I think are worth a shot and that I'd like to devote some time over the next year to pursuing. Are these grants for people like me? I just want to do personal research on this and publish the results, I don't expect to create a new org or anything like that.

Yes, some funders are more interested in funding individuals and some are more interested in existing organizations.

If you apply here I'd also recommend applying to the Long Term Future Fund because they're always looking for more good applications.

Hello there,

Are you interested in funding this theory of mine that I submitted to the AI Alignment Awards? I was able to make this work in GPT-2 and am now writing up the results. I made GPT-2 shut itself down (100% of the time) even when it's aware of the shutdown instruction, called "the Gauntlet," embedded through fine-tuning on an artificially generated archetype called "the Guardian," essentially solving corrigibility and outer and inner alignment.

https://twitter.com/whitehatStoic/status/1646429585133776898?t=WymUs_YmEH8h_HC1yqc_jw&s=19

Let me know if you guys are interested. I want to test it in higher-parameter models like Llama and Alpaca, but I don't have the means to finance the equipment.

I also found out that there is a weird temperature setting for GPT-2: in the range of .498 to .50, my shutdown code works really well, though I still don't know why. But yeah, I believe there is an incentive to review what's happening inside the transformer architecture.

Here was my original proposal: https://www.whitehatstoic.com/p/research-proposal-leveraging-jungian

I'll post my paper on the corrigibility solution once it's finished, probably next week, but if you wish to contact me, just reply here or email me at migueldeguzmandev@gmail.com.

If you want to see my meeting schedule, you can find it here: https://calendly.com/migueldeguzmandev/60min

Looking forward to hearing from you.

Best regards,

Miguel

Update: I already sent an application; I didn't see that on my first read. Thank you.

Is this going to run again?