Marcus also nudged his friend Ege Erdil to start Mechanize, and offered them their first investment.
I'm not sure why this is under the "AI safety regranting record" section, since Mechanize focuses on capabilities research and has been skeptical of safety efforts. Take, for example, this section from an early post:
Full automation is desirable
Even if you accept the inevitability of full automation, you might still think that we should delay this outcome in order to keep human labor relevant as long as possible. This sentiment is understandable but ultimately misguided. The upside of automating all jobs in the economy will likely far exceed the costs, making it desirable to accelerate, rather than delay, the inevitable.
I wouldn't say I "nudged" him. He was already doing it. I invested because I thought it was a good investment (it has been). They had no problem raising money, and my investment replaced part of another investor's cheque.
I wouldn't have included this, especially since it's a private investment, but Austin really wanted to.
I have donated a lot of money recently to animal welfare (~$450k in the last 5 months). I would have donated less if I had not had this investment.
Mechanize sells environments to AI labs (this is where all of its revenue comes from), so if you think investing in the labs is okay, investing in Mechanize should be too.
I included this story as a short anecdote about Marcus's ability to spot talent, make active investments, and convince founders to take the leap, all of which I expect to transfer into helping start great AI x Animal orgs. I understand that different people in EA/AI safety have different takes about whether Mechanize specifically is good or bad -- I happen to think good or at least neutral.
(And I take responsibility for any factual errors with this specific anecdote. Talking to Marcus just now, it seems like his main nudge was to convince Ege/Matthew/Tamay that the nonprofit structure was wrong for what they wanted to accomplish.)
Some things I believe:
There are a lot of people on LessWrong who have a better picture of what might be relevant to ASI, and I'd like to see comments from them on what sort of direction they'd want to see for Falcon Fund or for orgs in the space.
This is something I've written about before (e.g. Which types of AI alignment research are most likely to be good for all sentient beings?) but there are LessWrong regulars who could provide much better insight than I can.
Manifund is launching a new animal welfare fund, led by regrantor Marcus Abramovitch. We make rapid (<1 week), early-stage ($25k–$150k) grants across animal welfare, with a particular interest in the intersection of animals and transformative AI.
Reach out to marcus.s.abramovitch@gmail.com if you’d like to donate!
Why AI x animals?
Many EAs take seriously both the welfare of animals and the possibility of short AI timelines. But EA funders currently consider these in isolation: AI safety grants mostly ignore potential outcomes for non-human beings, and animal welfare grants assume business as usual, i.e. that our world in 10 years mostly looks like the world today.
We don’t expect this to be the case. One major goal of the fund will be to identify and create opportunities so that transformative AI secures good outcomes for animals. Some example projects we’d like to fund:
(We also expect to place some bets on non-AI opportunities that are unusually strong.)
Why rapid?
One of the top complaints among grantees is the glacial pace of funding decisions. To a founder deciding whether to leave their job or make their first hire, a quick response can be make-or-break. In other domains, Tyler Cowen’s Fast Grants and Jueyan Zhang’s AISTOF show that multi-month reviews don’t have to be the default. In the for-profit world, VCs similarly make decisions incredibly quickly.
By having one directly responsible individual for this fund, we avoid the overhead of typical grantmaking. As a Manifund regrantor on AI safety, Marcus has turned around funding decisions in <1 week; Manifund can then wire funds within 3 days. We’re bringing this speed to the animal welfare space to serve early-stage orgs.
Why Marcus?
This fund represents a bet on Marcus’s taste and execution. He’s already funded many successful early-stage projects, and is fluent in both animal welfare and AI/AI safety issues.
Marcus has been a hardcore earn-to-give EA. He's personally donated ~$1.5m, representing >60% of his lifetime earnings, primarily to animal welfare. He earned this money through poker, cryptocurrency/quant trading, prediction markets, and advising a family office. (He was, until he quit, the #1 trader on Manifold by all-time profit.)
Animal track record. Marcus has been an early backer of many projects that are now considered standout animal welfare charities, including:
AI safety regranting record. This highlights Marcus’s eye for talent and understanding of frontier AI development. From a $100k Manifund regranting budget in 2023, Marcus funded:
Marcus also nudged his friend Ege Erdil to start Mechanize, and offered them their first investment.
Compared to other funders
We're fans of the EA Animal Welfare Fund, the Navigation Fund, CG Farmed Animal Welfare and others in this space. We’re starting this fund as an alternative, for several reasons:
First, AI x animals. Other funders don’t currently prioritize interventions aimed at a transformative-AI world. We’re much more AI-pilled and expect there’s a lot of low-hanging fruit for this reason. The AI x Animals RFP and SFF's 2026 round seem good, but neither is currently fundraising.
Second, speed of deployment. Given our timelines for transformative AI, we think funds need to be deployed much faster. Especially when piloting new projects and starting new orgs, we need to move as fast as the AI landscape is moving in order to support effective interventions.
Third, transparency. As with other grants on Manifund, every grant and its rationale will be published on our site, in real time. Donors and grantees will be able to evaluate our decisions for themselves. We think this is a public good for the ecosystem: it builds trust, shares information, and gives potential donors much better insight into what we are doing.
Fourth, active grantmaking. Marcus plans on reaching out to promising individuals rather than primarily taking inbound applications. He has a wide network to draw upon, across the animal welfare, AI, and AI safety ecosystems.
How to donate
Reach out to marcus.s.abramovitch@gmail.com if you’d like to donate, or book a call here.
We’re targeting an initial $2m raise by May 15. Marcus is taking no salary; Manifund runs ops and fiscal sponsorship with a 5% overhead.
Manifund is a 501(c)(3) registered charity (officially “Manifold for Charity Inc.”, EIN 88-3668801); we can accept donations through DAFs, direct wire/bank transfer, crypto, and credit card.