jaan

Comments, sorted by newest
Anthropic's leading researchers acted as moderate accelerationists
jaan · 9d

1. i agree. as wei explicitly mentions, signalling approval was a big reason why he did not invest, and it definitely gave me pause, too (i had a call with nate & eliezer on this topic around that time). still, if i try to imagine a world where i declined to invest, i don't see it being obviously better (ofc it's possible that the difference is yet to reveal itself).

concerns about startups being net negative are extremely rare (outside of AI, i can't remember any other case -- though it's possible that i'm forgetting some). i believe this is the main reason why VCs and SV technologists tend to be AI xrisk deniers (another being that it's harder to fundraise as a VC/technologist if you have sign uncertainty) -- their prior is too strong to consider AI an exception. a couple of years ago i was at an event in SF where top tech CEOs talked about wanting to create "lots of externalities", implying that externalities can only be positive.

2. yeah, the priorities page is now more than a year old and in bad need of an update. thanks for the criticism -- fwded to the people drafting the update.

Anthropic's leading researchers acted as moderate accelerationists
jaan · 9d

ah, sorry about mis-framing your comment! i tend to use the term "FDT" casually to refer to "instead of individual acts, try to think about policies and how they would apply to agents in my reference class(es)" (which i think does apply here, as i consider us to share a plausible reference class). a toy illustration of the act-vs-policy distinction is below.
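
(toy numbers of my own, not from the original discussion: when several agents run the same decision procedure, pricing in that correlation can flip the sign of a decision that looks positive as an individual act.)

```python
# toy illustration (made-up numbers): suppose each investment gives its
# investor a private benefit of 5 but imposes a diffuse cost of 1 on
# every agent. with 10 investors running the same decision procedure,
# the "act" view counts only my own investment's effects, while the
# "policy" view prices in all 10 correlated investments.

n = 10               # hypothetical size of the reference class
benefit = 5.0        # hypothetical private upside of investing
diffuse_cost = 1.0   # hypothetical cost each investment imposes on each agent

# act view: my investment's effect on me only
act_ev = benefit - diffuse_cost              # 5 - 1  = +4  -> invest

# policy view: if i invest, so do the n-1 agents like me,
# and i bear the diffuse cost of all n investments
policy_ev = benefit - n * diffuse_cost       # 5 - 10 = -5  -> abstain

print(f"act-level EV: {act_ev:+}, policy-level EV: {policy_ev:+}")
```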

Anthropic's leading researchers acted as moderate accelerationists
jaan · 11d

> There is a question about whether the safety efforts your money supported at or around the companies ended up compensating for the developments

yes. more generally, sign uncertainty sucks (and is a recurring discussion topic in SFF round debates).

> It seems that if Dustin and you had not funded Series A of Anthropic, they would have had a harder time starting up.

they certainly would not have had a harder time setting up the company, nor getting an equivalent level of funding (perhaps even at a better valuation). it's plausible that pointing to "aligned" investors helped with initial recruiting -- but that's unclear to me. my model of dario and the other founders is that they just did not want the VC profit-motive to play a big part in the initial strategy.

> Does this have to do with liquidity issues or something else?

yup: liquidity (also see the comments below), crypto prices, and about half of my philanthropy not being listed on that page. also, the SFF s-process works with aggregated marginal value functions, so there is no hard cutoff (hence the “evaluators could not make grants that they wanted to” sentence makes less sense here than in the traditional “chunky and discretionary” philanthropic context). a toy sketch of what that means is below.
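
(for the curious -- this is not the actual s-process code, which is considerably more involved, and the org names and numbers are made up. the point it illustrates: money flows chunk by chunk to whichever org the aggregated marginal value curve currently values most, so allocations taper off smoothly rather than being accepted or rejected at a threshold.)

```python
# toy allocation from aggregated marginal value functions: greedily
# grant fixed-size chunks to whichever org the evaluators' aggregated
# curve assigns the highest value for its *next* dollar.

import heapq

def allocate(budget, marginal_value, orgs, step=10_000):
    granted = {org: 0 for org in orgs}
    # max-heap (via negation) keyed on the marginal value of the next chunk
    heap = [(-marginal_value(org, 0), org) for org in orgs]
    heapq.heapify(heap)
    while budget >= step:
        neg_mv, org = heapq.heappop(heap)
        if -neg_mv <= 0:          # no org values the next dollar
            break
        granted[org] += step
        budget -= step
        heapq.heappush(heap, (-marginal_value(org, granted[org]), org))
    return granted

# hypothetical aggregated curve: a base value with diminishing returns
def marginal_value(org, amount):
    base = {"org_a": 3.0, "org_b": 2.0}[org]
    return base / (1 + amount / 100_000)

print(allocate(1_000_000, marginal_value, ["org_a", "org_b"]))
```

note how neither org gets "rejected": the lower-valued one simply ends up with a smaller allocation once the curves cross.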

Anthropic's leading researchers acted as moderate accelerationists
jaan · 11d

indeed, illiquidity is a big constraint on my philanthropy, so in very short timelines my “invest (in startups) and redistribute” policy does not work too well.

Anthropic's leading researchers acted as moderate accelerationists
jaan · 12d

> These investors were Dustin Moskovitz, Jaan Tallinn and Sam Bankman-Fried

nitpick: SBF/FTX did not participate in the initial round - they bought $500M worth of non-voting shares later, after the company was well on its way.

more importantly, i often get the criticism that "if you're so concerned about AI then why do you invest in it". even though the critics usually (and incorrectly) imply that AI would not happen (at least not nearly as fast) if i did not invest, i acknowledge that this is a fair criticism from the FDT perspective (as witnessed by wei dai's recent comment about how he declined the opportunity to invest in anthropic).

i'm open to improving my policy (which is - empirically - also correlated with the respective policies of dustin as well as FLI) of - roughly - "invest in AI and spend the proceeds on AI safety" -- but the improvements need to take into account that a) prominent AI founders have no trouble raising funds (in most of the alternative worlds, anthropic is VC-funded from the start, like several other openAI offshoots), b) the volume of my philanthropy is correlated with my net worth, and c) my philanthropy is more needed in the worlds where AI progresses faster.

EDIT: i appreciate the post otherwise -- upvoted!

Anthropic's leading researchers acted as moderate accelerationists
jaan · 12d

> DeepMind was funded by Jaan Tallinn and Peter Thiel

i did not participate in DM's first round (series A) -- my investment fund invested in series B and series C, and ended up with about a 1% stake in the company. this sentence is therefore moderately misleading.

The Best Resources To Build Any Intuition
jaan · 20d

the video that made FFT finally click for me: [embedded video]
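
(and for those who prefer code to video, a minimal cooley-tukey radix-2 sketch of the idea -- illustrative only, not taken from the video: split the signal into even- and odd-indexed halves, transform each recursively, then combine with twiddle factors.)

```python
# minimal recursive radix-2 FFT (cooley-tukey), for intuition only

import cmath

def fft(x):
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # FFT of even-indexed samples
    odd = fft(x[1::2])           # FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # twiddle factor e^{-2*pi*i*k/n} rotates the odd half into place
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# sanity check: the FFT of an impulse is flat (all ones)
print(fft([1, 0, 0, 0]))
```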

Love stays loved (formerly "Skin")
jaan · 2mo

this was good.

Our Reality: A Simulation Run by a Paperclip Maximizer
jaan · 5mo

my most fun talk made a similar claim: [embedded video]

Jaan Tallinn's 2024 Philanthropy Overview
jaan · 5mo

no plan; my timelines are quite uncertain (and even if i knew for sure that money would stop mattering in 2 years, it's not at all obvious what to spend it on).

Posts

Jaan Tallinn's 2024 Philanthropy Overview (5mo)
Jaan Tallinn's 2023 Philanthropy Overview (1y)
Jaan Tallinn's 2022 Philanthropy Overview (2y)
Jaan Tallinn's 2021 Philanthropy Overview (3y)
Soares, Tallinn, and Yudkowsky discuss AGI cognition (4y)
Jaan Tallinn's 2020 Philanthropy Overview (4y)
Jaan Tallinn's Philanthropic Pledge (6y)