peterbarnett

Researcher at MIRI

https://peterbarnett.org/

Sequences

My AI Risk Model

Posts (sorted by new)

3 · peterbarnett's Shortform (4y, 90 comments)

Wikitag Contributions

No wikitag contributions to display.

Comments (sorted by newest)
Contra Collier on IABIED
peterbarnett · 1mo

I think that the proposal in the book would "tank the global economy", as defined by a >10% drop in the S&P 500 and similar index funds, and I think this is a kinda reasonable definition. But I also think that other proposals for us not all dying probably have similar (though probably less severe) impacts, because they also involve stopping or slowing AI progress (e.g. Redwood's proposed "get to 30x AI R&D and then stop capabilities progress until we solve alignment" plan[1]).

  1. ^ I think this is an accurate short description of the plan, but it might have changed since I last heard.

Eric Neyman's Shortform
peterbarnett · 1mo

I think it’s useful to think about the causation here.

Is it:

Intervention -> Obvious bad effect -> Good effect

For example: Terrible economic policies -> Economy crashes -> AI capability progress slows

Or is it:

Obvious bad effect <- Intervention -> Good effect

For example: Patient survivably poisoned <- Chemotherapy -> Cancer gets poisoned to death

boazbarak's Shortform
peterbarnett · 1mo

The Arbital link (Yudkowsky, E. – "AGI Take-off Speeds" (Arbital 2016)) in there is dead. I briefly looked at the LW wiki to try to find the page but didn't see it. @Ruby?

peterbarnett's Shortform
peterbarnett · 2mo

I first saw it in this Aug 10 WSJ article: https://archive.ph/84l4H
I think it might have been less-public knowledge for like a year.

peterbarnett's Shortform
peterbarnett · 2mo

Carl Shulman is working for Leopold Aschenbrenner's "Situational Awareness" hedge fund as the Director of Research. https://whalewisdom.com/filer/situational-awareness-lp 

peterbarnett's Shortform
peterbarnett · 2mo

For people who like Yudkowsky's fiction, I recommend reading his story Kindness to Kin. I think it's my favorite of his stories. It's both genuinely moving and an interesting thought experiment about evolutionary selection pressures and kindness. See also this related tweet thread.

tlevin's Shortform
peterbarnett · 3mo

"6-pair pack of good and super-affordable socks $4 off (I personally endorse this in particular; see my previous enthusiasm for bulk sock-buying in general and these in particular here)"

I purchased these socks and approve.

benwr's unpolished thoughts
peterbarnett · 3mo

Eryngrq: uggcf://fvqrjnlf-ivrj.pbz/2018/06/07/zrffntrf-gb-gur-shgher/

A case for courage, when speaking of AI danger
peterbarnett · 3mo

Maybe it’s hard to communicate nuance, but it seems like there's a crazy thing going on where many people in the AI x-risk community think something like “Well obviously I wish it would stop, and the current situation does seem crazy and unacceptable by any normal standards of risk management. But there’s a lot of nuance in what I actually think we should do, and I don’t want to advocate for a harmful stop.”

And these people end up communicating to external people something like "Stopping is a naive strategy, and continuing (maybe with some safeguards etc.) is my preferred strategy for now."

This seems to leave out the really important part where they would actually want to stop if we could, but it seems hard and difficult/nuanced to get right.

Curing PMDD with Hair Loss Pills
peterbarnett · 4mo

Is there a side-effect of unwanted hair growth? 

37 · AI Generated Podcast of the 2021 MIRI Conversations (1mo, 0 comments)
105 · AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions (6mo, 7 comments)
161 · Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI [Ω] (2y, 60 comments)
23 · Trying to align humans with inclusive genetic fitness (2y, 5 comments)
215 · Labs should be explicit about why they are building AGI (2y, 18 comments)
174 · Thomas Kwa's MIRI research experience (2y, 53 comments)
14 · Doing oversight from the very start of training seems hard [Ω] (3y, 3 comments)
22 · Confusions in My Model of AI Risk (3y, 9 comments)
117 · Scott Aaronson is joining OpenAI to work on AI safety (3y, 31 comments)
24 · A Story of AI Risk: InstructGPT-N (3y, 0 comments)