peterbarnett

Researcher at MIRI

https://peterbarnett.org/

Sequences: My AI Risk Model

peterbarnett's Shortform (3 karma, 4y, 90 comments)

No wikitag contributions to display.

Comments (sorted by newest)

AISLE discovered three new OpenSSL vulnerabilities
peterbarnett · 10h

I would love for someone to tell me how big a deal these vulnerabilities are, and how hard people had previously been trying to catch them. The blog post says that two were severity "Moderate", and one was "Low", but I don't really know how to interpret this. 

Introducing the Epoch Capabilities Index (ECI)
peterbarnett · 1d

I would guess this is mainly because there is much more limited FLOP data for the closed models (especially recent ones), and because the closed models include far fewer small-training-FLOP models (e.g. <1e25 FLOP).
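
A rough sketch of the kind of coverage gap this would imply, assuming a hypothetical table of models with `is_open` and `training_flop` columns; the data and column names are made up for illustration, not Epoch's actual dataset:

```python
import pandas as pd

# Illustrative only: public training-FLOP estimates for closed models are often missing.
models = pd.DataFrame({
    "model": ["open-small", "open-large", "closed-small", "closed-frontier"],
    "is_open": [True, True, False, False],
    "training_flop": [3e24, 2e25, None, 5e25],  # None = no public estimate
})

# Fraction of models in each group with any FLOP estimate at all.
flop_coverage = models.groupby("is_open")["training_flop"].apply(lambda s: s.notna().mean())
print(flop_coverage)

# Count of "small" models (< 1e25 training FLOP) in each group.
small_counts = models[models["training_flop"] < 1e25].groupby("is_open").size()
print(small_counts)
```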

Contra Collier on IABIED
peterbarnett · 1mo

I think that the proposal in the book would "tank the global economy", as defined by a >10% drop in the S&P 500 and similar index funds, and I think this is a kinda reasonable definition. But I also think that other proposals for us not all dying would likely have similar (though probably less severe) impacts, because they also involve stopping or slowing AI progress (e.g. Redwood's proposed "get to 30x AI R&D and then stop capabilities progress until we solve alignment" plan[1]).

  1. ^ I think this was an accurate short description of the plan last I heard, but it might have changed since.

Eric Neyman's Shortform
peterbarnett · 2mo

I think it’s useful to think about the causation here.

Is it:

Intervention -> Obvious bad effect -> Good effect

For example: Terrible economic policies -> Economy crashes -> AI capability progress slows

Or is it:

Obvious bad effect <- Intervention -> Good effect

For example: Patient survivably poisoned <- Chemotherapy -> Cancer gets poisoned to death
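
A toy sketch of the two structures as directed graphs, in Python; the node labels are just the examples above, and the `downstream` helper is purely illustrative:

```python
# Two ways an intervention can relate to an obvious bad effect and a good effect,
# written as directed edges (cause, effect).

# Chain: the good effect flows *through* the bad effect.
chain = [
    ("Terrible economic policies", "Economy crashes"),
    ("Economy crashes", "AI capability progress slows"),
]

# Fork (common cause): the intervention produces both effects separately.
fork = [
    ("Chemotherapy", "Patient survivably poisoned"),
    ("Chemotherapy", "Cancer gets poisoned to death"),
]

def downstream(edges, node):
    """Everything reachable from `node` by following edges forward."""
    reached, frontier = set(), [node]
    while frontier:
        current = frontier.pop()
        for cause, effect in edges:
            if cause == current and effect not in reached:
                reached.add(effect)
                frontier.append(effect)
    return reached

# In the chain, the good effect is downstream of the bad effect...
print(downstream(chain, "Economy crashes"))             # {'AI capability progress slows'}
# ...in the fork it is not: blocking the bad effect would not block the good one.
print(downstream(fork, "Patient survivably poisoned"))  # set()
```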

boazbarak's Shortform
peterbarnett · 2mo

The Arbital link (Yudkowsky, E. – "AGI Take-off Speeds" (Arbital 2016)) in there is dead. I briefly looked at the LW wiki to try to find the page but didn't see it. @Ruby?

peterbarnett's Shortform
peterbarnett · 2mo

I first saw it in this Aug 10 WSJ article: https://archive.ph/84l4H
I think it may have been less-publicly known for about a year.

peterbarnett's Shortform
peterbarnett · 2mo

Carl Shulman is working for Leopold Aschenbrenner's "Situational Awareness" hedge fund as the Director of Research. https://whalewisdom.com/filer/situational-awareness-lp 

peterbarnett's Shortform
peterbarnett · 2mo

For people who like Yudkowsky's fiction, I recommend reading his story Kindness to Kin. I think it's my favorite of his stories. It's both genuinely moving and an interesting thought experiment about evolutionary selection pressures and kindness. See also this related tweet thread.

tlevin's Shortform
peterbarnett · 3mo

> 6-pair pack of good and super-affordable socks $4 off (I personally endorse this in particular; see my previous enthusiasm for bulk sock-buying in general and these in particular here)

I purchased these socks and approve.

benwr's unpolished thoughts
peterbarnett · 3mo

Eryngrq: uggcf://fvqrjnlf-ivrj.pbz/2018/06/07/zrffntrf-gb-gur-shgher/

Posts (sorted by new)

AI Generated Podcast of the 2021 MIRI Conversations (37 karma, 2mo, 0 comments)
AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions (105 karma, 6mo, 7 comments)
Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI (161 karma, 2y, 60 comments)
Trying to align humans with inclusive genetic fitness (23 karma, 2y, 5 comments)
Labs should be explicit about why they are building AGI (215 karma, 2y, 18 comments)
Thomas Kwa's MIRI research experience (174 karma, 2y, 53 comments)
Doing oversight from the very start of training seems hard (14 karma, 3y, 3 comments)
Confusions in My Model of AI Risk (22 karma, 3y, 9 comments)
Scott Aaronson is joining OpenAI to work on AI safety (117 karma, 3y, 31 comments)
A Story of AI Risk: InstructGPT-N (24 karma, 3y, 0 comments)