
Nicholas Kross

Theoretical AI alignment (and relevant upskilling) in my free time.

Posts (sorted by new)

NicholasKross's Shortform (5 karma · 2y · 19 comments)

Wikitag Contributions

No wikitag contributions to display.

Comments (sorted by newest)

ZY's Shortform
Nicholas Kross · 1mo

https://www.lesswrong.com/posts/tpZciMYCXN49FYWnS/nicholaskross-s-shortform?commentId=f4PxFp8LkKKxdCXxh

Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies
Nicholas Kross · 1mo

From the MIRI announcement:

Our big ask for you is: If you have any way to help this book do shockingly, absurdly well — in ways that prompt a serious and sober response from the world — then now is the time.

sober response from the world

sober response

Uh... this is debatably a lot to ask of the world right now.

Rationalist Movie Reviews
Nicholas Kross · 5mo

I said "one of the best movies about", not "one of the best movies showing you how to".

NicholasKross's Shortform
Nicholas Kross · 7mo

The punchline is "alignment could productively use more funding". Many of us already know that, but I felt like putting a mildly-opinionated spin on what kinds of things, at the margin, might help top researchers. (Also, I spent several minutes editing/hedging the joke.)

NicholasKross's Shortform
Nicholas Kross · 7mo

Virgin 2030s [sic] MIRI fellow:
- is cared for so they can focus on research
- has staff to do their laundry
- soyboys who don't know *real* struggle
- 3 LDT-level alignment breakthroughs per week

CHAD 2010s Yudkowsky:
- founded a whole movement to support himself
- "IN A CAVE, WITH A BOX OF SCRAPS"
- walked uphill both ways to Lightcone offices
- alpha who knows *real* struggle
- 1 LDT-level alignment breakthrough per decade

An AI crash is our best bet for restricting AI
Nicholas Kross · 8mo

EDIT: Due to the incoming administration's ties to tech investors, I no longer think an AI crash is so likely. Several signs IMHO point to "they're gonna go all-in on racing for AI, regardless of how 'needed' it actually is".

More Posts

Rationalist Movie Reviews (16 karma · 5mo · 2 comments)
Is principled mass-outreach possible, for AGI X-risk? (9 karma · 1y · 5 comments)
How to Get Rationalist Feedback (16 karma · 2y · 0 comments)
Musk, Starlink, and Crimea (-13 karma · 2y · 0 comments)
Incentives affecting alignment-researcher encouragement [Question] (28 karma · 2y · 3 comments)
Build knowledge base first, or backchain? [Question] (11 karma · 2y · 5 comments)
Rationality, Pedagogy, and "Vibes": Quick Thoughts (14 karma · 2y · 1 comment)
How to Search Multiple Websites Quickly (16 karma · 2y · 1 comment)
Why I'm Not (Yet) A Full-Time Technical Alignment Researcher (41 karma · 2y · 21 comments)
[SEE NEW EDITS] No, *You* Need to Write Clearer (263 karma · 2y · 65 comments)