remember

Comments
Full Transcript: Eliezer Yudkowsky on the Bankless podcast
remember · 2y · 10

Thank you so much for doing this! Andrea and I both missed this when you first posted it, and I'm really sorry I missed your response then. But I've updated it now!

Podcast Transcript: Daniela and Dario Amodei on Anthropic
remember · 2y · 30

Yes, good call! Added it.

Full Transcript: Eliezer Yudkowsky on the Bankless podcast
remember · 3y · 40

Thanks, fixed!

Bankless Podcast: 159 - We’re All Gonna Die with Eliezer Yudkowsky
remember · 3y · 110

I just posted a full transcript on LW here!

Bankless Podcast: 159 - We’re All Gonna Die with Eliezer Yudkowsky
remember · 3y · 110

Since there was no full transcript of the podcast, I just made one. You can find it here.

Don't accelerate problems you're trying to solve
remember · 3y · 65

> I think that Anthropic's work also accelerates AI arrival, but it is much easier for it to come out ahead on a cost-benefit: they have significantly smaller effects on acceleration, and a more credible case that they will be safer than alternative AI developers. I have significant unease about this kind of plan, partly for the kinds of reasons you list and also a broader set of moral intuitions. As a result it's not something I would do personally.

From the outside perspective of someone quite new to the AI safety field and with no contact with the Bay Area scene, the reasoning behind this plan is completely illegible to me. All that is visible from the outside is that they're working on ChatGPT-like systems and capabilities, along with some empirical work on evaluations and interpretability. The only system more powerful than ChatGPT I've seen so far is the unnamed one behind Bing, and I've personally heard rumours that both Anthropic and OpenAI are already working on systems beyond the ChatGPT/GPT-3.5 level.

Elicit: Language Models as Research Assistants
remember · 3y · 20

> We'd love to get feedback on how to make Elicit more useful for LW and to get thoughts on our plans more generally.

A lot of alignment work is on LessWrong and the Alignment Forum, and as far as I can tell Elicit doesn't support those. I could be missing something, but if they aren't supported, it would be great to have them in Elicit! I use Elicit from time to time when I'm doing background research, and it definitely feels far more useful for general ML/capabilities material than for alignment (to the point that I mostly stopped trying it for alignment after a few searches turned up nothing).

Posts

20 · The Gabian History of Mathematics · 3d · 9 comments
46 · Podcast Transcript: Daniela and Dario Amodei on Anthropic · 2y · 2 comments
24 · [Simulators seminar sequence] #2 Semiotic physics - revamped (Ω) · 3y · 23 comments
138 · Full Transcript: Eliezer Yudkowsky on the Bankless podcast (Ω) · 3y · 89 comments
33 · Human decision processes are not well factored · 3y · 3 comments
100 · Don't accelerate problems you're trying to solve (Ω) · 3y · 27 comments
39 · FLI Podcast: Connor Leahy on AI Progress, Chimps, Memes, and Markets (Part 1/3) (Ω) · 3y · 0 comments
83 · Book Review: Worlds of Flow · 3y · 3 comments
50 · [Simulators seminar sequence] #1 Background & shared assumptions (Ω) · 3y · 4 comments
34 · Mental acceptance and reflection · 3y · 1 comment