Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.

Longer bio: www.lesswrong.com/posts/aG74jJkiPccqdkK3c/the-lesswrong-team-page-under-construction#Ben_Pace___Benito

Sequences

AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs


Comments

Yudkowsky and Christiano discuss "Takeoff Speeds"

FWIW "yeah this confirms what we already thought" makes no sense to me. I heard someone say this the other day, and I was a bit floored. Who knew that Eliezer would respond with a long list of examples that didn't look like continuous progress at the time, and said this more than 3 days ago? 

I feel like I got a much better sense of Eliezer's perspective from reading this. One key element is whether AI progress is surprising, which it often is: even though you can construct trend-line arguments after the fact, people basically don't, and when they do they often get it wrong. (Here's an example of Dario Amodei + Danny Hernandez finding a trend in AI that apparently stopped holding as soon as they reported it.) There are also lots of details about what the chimps-to-humans transition shows, and various other points (like regulation preventing most AI progress from showing up in GDP).

I do think I could've gotten a lot of this understanding earlier by reading IEM more carefully, and now that I'm rereading it I get it much better. But as far as I can see, nobody engaged with the arguments in it and tried to connect them to Paul's post. Perhaps someone did, and I'd be pretty interested to read that now with the benefit of hindsight.

Christiano, Cotra, and Yudkowsky on AI progress

Wow, thanks for pulling that up. I've gotta say, having records of people's predictions is pretty sweet. Similarly, solid find on the Bostrom quote.

Do you think that might be the 20% number that Eliezer is remembering? Eliezer, I'm interested in whether you have a recollection of this or not. [Added: It seems from a comment upthread that EY was talking about superforecasters in Feb 2016, which is after Fan Hui.]

Christiano, Cotra, and Yudkowsky on AI progress

Adding my recollection of that period: some people made the relevant updates when DeepMind's system beat the European champion Fan Hui (in October 2015). My hazy recollection is that beating Fan Hui started some people going "Oh huh, I think this is going to happen," and then when AlphaGo beat Lee Sedol (in March 2016) everyone said "Now it is happening."

Yudkowsky and Christiano discuss "Takeoff Speeds"

FWIW I don’t like the phrasing of my comment very much either. I came back intending to remove it but saw you’d already replied :P

Yudkowsky and Christiano discuss "Takeoff Speeds"

Why would it move toward Paul? He made almost no arguments, and Eliezer made lots. When Paul entered the chat, the focus was on describing what each of them believes in order to find a bet, not on communicating why they believe it.

Base Rates and Reference Classes

I'll clarify two things; let me know if your problem is not addressed.

For automatic crossposting, the posts link back to the original blog (not the individual post), in the place shown here:

Note that this does not appear on mobile, because space is very limited and we haven't figured out how to fit it into the UI.

If a person makes a linkpost by adding a link to the small field at the top of the editor, then you get a link to a specific post. That looks like this:

This process is not automatic; linkposts can only be made manually.

AI Safety Needs Great Engineers

Can someone briefly describe what empirical AI safety work Cohere is doing? I hadn't heard of them until this post.

Discussion with Eliezer Yudkowsky on AGI interventions

Thank you for this follow-up comment Adam, I appreciate it.

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

This has been quite confusing even to me from the outside.
