Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality. (Longer bio.)

Sequences

AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs


Comments

I think this means "I assume if OpenAI staff expected that users would write insulting things in the comments, then they may not have participated at all".

This feels like it wants to be a poll to me.

My first idea is to just have a poll like the other two we've had recently, where there are 9 entries and you agree/disagree with whether each statement is a lie.

I'm interested in any other suggestions for how to set up a poll.

Mod feedback: This post would majorly benefit from a tl;dr; it took me a long time to figure out that its content is a proposed bill for how tech companies should handle the accounts of deceased users. Something about the writing style also seems a bit overly elaborate; perhaps that's what the language model thinks essayists sound like.

I am a bit confused how to relate to covertly breaking social norms.

In general I think you can't always tell whether a norm is dumb just by looking at the moral character of the people breaking it. I think sometimes silly norms are only violated by reckless and impulsive people with little ability to self-regulate and little care for ethics, and in some cases breaking the norm isn't worth the cost to general norm-following behavior.

But as I say, still confused about the issue.

(Clarification: I didn't mean to say that this banner succeeded. I meant to say it was a worthwhile thing to attempt.)

I think it's relevant that Vitalik is 29, Bostrom is 50, and Yudkowsky is 44 (plus he has major chronic health issues).

I'd also say that broader society has been much more supportive of Vitalik than it has been of Bostrom and Yudkowsky (billionaire, TIME cover, 5M Twitter followers, etc), putting him in a better place personally to try to do the ~political work of uniting people. He is also far more respected by the folks in the accelerationist camp, making it more worthwhile for him to invest in an intellectual account that includes their dreams of the future (which he largely shares).

It is a good thing to actually try to find a banner to unite all the peoples...

That was fun to watch. But I would appreciate someone spelling out the implied connection to mechanistic interpretability.

I would encourage you to include ~any of your content in the body of the LW post, as I expect ~most people do not click through to read links, especially with very little idea of what's in the link.

Curated. This is an interesting, thoughtful, and very engagingly written discussion of the things people are incentivized to hide, and how to reason about them.

I would have considered this for curation anyway, but I wanted to curate it slightly more given that it's Duncan's last post on LessWrong (which he assigns 75% probability to). I hope he continues to write many excellent essays somewhere on the public internet, and personally I have signed up as a paying subscriber to his Substack to support him doing so there.
