"I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future"

Fwiw, I think this is probably true for very few, if any, of the EAs I've worked with, though that's a biased sample.

I wonder if the thing giving you this vibe might be that they actually think something like: "I'm not that confident that my work is net positive for the LTF, but my best guess is that it's net positive in expectation. If what I'm doing is not positive, there's no cheap way for me to figure that out, so I am confident (though not ~100%) that my work will keep seeming positive-EV to me for the near future." One informal way to describe this is that they are confident that their work is net positive in expectation/ex ante, but not that it will be net positive ex post.

I think this can look a lot like somebody being ~sure that what they're doing is net positive even if in fact they are pretty uncertain.
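To make the ex ante / ex post distinction concrete, here's a toy sketch in Python (all numbers are made up for illustration, not drawn from anyone's actual beliefs):

```python
# Toy example: work that is confidently positive in expectation (ex ante)
# can still be more likely than not to turn out net negative (ex post).
p_good = 0.3              # chance the work actually helps
value_if_good = 10.0      # impact if it helps
value_if_bad = -2.0       # impact if it hurts

expected_value = p_good * value_if_good + (1 - p_good) * value_if_bad
print(expected_value)     # 1.6 -> clearly positive in expectation
print(1 - p_good)         # 0.7 -> yet a 70% chance of being net negative ex post
```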

Fyi - this series of posts caused me to get a blood test for nutritional deficiencies, learn that I'm deficient in vitamin D and folic acid, and take supplements on a bunch of days that I otherwise would not have (though less often than I should, given that I know about the deficiency). Thanks!

I haven't kept up with it, so I can't really vouch for it, but Rohin's alignment newsletter should also be on your radar. https://rohinshah.com/alignment-newsletter/

[This comment is no longer endorsed by its author]

Thanks for this! I found this much more approachable than other writing on this topic, which I've generally had trouble engaging with because it's felt like it's (implicitly or explicitly) claiming that: 1) this mindset is right for ~everyone; and 2) there are ~no tradeoffs (at least in the medium-term) for (almost?) anyone.

Had a few questions:

Your goals and strategies might change, even if your values remain the same.

Have your values in fact remained the same?

For example, as I walked down the self-love path I felt my external obligations start to drop away. 

What is your current relationship to external obligations? Do they feel like they exist for you now (whatever that means)?

While things are clearly better now, I’m still figuring out how to be internally motivated and also get shit done, and for a while I got less shit done than when I was able to coerce myself.

Do you now feel as able to get things done as you did when you were able to coerce yourself? What do you expect will be the medium-to-long run effect on your ability to get things done? How confident do you feel in that?

***

More broadly, I'm curious whether this has felt like an unambiguously positive change by the lights of Charlie from 1-3 years ago (whatever seems like the relevant time period)? In the long run, do you expect it to be a Pareto improvement by past Charlie's lights?

Someone's paraphrase of the article: "I actually think they're worse than before, but being mean is bad so I retract that part"

Weyl's response: "I didn’t call it an apology for this reason."

https://twitter.com/glenweyl/status/1446337463442575361

First of all, I think the books are beautiful. This seems like a great project to me and I'm really glad you all put it together.

I didn't think of this on my own, but now that Ozzie raised it, I do think it's misleading not to mention (or at least suggest) somewhere salient on the cover that these are the best posts from a particular year.[1] This isn't really because anybody cares whether it's from 2018 or 2019. It's because I think most reasonable readers looking at a curated collection of LessWrong posts titled "Epistemology," "Agency," or "Alignment" would assume it was a collection of the best-ever LW[2] posts on that topic as of ~date of publication. That's a higher bar than 'one of the best posts on epistemology on LW in 2018,' and many (most?) readers might prefer it.

Counterargument: maybe all of your customers already know about the project and are sufficiently informed about what this is that putting it on the cover isn't necessary.

Apologies if the ship's already sailed on this and feedback is counterproductive at this point. Overall, I don't think this is a huge deal.

[1] Though not intentionally so.

[2] Maybe people think of LW 2.0 as a sufficient break that they wouldn't be surprised if it was restricted to that.

"As far as I can tell, it does not net profits against losses before calculating these fees."
 

I can confirm this is the case based on the time I lost money on an arbitrage because I assumed the fees were on net profits.
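For concreteness, a toy sketch of how that bites on an arbitrage (the 10% fee rate and all dollar amounts here are made-up assumptions, not the platform's actual schedule):

```python
# Toy example: a fee charged on gross profits (with no netting against
# losses on other legs) can turn a small arbitrage profit into a loss.
FEE_RATE = 0.10            # assumed fee on gross profits

winning_leg = 10.00        # profit on the leg that pays out
losing_leg = -9.50         # loss on the hedging leg

fee = FEE_RATE * winning_leg                # 1.00, charged on the win alone
net_before_fee = winning_leg + losing_leg   # +0.50 "arbitrage" profit
net_after_fee = net_before_fee - fee        # -0.50: a realized loss
print(net_before_fee, fee, net_after_fee)
```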

On the documents:

Unfortunately I read them nearly a year ago so my memory's hazy. But (3) goes over most of the main arguments we talked about in the podcast step by step, though it's just slides so you may have similar complaints about the lack of close analysis of the original texts.

(1) is a pretty detailed write up of Ben's thoughts on discontinuities, sudden emergence, and explosive aftermath. To the extent that you were concerned about those bits in particular, I'd guess you'll find what you're looking for there.

Thanks! Agree that it would've been useful to push on that point some more.

I know Ben was writing up some additional parts of his argument at some point but I don't know whether finishing that up is still something he's working on.
