Howie Lempel


Thanks for this! I found this much more approachable than other writing on this topic, which I've generally had trouble engaging with because it's felt like it's (implicitly or explicitly) claiming that: 1) this mindset is right for ~everyone; and 2) there are ~no tradeoffs (at least in the medium-term) for (almost?) anyone.

Had a few questions:

Your goals and strategies might change, even if your values remain the same.

Have your values in fact remained the same?

For example, as I walked down the self-love path I felt my external obligations start to drop away. 

What is your current relationship to external obligations? Do they feel like they exist for you now (whatever that means)?

While things are clearly better now, I’m still figuring out how to be internally motivated and also get shit done, and for a while I got less shit done than when I was able to coerce myself.

Do you now feel as able to get things done as you did when you were able to coerce yourself? What do you expect will be the medium-to-long run effect on your ability to get things done? How confident do you feel in that?


More broadly, I'm curious whether this has felt like an unambiguously positive change by the lights of Charlie from 1-3 years ago (whatever seems like the relevant time period)? In the long run, do you expect it to be a Pareto improvement by past Charlie's lights?

Someone's paraphrase of the article: "I actually think they're worse than before, but being mean is bad so I retract that part"


Weyl's response: "I didn’t call it an apology for this reason."

First of all, I think the books are beautiful. This seems like a great project to me and I'm really glad you all put it together.

I didn't think of this on my own, but now that Ozzie raised it, I do think it's misleading not to mention (or at least suggest), in a salient way on the cover, that this is a selection of the best posts from a particular year.[1] This isn't really because anybody cares whether it's from 2018 or 2019. It's because I think most reasonable readers looking at a curated collection of LessWrong posts titled "Epistemology," "Agency," or "Alignment" would assume that this was a collection of the best-ever LW[2] posts on that topic as of ~date of publication. That's a higher bar than 'one of the best posts on epistemology on LW in 2018' and many (most?) readers might prefer it.

Counterargument: maybe all of your customers already know about the project and are sufficiently informed about what this is that putting it on the cover isn't necessary.

Apologies if the ship's already sailed on this and feedback is counterproductive at this point. Overall, I don't think this is a huge deal.

[1] Though not intentionally so.

[2] Maybe people think of LW 2.0 as a sufficient break that they wouldn't be surprised if it was restricted to that.

"As far as I can tell, it does not net profits against losses before calculating these fees."

I can confirm this is the case based on the time I lost money on an arbitrage because I assumed the fees were on net profits.
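To illustrate why that distinction bites, here's a minimal sketch of the arithmetic. All of the numbers and the 10% fee rate are made up for illustration; the point is just that a fee charged on gross winnings (rather than on winnings netted against losses) can turn a thin arbitrage into a loss.

```python
# Hypothetical illustration: fee on gross winnings vs. fee on net profit.
# The 10% fee rate and dollar amounts are assumptions, not the platform's
# actual numbers.

FEE_RATE = 0.10

def profit_after_fees(winnings, losses, net_fees):
    """Profit after fees for a pair of offsetting positions.

    net_fees=True  -> fee charged on net profit (winnings - losses)
    net_fees=False -> fee charged on gross winnings only
    """
    gross_profit = winnings - losses
    fee_base = max(gross_profit, 0.0) if net_fees else winnings
    return gross_profit - FEE_RATE * fee_base

# An apparent arbitrage: one leg wins $10.00, the other leg loses $9.50,
# for $0.50 of pre-fee profit.
print(profit_after_fees(10.00, 9.50, net_fees=True))   # fee on $0.50 -> still profitable
print(profit_after_fees(10.00, 9.50, net_fees=False))  # fee on $10.00 -> a net loss
```

With netting, the trader keeps most of the $0.50 edge; without netting, the $1.00 fee on the winning leg swamps it, which matches the experience described above.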

On the documents:

Unfortunately I read them nearly a year ago so my memory's hazy. But (3) goes over most of the main arguments we talked about in the podcast step by step, though it's just slides so you may have similar complaints about the lack of close analysis of the original texts.

(1) is a pretty detailed write up of Ben's thoughts on discontinuities, sudden emergence, and explosive aftermath. To the extent that you were concerned about those bits in particular, I'd guess you'll find what you're looking for there.

Thanks! Agree that it would've been useful to push on that point some more.

I know Ben was writing up some additional parts of his argument at some point but I don't know whether finishing that up is still something he's working on.

The Podcast/Interview format is less well suited for critical text analysis, compared to a formal article or a LessWrong post, for 3 reasons:

Lack of precision. It is a difficult skill to place each qualifier carefully and deliberately when speaking, and at several points I was uncertain if I was parsing Ben's sentences correctly.

Lack of references. The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments that are being criticized.

Expansiveness. There are a lot of arguments presented, and many of them deserve formal answers. Unfortunately, this is a large task, and I hope you will forgive me for replying in the form of a video.

tl;dw: A number of the arguments Ben Garfinkel criticizes are in fact not present in "Superintelligence" and "The AI Foom Debate". (This summary is incomplete.)

Hi Soren,

I agree that podcasts/interviews have some major disadvantages, though they also have several advantages. 

Just wanted to link to Ben's written versions of some (but not all) of these arguments in case you haven't seen them. I don't know whether they address the specific things you're concerned about. We linked to these in the show notes and if we didn't explicitly flag that these existed during the episode, we should have.[1] 

  1. On Classic Arguments for AI Discontinuities
  2. Imagining the Future of AI: Some Incomplete but Overlong Notes
  3. Slide deck: Unpacking Classic AI Risk Arguments
  4. Slide deck: Potential Existential Risks from Artificial Intelligence

(1) and (3) are most relevant to the things we talked about on the podcast. My memory's hazy but I think (2) and (4) also have some relevant sections. 

Unfortunately, I probably won't have time to watch your videos though I'd really like to.[2] If you happen to have any easy-to-write-down thoughts on how I could've made the interview better (including, for example, parts of the interview where I should've pushed back more), I'd find that helpful. 

[1] JTBC, I think we should expect that most listeners are going to absorb whatever's said on the show and not do any additional reading.

[2] ETA: Oh - I just noticed that youtube now has an 'open transcript' feature, which makes it more likely I'll be able to get to this.

Do you still think there's a >80% chance that this was a lab release?

[I'm not an expert.]

My understanding is that SARS-CoV-1 is generally treated as a BSL-3 pathogen or a BSL-2 pathogen (for routine diagnostics and other relatively safe work) and not BSL-4. Before the outbreak, SARS-CoV-2 would have been a random animal coronavirus that hadn't yet infected humans, so I'd be surprised if it had been subject to more stringent requirements.

Your OP currently states: "a lab studying that class of viruses, of which there is currently only one." If I'm right that you're not currently confident this is the case, it might be worth adding some kind of caveat or epistemic status flag or something.


Some evidence:

I used to play Innovation online here - dunno if it still works.

Also looks like you can play here:
