Howie Lempel




The LessWrong 2018 Book is Available for Pre-order

First of all, I think the books are beautiful. This seems like a great project to me and I'm really glad you all put it together.

I didn't think of this on my own but now that Ozzie raised it, I do think it's misleading not to mention (or at least suggest) in a salient way on the cover that this is a selection of the best posts from a particular year.[1] This isn't really because anybody cares whether it's from 2018 or 2019. It's because I think most reasonable readers looking at a curated collection of LessWrong posts titled "Epistemology," "Agency," or "Alignment" would assume that this was a collection of the best-ever LW[2] posts on that topic as of ~date of publication. That's a higher bar than 'one of the best posts on epistemology on LW in 2018' and many (most?) readers might prefer it.

Counterargument: maybe all of your customers already know about the project and are sufficiently informed about what this is that putting it on the cover isn't necessary.

Apologies if the ship's already sailed on this and feedback is counterproductive at this point. Overall, I don't think this is a huge deal.

[1] Though not intentionally so.

[2] Maybe people think of LW 2.0 as a sufficient break that they wouldn't be surprised if it was restricted to that.

Limits of Current US Prediction Markets (PredictIt Case Study)

"As far as I can tell, it does not net profits against losses before calculating these fees."

I can confirm this is the case based on the time I lost money on an arbitrage because I assumed the fees were on net profits.
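To make the distinction concrete, here's a minimal sketch with hypothetical numbers. It assumes a flat 10% fee on profits (PredictIt's stated profit fee) and contrasts the two fee bases: per-winning-position (how the quoted post says it actually works) versus net across positions (what one might mistakenly assume when pricing an arbitrage):

```python
FEE_RATE = 0.10  # PredictIt's 10% fee on profits

def fee_per_position(position_profits):
    """Fee charged on each winning position's gross profit,
    with no offset for losing positions (the actual behavior
    described in the quoted post)."""
    return sum(FEE_RATE * p for p in position_profits if p > 0)

def fee_on_net(position_profits):
    """Fee charged only on net profit across positions
    (the mistaken assumption)."""
    net = sum(position_profits)
    return FEE_RATE * net if net > 0 else 0.0

# Hypothetical "arbitrage": one leg gains $10.00, the other loses $9.50,
# for a gross edge of $0.50.
legs = [10.00, -9.50]

gross = sum(legs)                          # 0.50
print(gross - fee_per_position(legs))      # 0.50 - 1.00 = -0.50 (a loss)
print(gross - fee_on_net(legs))            # 0.50 - 0.05 =  0.45 (apparent profit)
```

Under the per-position fee, a trade that looks profitable on net can actually lose money, which is exactly the trap described above.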

A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments

On the documents:

Unfortunately I read them nearly a year ago so my memory's hazy. But (3) goes over most of the main arguments we talked about in the podcast step by step, though it's just slides so you may have similar complaints about the lack of close analysis of the original texts.

(1) is a pretty detailed write up of Ben's thoughts on discontinuities, sudden emergence, and explosive aftermath. To the extent that you were concerned about those bits in particular, I'd guess you'll find what you're looking for there.

A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments

Thanks! Agree that it would've been useful to push on that point some more.

I know Ben was writing up some additional parts of his argument at some point but I don't know whether finishing that up is still something he's working on.

A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments

The Podcast/Interview format is less well suited for critical text analysis, compared to a formal article or a LessWrong post, for 3 reasons:

Lack of precision. It is a difficult skill to place each qualifier carefully and deliberately when speaking, and at several points I was uncertain if I was parsing Ben's sentences correctly.

Lack of references. The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments being criticized.

Expansiveness. There are a lot of arguments presented, and many of them deserve formal answers. Unfortunately, this is a large task, and I hope you will forgive me for replying in the form of a video.

tl;dw: A number of the arguments Ben Garfinkel criticizes are in fact not present in "Superintelligence" and "The AI Foom Debate". (This summary is incomplete.)

Hi Soren,

I agree that podcasts/interviews have some major disadvantages, though they also have several advantages. 

Just wanted to link to Ben's written versions of some (but not all) of these arguments in case you haven't seen them. I don't know whether they address the specific things you're concerned about. We linked to these in the show notes and if we didn't explicitly flag that these existed during the episode, we should have.[1] 

  1. On Classic Arguments for AI Discontinuities
  2. Imagining the Future of AI: Some Incomplete but Overlong Notes
  3. Slide deck: Unpacking Classic AI Risk Arguments
  4. Slide deck: Potential Existential Risks from Artificial Intelligence

(1) and (3) are most relevant to the things we talked about on the podcast. My memory's hazy but I think (2) and (4) also have some relevant sections. 

Unfortunately, I probably won't have time to watch your videos though I'd really like to.[2] If you happen to have any easy-to-write-down thoughts on how I could've made the interview better (including, for example, parts of the interview where I should've pushed back more), I'd find that helpful. 

[1] JTBC, I think we should expect that most listeners are going to absorb whatever's said on the show and not do any additional reading.

[2] ETA: Oh - I just noticed that youtube now has an 'open transcript' feature, which makes it more likely I'll be able to get to this.

Jimrandomh's Shortform

Do you still think there's a >80% chance that this was a lab release?

Jimrandomh's Shortform

[I'm not an expert.]

My understanding is that SARS-CoV-1 is generally treated as a BSL-3 pathogen or a BSL-2 pathogen (for routine diagnostics and other relatively safe work) and not BSL-4. At the time of the outbreak, SARS-CoV-2 would have been a random animal coronavirus that hadn't yet infected humans, so I'd be surprised if it had more stringent requirements.

Your OP currently states: "a lab studying that class of viruses, of which there is currently only one." If I'm right that you're not currently confident this is the case, it might be worth adding some kind of caveat or epistemic status flag or something.


Some evidence:

How to have a happy quarantine

I used to play Innovation online here - dunno if it still works.

Also looks like you can play here:

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

Thanks for confirming!

How ill do they have to be? If a contact is feeling under the weather in a nonspecific way and has a cough, is that enough for them to get tested?

Do you feel like you have any insight into the extent of underreporting of mild/minimally symptomatic/asymptomatic cases?

How to fly safely right now?

I was able to buy hand sanitizer after going through security at JFK on Sunday but I wouldn't count on that. Fwiw, Purell bottles small enough to take through security seem pretty common.
